Phytotoxicity in Foxtail millet seeds polluted by batik wastewater and its reduction by Arbuscular Mycorrhizal Fungi

The aim of this study was to determine the effect of Arbuscular Mycorrhizal Fungi (AMF) on the toxicity of Foxtail millet seeds contaminated with batik wastewater. The study used seeds contaminated with batik wastewater, with and without the addition of AMF (negative control), as well as millet seeds not polluted with batik wastewater, with and without AMF, given distilled water (positive control). The results showed that AMF addition reduced the toxicity of millet seeds contaminated with batik wastewater. Millet exposed to batik wastewater with the provision of AMF + water produced the best values: 57.78% germination, 42.22% inhibition, the highest leaf length of 1.13 cm, a plumule of 1.89 cm, and a radicle of 1.58 cm. The provision of AMF is promising for preventing pollutants from entering plants as well as for improving soil quality.

Introduction

Arbuscular Mycorrhizal Fungi (AMF) form an association between certain fungi and plant roots through complex interwoven interactions. AMF are known as soil fungi because their habitat is in the soil, in the rooting zone of the plant (rhizosphere); they are also commonly called root fungi. The specialty of these fungi is their ability to help plants absorb nutrients, especially phosphorus (P) [1]. Besides increasing nutrient absorption, AMF increase plant tolerance to degraded land conditions such as drought and the presence of heavy metals. This function is used as an alternative technology to assist growth and to increase the productivity and quality of plants grown on marginal lands [2]. AMF can protect host plants from the absorption of toxic elements through the effects of filtration, complexation and accumulation. AMF can act as a biocontrol of heavy metal absorption, and can help plants avoid heavy metal poisoning [3]. AMF symbiosis also increases plant resistance to extreme drought and humidity, helping to sequester substances that are poisonous to plants such as As, Cr, and Pb [4]. The AMF genus Glomus, in association with plants, has been proven effective in absorbing the heavy metals Cd, Zn, and Pb. These findings show that AMF inoculation is very important for plant growth and for the absorption of heavy metals in polluted soils [5]. Batik waste contains 0.1385 mg/L total chromium, 2.0587 mg/L iron, 0.2696 mg/L copper, 54.7175 mg/L zinc, 0.0063 mg/L cadmium, and 0.2349 mg/L lead (Riwayati). These metals can accumulate in the environment, for example in soil. Batik waste from many home industries is discharged into the environment without treatment. The metal content is harmful to exposed plants and also to seeds. Seed germination bioassays show that dye solutions inhibit seed germination [6]. Millet ranks sixth among the most important grains and is consumed by one third of the world's population.
Millet is one of the main sources of energy, protein, vitamins and minerals; it is rich in B vitamins, especially niacin, B6 and folacin, as well as essential amino acids such as isoleucine, leucine, phenylalanine and threonine, and it contains nitriloside compounds that play an important role in inhibiting the development of cancer cells (anti-cancer) and in reducing the risk of heart disease (atherosclerosis, heart attack, stroke and hypertension). Millet thrives in high-temperature areas with limited water availability, without fertilizers or other technological inputs, and on critical lands where other grains such as wheat and rice are difficult to grow [7]. Batik waste exposure experiments on millet seeds therefore need to be done, because this plant performs well on critical land. The provision of AMF is promising for preventing pollutants from entering plants as well as for improving soil quality; however, its effect on toxicity has not yet been studied. This study aims to determine the effect of AMF administration on the phytotoxicity of F. millet seeds contaminated with batik wastewater.

Material

The AMF used was a mixture of Glomus sp., Glomus manihotis, Glomus etunicatum, Gigaspora margarita and Acaulospora spinosa with a zeolite carrier containing 20-30 spores/gram. The seeds used in this study were Foxtail millet (F. millet).

Evaluation of Phytotoxicity

Sterilized millet seeds were germinated on sterile cotton in sterilized petri dishes [8] and stored at room temperature. Seeds on the cotton surface were sprayed with the respective treatment solution every 24 hours to 80% saturation. The treatments were: seeds exposed to batik waste with and without AMF (negative control), waste given distilled water, and millet seeds not polluted with batik wastewater, with and without AMF, given distilled water (positive control). The study was carried out at room temperature. The percentages of seed germination and inhibition were estimated, and leaf length, shoot length (plumule) and root length (radicle) were measured after 9 days.
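As a minimal illustration of how these two indices can be computed (this sketch is not from the paper; the seed counts are hypothetical, and it assumes, as the reported value pairs suggest, that inhibition is simply the complement of germination):

```python
# Sketch: phytotoxicity indices from a germination bioassay.
# Assumes germination % = germinated/total * 100 and
# inhibition % = 100 - germination %, consistent with the
# reported pairs (e.g., 57.78% germination, 42.22% inhibition).

def germination_pct(germinated: int, total: int) -> float:
    """Percentage of seeds that germinated."""
    return 100.0 * germinated / total

def inhibition_pct(germinated: int, total: int) -> float:
    """Percentage of seeds whose germination was inhibited."""
    return 100.0 - germination_pct(germinated, total)

# Hypothetical counts per treatment (not the study's raw data):
counts = {"Water": (40, 45), "Effluent": (9, 45),
          "AMF + Water": (45, 45), "AMF + Effluent": (24, 45),
          "AMF + Effluent + Water": (26, 45)}
for name, (g, n) in counts.items():
    print(f"{name}: germination {germination_pct(g, n):.2f}%, "
          f"inhibition {inhibition_pct(g, n):.2f}%")
```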
Result and discussion

Phytotoxicity is used to measure the level of plant poisoning caused by pollutants. Batik waste exposure damages millet seeds, which can inhibit millet growth and disturb the other variables measured. AMF application reduced the toxic effects of batik waste. The results showed that AMF administration affected the level of millet seed poisoning, as indicated by the germination and inhibition percentages and by the measurements of leaf, plumule and radicle length. The AMF + Water + waste treatment resulted in a lower level of poisoning of F. millet than waste alone or AMF + waste, characterized by higher germination, lower inhibition, longer leaves, taller plants and longer roots. The germination/inhibition percentages of F. millet in the Water, Effluent, AMF + Water, AMF + Effluent and AMF + Effluent + Water treatments were 88.89%/11.11%, 21%/79%, 100%/0%, 52.22%/47.78% and 57.78%/42.22%, respectively (Figure 1). The leaf lengths of F. millet in the same treatments were 1.36, 0, 1.029, 0.57 and 1.13 cm, respectively (Figure 2). The shoot (plumule) lengths in the same treatments were 6.52, 0, 6.4, 0.78 and 1.89 cm, respectively. Millet seeds immersed in water showed 88.89% germination, and their leaf, root and plumule lengths were greater than those of seeds exposed to batik waste, which showed 21% germination and correspondingly shorter leaves, roots and plumules. Batik waste thus shows signs of toxicity compared with the water treatment, whereas waste-exposed seeds treated with AMF showed greater leaf and root lengths. These results indicate that AMF treatment reduced millet toxicity, i.e., that AMF is efficient in reducing toxicity. They are in line with previously reported studies showing that the toxicity of pollutants can be reduced using AMF (Kedia; Dias; Money). The efficiency of toxicity reduction in millet exposed to batik waste and the other treatments can be seen visually in Figure 5. In addition to suppressing toxicity, AMF plays a positive role in seedling growth: water-treated seedlings given AMF germinated better (100%) than those without AMF (88.89%). AMF benefit plants by (1) increasing nutrient absorption by plants, (2) increasing tolerance to heavy metal contamination, drought, and root pathogens, and (3) providing plants access to nutrients that would otherwise be unavailable [9]. AMF can dissolve heavy metals in batik waste: the secondary metabolites secreted by AMF can take the form of organic acids, so that micro-elements dissolve easily. An acidic pH causes the heavy metals contained in the medium to become soluble and actively absorbed by plants [10]. AMF play an important role in protecting plant roots from toxic elements, including heavy metals. The mechanism of protection against heavy metals and toxic elements by AMF can operate through filtration, chemical deactivation, or accumulation of these elements in the fungal hyphae. The absorption of micro-elements by AMF plants depends on several factors, namely the physico-chemical condition of the soil, soil fertility, pH, plant species, and the concentration of micro-elements in the soil [11]. This occurs because AMF are known to bind metals to carboxyl groups and pectate compounds (hemicellulose) on the contact matrix between the AMF and the host plant, on the polysaccharide sheath and the hyphal cell wall [12]. AMF can bind metal ions in hyphal cell walls and can thereby protect plants from these metal ions. Heavy metals are stored in crystalloids in the fungal mycelium and in the root cortex cells of plants inoculated with AMF [13]. Figure 5c shows millet seedlings exposed to batik waste that could not grow properly, while Figures 5d and 5e show better growth of seedlings with the addition of AMF. At low concentrations heavy metals do not affect plant growth, but at high concentrations they cause plant damage [14]. AMF can increase plant tolerance to toxic metals by accumulating the metals in external hyphae, thereby reducing their uptake into host plants. In the use of AMF for bioremediation of polluted soils, in addition to accumulation in hyphae, AMF can also act through complexation of metals by external hyphal secretions. This indicates a filtration mechanism, so that the toxic material is not absorbed by plants [15]. Plants gain many benefits from association with AMF. AMF can replace up to 40% of nitrogen and 25% of potassium inputs. In addition, AMF can protect against soil-borne infectious diseases and increase plant resistance to drought [16].
AMF can produce growth regulators, form physical barriers, and secrete certain antibiotics to block the development of soil-borne pathogens, and they help plants compete with weeds [17].

Dilepton Signatures of the Higgs Boson with Tau-jet Tagging

We consider the process $p p \to t {\bar t} H$. This process can give rise to many signatures of the Higgs boson. The signatures can have electrons, muons and jets. We consider the signatures that have two electrons/muons and jets. Tagging of a tau jet and a bottom jet can help reduce the backgrounds significantly. In particular, we examine the usefulness of the signatures "isolated 2 electrons/muons + a bottom jet + a tau jet", "isolated 2 electrons/muons + 2 tau jets", "isolated 2 electrons/muons + 2 bottom jets + a tau jet", and "isolated 2 electrons/muons + a bottom jet + 2 tau jets". We find that signatures with two tau jets are useful. The signatures with one tau jet are also useful if we restrict to same-sign electrons/muons. These requirements reduce the backgrounds due to the process with Z-bosons + jets and the production of a pair of top quarks. We show that these signatures may be visible in run II of the Large Hadron Collider.

Introduction

The Higgs mechanism of the Standard Model (SM) now seems to have been validated with the discovery of a Higgs-boson-like neutral scalar particle. Strong evidence has been presented by both the ATLAS [1] and CMS [2] Collaborations on the basis of the data taken in run I at the Large Hadron Collider (LHC). Because of the appearance of the signal in multiple channels, as seen by both collaborations, there is little doubt that the Higgs boson of the SM has been found. All discovery channels suggest a mass of about 125 GeV for the particle. The LHC is now on a long shut-down to improve the luminosity and the centre-of-mass energy. When it restarts taking data in 2015, one of its major goals will be to measure the couplings of the newly discovered Higgs boson to all the SM particles. This is especially important because of the prediction of the existence of scalar particles, sometimes with properties similar to those of the SM Higgs boson, in various extensions and modifications of the standard model. To do so, one will need to identify the particle through multiple processes and measure the couplings of the scalar particle with various other SM particles. These couplings determine the branching ratios of the decay channels and also the production cross sections. Identification of the scalar particle through multiple processes will allow us to measure the couplings and confirm that the scalar particle is indeed the SM Higgs boson. In this letter, we consider the production of the Higgs boson in association with a top-quark pair, pp → ttH [3,4], with its subsequent decay into a tau-lepton pair or WW*. Such tagging helps in reducing the strong-interaction backgrounds. One major source of background is the production of a pair of top quarks with or without additional jets. One strategy to reduce this background is to restrict the signature events to same-sign electrons/muons. We show the usefulness of this strategy, especially when only one jet is tagged as a tau jet. In the next section, we discuss the signatures that we consider. In section 3, we discuss the backgrounds to these signatures. In section 4, we present numerical results and some discussion. In the last section, we conclude.

Signatures

We are considering a general class of signatures "2 electrons/muons + jets". As we see from Table 1, without any tagging of the jets, the backgrounds due to Z bosons + jets and tt + jets processes would overwhelm the signal.
Therefore, to reduce the backgrounds, we focus on signatures with two electrons/muons and at least two tagged jets. Since the top-quark background events always have bottom jets, to reduce them we require at least one jet to be tagged as a tau jet. These signatures occur when, after the production of ttH, the Higgs boson decays into a tau-lepton pair or WW*. With these considerations, at least one of the top quarks accompanying the Higgs boson decays semileptonically. The possibility of a top quark decaying into jets leads to an increase in the signal events, relative to when we have more than 2 electrons/muons in a signature. For a Higgs boson with a mass of 120-130 GeV, the tau-lepton decay mode has a branching ratio of 5-7 percent; the W-boson decay mode has a branching ratio of 14-30%. When a tau lepton decays into hadrons, it can manifest itself as a jet, a tau jet. This jet has special characteristics: it is narrow and has very few hadrons. Its narrowness is due to the low mass of the tau lepton; it has few hadrons because a tau lepton decays into mostly 1 or 3 hadrons. These properties of a tau jet help us distinguish it from a quark/gluon jet. There is usually a 25-50% efficiency to tag a tau jet, and the probability of a light quark/gluon jet to mimic a tau jet can be taken to be 0.1-1% [31][32][33]. A bottom jet is broader than a light quark/gluon jet and has more particles, so it mimics a tau jet less often. A bottom jet can be identified with a probability of about 50-60%, while other jets can mimic it with a probability of about one percent [34][35][36]. To manage the background while keeping the signal events at a sufficiently high level, we analyze the signatures "2 electrons/muons + a tau jet + a bottom jet", "2 electrons/muons + two tau jets", "2 electrons/muons + a tau jet + two bottom jets", and "2 electrons/muons + two tau jets + a bottom jet". In the signal, the bottom jets appear due to the decay of the top quarks; a tau jet can occur due to the decay of the Higgs boson, or the decay of the on-shell/off-shell W bosons from the Higgs boson or the top quarks. Electrons/muons can appear from a decay chain of the Higgs boson or from the decay of the top quarks. The presence of electrons/muons in a signature is important to reduce the background. Recently, we considered the signatures with three or four electrons/muons [38]. We saw that the presence of a bottom jet with four electrons/muons, and the presence of one additional tau jet and a bottom jet with three electrons/muons, help in keeping the background low enough to be able to detect the signal. In the case of two electrons/muons, as we will see, it is useful to have either at least two tau jets, or one tau jet with only same-sign electrons/muons, in the signatures. Either of these two strategies reduces the signal events, but reduces the backgrounds even more. We can have same-sign charged leptons in the signature because there are three/four on-shell/off-shell W bosons in the production and decay chains under consideration. Two of these W bosons can produce the same-sign electrons/muons. Off-shell W bosons can also arise from tau leptons, which can come from the decays of the Higgs boson, the top quark, the W boson, or the Z boson. This strategy of observing same-sign leptons significantly reduces the large backgrounds from the production of a top-quark pair with or without jets and from Z + jets. Z + jets backgrounds are significantly reduced or eliminated by the tagging of at least 2 jets as tau and/or bottom jets. This tagging also reduces the top-quark pair production background to the same-sign lepton signatures. This is discussed more in the next section.
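To make the effect of these tagging efficiencies and mimic rates concrete, here is a small illustrative calculation (not from the paper; the rates are the representative values quoted above, and the raw event counts are hypothetical). Requiring a second tau tag multiplies a mimic background by another factor of the mimic probability, while the signal pays only another factor of the tagging efficiency:

```python
# Toy folding of tagging efficiencies and mimic probabilities.
# Representative rates from the text: tau tag ~50%, light-jet mimic
# ~1% (an "HTT"-like working point). Event counts are made up.

TAU_EFF, TAU_MIMIC = 0.50, 0.01

def weight(n_true_taus: int, n_mimic_taus: int) -> float:
    """Probability weight for tagging the required tau jets."""
    return TAU_EFF**n_true_taus * TAU_MIMIC**n_mimic_taus

signal_events, background_events = 1_000.0, 1_000_000.0

for n_tags in (1, 2):
    s = signal_events * weight(n_tags, 0)      # real tau jets in signal
    b = background_events * weight(0, n_tags)  # light jets faking taus
    print(f"{n_tags} tau tag(s): S = {s:.1f}, B = {b:.1f}, S/B = {s/b:.3f}")
```

With these numbers, the second tag costs the signal a factor of 2 but suppresses the fake background by a further factor of 100, which is the two-orders-of-magnitude rejection invoked later in the text.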
Backgrounds

All the signatures under consideration get contributions from the signal events, i.e., the production of the Higgs boson, and from other SM processes which do not have a Higgs boson. To establish the viability of the signatures for signal detection, we first identify the main background processes and then estimate their contributions. We consider both types of backgrounds: direct backgrounds and mimic backgrounds. In the case of a direct background, the background process produces events similar to the signal events, with the same particles as in the signal. Mimic backgrounds, on the other hand, have jets which can mimic (fake) a tau jet, a bottom jet, or even an electron/muon. These mimic probabilities are usually quite small, less than a percent, so even if a background has a large cross section, it becomes smaller when folded with the mimic probabilities. Tagging efficiencies and mimic probabilities were discussed in the last section. One important type of background occurs when a B-meson in a bottom jet decays into an electron/muon and this lepton is away from the jet, leading to an extra lepton in the event. The possibility of such backgrounds has been explored in the literature [37]. As we have argued [38], such backgrounds, which can occur due to top-quark production, are not significant for the signatures under consideration, mainly because we have at least one tau jet in the signatures, so these backgrounds are folded with the tau-jet mimic probability, which reduces them significantly.

1. "2 electrons/muons + a tau jet + a bottom jet": Among the direct backgrounds, the most significant would be the production of ttZ with subsequent decay into leptons. Because of its similar structure, ttZ will always be a significant background to the signal. This background can be reduced by requiring the appropriate M_{ℓ1ℓ2} to be away from the mass of the Z boson. But the background in which a Z boson decays into a tau-lepton pair, with subsequent decay of the tau leptons into electrons/muons, cannot be reduced in this way. The major mimic background is the production of a top-quark pair. Even with the folding of mimic probabilities, it remains large enough to make the signature almost unusable. However, when we consider the subset of events with same-sign electrons/muons, this signature becomes quite viable, because the tt process is then no longer a significant background.

2. "2 electrons/muons + two tau jets": In this case, the direct backgrounds are the processes ttZ, WWZ, ZZ. The main sources of mimic backgrounds are tt, WZ + jet, ttW, Z + 2 jets, and WW + 2 jets. The presence of two tau jets is crucial to reduce the mimic backgrounds.

Numerical results and Discussion

In this section, we present numerical results and a discussion of the results. The signal and background events have been calculated using ALPGEN (v2.14) [44] and its interface with PYTHIA (v6.325) [45]. Using ALPGEN, we generate parton-level unweighted events. Using the PYTHIA interface, these events are then turned into more realistic events through hadronization and initial- and final-state radiation.
We have applied the following kinematic cuts. We present the results for four signatures: "2 electrons/muons + a tau jet + a bottom jet", "2 electrons/muons + two tau jets", "2 electrons/muons + a tau jet + two bottom jets", and "2 electrons/muons + two tau jets + a bottom jet". For the bottom jet, we use an identification probability of 55% [34,35]. For other jets to mimic a bottom jet, we use a probability of 1%. For the tau jet, we consider two cases, because of a trade-off between higher detection efficiency and higher rejection of mimic jets. In the first case, LTT (low tau-tagging), we take a low value for the tau-jet identification efficiency, 30%, and a low mimic rate of 0.25% [31]. The second case, HTT (high tau-tagging) [32], has a high identification rate of 50% and a higher mimic rate of 1%. To reduce the Z-boson-related backgrounds, we require the missing transverse momentum to be more than 25 GeV and apply a cut on the mass of a same-flavour, opposite-sign lepton pair by requiring |M_{ℓ1ℓ2} − M_Z| > 15 GeV. We have smeared the jet/lepton energies using the energy resolution function ΔE/E = a/√E ⊕ b, where ⊕ means addition in quadrature. For the jets, a = 0.5 and b = 0.03; for the electrons/muons, we take a = 0.1 and b = 0.007. Since we are not using the mass of two or more jets, the inclusion of the jet energy resolution does not affect the results significantly. The lepton energy resolution is quite good, so the results are also not significantly impacted. Table 1 illustrates the importance of jet tagging and of observing same-sign electrons/muons. First, we note that there are marginal differences between the two-electron and two-muon events. This is primarily statistical, i.e., due to the finite event sample. We also notice large backgrounds due to Z-boson processes and top-quark-only processes. A missing pT cut and a cut on the mass of the lepton pair help in reducing these backgrounds. Fig. 1 illustrates the importance of the missing pT cut. We also notice the virtual elimination of the top-quark pair production background for same-sign electrons/muons. However, this comes at the cost of reducing the signal events by a factor of about 3. In the case of only one tau jet in the signature, one has to adopt this strategy. For the two-tau-jet case, the extra rejection factor due to the observation of the second tau jet reduces the backgrounds by about two orders of magnitude, so the restriction to same-sign electrons/muons is not necessary. In Tables 2-5, we present results for the various signatures for an integrated luminosity of 300 fb⁻¹, the expected luminosity for run II. We have included only the major backgrounds. We have also taken into account next-to-leading-order (NLO) contributions to the signal and background processes by multiplying the leading-order (LO) results by appropriate K-factors. The K-factor is taken as 1.20 for the ttH process [47]; the K-factors for ttZ [48], ttW [49], and ZZ [50] are taken to be 1.35. The K-factor for WZ + jet [51] is chosen as 1.3, while for WWZ production [52] it is 1.7. For the processes tt [53] and tt + jet [54], the K-factors are taken to be 1.5 and 1.4, respectively. For Z + 2 jets, the K-factor is 1.3 [55]. Because of the smaller K-factor for the signal compared to the backgrounds, its inclusion increases the significance only marginally.
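As an illustration of two of the numerical ingredients just described, the following sketch (not the authors' code; the event yields are hypothetical) shows a Gaussian energy smearing with σ(E)/E = a/√E ⊕ b and a toy significance estimate after scaling LO yields by the quoted K-factors:

```python
import math, random

def smear_energy(E: float, a: float, b: float) -> float:
    """Gaussian smearing with sigma(E)/E = a/sqrt(E) (+) b in quadrature."""
    sigma = E * math.sqrt(a**2 / E + b**2)
    return random.gauss(E, sigma)

jet_E = smear_energy(100.0, a=0.5, b=0.03)   # jets: a = 0.5, b = 0.03
lep_E = smear_energy(50.0, a=0.1, b=0.007)   # leptons: a = 0.1, b = 0.007

# Toy significance after applying the quoted K-factors.
# LO yields below are hypothetical, not the paper's numbers.
K = {"ttH": 1.20, "ttZ": 1.35, "ttW": 1.35, "ZZ": 1.35,
     "WZ+jet": 1.3, "WWZ": 1.7, "tt": 1.5, "tt+jet": 1.4, "Z+2jets": 1.3}
lo_signal = {"ttH": 20.0}
lo_backgrounds = {"ttZ": 8.0, "tt": 10.0, "ZZ": 2.0}

S = sum(K[p] * n for p, n in lo_signal.items())
B = sum(K[p] * n for p, n in lo_backgrounds.items())
print(f"S = {S:.1f}, B = {B:.1f}, S/sqrt(B) = {S / math.sqrt(B):.2f}")
```

Because the signal K-factor (1.20) is smaller than those of the backgrounds, the NLO scaling shifts S and B together and moves S/√B only slightly, which is the point made in the text.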
In Table 2, we present the results for "2 electrons/muons + a tau jet + a bottom jet", so we wish to identify a bottom jet and a tau jet. We note that, for the different masses of the Higgs boson, a decay chain into only three leptons gives rise to an additional combinatorial factor that increases the signal events. This signature has very large backgrounds from the ttW and tt processes. The significance is not good for either the LTT or the HTT case. However, if we restrict to the same-sign electrons/muons in the signature, the signature's significance becomes more than 6, making it quite a good signature. In Table 3, we present the results for the signature "2 electrons/muons + two tau jets". The major backgrounds are ttZ, tt, ZZ, and Z + 2 jets. The significance for the 125 GeV Higgs boson is 4.0 for the HTT case. Because of the reduction in the signal events, the LTT case is not as useful. As we see, restricting to the same-sign electrons/muons is not useful here due to a paucity of events. We can also identify an additional bottom jet. This reduces the number of signal events, but it also leads to a significant reduction in the Z-boson backgrounds. As we see from Table 4, this signature of "2 electrons/muons + two tau jets + a bottom jet" has a very good significance. In Table 5, we display the results for the signature "2 electrons/muons + a tau jet + two bottom jets". Here the signal events are fewer than in Table 2, due to the identification of an additional bottom jet. As in Table 2, the background due to the production of a top-quark pair is quite large. However, if we observe only the same-sign electrons/muons, the significance may reach the observational value within run II of the LHC. Let us now comment on the possible uncertainties in the above results [3]. Theoretically, the main sources of uncertainty are the choices of parton distribution functions and of the factorization and renormalization scales. In obtaining our results, we have used the NLO cross sections. These cross sections have uncertainties of the order of 10-15%. Furthermore, when these choices increase/decrease the signal cross section, they correspondingly increase/decrease the background cross sections; therefore, there is a further reduction in the uncertainties due to cancellation when we compute the significance, a ratio. Overall, one may expect only a few percent theoretical uncertainty in the significance of the signatures. Similarly, there will be cancellation of uncertainties due to experimental limitations. Therefore, our results for the significance are quite robust.

Conclusion

In this letter, we have analyzed the signatures with two electrons/muons for the process pp → ttH. In particular, we have considered the signatures "2 electrons/muons + a tau jet + a bottom jet", "2 electrons/muons + two tau jets", "2 electrons/muons + a tau jet + two bottom jets", and "2 electrons/muons + two tau jets + a bottom jet". The major backgrounds are from the process tt (and jets) and from processes with Z bosons. The signatures with two tau jets have decent significance and may be observed in run II of the LHC. The signatures with only one tau jet are overwhelmed by the backgrounds due to top-quark pair production; however, restricting to same-sign electron/muon events, these signatures may also be visible. So it appears that to observe the ttH process using two electrons/muons, one may need to either tag two tau jets, or tag one tau jet but observe same-sign electrons/muons.
More detailed analysis of various other signatures will be presented elsewhere.
The L-NAME mouse model of preeclampsia and impact to long-term maternal cardiovascular health

The L-NAME mouse model of preeclampsia mimics key aspects of disease in pregnancy, but does not demonstrate the increased long-term risk of cardiovascular disease seen in individuals following preeclampsia.

Reviewer #1 (Comments to the Authors (Required)):

1. This manuscript has investigated whether a common mouse model of pre-eclampsia, induced by inhibiting nitric oxide, can induce lasting changes in cardiovascular function in the mouse following recovery from pre-eclampsia. The paper demonstrates that the model induces hypertension in the absence of vascular dysfunction and that the effects are transient, with blood pressure fully recovered by ~1 week post-pregnancy. There were really no lasting effects on the cardiovascular system other than a tendency for increased TNF-alpha expression in the hearts of pre-eclampsia mice.

2. For each main point of the paper, please indicate if the data are strongly supportive. If not, explicitly state the additional experiments essential to support the claims made and the timeframe that these would require.

a. L-NAME convincingly increased blood pressure in pregnant mice when delivered from day 7. However, it is noted that there is a large spread of data with a lot of cross-over between groups (i.e. around a quarter of the control mice had higher mean blood pressure than L-NAME treated mice). With n>40 used in the L-NAME group in this study, it may not be a practical model for labs that are limited in resources - can you comment on the power / sample size required to undertake an experiment with significant blood pressure reduction by an intervention as a primary endpoint?

b. There were no differences in albumin:creatinine - can this please be presented like the other graphs, with individual data, to give the reader an idea of the data variation. The histology panels in figure 2 show potentially interesting changes in the kidneys, but it is not clear how consistent this was across animals. Can quantification of this data be provided? Or images from multiple animals? Can the figure include arrows to the relevant points on the representative image to match the description in the figure legend? Does renal damage correlate with blood pressure?

c. For figure 3 E-F the total number of pups has been reported - when in fact pups should be averaged across a litter so that n is equivalent to the number of different mothers.

d. In figures 4-5 selected samples have been used for analysis of gene expression and circulating factors, but it is not clear how these samples were selected. Given the large degree of variation in blood pressures in the groups, it is important to show in a supplementary file the physiological parameters for the samples used in further analysis, and justification for why only a small subset of samples was used.

e. In the same vein as above, vessels from a sub-cohort of mice were assessed for vascular function ex vivo - can you please report on the physiological parameters for those particular individuals used.

f. A significant limitation of the model appears to be that ex vivo vascular function does not match the blood pressure measured in vivo - i.e. L-NAME has not caused a rightward shift in vasorelaxation. The authors have provided a nice discussion of this point, where they hypothesise that the effect of L-NAME is transient in the presence of circulating L-NAME and has effectively been 'washed out' in ex vivo resistance arteries.
Does this indicate that L-NAME is not inducing any permanent changes during pregnancy? There are a few questions around vascular function, in that the response of controls appears to have a rightward-shifted EC50 relative to what may be expected for acetylcholine responses - can the authors comment on this? The acetylcholine curve post-pregnancy is more what is expected, with an EC50 around 10^-7, so it seems that the EC50 has shifted during pregnancy - is this expected?

g. For figure 6A the blood pressure drop in both groups post-pregnancy appears to be counter-intuitive. Generally there is a lowering of blood pressure in pregnancy - so can the authors explain why mean blood pressure drops down to only ~80 mmHg 10 weeks post-pregnancy? And could the data be extended to include matched mice during pregnancy, showing the higher blood pressure in L-NAME vs. control mice and the rate at which it returns to normal?

h. TNF-alpha seems to be increased in hearts post-preeclampsia. This is an intriguing finding that deserves more exploration. Is it possible to look at other cytokines or inflammatory markers? Could the location of increased TNF-alpha be determined using IHC?

3. Lastly, indicate any additional issues you feel should be addressed (text changes, data presentation, statistics etc.). - All bar graphs could be presented with individual points shown.

Reviewer #2 (Comments to the Authors (Required)):

The current work by de Alwis and colleagues provides important preclinical data from an L-NAME mouse model of preeclampsia. The work shows that, despite the model inducing phenotypes akin to preeclampsia during pregnancy, this pathophysiology did not continue post-delivery. Overall, the manuscript reads well, and the data presented are informative and relevant to the field; however, there are some minor recommendations that should be considered and/or addressed.

Introduction: The introduction reads well.

Methods:

1. What was the rationale for the concentration of L-NAME administered? Also, what was the rationale for the timing of L-NAME administration? Are these based on previously published work? In the introduction the authors state that "However, these murine studies using L-NAME differ in their protocols, using varied concentrations of L-NAME, routes of administration, timing of exposure, and murine breeds, that could respond differently to L-NAME [37]." This is an important point that appears to mostly be overlooked throughout the manuscript. Would the authors speculate that a higher L-NAME dose that may recapitulate a more severe disease phenotype would result in prolonged cardiovascular dysfunction post-delivery? Given the simplicity of this model, it may benefit future studies to examine this to determine whether the severity of L-NAME-induced insult impacts maternal physiology after pregnancy. This consideration should be expanded on.

2. For the heart, was the left ventricle dissected and analysed separately from the other chambers, or was the total heart analysed? Some clarification of organ processing is needed.

3. Suggestion - a table may be an easier way to present the information for the qRT-PCR primers.

4. Why were different reference genes used for different tissues? It is becoming increasingly common to normalise gene expression relative to the geometric mean of multiple reference genes rather than only one - could the authors provide comment on this?

5. Can the authors provide the sensitivity range for the ELISA kits used?

Results:

1. Figure 1 - specify that it is arterial blood pressure.
2. Can the authors include the statistical analysis used in each figure legend, as the authors specify that both parametric and non-parametric analyses were performed?

3. It appears that the post-delivery data have been split by each age category and analysed between study groups, but the way the data are graphically presented, a two-way ANOVA would be the more appropriate choice of analysis. What was the rationale for the choice of statistical analysis?

4. The observed structural changes in the kidney are interesting, but some direction on the figure to point out these changes would be beneficial.

5. What is the rationale for expressing the Mmp data as a ratio to the expression of its inhibitor? Can the authors provide the expression of both Mmp and inhibitor separately and then as a ratio?

6. Did L-NAME administration induce brain sparing or just reduced birth weight?

7. In the discussion the authors state that "We did not observe significant changes in expression of inflammatory, or endothelial dysfunction-related genes in the mesenteric arteries post-delivery. However, there is limited data due to a low sample size." Is n=8/9 per group considered low?

Discussion: The discussion is well written and comprehensive, albeit lengthy, which at times detracts from the importance of the findings. It is recommended that the discussion be edited down.

Reviewer #3 (Comments to the Authors (Required)):

The paper by de Alwis contains an impressive amount of work using a mouse model involving a vasoconstrictor, L-NAME, to assess impacts on maternal blood pressure and vascular function in pregnancy and post-partum. The study also looks at changes in key organs like the placenta, as well as the kidney and heart of the mother. On the whole, the paper is very well written and the conclusions substantiated. I only have minor comments and suggestions for improvement, mainly aimed at increasing the clarity of the data and interpretations. These are listed below.

1. What is the rationale for starting L-NAME treatment on e7.5? Is this equivalent to when maternal blood pressure may be expected to rise in PE women? Please include a sentence in the methods.

2. Was the sex of the fetuses/pups recorded?

3. Which placenta was taken from each litter? Was this random or based on litter order/weight within the litter? Was there an equal representation of female and male placentas across the litters per group analysed for histological and molecular assays?

4. Was there any effect of the L-NAME treatment on gestational length?

5. Why were pups culled immediately after birth, rather than leaving the mums to support them through to weaning, which would be the normal situation? Include a sentence explaining the rationale in the methods.

6. A description of how pups and mums were culled post-birth needs to be included.

7. Include details of the inter- and intra-assay CVs for the ELISAs/kits used to measure maternal circulating factors.

8. Did the reference genes used in qPCR change per group? Please include a comment about this in the methods.

9. For kidney histology, were the three sections analysed per condition from different animals? What parameters were assessed in the kidney sections? Was it the medulla or cortex that was investigated? Please clarify in the text.

10. How many litters per group were analysed for placental histology? What is meant by placental blood space volume? Is that fetal and maternal blood space volumes combined? Were histological analyses conducted blind to the group?
Please clarify in the text and include an image of a placental cross-section to indicate the compartments measured.

11. Please include details of the statistical test/s used for each data set shown, in the figure legends. All graphs should show individual points (there are a few which are instead bar charts, and this does not allow an appreciation of the values for all the samples per group). All graphs should start at zero or show a break in the axis (e.g. this isn't the case for Fig 3b).

12. Include annotations on figure 2 to show the regions/compartments of the kidney analysed, to link better to the descriptions of changes in the text.

13. Each conceptus/pup is a repeat of the mother, and thus all data and any statistical analyses in figure 3 should involve the average value per litter or a mixed-model approach to account for this (so the sample size is the litter/mum, not the individual fetus/pup, which inflates the sample size).

14. Figure 4: how many placentas per litter were analysed? Include details in the legend.

15. Please show individual values for Mmp9 and Timp for the maternal postpartum tissues in the supplementary data (so it is clear whether a change in both, or in one, is driving the altered ratio between them).

16. The finding of altered vascular reactivity in L-NAME-exposed dams post-pregnancy is subtle but interesting. I realise it would be out of scope for this study, but further work assessing the impact of a superimposed hit, like a high-salt diet, or in animals as they get older, would be valuable to see if changes in vascular reactivity may be exacerbated. Perhaps this could be mentioned in the discussion to help direct future work?

17. Given the changes in Hmox1 expression, it would be valuable if the authors measured oxidative stress levels, such as by oxyblot or MDA staining, to see if the placentas may be stressed in this L-NAME model. Similarly, it would be interesting to measure levels of ROS in the maternal blood. If L-NAME-treated mice are not exhibiting oxidative stress, it could be the explanation for the subtle effects on maternal postpartum health.

18. Given the change in Mmp9:Timp, it would also be valuable if the authors assessed whether there was any persistence of pathological changes in the maternal kidney postpartum, including leukocyte influx in the L-NAME-treated dams.

Authors' responses

Reviewer #1:

1. This manuscript has investigated whether a common mouse model of pre-eclampsia, induced by inhibiting nitric oxide, can induce lasting changes in cardiovascular function in the mouse following recovery from pre-eclampsia. The paper demonstrates that the model induces hypertension in the absence of vascular dysfunction and that the effects are transient, with blood pressure fully recovered by ~1 week post-pregnancy. There were really no lasting effects on the cardiovascular system other than a tendency for increased TNF-alpha expression in the hearts of pre-eclampsia mice.

RESPONSE: We thank the reviewers for generously taking their time to review our manuscript. We have responded to each comment below.

a. L-NAME convincingly increased blood pressure in pregnant mice when delivered from day 7. However, it is noted that there is a large spread of data with a lot of cross-over between groups (i.e. around a quarter of the control mice had higher mean blood pressure than L-NAME treated mice).
With n>40 used in the L-NAME group in this study, it may not be a practical model for labs that are limited in resources - can you comment on the power / sample size required to undertake an experiment with significant blood pressure reduction by an intervention as a primary endpoint?

RESPONSE: We thank the reviewer for their question. A strength of this study was the large sample number in each group. We used these large numbers because we had several timepoints of interest post-pregnancy, and a subset of mice was euthanised at each timepoint for blood and other organ assessments. These large numbers ensured we would have ample numbers to assess blood pressure measurements at the final timepoint. Therefore, while it isn't necessary to run such large numbers to achieve the convincing data in Figure 1, it was necessary to have these numbers to achieve sufficient power at the subsequent timepoints assessed. We have added this to the methods; see page 5, lines 114-115, which now reads: "In order to have sufficient power for the final timepoints, a larger cohort of mice were required. Three week-old…" Further, we have provided the specific mean and standard deviation values for the observed change in blood pressure in Figure 1 so other researchers can use this information for sample size calculations. Please see page 10, lines 230-233, as below: "L-NAME administration significantly increased mean arterial blood pressure at E14.5 (Control 103.7 ± 20.89, L-NAME 126.5 ± 20.92; Mean ± SD) (p<0.0001; Figure 1A) and E17.5 (Control 111.9 ± 22.66, L-NAME 130.1 ± 18.52; Mean ± SD) (p=0.0001; Figure 1B)."
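For readers who want to act on this response, here is a minimal power-calculation sketch using the reported E14.5 means and SDs. It is not from the manuscript; the intervention effect size is an assumption, and statsmodels' TTestIndPower is used for the standard two-sample t-test calculation:

```python
# Sketch: two-sample power calculation from the reported E14.5 values
# (Control 103.7 +/- 20.89, L-NAME 126.5 +/- 20.92 mmHg, mean +/- SD).
import math
from statsmodels.stats.power import TTestIndPower

mean_ctrl, sd_ctrl = 103.7, 20.89
mean_lname, sd_lname = 126.5, 20.92
sd_pooled = math.sqrt((sd_ctrl**2 + sd_lname**2) / 2)

# Suppose an intervention is expected to reverse half of the L-NAME
# rise in blood pressure (a hypothetical effect, not study data).
delta = (mean_lname - mean_ctrl) / 2
d = delta / sd_pooled  # Cohen's d

n_per_group = TTestIndPower().solve_power(
    effect_size=d, alpha=0.05, power=0.80, alternative="two-sided")
print(f"Cohen's d = {d:.2f}; ~{math.ceil(n_per_group)} mice per group")
```

With these SDs, halving the L-NAME effect corresponds to d of roughly 0.55 and requires on the order of 50 mice per group at 80% power, which is consistent with the group sizes the authors describe.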
b. There were no differences in albumin:creatinine - can this please be presented like the other graphs, with individual data, to give the reader an idea of the data variation. The histology panels in figure 2 show potentially interesting changes in the kidneys, but it is not clear how consistent this was across animals. Can quantification of this data be provided? Or images from multiple animals? Can the figure include arrows to the relevant points on the representative image to match the description in the figure legend? Does renal damage correlate with blood pressure?

RESPONSE: We thank the reviewer for their comments. Figure 2 has now been updated to present the individual points. Due to tissue processing and staining issues, we were only able to analyse kidneys from two animals from the control and L-NAME groups. Arrows were used to point out the regions of interest (necrosis, inflammation, tubule changes) in the figure (Figure 2). Due to the low number of samples available, descriptive analysis of these regions was more appropriate. We were unable to correlate kidney structural changes with blood pressure due to having too few samples for imaging, but this will be considered in future studies.

c. For figure 3 E-F the total number of pups has been reported - when in fact pups should be averaged across a litter so that n is equivalent to the number of different mothers.

RESPONSE: We thank the reviewer for this suggestion. As a result of this suggestion, we sought advice from a biostatistician, and have now analysed the data using a linear mixed-effects model. This mixed-effects model takes into consideration the variability of the fetal/pup sizes within each litter using a random-effect term. This method was suggested by Reviewer 3, point 13. The methods, figure and related results have been updated to reflect this change - please see changes to the statistical methods on page 9, lines 219-222, Figure 3, and results on pages 10-11, lines 256-262. Our biostatistician has also been added as a co-author of this manuscript.
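As an illustration of the kind of litter-aware analysis described in this response (not the authors' code; the data frame, values and column names are hypothetical), a linear mixed-effects model with a random intercept per dam can be fitted with statsmodels:

```python
# Sketch: pup weight vs. treatment with litter (dam) as a random effect,
# so each mother, not each pup, contributes the independent unit.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per pup.
df = pd.DataFrame({
    "weight":    [1.31, 1.28, 1.40, 1.05, 1.10, 0.98, 1.35, 1.02],
    "treatment": ["control"] * 3 + ["lname"] * 3 + ["control", "lname"],
    "dam_id":    ["c1", "c1", "c2", "l1", "l1", "l2", "c2", "l2"],
})

model = smf.mixedlm("weight ~ treatment", data=df, groups=df["dam_id"])
result = model.fit()
print(result.summary())  # fixed effect of treatment, plus litter variance
```

The random intercept absorbs between-litter variability, so the treatment effect is tested against the number of dams rather than an inflated pup-level n, which is exactly the concern raised in point c and in Reviewer 3's point 13.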
d. In figures 4-5 selected samples have been used for analysis of gene expression and circulating factors, but it is not clear how these samples were selected. Given the large degree of variation in blood pressures in the groups, it is important to show in a supplementary file the physiological parameters for the samples used in further analysis, and justification for why only a small subset of samples was used.

RESPONSE: We thank the reviewer for this suggestion. In Figure 4, we assessed gene expression in placentas collected from the subset of mice that were culled at E17.5 of pregnancy. We were unable to collect the placentas from the mice that gave birth to their pups, as the dams consumed the placentas. We chose 1-3 placentas from each dam at random. Samples with low RNA yield at extraction were excluded. We have updated the legend: "Control n=6 mice, L-NAME n=10 mice, 1-3 placentas were chosen at random from each. Samples with low RNA yield at extraction were excluded." Similarly, in Figure 5A-C, the ELISA data present the levels of the circulating factors in the subset of mice that were culled at E17.5, as we collected blood through cardiac puncture under anaesthesia at cull. In Figure 5D and E, again we performed myograph experiments on vessels collected from all dams in the subset that were culled at E17.5. The dissected vessels were assessed for smooth muscle and endothelial integrity before they were included in the experiment. Due to technical difficulties in mounting very small arteries on the myograph, damage meant that some arteries could not be used. This occurred at random due to the technique, and not due to any selection bias. We have clarified this in the methods; please see page 6, lines 168-169: "Confirmation of response to KPSS, and greater than 80% relaxation was required for inclusion of the vessel." The fetal and placental parameters corresponding to this subset of mice culled at E17.5 are presented in Figure 3A-D. As suggested, we have now also added a graph of blood pressure data corresponding to this E17.5 cohort into the Supplementary Data. Please see Supplementary Figure S4.

e. In the same vein as above, vessels from a sub-cohort of mice were assessed for vascular function ex vivo - can you please report on the physiological parameters for those particular individuals used.

RESPONSE: At each time point, a subset of mice was culled for collection of blood, organs and blood vessels. Mice were randomly allocated for cull at each time point. As described above (point d), mesenteric arteries from all dams were assessed at each time point, but, as is an important feature of validation of myograph practice, only those that had intact smooth muscle and endothelial layers were included in the analysis. This is important to confirm that the vessel was not damaged during the mounting process, and that the vessel was still functionally active and able to respond to stimuli.

f. A significant limitation of the model appears to be that ex vivo vascular function does not match the blood pressure measured in vivo - i.e. L-NAME has not caused a rightward shift in vasorelaxation. The authors have provided a nice discussion of this point, where they hypothesise that the effect of L-NAME is transient in the presence of circulating L-NAME and has effectively been 'washed out' in ex vivo resistance arteries. Does this indicate that L-NAME is not inducing any permanent changes during pregnancy? There are a few questions around vascular function, in that the response of controls appears to have a rightward-shifted EC50 relative to what may be expected for acetylcholine responses - can the authors comment on this? The acetylcholine curve post-pregnancy is more what is expected, with an EC50 around 10^-7, so it seems that the EC50 has shifted during pregnancy - is this expected?

RESPONSE: We thank the reviewer for their questions. Regarding the effect of L-NAME on the vasculature, we did initially speculate, as mentioned, that L-NAME does not seem to have a permanent effect on the vasculature, as when it is washed out at E17.5 the vessels do not have altered vascular reactivity. However, because there are changes in vascular reactivity and signs of inflammation in the heart and kidney at 10 weeks postpartum, this suggests that L-NAME did have some effect - possibly the effects induced by L-NAME in pregnancy become worse over time. We have also noted in our discussion that we only looked at the phenylephrine and acetylcholine pathways in vascular function, chosen because they are most often used for mesenteric artery myograph studies. However, L-NAME may have altered other pathways involved in the vascular response that we did not assess - this is something we are interested in examining in the future, especially because we noted that circulating ET-1, sFLT-1 and CRP were altered. However, it was beyond the scope of this paper. We have detailed this as below - please see page 17, lines 471-474 and 478-479: "This would imply the administration of L-NAME may have exerted some permanent effects, either directly or indirectly (as the hypertensive phenotype may have driven other indirect changes that required time to become apparent), and further studies would be required to examine this. […] Assessing vascular responses to other agonists may provide additional insight into other pathways that may be altered by L-NAME." Regarding the shift in EC50 in the pregnant vessels: in this paper we did not evaluate differences across time points, but rather between the L-NAME and control groups within each time point. The mice in pregnancy were injected with either the vehicle or the treatment prior to cull, which may have induced a stressed, more vasoconstrictory state, compared with post-pregnancy, when they were not administered any treatment before cull. It is known that pregnant and non-pregnant vessels respond differently, but because we did not design the study to assess changes over time, or assess pre-pregnancy vascular reactivity, we do not think we can speculate on this finding here.

g. For figure 6A the blood pressure drop in both groups post-pregnancy appears to be counter-intuitive. Generally there is a lowering of blood pressure in pregnancy - so can the authors explain why mean blood pressure drops down to only ~80 mmHg 10 weeks post-pregnancy? And could the data be extended to include matched mice during pregnancy, showing the higher blood pressure in L-NAME vs. control mice and the rate at which it returns to normal?

RESPONSE: We thank the reviewer for their comments and suggestions.
We suggest that the post-partum reduction in maternal blood pressure (in both the control and L-NAME groups) as the mice aged was largely due to the mice acclimatizing to the process of having their tail-cuff blood pressure measurements performed (over the 13 weeks of pregnancy plus post-partum). Of important note, in this paper we did not assess changes in the parameters from pre-pregnancy through pregnancy and then post-partum, but simply between the L-NAME (preeclampsia model) and control groups at each time point. Assessing changes over time and showing matched data in pregnancy and post-delivery warrants investigation in a focused paper in the future, but unfortunately was beyond the scope of the current study.

h. TNF-alpha seems to be increased in hearts post-preeclampsia. This is an intriguing finding that deserves more exploration. Is it possible to look at other cytokines or inflammatory markers? Could the location of increased TNF-alpha be determined using IHC?

RESPONSE: We thank the reviewer for this suggestion. In Supplementary Figure S11, we present the expression of a number of genes in the hearts, including other genes involved in inflammation, such as interleukin-6 (Il-6; Supp Figure S11E), the inflammasome gene Nlrp3 (Supp Figure S11K), and Tgfβ1, 2 and 3 (Supp Figure S11O-Q), which are involved in immune regulation. These genes were not altered at 10 weeks post-delivery. All hearts post-delivery were collected for RNA extraction and assessment of gene expression. Hence, we do not have the samples for IHC, but we agree this would be interesting for future studies. We have stated the above in our discussion, page 17, lines 464-467: "Further studies to determine if there is an increase in Tnf protein production and other markers of inflammation are needed to confirm this. Exploration of the cardiac structure and function may be helpful in evaluating whether our model produces the same left ventricular dysfunction seen post-preeclampsia."

3. All bar graphs could be presented with individual points shown.

RESPONSE: We thank the reviewer for their suggestion. We have altered Figures 2, 6 and 7 in the main manuscript to show all data points.

Reviewer #2:

The current work by de Alwis and colleagues provides important preclinical data from an L-NAME mouse model of preeclampsia. The work shows that, despite the model inducing phenotypes akin to preeclampsia during pregnancy, this pathophysiology did not continue post-delivery. Overall, the manuscript reads well, and the data presented are informative and relevant to the field; however, there are some minor recommendations that should be considered and/or addressed. Introduction: The introduction reads well.

RESPONSE: We thank the reviewer for the feedback on our manuscript, and have made changes as suggested, detailed below.

Methods:

1. What was the rationale for the concentration of L-NAME administered? Also, what was the rationale for the timing of L-NAME administration? Are these based on previously published work? In the introduction the authors state that "However, these murine studies using L-NAME differ in their protocols, using varied concentrations of L-NAME, routes of administration, timing of exposure, and murine breeds, that could respond differently to L-NAME [37]." This is an important point that appears to mostly be overlooked throughout the manuscript. Would the authors speculate that a higher L-NAME dose that may recapitulate a more severe disease phenotype would result in prolonged cardiovascular dysfunction post-delivery?
Given the simplicity of this model, it may benefit future studies to examine this to determine whether the severity of L-NAME-induced insult impacts maternal physiology after pregnancy. This consideration should be expanded on.

RESPONSE: The concentration of L-NAME used in this study was based on previous studies in rats, and on pilot studies in our laboratory that determined an effective dose to increase blood pressure. In line with this, we have added references to other studies that use the same doses, e.g. Mohamed, M.M. Gamal, Prenatal intake of omega-3 promotes Wnt/β-catenin signaling pathway, and preserves integrity of the blood-brain barrier in preeclamptic rats, Physiological Reports 9(12) (2021) e14925. L-NAME was administered from E7.5 to E17.5 of pregnancy, to model the human equivalent gestation of the end of the first trimester/early second trimester to term. This was to try to model early-onset preeclampsia, where we see signs of hypertension prior to term, which is what we found with the elevation of blood pressure even at E14.5 of pregnancy. We did not begin administration earlier, to avoid causing fetal loss, but found that we were still able to alter fetal/placental growth with this timing. Please see page 5, lines 129-131: "…was administered daily from embryonic day (E)7.5 to E17.5 of pregnancy (approximately early second trimester to term in human equivalent to model early onset disease) via 100µL subcutaneous injection. …" It is possible that a higher dose of L-NAME could recapitulate a more severe long-term phenotype. Some have used lower and higher L-NAME doses; we used a medium dose range and examined various doses in the laboratory prior to proceeding. In addition, preeclampsia is a complex condition that features the impairment of many different pathways. Using L-NAME in this study, we aimed to inhibit the nitric oxide pathway to increase blood pressure and impair fetal growth. Following this study, rather than increasing this dose, we hypothesise that impairing another pathway simultaneously may be more physiologically relevant. Our future studies will focus on a second-hit model, potentially with a high fat/salt/sugar diet. Furthermore, our mouse strain did not have a cardiovascular disease risk phenotype at the outset, which is a key consideration in long-term models; thus using mice with a predisposition to cardiovascular disease may be more valuable than simply increasing the dose of L-NAME, and may more closely mimic what we think is happening in humans. Please see page 18, lines 502-507, where we have highlighted future studies: "Assessment using a strain that is more sensitive to L-NAME may demonstrate a more persistent chronic effect on the cardiovascular system [37]. Additionally, it would be interesting to use a strain or genetic model with increased cardiovascular disease risk, potentially an obese or high salt diet mouse model, or a strain genetically modified to enhance vascular dysfunction, mimicking predisposition to preeclampsia and long-term cardiovascular disease."

2. For the heart, was the left ventricle dissected and analysed separately from the other chambers, or was the total heart analysed? Some clarification of organ processing is needed.

RESPONSE: We thank the reviewer for their suggestion. Whole hearts were analysed. We have clarified this in the methods, page 6, lines 175-176.

3. Suggestion - a table may be an easier way to present the information for the qRT-PCR primers.

RESPONSE: We thank the reviewer for their suggestion. We have presented the primer information in a table for ease. Please see Table 1 on pages 7-8.
Why were different reference genes used for different tissues? It is becoming increasingly common to normalise gene expression relative to the geometric mean of multiple reference genes rather than only one; could the authors provide comment on this? RESPONSE: We thank the reviewer for highlighting this. Our group has previously validated the appropriate reference genes that are stable in each tissue assessed. Further, in the analysis we ensured that the reference genes were stable in our study within the L-NAME and control groups. We have added the following to clarify; please see page 7, lines 185-187: "All expression data were normalized to expression of the reference gene (chosen based on its stability in each tissue) as an internal control and calibrated against the average Ct of the control samples, with each biological sample run in technical duplicate." 5. Can the authors provide the sensitivity range for the ELISA kits used? RESPONSE: We thank the reviewer for this suggestion. We have now added the ELISA kit sensitivity where available into the methods as below. Results: 1. Figure 1: specify that it is arterial blood pressure. RESPONSE: We thank the reviewer for this comment. We have specified that we measured mean arterial blood pressure in the figure legends as below. 3. It appears that the post-delivery data have been split by each age category and analysed between study groups, but given the way the data are graphically presented, a two-way ANOVA would be the more appropriate choice of analysis. What was the rationale for the choice of statistical analysis? RESPONSE: We thank the reviewer for highlighting this. In this study, we only evaluated differences between the L-NAME and control group within each timepoint. We did not analyse differences across time/across timepoints. Hence, it was more appropriate to do a t-test or Mann-Whitney test between the two groups at each timepoint. We have now updated the figure legends to include the statistical analysis used, and clarified that these tests were performed between the L-NAME and control groups at each time point. 4. The observed structural changes in the kidney are interesting, but some direction on the figure to point out these changes would be beneficial. RESPONSE: We thank the reviewer for the suggestion. We have included arrows to highlight the regions of interest. 6. Did L-NAME administration induce brain sparing or just reduced birth weight? RESPONSE: In this study we found that L-NAME reduced birthweight and fetal crown-to-rump length. We do not have data measuring fetal or pup head/brain size or weight. Hence, we cannot rule out that there was brain sparing. In future studies we will be investigating the effect of L-NAME and drug treatments on both head size and fetal brain morphology, with the help of collaborators who are experts in the field. But this is unfortunately beyond the scope of this manuscript. We have added the following in response to this; please see page 14, lines 369-370: "Further studies are required to uncover the mechanisms behind this growth impairment, including whether there are any clinical parameters of interest including fetal brain sparing." 7. In the discussion the authors state that "We did not observe significant changes in expression of inflammatory, or endothelial dysfunction-related genes in the mesenteric arteries post-delivery. However, there is limited data due to a low sample size." Is n=8/9 per group considered low?
RESPONSE: We thank the reviewer for pointing this out. In this particular case, we considered the sample size small because we had to pool the 8-9 samples we had in each group. This would have decreased the effective sample size and, hence, our statistical power. We have altered the discussion for clarity; please see page 17, lines 481-482: "However, there is limited data as the samples had to be pooled, leading to a low effective sample size." Discussion: The discussion is well written and comprehensive, albeit lengthy, which at times detracts from the importance of the findings. It is recommended that the discussion be edited down. RESPONSE: We thank the reviewer for this suggestion. We have cut down on the discussion as much as possible, removing details covered in the results without losing the important context and discussion. Please see the revised manuscript. Reviewer #3: The paper by de Alwis contains an impressive amount of work using a mouse model involving a vasoconstrictor, L-NAME, to assess impacts on maternal blood pressure and vascular function in pregnancy and post-partum. The study also looks at changes in key organs like the placenta, as well as the kidney and heart of the mother. On the whole, the paper is very well written and the conclusions substantiated. I only have minor comments and suggestions for improvement, mainly aimed at increasing the clarity of the data and interpretations. These are listed below. RESPONSE: We thank the reviewer for their time in reading our manuscript, and have incorporated their suggestions as below. 1. What is the rationale for starting L-NAME treatment on E7.5? Is this equivalent to when maternal blood pressure may be expected to rise in PE women? Please include a sentence in the methods. RESPONSE: We thank the reviewer for their comment. We began L-NAME administration on E7.5, which is approximately the human equivalent of end of first trimester/early second trimester. Preeclampsia is defined as new onset hypertension after 20 weeks gestation. By starting administration at E7.5, we are able to induce hypertension by E14.5, which is approximately mid-way through gestation, modelling the clinical condition. The methods have been updated to address this; see page 5, lines 129-131 as below. "…was administered daily from embryonic day (E)7.5 to E17.5 of pregnancy (approximately early second trimester to term in human equivalent to model early onset disease) via 100µL subcutaneous injection." 2. Was the sex of the fetuses/pups recorded? RESPONSE: We thank the reviewer for this question. We did not record the sex of the fetuses/pups in this study. However, this is a good suggestion, and we will consider examining differences with fetal sex in future studies. 3. Which placenta was taken from each litter? Was this random or based on litter order/weight within the litter? Was there an equal representation of female and male placentas across the litters per group analysed for histological and molecular assays? RESPONSE: In this particular study, we did not look at the effect of fetal sex. We chose 1-3 placentas at random for analyses. These placentas were collected from the subset of mice culled at E17.5, as the dams that gave birth ate their placentas. We have now specified this in the figure legends. We have updated the Figure 4 legend (placenta gene expression); please see page VII of the figures, as below: "Control n=6 mice, L-NAME n=10 mice, 1-3 placentas were chosen at random from each."
We have also updated the Supplementary Figure S3 legend (placental histology): "Placentas from the mice culled at E17.5 were chosen at random for analysis, each placenta from a different dam." 4. Was there any effect of the L-NAME treatment on gestational length? RESPONSE: We thank the reviewer for this question. We were also curious to know whether L-NAME may have induced preterm birth. However, there was no significant effect on gestational length. All mice gave birth on E19 (night), and thus all pups were culled on E19.5, the following morning. We have added the following to state this; please see page 11, lines 263-264: "L-NAME administration did not alter gestational length; all mice gave birth by the morning of E19.5." 5. Why were pups culled immediately after birth, rather than leaving the mums to support them through to weaning, which would be the normal situation? Include a sentence explaining the rationale in the methods. RESPONSE: We thank the reviewer for this suggestion. In this study, due to the large number of dams, we did not have the capacity to keep all the pups (in this study there were 400+ pups in total). We acknowledge that this may have altered long-term cardiovascular risk, and other studies are investigating these effects. We have addressed the benefits of studying the effect of lactation in our discussion. Please see page 18, lines 509-512 as below: "Another important aspect would be to assess whether lactation may have an effect on cardiovascular health post-hypertensive pregnancy, as studies have shown that lactation may reduce vasocontractility, enhance vasorelaxation and increase vessel distensibility [104]." 6. A description of how pups and mums were culled post birth needs to be included. RESPONSE: We thank the reviewer for their suggestion. We have updated the methods to include this. Please see page 6, lines 147-148 for details of pup euthanasia: "Day (D)1 pups were counted, weighed, and crown-to-rump length recorded before euthanasia by decapitation." Dams that gave birth were culled at the postpartum time points in the same way as those that were pregnant. We have edited the methods to refer to these details, and make this clearer. Please see page 6, lines 149-152: "At cull (anaesthesia and cervical dislocation), cardiac puncture was performed and urine, maternal organs (heart and kidney collected in RNAlater) and intestinal tract collected as described above." 7. Include details of the inter- and intra-assay CVs for the ELISAs/kits used to measure maternal circulating factors. RESPONSE: We thank the reviewer for their question. As reviewer 2 requested (Q5), the sensitivity of the ELISA kits has been provided. Further, we have clarified that the inter- and intra-assay coefficients for the purchased ELISA kits are under 10%, as below. Please see page 8, lines 195-196: "…(R&D Systems, Minneapolis, MN, USA) respectively, according to manufacturer's instructions (inter and intra assay coefficients of variation under 10%)." 8. Did the reference genes used in qPCR change per group? Please include a comment about this in the methods. RESPONSE: As stated in response to reviewer 2 (Q4), we used reference genes that we determined were stable within each tissue, even with treatment. We have added information in the methods to clarify this. Please see the updated methods on page 9, lines 207-209: "Kidney histology was visualized and captured using a Nikon Eclipse Ci microscope and camera at 100µm magnification (n=3 sections/condition, whole kidneys from two dams were analysed per group)."
We have clarified the region of interest in the Figure 2 legend: "Histology images of the cortex of PBS treated control mice…" 10. How many litters per group were analysed for placental histology? What is meant by placental blood space volume? Is that the fetal and maternal blood space volumes combined? Were histological analyses conducted blind to the group? Please clarify in the text and include an image of a placental cross-section to indicate the compartments measured. RESPONSE: We thank the reviewer for their question. We were only able to collect placentas from the subset of mice that were culled at E17.5, as the dams that were left to give birth consumed the placentas. The placental blood space volume that we've presented in this study included the combined fetal and maternal blood spaces, and was assessed blind to sample group allocation. We have altered the methods to clarify this, and have also added a citation to a paper in which we have previously published this type of analysis, including an image of a placental cross-section. Please see the Supplementary Figure S3 legend, clarifying the number of samples as below: "Placentas from the mice culled at E17.5 were chosen at random for analysis, each placenta from a different dam. Data presented as mean ± SEM. n=3-5 mice/group." Please see the clarification of histological assessments on page 9, lines 211-214 as below: "General histological appearance was assessed with the Aperio ImageScope (v12.3.0.5056) software. The total blood space (including the combined fetal and maternal blood space area), and the total cross-sectional area of the junctional zone and the labyrinth zone were measured by a blinded assessor [47]." RESPONSE: We thank the reviewer for this suggestion. As written in response to Reviewer 1 (1c), with advice from our biostatistician we have now analysed the data using a mixed effects model to account for differences between litters. Please see updated Figure 3, and the updated description of statistical analysis on page 9, lines 219-222. 14. Figure 4, how many placentas per litter were analysed? Include details in the legend. RESPONSE: We thank the reviewer for their question. As stated above in response 3, we were only able to collect placentas from the subset of mice that were culled at E17.5, as the dams that were left to give birth consumed the placentas. Of this subset, we chose 1-3 placentas per litter at random for assessment of gene expression. We have updated the Figure 4 legend for the placental gene expression changes as below: "Control n=6 mice, L-NAME n=10 mice, 1-3 placentas were chosen at random from each." 15. Please show individual values for Mmp9 and Timp for the maternal postpartum tissues in the supplementary data (so it is clear whether a change in both or one is driving the altered ratio between them). RESPONSE: We thank the reviewer for this suggestion. As Reviewer 2 (Q5) also suggested, we have added the Mmp and Timp data alone, before presenting the ratio of their expression. Please see Supplementary Figures S10F-H and S11F-I. 16. The finding of altered vascular reactivity in L-NAME-exposed dams post-pregnancy is subtle but interesting. I realise it would be out of scope for this study, but further work assessing the impact of a superimposed hit like a high salt diet, or in animals as they get older, would be valuable to see if changes in vascular reactivity may be exacerbated. Perhaps this could be mentioned in the discussion to help direct future work?
RESPONSE: We agree with the reviewer that this would indeed be an interesting future direction. In our discussion, we have highlighted that a second hit with a high fat/salt/sugar diet, and using animals with a cardiovascular disease predisposition, could be of interest to establish a model of long-term cardiovascular disease following preeclampsia. Please see page 18, lines 504-507, where we have highlighted this: "Additionally, it would be interesting to use a strain or genetic model with increased cardiovascular disease risk, potentially an obese or high salt diet mouse model, or a strain genetically modified to enhance vascular dysfunction, mimicking predisposition to preeclampsia and long-term cardiovascular disease." 17. Given the changes in Hmox1 expression, it would be valuable if the authors measured oxidative stress levels, such as by oxyblot or MDA staining, to see if the placentas may be stressed in this L-NAME model. Similarly it would be interesting to measure levels of ROS in the maternal blood. If L-NAME-treated mice are not exhibiting oxidative stress, it could be the explanation for the subtle effects on maternal postpartum health. RESPONSE: We thank the reviewer for this suggestion. This is indeed an interesting thought, and it would be interesting to assess oxidative stress further in future studies. However, as detailed in response to Reviewer 2 (Methods, response 1), our next steps would be more focused on using a second hit to impair other pathways in addition to the nitric oxide pathway, or to use L-NAME in an animal with a cardiovascular disease risk background, to more accurately model the complexity of preeclampsia (as specified on page 18, lines 502-512). 18. Given the change in Mmp9:Timp, it would also be valuable if the authors assessed whether there was any persistence of pathological changes in the maternal kidney postpartum, including leukocyte influx in the L-NAME-treated dams. RESPONSE: We thank the reviewer for this suggestion. We did look at the kidneys at each time point post-pregnancy. However, no persistent changes were observed in the 10-week post-partum kidney sections, and so this was not presented in this study. Thank you for submitting your revised manuscript entitled "The L-NAME mouse model of preeclampsia and impact to long-term maternal cardiovascular health". We would be happy to publish your paper in Life Science Alliance pending final revisions necessary to meet our formatting guidelines. Along with the points mentioned below, please tend to the following:
-please add the ORCID ID for the secondary corresponding author. You should have received instructions on how to do so
-please upload your main and supplementary figures as single files and add a separate section for your figure legends to the main manuscript
-please add your author contributions to the main manuscript text
-please consult our manuscript guidelines https://www.life-science-alliance.org/manuscript-prep and make sure that your manuscript sections are in the correct order
-please use the format of 10 authors et al. in the references
-we encourage you to introduce the panels in your figure legends (Figure S2, Figure S7, Figure S10, Figure S11, Figure S15)
If you are planning a press release on your work, please inform us immediately to allow informing our production team and scheduling a release date. LSA now encourages authors to provide a 30-60 second video where the study is briefly explained.
We will use these videos on social media to promote the published paper and the presenting author (for examples, see https://twitter.com/LSAjournal/timelines/1437405065917124608). Corresponding or first authors are welcome to submit the video. Please submit only one video per manuscript. The video can be emailed to contact@life-science-alliance.org To upload the final version of your manuscript, please log in to your account: https://lsa.msubmit.net/cgi-bin/main.plex You will be guided to complete the submission of your revised manuscript and to fill in all necessary information. Please get in touch in case you do not know or remember your login name. To avoid unnecessary delays in the acceptance and publication of your paper, please read the following information carefully. A. FINAL FILES: These items are required for acceptance.
--An editable version of the final text (.DOC or .DOCX) is needed for copyediting (no PDFs).
--High-resolution figure, supplementary figure and video files uploaded as individual files: see our detailed guidelines for preparing your production-ready images, https://www.life-science-alliance.org/authors
--Summary blurb (enter in submission system): A short text summarizing the study in a single sentence (max. 200 characters including spaces). This text is used in conjunction with the titles of papers, hence it should be informative and complementary to the title. It should describe the context and significance of the findings for a general readership; it should be written in the present tense and refer to the work in the third person. Author names should not be mentioned.
Physiological Rhythms and Biological Variation of Biomolecules: The Road to Personalized Laboratory Medicine

The concentration of biomolecules in living systems shows numerous systematic and random variations. Systematic variations can be classified based on the frequency of variations as ultradian (<24 h), circadian (approximately 24 h), and infradian (>24 h), which are partly predictable. Random biological variations are known as between-subject biological variation, which is the variation among the set points of an analyte from different individuals, and within-subject biological variation, which is the variation of the analyte around individuals' set points. The random biological variation cannot be predicted but can be estimated using appropriate measurement and statistical procedures. Physiological rhythms and random biological variation of the analytes could be considered the essential elements of predictive, preventive, and particularly personalized laboratory medicine. This systematic review aims to summarize research that has been done on the types of physiological rhythms, biological variations, and their effects on laboratory tests. We have searched the PubMed and Web of Science databases for biological variation and physiological rhythm articles in English, without time restrictions, with the terms "Biological variation, Within-subject biological variation, Between-subject biological variation, Physiological rhythms, Ultradian rhythms, Circadian rhythm, Infradian rhythms". It was concluded that, for effective management of predictive, preventive, and personalized medicine, which is based on the safe and valid interpretation of patients' laboratory test results, both physiological rhythms and biological variation of the measurands should be considered simultaneously.

Introduction

Since the beginning of time [1], the universe has been changing. Heraclitus, an ancient Greek philosopher (535-475 BC), strongly emphasized 'change' in the universe (universal flux) [2]. He asserted that "Life is Flux" [3], which means that the only constant in life is "change". To survive in a changing environment, living systems must adapt their internal systems to external cues [4]. This adaptation is observed in all living systems and is organized at the cellular level by the molecular clocks; however, the adaptive ability decreases during aging [4][5][6][7]. Human metabolism is a dynamic process that shows numerous rhythmic and non-rhythmic variations; consequently, variations are inseparable parts of human metabolism. It should be noted that metabolic variations are not only necessary for adaptation but are also required for internal organization and disease prevention. Moreover, the changes in the amounts of these metabolic variations could be applied to the prediction of disease or the introduction of individual therapeutic methods based on a patient's specific symptoms [8][9][10][11][12][13]. In other words, rhythmic and non-rhythmic variations could be applied to 3P (prevention, prediction, and personalized) medicine.

Biological Rhythms

Biological rhythms are the inherent rhythmicity observed in living systems and are characterized by behavioral, physiological, or molecular events [16]. Different rhythms have been investigated at the organismic level, ranging from the milliseconds of a nerve discharge to the annual rhythms of hibernation and even longer [17].
Endogenous rhythms are inseparable parts of almost all living systems, ranging from photosynthetic prokaryotic cells to higher-level organisms, and act as regulators that adapt internal biological processes to external environments [18,19]. In healthy individuals, biological rhythms are interconnected and cooperate like a symphonic orchestra at all levels of the metabolic organization. Rhythmicity is regulated by biological clocks, which are complex systems consisting of multiple oscillators, each of which has at least one feedback loop (Figure 1). For instance, temperature compensation is one of the most important characteristics of biological clocks: it is critical for a biological system to maintain a similar oscillation period over a wide range of temperatures [16]. Various classification systems have been used for biological rhythms [20]; among the most important of them is the classification based on the geophysical day/night cycle: ultradian (shorter than 24 h), circadian (almost equal to 24-h daily rhythms), and infradian (longer than 24 h) rhythms [21,22].

Ultradian Rhythms

Ultradian rhythms are defined as all types of 'short-term rhythms' with a period of less than 24 h, but commonly with periods in the range of 20 min to 6 h [23,24]. Ultradian rhythms have been detected in all types of living systems, ranging from eukaryotic cells [25,26] to mammals [27], and are usually phase-coupled to circadian rhythms in healthy subjects, i.e., the ups and downs of ultradian rhythms appear each day at approximately the same time [17]; they also play crucial roles in intracellular coherence [28]. Besides, for the maintenance of life, ultradian timekeeping is required to coordinate biochemical events and regulate metabolism [36,37] by increasing the efficiency of signal transmission. For example, in comparison to a constant high level of hormone release, pulsatile release is more efficient for the regulation of metabolism. Regarding laboratory tests, ultradian rhythms such as episodic hormone secretions [38] govern many endocrine functions. For example, the episodic secretion of insulin from the beta-cells of the pancreas and of gonadotropin-releasing hormone from the hypothalamus regulate blood glucose levels and reproductive functions, respectively [38]. In addition to hormones, the expression levels of various genes [38,39] and the concentrations of analytes such as glucose [40,41] are under the influence of ultradian rhythms. This is especially important for the sampling time of the analytes whose measurement results are used for various purposes, including diagnosis, screening, and monitoring of patients, and also for the determination of the reference intervals (RIs) and clinical decision limits (CDLs) of the measurands. As described in Sections 4 and 5, ultradian rhythms of the measurands have a great influence on the diagnosis, screening, and monitoring of patients and should be considered for the safe and valid interpretation of patients' laboratory test results.

Circadian Rhythm

The term 'circadian rhythm' (from the Latin "circa diem") was first proposed by Franz Halberg in 1959 to express the daily oscillations of endogenous biological processes associated with the 24-h rotation cycle of the earth [42]. Circadian rhythm thus describes the 24-h oscillations of biological processes associated with the dark/light cycle and the earth's daily rotation [19].
Various external stimuli such as light exposure and food intake play dominant roles in the regulation of circadian rhythms, and the dark/light cycle is the main external synchronizer of the circadian rhythm [43]. However, it should be noted that the 24-h oscillation time can be changed slightly by different stimuli [44]. Circadian rhythms develop prenatally, and progressive maturation is observed particularly after birth [45]. The pattern of these rhythms can be categorized into two phases: an activity and feeding phase, and a rest and fasting phase. Foods taken in the active phase provide the main substrates such as amino acids, fatty acids, and monosaccharides for energy production and the synthesis of carbohydrates, lipids, and proteins. During the resting period, the stored compounds are mobilized to sustain the homeostasis of the metabolism [46]. External photic and non-photic factors have a great influence on circadian behavior. Photic signals are processed by the eye and transmitted to the hypothalamic suprachiasmatic nucleus (SCN) through the retinohypothalamic tract. Melanopsin plays a crucial role in transmitting the photic signals from the eye to the SCN [47]. The SCN has a complex structure composed of approximately 10,000 neurons, behaves as a cell-autonomous circadian oscillator, and is accepted as the master internal pacemaker, i.e., the central circadian clock [48]. The SCN has critical properties such as temperature compensation, a free-running period under constant conditions, and intrinsic rhythmicity that make it an excellent biological oscillator [16]. In addition to the central clock, the presence of peripheral clocks in different organs such as the liver, skeletal muscle, heart, lungs, etc., has been reported [19]. In fact, the biological clock for circadian rhythms is present in all known cells of multicellular organisms and synchronizes the tissues with the external environment via the master circadian pacemaker located in the SCN [49,50]. The activity of peripheral clocks is synchronized by the SCN, and both of them regulate the daily rhythmicity of metabolism [51,52]. In contrast to the SCN, photic signals have little effect on peripheral clocks; however, non-photic signals have profound effects on them [53,54].
Although the master circadian clock in the SCN strongly influences peripheral clocks, there is evidence that peripheral clocks can also influence the master circadian clock [55,56]. The molecular mechanisms behind the circadian clocks were deciphered by Michael Rosbash, Jeffrey Hall, and Michael Young, who won the Nobel Prize in Physiology or Medicine in 2017 [57]. The molecular mechanism of circadian rhythmicity is a complex procedure including transcription, translation, post-transcriptional regulation, and protein-protein interactions [49], which are regulated via both positive and negative feedback loops [58][59][60]. The feedback loops in circadian rhythms at the molecular level are summarized in Figure 1. Briefly, the mechanism is based on complex autoregulatory transcription-translation feedback loops that control the rhythmic expression of clock-controlled genes (CCGs), leading to oscillations in numerous molecules and cellular functions. As shown in Figure 1, the CLOCK-BMAL1 complex rhythmically binds E-boxes and activates CCGs and other parts of the clock. On the other hand, the activity of CLOCK and BMAL1 is under the control of the PER and CRY proteins, which translocate to the nucleus and inhibit CLOCK-BMAL1. These proteins are degraded within the cell and a new cycle begins every 24 h. In addition to this loop, there is a second loop that regulates the transcription of Bmal1 (Figure 1): it is inhibited by REV-ERB and activated by ROR. The detailed feedback loops of the molecular clock can be found in [52,61,62]. Although circadian rhythms are based on transcription-translation feedback loops, i.e., clock proteins regulate their own transcription by a negative feedback mechanism, which produces a rhythmic clock gene expression [63], it has been shown that transcription-translation is not a prerequisite for circadian oscillations [64][65][66][67]. For instance, O'Neill et al. [66] demonstrated that in red blood cells, peroxiredoxins, a highly conserved family of antioxidant enzymes that play a dominant role in regulating intracellular peroxide levels [68,69], undergo 24-h redox cycles, i.e., the nucleus, and consequently transcription-translation, is not always required for circadian rhythms in humans. The change in the levels of numerous molecules within 24 h can be analyzed under the big umbrella of 'circadiomics' [22,70]. The detailed molecular mechanism of the circadian rhythm can be found in [19].
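To make the negative feedback logic of such transcription-translation loops concrete, the following minimal sketch simulates a Goodwin-type oscillator, a classical three-variable caricature in which a clock protein represses its own transcription. It is illustrative only: the variable names, rate constants, and Hill coefficient are arbitrary assumptions, not values fitted to real clock-gene data, and the resulting free-running period is not tuned to 24 h.

```python
# A minimal sketch of a transcription-translation negative feedback loop,
# modelled as a Goodwin-type oscillator. All parameters are illustrative
# assumptions, not values fitted to real clock-gene data.
import numpy as np

def goodwin(hours=200.0, dt=0.01, n=10.0, k=0.1):
    steps = int(hours / dt)
    t = np.linspace(0.0, hours, steps)
    x = np.empty(steps)  # clock-gene mRNA
    y = np.empty(steps)  # cytoplasmic clock protein
    z = np.empty(steps)  # nuclear repressor
    x[0], y[0], z[0] = 0.1, 0.2, 2.0
    for i in range(steps - 1):
        # transcription is repressed by the nuclear protein z (Hill repression),
        # mirroring the PER/CRY inhibition of CLOCK-BMAL1 described above
        dx = 1.0 / (1.0 + z[i] ** n) - k * x[i]
        dy = x[i] - k * y[i]
        dz = y[i] - k * z[i]
        x[i + 1] = x[i] + dx * dt
        y[i + 1] = y[i] + dy * dt
        z[i + 1] = z[i] + dz * dt
    return t, x

t, mrna = goodwin()
# estimate the free-running period from successive peaks in the mRNA trace
peaks = [i for i in range(1, len(mrna) - 1) if mrna[i - 1] < mrna[i] > mrna[i + 1]]
print("approximate period (h):", np.diff(t[peaks]).mean().round(1))
```

The qualitative point is that delayed negative feedback with a sufficiently steep (cooperative) repression term is enough to generate self-sustained oscillations; the real clock achieves its temperature-compensated, approximately 24-h period through the much richer machinery summarized above.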
Circadian rhythms have been observed in numerous laboratory tests, including hormones [71] and various analytes such as leukocytes [72][73][74][75], electrolytes [76], trace elements [77,78], glucose [79], etc. Among the hormones, melatonin (N-acetyl-5-methoxytryptamine) and cortisol in particular come to the fore. Melatonin is secreted by the pineal gland and liver [80], has strong antioxidant activity, and regulates the circadian rhythms of various physiological activities. Its biosynthesis increases at night [81] and is inhibited during the daytime by the light detected by the retina [82]. Similar to melatonin, the serum concentration of cortisol shows marked variation within the day, with the highest level detected in the early morning [83] (Figure 2).

Figure 2. Within-day variation of melatonin, cortisol, and body temperature. Reprinted from Ref. [84]. The within-day variation in body temperature is in the opposite direction to the variation observed in melatonin and cortisol.

Infradian Rhythms

The period of infradian rhythms is longer than that of circadian rhythms, ranging from days to years. For biological systems, typical examples are menstruation, hibernation, migration, breeding, molting, etc. [85,86]. In humans, and particularly in laboratory medicine, the most important infradian rhythms are observed in analytes that regulate the menstrual cycle and in analytes under the influence of sunlight. In the menstrual cycle, pituitary gonadotropins (follicle-stimulating and luteinizing hormones) and ovarian hormones (estrogen and progesterone) show infradian rhythmicity (Figure 3) [87,88].
These hormonal changes that regulate the menstrual cycle in women also have profound effects on other physiological functions, particularly on thermoregulation [89]. In comparison to the follicular phase of the menstrual cycle, the core body temperature (CBT) measured in the early morning is 0.3 to 0.7 °C higher in the luteal phase [89,90], which is attributed to the thermogenic effect of progesterone [91]; consequently, it can be concluded that in addition to ultradian [31][32][33][92] and circadian rhythms [33,92,93], an infradian rhythm is observed in the CBT of women. Seasonal infradian rhythms play crucial roles in body functions, particularly in metabolism, reproduction, and immune responses [95]. Recent studies have shown seasonal variation of immunity and related analytes [95][96][97][98]. Dopico et al. [96] found seasonal expression profiles in more than 4000 protein-coding mRNAs in white blood cells and adipose tissue. They found a pronounced pro-inflammatory transcriptomic profile and increased levels of soluble IL-6 receptor and C-reactive protein during the winter season. Additionally, they found that FcR-gamma-associated processes, B-cell receptor signaling, lysosomes, chemokine signaling, and the phagosome were all strongly associated with winter-expressed modules. Pro-inflammatory mediators are associated with various pathological conditions, including cardiovascular diseases [99][100][101][102], which are a major cause of mortality and morbidity worldwide [103]. Seasonal variation is observed in the analytes that regulate bone mineral metabolism, particularly vitamin D, which is under the influence of sunlight. The intensity of sunlight is not globally uniform and varies depending on geographic latitude and season. Depending on the intensity of the sunlight, the levels of 25-hydroxyvitamin D, calcium, and parathyroid hormone (PTH) fluctuate throughout the year. In other words, the concentration of 25-hydroxyvitamin D is higher in the summer and lower in the winter, while the PTH level shows the opposite trend. In the case of calcium, an elevated level is observed in the autumn [104]. Additionally, serum/plasma lipid levels also show seasonal variation [105][106][107][108][109][110] between winter and summer. For example, in comparison to summer, serum total cholesterol and LDL-cholesterol levels are increased in winter, but the opposite is observed for HDL-cholesterol levels. Detailed information regarding the seasonal variation of lipids can be found in [111].
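Rhythm parameters such as the MESOR (rhythm-adjusted mean), the amplitude, and the acrophase of a circadian or seasonal component can be estimated from time-stamped results with a single-component cosinor fit. The sketch below is a minimal illustration: the synthetic cortisol-like values, the noise level, and the sampling scheme are assumptions for demonstration, and the same function applies to a seasonal component simply by setting the period to 365 days.

```python
# A minimal single-component cosinor sketch for estimating MESOR, amplitude,
# and acrophase from time-stamped laboratory results. The data are synthetic.
import numpy as np

def cosinor(t, y, period):
    """Least-squares fit of y(t) = MESOR + A*cos(2*pi*t/period + phi)."""
    w = 2.0 * np.pi * t / period
    X = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
    mesor, b, g = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b, g)        # one-half of the peak-to-trough distance
    acrophase = np.arctan2(-g, b)     # phase of the fitted peak (radians)
    return mesor, amplitude, acrophase

rng = np.random.default_rng(1)
t = np.arange(0.0, 72.0, 2.0)                                # hours
true = 12.0 + 5.0 * np.cos(2.0 * np.pi * (t - 8.0) / 24.0)   # peak at 08:00
y = true + rng.normal(0.0, 0.8, t.size)                      # random variation
mesor, amp, phi = cosinor(t, y, period=24.0)
peak_hour = (-phi * 24.0 / (2.0 * np.pi)) % 24.0
print(f"MESOR={mesor:.1f}, amplitude={amp:.1f}, peak at ~{peak_hour:.1f} h")
```

Note that the amplitude returned here matches the definition used later in this review (one-half of the difference between the minimum and maximum values).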
Interactions among Ultradian, Circadian, and Infradian Rhythms

Although it is accepted in theory that biological systems have 'steady state' situations and 'homeostatic set points' (HSPs) for the measurands, in reality the concentration or the activity of biomolecules oscillates as a composite rhythm consisting of multiple overlapping oscillations [124] (Figure 4). However, this does not mean that the HSPs of the measurands cannot be determined; in such cases, they can be estimated using regression and other trend-analysis techniques. In comparison to circadian and infradian rhythms, ultradian events are aperiodic and are therefore classified as 'episodic ultradian events' [24]. Furthermore, while circadian and infradian rhythms are adaptation mechanisms for predictable environmental changes, at least in theory, ultradian rhythms can be considered adaptation mechanisms for unpredictable environmental changes [24]. It should be noted that the circadian clock behaves as a modulator between ultradian and infradian rhythms [85].

Disruption of Biological Rhythms

In comparison to ultradian and infradian rhythms, the disruption of circadian rhythms has been analyzed in detail. Various factors such as shift work [125], jet lag [126], social jetlag [127,128], exposure to artificial light [129], and irregular eating times [130] disrupt circadian rhythms. Disruption of circadian rhythms leads to numerous serious health problems such as diabetes mellitus [131], atherosclerosis [132], autoimmune diseases [133], obesity [134], cancer [135], insomnia [136], etc. Unlike for circadian rhythms, limited data related to the disruption of ultradian and infradian rhythms are available [137][138][139][140].

Figure 4. Composite rhythms. Some analytes and physiological events are under the influence of more than one variation; for example, cortisol secretion has both circadian and ultradian rhythms and the menstrual cycle has infradian and circadian rhythms. Reprinted from Ref. [85].
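The composite character illustrated in Figure 4 can be made tangible with a toy simulation that superimposes an ultradian, a circadian, and an infradian sinusoid on a homeostatic set point, plus random variation. All periods and amplitudes below are arbitrary illustrations, not measured values; the point is that, in this idealized phase-locked example, sampling at a fixed clock time removes most of the rhythmic contribution, which is one reason biological variation protocols standardize the sampling time.

```python
# A toy composite rhythm (cf. Figure 4): ultradian, circadian, and infradian
# sinusoids superimposed on a homeostatic set point plus random variation.
# All periods and amplitudes are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0.0, 24.0 * 28, 1.0)                     # hourly values, 4 weeks
set_point = 100.0
ultradian = 2.0 * np.sin(2 * np.pi * t / 4.0)          # ~4-h pulses
circadian = 5.0 * np.sin(2 * np.pi * t / 24.0)         # daily rhythm
infradian = 3.0 * np.sin(2 * np.pi * t / (24.0 * 28))  # ~monthly cycle
noise = rng.normal(0.0, 1.0, t.size)                   # random variation
signal = set_point + ultradian + circadian + infradian + noise

# Sampling at a fixed clock time (08:00 daily) removes the phase-locked
# ultradian and circadian contributions in this idealized example.
daily_0800 = signal[8::24]
print("SD of all hourly values :", round(float(signal.std()), 2))
print("SD of 08:00 samples only:", round(float(daily_0800.std()), 2))
```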
For all known analytes that have BV data in the literature, the calculated total variation of repeated measurement results was found to be higher than the analytical variation (the variation of the measurement systems), indicating the presence of additional variation, i.e., biological variation. The sampling time for repeated measurements is crucial to obtain reliable BV data. To eliminate the effect of ultradian variation, all samples should be taken at the same time of the day, i.e., measurements of analytes should not be done by combining the samples, some taken in the morning and some in the evening. Similar care should be taken to eliminate the infradian variation of the analytes. Additionally, all samples should be analyzed in a single run to eliminate the between-run analytical variation. For an analyte, BV has two main components: between-subject and within-subject biological variations [141]. These two main variations are also the fundamental elements of personalized laboratory medicine. Between-Subject Biological Variation Between-subject BV (CV G ) is the variation among the individuals' set points of the analytes ( Figure 5). For an analyte, theoretically, it is accepted that each individual has a specific set point, and the concentration of the analytes varies around that set point. It should be noted that the set point of an analyte used to calculate the CV G does not need to be under strict homeostatic control. The set point can be accepted as the mean value of the repeated measurement results of the individual's data when he/she is at a steady state i.e., the concentration of the analytes is stable. Since the mean value of the analytes is not constant in healthy and diseased individuals and in different age groups, the set point of the analytes can change with age and diseases [141,142]. Depending on the types of analytes the CV G shows a wide distribution. Based on the data given by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Biological Variation (BV) database, it ranges from 1.0% for sodium to 103.4% for cancer antigen 72-4 [143]. In comparison to healthy individuals, currently, there are not sufficient data to illustrate the variation in the set points of the analytes in different clinical situations. Within-Subject Biological Variation Within-subject BV (CV I ) is accepted as the variation of an analyte around its set point ( Figure 5) in an individual. In some analytes such as sodium and calcium, CV I is under strict homeostatic control, while in others, such as serum enzymes, this control is not so strict. The meta-analysis of CV I of more than 200 analytes can be found on the EFLM BV database [143]. Similar to CV G , the CV I of analytes shows a wide distribution range from 0.5% for sodium to 135% for adrenalin [143]. The EFLM BV database is a dynamic database, being updated when new data related to the BV of the analytes are available, and therefore the CV G and CV I of analytes may change when the database is updated [143]. Within-Person Biological Variation Although, theoretically, both within-person (CV P ) and CV I represent the same variation, i.e., the variation of the analytes around the homeostatic set point for an individual, actually they are not exactly the same. The difference between CV P and CV I is the source of the data used to calculate these parameters. 
Within-Person Biological Variation

Although, theoretically, both within-person BV (CVP) and CVI represent the same variation, i.e., the variation of the analytes around the homeostatic set point of an individual, in practice they are not exactly the same. The difference between CVP and CVI is the source of the data used to calculate these parameters. The CVI is calculated using the results of the repeated measurements of a group of individuals (a population) and is therefore not specific to the individual, while CVP is obtained using the repeated measurement results of the individual and is therefore specific to that individual.

Clinical Applications of Biological Variation Data

Both CVI and CVG are widely used in medical laboratory practice. The BV data have been used: (1) to calculate the index of individuality (II) to evaluate the utility of population-based reference intervals, (2) to calculate the reference change value (RCV), which can be used to make a decision regarding the significance of the difference between an individual's serial measurement results, and (3) to calculate the bias and imprecision needed to set the analytical performance specifications (APS) of the measurement procedure. The details of how to use the BV data for such purposes can be found in [141]. Recently, the CVI and CVP data of the analytes have been used to derive the personalized reference intervals (prRIs) of the analytes [144][145][146][147]. Using the prRIs of the analytes may increase the objectivity of the interpretation of test results. As shown in Figure 5, each individual has their own set point and within-person BV. In other words, for an analyte, the variation of the measurands is not limited to a constant set point and a constant variation around the set point; both the set point and the variation around the set point are individual-specific parameters. Although there are not adequate data reporting the variation of CVI and CVG, from the data of the population-based reference intervals of the analytes it can be speculated that both CVI and CVG change with age and gender [148,149]. For an individual, this makes the variations of the analytes two-dimensional: one dimension is the variation of the set point and the other is the variation of CVI. To illustrate the variation of the analytes, new studies are required to measure the variation of the analytes at different ages and health statuses. As the CVI represents the variation in the measurand around its homeostatic set point, although it is not a rule, the CVI of the measurand is expected to increase in pathological conditions related to the measurand. Various studies have reported increased CVI of specific measurands in individuals with diseases such as diabetes mellitus, chronic kidney disease, different types of cancer, etc. [150,151].
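As a worked illustration of applications (1) and (2) above, the sketch below computes the index of individuality (II = CVI/CVG) and the classical reference change value (RCV = 2^1/2 × Z × (CVA² + CVI²)^1/2), and applies the RCV to a pair of serial results. The CV values used here are placeholders, not entries from the EFLM BV database.

```python
# A hedged sketch of two classical applications of BV data: the index of
# individuality and the reference change value. CV values are placeholders.
import math

def index_of_individuality(cv_i, cv_g):
    # II well below 1 suggests population-based RIs have limited utility
    return cv_i / cv_g

def rcv(cv_a, cv_i, z=1.96):
    # two-sided RCV (%) at ~95% probability for serial results
    return math.sqrt(2.0) * z * math.sqrt(cv_a**2 + cv_i**2)

cv_a, cv_i, cv_g = 2.0, 5.0, 8.0   # analytical, within-, between-subject CV (%)
print(f"II  = {index_of_individuality(cv_i, cv_g):.2f}")
print(f"RCV = ±{rcv(cv_a, cv_i):.1f}%")

# monitoring example: is a rise from 100 to 118 units a significant change?
change_percent = (118.0 - 100.0) / 100.0 * 100.0
print("significant" if change_percent > rcv(cv_a, cv_i) else "not significant")
```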
Figure 5. Median values with range (minimum-maximum) of platelet count for individuals based on weekly samplings for 10 weeks. Dashed lines indicate the 5th and 95th percentiles, and the continuous line is the median value with 95% CIs. Reprinted from Ref. [152]. The within-person variation and the set point of each individual are different; additionally, the variation of the set point for males is lower than that for females.

Reliability of Biological Variation Data

Since the BV data are widely used in the calculation of critical parameters such as RCV, II, APS, and prRI, the reliability of these data is essential [153,154]. The total variation of an analyte can be accepted as the Gaussian combination of pre-analytical, analytical, and biological variation [141]. To obtain reliable BV data from the results of repeated measurements, pre-analytical variation must be minimized; however, this is not so easy in practice and requires a strict protocol for pre-analytical procedures and robust statistical techniques [155]. Recently, the EFLM biological variation working group (BV-WG) set up the European Biological Variation Study (EuBIVAS) project, a multicenter study that collected samples from five different European countries using a stringent pre-analytical protocol and robust statistical techniques [155] and updated the BV data of numerous analytes [156]. Additionally, the EFLM BV task group developed the Biological Variation Critical Appraisal Checklist (BIVAC) [157] to evaluate the quality of published BV data and select the appropriate papers for meta-analysis of BV data. The meta-analysis of the BV data of numerous analytes is given in the EFLM BV database [143]. In comparison to the historical database [158,159], lower BV values have been observed for most of the measurands.

Biological Variation in Diseases

Disruption of physiological rhythms has been analyzed in detail [19,51,80,130,134-136,160], but the same is not the case for BV. The BV data of measurands were mostly obtained from healthy individuals and, in comparison, limited data are available for the BV of measurands in patients. Indeed, the lack of adequate and reliable data for the BV of the analytes in patients limits the usefulness of BV data in clinical practice. Patient monitoring is an important step in the evaluation of the effects/side effects of treatments and the prognosis of diseases. For this purpose, RCV is used to evaluate the significance of the difference between consecutive measurement results. It combines the analytical and biological variations of the measurands and gives the uncertainty associated with the results of consecutive measurements. If the difference between two consecutive measurement results is higher than the RCV, it is accepted as significant; otherwise, it is considered insignificant.
However, the problem is that the BV component of RCV is obtained from healthy individuals but is used to evaluate the significance between the serial measurements of patients' test results. This is only valid if there is no significant difference between the measurand CVI values of patients and healthy individuals. Although a BV database for patients is not available, the data of a few papers show that the BV of analytes measured in patients' samples may be different from the BV of the analytes measured in healthy individuals. In this case, it may not be rational to use the BV values derived from the data of healthy individuals to calculate the RCV used to monitor patients' test results. Parrinello et al. [161] reported the CVI of glucose, HbA1c, fructosamine, glycated albumin, and 1,5-anhydroglucitol in both elderly diabetic and nondiabetic individuals and found elevated CVI for all parameters in the diabetic subjects. Among these parameters, the CVI (95% confidence interval, CI) of glucose, 9.6 (7.3-11.8), and 1,5-anhydroglucitol, 5.7 (4.2-7.2), were found to be significantly higher in diabetic patients than the CVI of glucose, 5.3 (4.6-6.0), and 1,5-anhydroglucitol, 2.9 (2.7-3.2), in nondiabetic individuals. Similarly, Rizi et al. [40] measured the CVI of glucose, insulin, total cholesterol, LDL-cholesterol, HDL-cholesterol, and triglyceride in lean insulin-sensitive and obese insulin-resistant individuals and found elevated CVI for all parameters in the obese insulin-resistant individuals. However, due to the lack of CIs in the paper, the significance of the difference between the two groups could not be evaluated. In addition to the limited number of papers that have measured the BV of some analytes in patients, the reliability of these data is questionable. The individuals must be at a steady state and pre-analytical variation should be minimized, and therefore obtaining reliable BV data from patients' samples is not an easy task.

Physiological Rhythms and Reference Intervals

Clinical decision-making based on laboratory tests is a comparative procedure, and physicians need reference data against which to compare laboratory test results. In daily practice, physicians usually use RIs or CDLs as the reference data for comparison [162]. Both RIs and CDLs are powerful tools for the diagnosis and screening of disease. RIs of the measurands can be obtained from the data of a healthy population (popRI) or of an individual (prRI), but CDLs are obtained from patients' data [162]. To derive the popRI, briefly, samples are collected from at least 120 reference individuals [163]. The concentration of the measurand is measured and, after excluding the lowest and highest 2.5% of the data using appropriate statistical techniques, the remaining central 95% of the data is accepted as the popRI of the analyte. It should be noted that if partitioning is necessary due to covariates such as age group, sex, ethnicity, race, etc., then n×120 (n: the number of covariates) reference individuals should be recruited to derive the popRI of the measurands [163]. In comparison to popRI, deriving a prRI is an easy procedure. It can be derived using three or more repeated measurement results of the analyte and the CVI [147]. However, it is recommended to use CVP instead of CVI, in which case five or more repeated measurement results are sufficient [144].
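To make the two procedures concrete, the sketch below derives a nonparametric popRI as the central 95% of a reference population and one simple formulation of a prRI from an individual's repeated results plus the within-subject and analytical CVs. The prRI formula is a simplified illustration of the published approaches [144][145][146][147], not a validated recipe, and all numbers are invented.

```python
# A minimal sketch contrasting a nonparametric popRI with one simplified
# formulation of a prRI. All numbers are invented for illustration.
import numpy as np

def pop_ri(reference_values):
    # nonparametric central 95%: the 2.5th and 97.5th percentiles
    return np.percentile(reference_values, [2.5, 97.5])

def personal_ri(repeated_results, cv_i_percent, cv_a_percent, z=1.96):
    set_point = float(np.mean(repeated_results))  # estimated homeostatic set point
    total_cv = np.hypot(cv_i_percent, cv_a_percent) / 100.0
    half_width = z * total_cv * set_point
    return set_point - half_width, set_point + half_width

rng = np.random.default_rng(5)
reference_population = rng.normal(100.0, 10.0, 240)   # >=120 reference subjects
print("popRI:", np.round(pop_ri(reference_population), 1))

my_results = [96.0, 101.0, 98.0, 99.0, 97.0]          # >=5 repeated results
lo, hi = personal_ri(my_results, cv_i_percent=5.0, cv_a_percent=2.0)
print(f"prRI : ({lo:.1f}, {hi:.1f})")
```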
RIs are obtained from samples taken in the morning period of the day, usually between 8:00 and 11:00 a.m. However, the ultradian variation of the analyte limits the diagnostic power of the RIs derived from the measurement results of samples taken in the morning. Since the concentration of analytes is not constant throughout the day, within-day (ultradian) variation can be observed in many analytes [164-167]. Therefore, the sampling time is a critical point in the interpretation of the results of laboratory tests. For example, the measurement results of samples taken at midnight should not be compared to the conventional RI. For such comparisons, RIs derived from samples taken within the suitable period are required. Unfortunately, such time-specific RIs are not commonly available. For sampling time and reliable comparison with conventional RIs, in addition to ultradian rhythmicity, the circadian and infradian rhythmicity of tests should be considered. For circadian rhythms, the times of the minimum and maximum values of the analyte and, additionally, the amplitude of the variation, defined as one-half of the difference between the minimum and maximum values of the analyte, should be considered [168]. Infradian variation should be considered particularly in estimating the RIs of analytes showing seasonal variation, such as vitamin D, calcium, PTH, total cholesterol, LDL-cholesterol, HDL-cholesterol, etc., and monthly variation, such as the hormones regulating the menstrual cycle. In clinical practice, population-based RIs (popRI) based on monthly variation are available for hormones [169-171] but, unfortunately, popRI based on seasonal data are not common, particularly in routine practice. RIs based on seasonal data for both populations and individuals may facilitate the safe and valid interpretation of laboratory test results.
Physiological Rhythms and Reference Change Value Although RCV is a powerful tool for monitoring personal serial measurement results [172], its clinical significance has been criticized [173]. Due to the ultradian rhythms of the measurands, RCV should not be used to evaluate the significance between consecutive measurement results of samples taken at different times of the day. For reliable comparisons, the sampling times of the consecutive measurements should be considered. It should be noted that the CVI used to calculate the RCV is obtained from the data of samples usually taken in the morning period of different days, and therefore it should be applied to measurement results obtained from samples taken in the morning period of different days. Otherwise, circadian and ultradian variations cannot be eliminated and false positive results may be reported. Moreover, in some cases, sampling at the same time on consecutive days may not be sufficient to eliminate variations other than CVI. This can be encountered in the calculation of the RCV for tests that show infradian variations, such as gonadotropic and ovarian hormones, lipids, vitamin D, calcium, etc., especially if the time interval between consecutive measurements approximates the infradian period of the test.
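The caveat above lends itself to a simple guard before applying RCV: compare the sampling clock times of the two specimens and flag the comparison if they differ by more than some tolerance. The sketch below is an illustrative convention of ours, not a standardized procedure; the two-hour tolerance is an arbitrary placeholder.

```python
from datetime import datetime

def hours_apart_in_day(t1: datetime, t2: datetime) -> float:
    """Difference in time of day (hours), ignoring the date and
    wrapping around midnight."""
    minutes1 = t1.hour * 60 + t1.minute
    minutes2 = t2.hour * 60 + t2.minute
    diff = abs(minutes1 - minutes2)
    return min(diff, 1440 - diff) / 60

def rcv_comparable(t1: datetime, t2: datetime, tolerance_h: float = 2.0) -> bool:
    """True if two specimens were drawn at similar times of day,
    so that circadian/ultradian variation does not confound the RCV."""
    return hours_apart_in_day(t1, t2) <= tolerance_h

s1 = datetime(2023, 3, 1, 8, 30)    # morning draw
s2 = datetime(2023, 3, 15, 23, 45)  # midnight draw
print(rcv_comparable(s1, s2))       # False: do not apply RCV directly
```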
Physiological Rhythms and Chronotypes Chronotype, or diurnal preference, reflects an individual's preferred time of day for the sleep/wake or rest/activity cycle [174,175]. Different sleep/wake cycle patterns exist in humans, and mainly three different chronotypes can be distinguished: morning types (M-types), evening types (E-types), and neither types (N-types). M-types and E-types are also subdivided into moderate and extreme types [176]. N-types have no circadian preference and can be considered an intermediate type between M- and E-types [176,177]. In comparison to M-types, E-types have less physical activity and more sedentary time [177]. It should be noted that although sleep is an important dimension, chronotype has multiple dimensions, and other factors such as environmental and social influences shape the chronotypes of individuals [178]. An individual's chronotype can be determined using the Morningness-Eveningness Questionnaire [179]. The timing of food intake, which varies depending on chronotype, has a profound effect on the regulation of biological clocks. Inappropriate timing of food intake (for example, during the night) can desynchronize biological clocks and cause adverse health outcomes [180]. It has been shown that more evening chronotypes are associated with obesity [181], type 2 diabetes [182], hypertension [182], mental health issues [183], etc. The hormonal profiles of M- and E-types are different. For example, the melatonin profile of M-types differs from that of E-types [176], and therefore the melatonin level can be used as a biomarker for chronotypes [184,185]. In comparison to E-types, the onset, acrophase, and offset of the melatonin profile occur approximately 3 h earlier in M-types; therefore, serum melatonin levels measured particularly around 9:00 a.m. can be used as an indicator to differentiate E-types from M-types, since M-types have a lower melatonin level at 9:00 a.m. than E-types do [185]. Differences have been observed between the laboratory test results of different chronotypes. Vera et al. [186] analyzed lifestyle, chronotypes, and metabolic syndrome and found that, in comparison to M-types, triglyceride and insulin levels and Homeostatic Model Assessment-Insulin Resistance (HOMA-IR) were higher but HDL-cholesterol was lower in E-types. On the other hand, Lucasen et al. [187] found elevated adrenocorticotropic hormone (ACTH) and epinephrine levels in E-types compared to M- and N-types, but the differences in other analytes such as glucose, triglyceride, HDL-cholesterol, and LDL-cholesterol were not significant. In several chronic pathological conditions, particularly cancers, sleep/wake or rest/activity rhythm abnormalities, which are associated with chronotypes, are observed [188]. The rest/activity and cortisol circadian rhythms have been associated with the mortality rates of different cancers, including renal [189], lung [190], colorectal [191], and breast cancers [192,193]. Furthermore, the hormonal imbalance between leptin (reduced) and ghrelin (elevated) levels observed in sleep-deprived individuals increases caloric intake, decreases energy expenditure, and leads to weight gain and cardiovascular disease [194-197]. Additionally, chronotherapy, i.e., the optimal timing of treatment, could increase drug efficacy and decrease the side effects of chemotherapy and other therapeutic interventions [198].
3P Medicine and Variations As mentioned previously, both systematic and random variations (or rhythmic and non-rhythmic variations) could affect the prediction and prevention of disease and help find a way to treat disease based on the characteristics of the patient. They could significantly reduce the duration of treatment, suggest a more effective therapeutic method, and provide a more comfortable situation for the patient.
Determining the pattern of physiological rhythms, or the circadian phenotype, of each individual is a major challenge in personalized medicine [199]. A detailed questionnaire including environmental factors such as the duration of the light/dark cycle, exposure to artificial light, information about the individual's daily habits, nutritional status, and physical activities, together with the monitoring of physiological parameters such as blood pressure, will increase the efficacy of targeted therapy and facilitate the planning of personalized treatment. Additionally, algorithms capable of integrating circadian phenotype, personalized RIs, and clinical decision limits will enhance the efficacy of chronobiology in 3P medicine. A lifestyle that is compatible with physiological rhythms can prevent various diseases, including diabetes mellitus, atherosclerosis, autoimmune diseases, obesity, cancer, insomnia, etc. [19]. People are exposed to artificial light from different sources to different degrees and for different durations, which has a negative impact on circadian rhythms by suppressing melatonin and phase-shifting the biological clock [200]. Consequently, in the past 50 years, the quality and average duration of sleep have declined, which is harmful to the health of individuals [200]. All cells in the human body have molecular clocks that oscillate regularly through the binding of transcription factors to various parts of the genome [201]. Transcription factors binding to numerous genes change the levels of different proteins and metabolites and their post-translational modifications. Therefore, disruption of circadian rhythms has been linked to numerous diseases such as cancers, diabetes mellitus, obesity, metabolic syndrome, etc. Chronopharmacology and chronomedicine will increase the therapeutic effect of drugs and treatment procedures in personalized medicine. It has been shown that the targets of numerous drugs show cyclic gene expression [199,202]; for example, the efficacy of some antihypertensive drugs such as angiotensin II receptor antagonists and Ca channel blockers was found to be higher in the evening [199,203]. No two individuals are identical, and this is also true of their chronotypes. Environmental factors such as the duration of the light/dark cycle, exercise, and diet can change tissue-specific rhythms [199]. The benefits of physical activity [204] and food intake [205] vary according to the time of day. Chrononutrition has a great influence on the metabolic and endocrine pathways that regulate the homeostasis of the organism, and the microbiota additionally play a critical role in this interaction [205]. The circadian clock manages the link between homeostasis and nutrition. Overnutrition disrupts circadian rhythms, and obesity causes remodeling of circadian activity [206]. Targeting molecules at their peak expression time may increase the efficacy of treatment. For example, Guan et al. [206] have shown that pharmacological targeting of PPAR-α (a regulator of lipid metabolism) at its peak expression time effectively lowered lipid accumulation in the liver. The effect of circadian rhythms is not limited to endocrine and metabolic pathways and the diseases linked to these pathways, such as diabetes mellitus, obesity, and metabolic syndrome. Since each cell has its own molecular clock, the effect of rhythmic gene expression can be observed in almost all diseases.
For instance, it has been revealed that the circadian rhythm may participate in the timing of bone formation during periodicity-regulated orthodontic tooth movement through the genes related to the circadian rhythm. Indeed, more effective personalized care could be offered in maxillofacial surgery and other types of plastic surgery by recognizing the specific environmental conditions and the individual genetic and circadian rhythms that play critical roles in these situations by affecting the reconstruction of soft/hard tissue [207]. Moreover, variants of circadian genes, in combination with sleep patterns, are related to the pathology, development, progression, and aggressiveness of different types of cancer [208,209]. Besides, circadian rhythms are useful for the treatment, prediction, and prevention of several physical and mental diseases, including cardiovascular disease, myocardial ischemia, neurodegenerative disease, disorders of the body microbiota, sleep-wake disorders, etc. [210-213]. In addition to the circadian rhythm, the application of other types of variations in 3P medicine has also been confirmed; however, most of the data relate to circadian rhythms. For example, evidence shows the effects of ultradian rhythms on heart rate variability, gene expression, locomotor activity, and body temperature [214]. Similarly, infradian rhythms have been associated with reproduction, sperm physiology, telomere length, aging, etc., and the coordination of physiological rhythms is essential for healthy behavior and memory functions [30,215,216]. Seasonal variations have been observed in telomere lengths [217-219], and shortening of telomere length is observed in cellular senescence and is accepted as a biomarker of aging [220,221]. Infradian rhythms have been observed in testis function [222], semen quality [223], and some parameters of sperm physiology such as the acrosome reaction (a 12-month cycle) and chemotaxis (a 6-month cycle) [30]. The cyclicity of the biological clock of somatic cells is regulated by specific genes [224]. However, spermatozoa are transcriptionally inactive cells, and therefore the presence of molecules such as melatonin in the seminal plasma may regulate the variations observed in sperm physiology [225].
Conclusions Variations are an inseparable part of human metabolism and can be classified into two main categories: physiological rhythms and biological variation. Physiological rhythms are crucial for the metabolic adaptation of the organism to external changes, and their disruption leads to serious clinical conditions such as cancer, neurodegenerative diseases, insomnia, etc. Unlike physiological rhythms, BV is the random variation of an analyte around its set point (CVI) and the variation of set points among individuals (CVG). The relation between the disruption of BV and disease has not yet been analyzed in detail; however, elevated CVI has been reported in some diseases. All analytes measured in medical laboratories from patients' samples are, to some extent, under the influence of physiological rhythms. Contrary to physiological rhythms, the BV of analytes and its potential benefits in clinical practice are not adequately known among clinicians; however, the BV data of the measurands have been used in medical laboratory practice to calculate the RCV, II, APS, and prRIs of the analytes.
Correct estimation of BV data requires detailed knowledge of the physiological rhythms of the analytes. Consequently, for the safe and valid interpretation of laboratory test results, both the physiological rhythms and the BV of the measurands should be considered simultaneously. These data are useful for the effective practice of predictive, preventive, and personalized medicine. Conflicts of Interest: The authors declare no conflict of interest.
Quantifying Confounding Bias in Generative Art: A Case Study
In recent years, AI generated art has become very popular. From generating artworks in the style of famous artists like Paul Cezanne and Claude Monet to simulating the styles of art movements like Ukiyo-e, a variety of creative applications have been explored using AI. From an art historical perspective, these applications raise some ethical questions. Can AI model artists' styles without stereotyping them? Does AI do justice to the socio-cultural nuances of art movements? In this work, we take a first step towards analyzing these issues. Leveraging directed acyclic graphs to represent the potential process of art creation, we propose a simple metric to quantify confounding bias due to the lack of modeling the influence of art movements in learning artists' styles. As a case study, we consider the popular cycleGAN model and analyze confounding bias across various genres. The proposed metric is more effective than a state-of-the-art outlier detection method in understanding the influence of art movements on artworks. We hope our work will elucidate important shortcomings of computationally modeling artists' styles and trigger discussions related to the accountability of AI generated art.
Introduction From healthcare and finance to judiciary and surveillance, artificial intelligence (AI) is being employed in a wide variety of applications (Buch, Ahmed, and Maruthappu 2018; Lin 2019; Feldstein 2019). AI has also made inroads into creative fields such as music, dance, poetry, storytelling, cooking, and fashion design, to name a few (Engel et al. 2019; Pettee et al. 2019; Varshney et al. 2019; Jandial et al. 2020). Creating portraits, generating paintings in the "style" of famous artists, style transfer (i.e., transferring the contents of one image according to the style of another image), and creating novel art styles have been some popular applications of AI in art generation (Zhu et al. 2017; Tan et al. 2017; Elgammal et al. 2017; Gatys, Ecker, and Bethge 2016). With the growing adoption of AI, a large body of work has analyzed its ethical impacts in sensitive applications such as medicine and law enforcement (Obermeyer et al. 2019; Buolamwini and Gebru 2018; Lum, Boudin, and Price 2020; Raghavan et al. 2020). Of late, there has been considerable interest in understanding AI related biases in creative tasks as well. For example, in (Prates, Avelar, and Lamb 2019), the authors investigate gender bias in AI generated translations. A recent work by researchers at the Allen Institute for Artificial Intelligence demonstrates toxicity in popular language models (Wiggers 2020). In (Jain et al. 2020), the authors show that synthetic images obtained from Generative Adversarial Networks (GANs) exacerbate the biases of training data. A notable instance of bias in AI generated art concerns an app called "AIportraits" that was shown to exhibit racial bias (Ongweso 2019). It was pointed out that the skin color of people of color is lightened in the app's portrait rendition. In addition to noticeable biases concerning race, gender, etc., there can be several latent biases in AI generated art, especially in the context of modeling artists' styles and style transfer. For example, the authors in (Srinivasan and Uchino 2020) leverage causal models to study several types of biases in modeling art styles and discuss the socio-cultural implications of the same.
In a similar vein, the authors in (Hassine and Neeman 2019) discuss some of the shortcomings of AI generated art and argue that such art is rife with culturally biased interpretations.
Motivation Artworks have often been used to document important historical events such as wars, political developments, mythological facts, literary anecdotes, and many aspects of the everyday lives of common people (Rabb and Brown 1986). For example, ancient Greek art is abundant with mythological paintings depicting goddesses like Athena and Hera, many Indian artworks portray political and historical events such as the Anglo-Maratha wars and the Anglo-Sikh wars, and ancient Egyptian genre art illustrates culturally rich scenes from the lives of ordinary people, such as how women prepared food and how people measured harvests. Art movements entail a wealth of information related to the culture, politics, and social structure of past times, and these aspects are often not captured in generated art. Furthermore, owing to the automation bias exhibited by people (Skitka, Mosier, and Burdick 1999), AI generated art that fails to do justice to the subtleties of art movements can precipitate bias in the understanding of history. Furthermore, any generated art that claims to mimic artists' styles should not stereotype artists based on a single algorithmically quantifiable metric such as color, brushstrokes, texture, etc.
Figure 1: Sample illustration of Impressionism and Post-Impressionism artworks used in the analysis. Impressionism works were characterized by vibrant colors and the spontaneous and accurate rendering of light, color, and atmosphere, focusing mostly on urban lifestyles. Post-Impressionism works focused on the lives of ordinary people, depicting emotions and other symbolic content.
As artist Paul Cezanne describes, "If I were called upon to define briefly the word Art, I should call it the reproduction of what the senses perceive in nature, seen through the veil of the soul". Thus, several cognitive aspects such as perception, memory, beliefs, and emotions influence artists and artworks. In reality, many of these aspects can never be observed or measured, and thus the true style of any artist cannot be computationally modeled. Models like (Zhu et al. 2017) and (Tan et al. 2017) that claim to generate art in the styles of artists like Claude Monet, Vincent van Gogh, and others are at best capturing correlational features like colors or brushstrokes and overlooking many latent aspects (such as culture and emotions) that characterize artists' styles (Hertzmann 2018). For the aforementioned reasons, understanding biases in AI generated art is a necessary task. Given the prevalence of a large number of tools to easily mimic artists' "styles", this task becomes even more pertinent. Prior work has mostly focused on qualitatively analyzing biases in AI generated art (Srinivasan and Uchino 2020; Hassine and Neeman 2019). In this work, we provide a quantitative analysis of confounding biases in AI generated art. In general, confounding biases arise due to unmeasured factors that influence both the inputs and outputs of interest. In particular, we quantify the confounding bias due to the lack of modeling of art movement's influence on artists and artworks. Art movements can be described as tendencies or styles in art with a specific common philosophy, influenced by various factors such as cultures, geographies, political-dynastical markers, etc., and followed by a group of artists during a specific period of time (Wikiart 2020).
Renaissance art, Modern art, and Ukiyo-e are some examples of art movements. Further, each art movement can have sub-categories. For example, modern art includes many sub-categories such as Dadaism, Impressionism, Post-Impressionism, Naturalism, Cubism, Futurism, etc. Let us consider Impressionism and Post-Impressionism, as these are the art movements analyzed in the paper. Figure 1 provides an illustration of artworks belonging to these movements. Although both these movements originated in France, they are marked by subtle differences. Impressionism was characterized by spontaneous brush strokes, vibrant colors, and urban lifestyles. Impressionists emphasized the accurate depiction of light with its changing quality, the precise characterization of movement, and the atmosphere (Oxford-Art-Online 2021). Post-Impressionism originated in reaction to Impressionism. Post-Impressionists rejected the Impressionists' concern over the accurate depiction of color; instead, they focused on the symbolic depiction of content, formal order, and structure. Post-Impressionism artists focused on the lives of ordinary people to naturally depict their emotions and lifestyles (Oxford-Art-Online 2021). Thus, art movement is a dominant factor influencing both the artists and the artworks. A model that ignores the influence of art movement in modeling an artist's style can thus fail to capture socio-cultural nuances and contribute to confounding bias.
Overview of the Proposed Method As a case study, we consider the cycleGAN model (Zhu et al. 2017), which has been used to model the styles of Paul Cezanne, Claude Monet, and Vincent van Gogh. This is a fully automated method that does not involve a human (i.e., the artist) in the loop. Studying biases associated with fully automated AI methods is an essential precursor to understanding biases in AI methods that aid artists in completing an artwork. This is because the latter set of methods can involve both artist and AI related biases, and understanding AI related biases independent of artist-specific bias can thus be very beneficial. Therefore, we find the model proposed in (Zhu et al. 2017) appropriate for our case study. We consider the influence of Impressionism and Post-Impressionism in modeling artists' styles, as these were the dominant art movements that influenced the artists under consideration in (Zhu et al. 2017). We evaluate the bias due to the lack of consideration of art movement in modeling artists' styles in the cycleGAN model across various genres such as landscapes, cityscapes, still life, and flower paintings. It is worth noting that most existing AI methods used to generate art styles largely focus on western art movements. Ideally, it is important to study the biases in generative art corresponding to non-western art movements, as these art movements are at greater risk of being biased due to the already existing social structural disparities. However, due to the paucity of existing AI tools that model multiple art styles of non-western traditions, we have focused on the two aforementioned western art movements for analysis. Motivated by (Srinivasan and Uchino 2020), we leverage directed acyclic graphs (Pearl 2009) in order to estimate confounding bias. First, causal relationships between art movement, artists, artworks, art material, genre, and other relevant factors are encoded via directed acyclic graphs (DAGs). DAGs serve as accessible visual analysis and interpretation tools for art historians to encode their domain knowledge.
As we are interested in understanding the causal influence of the artist on the artwork, in our DAG the artist is the input variable and the artwork is the output variable. Art movement, art material, and genre are potential confounders. Next, the minimum adjustment set to remove confounding bias is determined using d-separation rules and the backdoor adjustment formula (Pearl 2009). As our goal is to analyze the role of art movement in modeling artists' styles, we fix genre and art material across the images used in our analysis. Thus, we only have to adjust for art movement. The computation of confounding bias is based on the idea of covariate matching (Stuart 2010). Suppose the set of real artworks of an artist i is denoted by A_i and the set of cycleGAN generated images corresponding to the artist is denoted by G_i. Further, let A_j, where j ∈ {1, 2, ..., n}, j ≠ i, be the set of real artworks of other artists belonging to the same art movement as artist i. First, a RESNET50 architecture is trained to distinguish between Impressionism and Post Impressionism artworks (He et al. 2015). Then, using the learned classifier's features representative of the art movement, every element of A_i is matched with its nearest neighbor in G_i. Next, every element of A_i is matched with its nearest neighbor in A_j. As there can be many artists belonging to the same art movement, we compute the nearest neighbors of A_i with respect to all such artists. As all confounders other than art movement are fixed across all the images in the analysis, any difference between the matched pairs should reflect the bias due to the lack of modeling art movement. In an ideal scenario where the style of the artist is accurately modeled, the mismatch between A_i and G_i should be low, and the mismatch between A_i and A_j should be high, assuming any two artists have distinct styles of their own. Using these intuitions, we propose a simple metric to quantify confounding bias due to the lack of modeling art movement. We also show how our metric is able to quantify bias that a state-of-the-art outlier detection method (Shastry and Oore 2020) cannot capture.
Insights Our findings show that understanding the influence of art movement is essential for learning about artists' styles. This is even more important for learning the styles of artists whose works largely belong to one art movement (e.g., Claude Monet, whose works mostly belong to the Impressionism art movement). This is because the influence of art movement is likely to be higher for such an artist than for those whose works span various art movements. We elaborate these insights in Section 6. In reality, the true style of an artist cannot be modeled due to many unobserved confounders such as the emotions, beliefs, and other cognitive abilities of the artist. In this regard, we hope our work triggers interdisciplinary discussions related to the accountability of AI generated art, such as the need to understand the feasibility of modeling artists' styles, the need to incorporate domain knowledge in AI based art generation, and the socio-cultural consequences of AI generated art. The rest of the paper is organized as follows. Section 2 reviews some related work. In Section 3, we provide an overview of the directed acyclic graphs that we leverage to model confounding bias. In Section 4, we describe confounding bias with illustrations. In Section 5, we provide an overview of the method. We report results from our experiments in Section 6.
We analyze and discuss the implications of the results in Section 7, before concluding in Section 8.
Related Works There has been growing interest in using AI to generate art. A good review of AI powered artworks can be found in (Miller 2019). There are a variety of AI models to generate art, generative adversarial networks (GANs) being a prominent type. Models such as (Zhu et al. 2017), (Elgammal et al. 2017), and (Tan et al. 2017) are just some illustrations of GAN based art generation. In (Gatys, Ecker, and Bethge 2016), a convolutional neural network architecture is proposed for style transfer. There are also open source platforms that let end-users easily create art. For example, (Macnish 2018) allows a user to convert a photo into a cartoon. With (Artbreeder 2020), one can blend the contents of a photo in the style of another. Platforms like (AIportraits 2020) and (GoART 2020) claim to render a user-uploaded photo in the style of famous artists and art movements. It is also interesting to note that art has been used to expose bias in the AI pipeline. A very prominent example in this regard is the 'Imagenet Roulette' project by AI researcher Kate Crawford and artist Trevor Paglen (Crawford and Paglen 2019), wherein biases in machine learning datasets are highlighted through art. A convolutional neural network based architecture is proposed in (Mordvintsev, Olah, and Tyka 2015) that helps to visualize the workings of various layers in deep networks by creating dream-like appearances. These visualizations can aid in understanding the functioning of various layers. Some recent works have exposed biases in AI generated art. For instance, it was reported in (Sung 2019; Ongweso 2019) that the AIportraits app (AIportraits 2020) was biased against people of color. In (Jain et al. 2020), considering synthetic images generated by GANs, the authors point out that GAN architectures are prone to exacerbating the biases of training data. The authors in (Hassine and Neeman 2019) discuss some shortcomings of AI generated art and argue that such art is rife with cultural biases. The closest work to the present work is (Srinivasan and Uchino 2020), wherein the authors leverage causal graphs to qualitatively highlight various types of biases in AI generated art. We take a step further: leveraging the model proposed in (Srinivasan and Uchino 2020), we quantify confounding bias in AI generated art. This kind of quantitative analysis provides an objective measure for understanding bias.
Directed Acyclic Graphs A directed acyclic graph (DAG) is a directed graph without any loops or cycles. Variables of interest are represented by nodes in the graph, and the directed edges between them indicate the causal relations. These directions are often based on the assumptions of domain experts and available knowledge. DAGs allow the encoding of assumptions about data, model, and analysis, and serve as a tool to test for various biases under such assumptions. DAGs enable domain experts such as art historians to encode their assumptions, and hence serve as accessible data visualization and analysis tools. As noted in (Srinivasan and Uchino 2020), there are several aspects that can characterize an artwork. These include the artist, art material, genre, art movement, etc. The relationships between these various aspects can be determined by domain experts. For example, a domain expert (e.g., an art historian) may posit that genre can influence both the artist and the artwork.
DAGs aid in visualizing the relationships between these various aspects. Figure 2 provides an illustration of a DAG encoding one set of such assumptions. It is to be noted that, depending on the assumptions of various domain experts, there can be other DAGs describing the relationship of an artwork with the artist, genre, art movement, etc. However, confounding biases can be analyzed separately for each DAG, thereby enhancing the robustness of the analysis. Given a DAG, d-separation is a criterion for deciding whether a set X of variables is independent of another set Z, given a third set Y. The idea is to associate "dependence" with "connectedness" (i.e., the existence of a connecting path) and "independence" with "unconnectedness" or "separation" (Pearl 2009). A path here refers to any consecutive sequence of edges, disregarding their direction. Consider a three-vertex graph consisting of vertices X, Y, and Z. There are three basic types of relations using which any pattern of arrows in a DAG can be analyzed: the chain X → Y → Z, the fork X ← Y → Z, and the collider X → Y ← Z. In the first case, the effect of X on Z is mediated through Y. Conditioning on Y, X becomes independent of Z, or Y is said to block the path from X to Z. In the second case, Y is a common cause of X and Z. Y is a confounder, as it causes spurious correlations between X and Z. Conditioning on Y, the path from X to Z is blocked. This is the scenario we will analyze in detail in this paper. For example, for the DAG in Figure 2, genre G, art movement A, and art material M are all confounders in determining the causal effect of artist X on artwork Z. The causal effect of the artist on the artwork captures the artist's influence on the artwork, and is hence reflective of their style.
Figure 2: DAG for the case study considered. X: Artist, Z: Artwork, A: Art movement, G: Genre, M: Art material. Image source: (Srinivasan and Uchino 2020).
In the last case, Y is a collider, as two arrows enter it. As such, the path from X to Z is blocked. Upon conditioning on Y, the path will be unblocked. In general, a set Y is admissible (or "sufficient") for estimating the causal effect of X on Z if the following two conditions hold (Pearl 2009): • No element of Y is a descendant of X • The elements of Y block all backdoor paths from X to Z, i.e., all paths that end with an arrow pointing into X. Thus, we need to block all backdoor paths in order to remove the effect of confounders (which can introduce spurious correlations) in determining the causal effects of interest.
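To make the backdoor criterion concrete for the DAG in Figure 2, the small sketch below (ours, not from the paper) encodes its edges and checks that {G, A, M} d-separates X from Z once the artist's outgoing edge is removed, assuming a networkx version that provides nx.d_separated (newer releases rename the same check nx.is_d_separator).

```python
import networkx as nx

# DAG of Figure 2: genre (G), art movement (A), and art material (M)
# influence both artist (X) and artwork (Z); A also influences M.
dag = nx.DiGraph([
    ("G", "X"), ("G", "Z"),
    ("A", "X"), ("A", "Z"),
    ("A", "M"),
    ("M", "X"), ("M", "Z"),
    ("X", "Z"),
])

# Backdoor check: remove X's outgoing edge, then test whether the
# candidate adjustment set blocks every remaining path from X to Z.
backdoor_graph = dag.copy()
backdoor_graph.remove_edge("X", "Z")

print(nx.d_separated(backdoor_graph, {"X"}, {"Z"}, {"G", "A", "M"}))  # True
print(nx.d_separated(backdoor_graph, {"X"}, {"Z"}, {"G", "M"}))       # False: A left open
```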
With this background, we discuss confounding bias in more detail in the following section.
Confounding Bias The style of an artist is characterized by several aspects. Some such aspects may be observable (e.g., art material, genre, art movement), and others, such as emotions, beliefs, prejudices, and memory, cannot be perceived or observed. For this reason, the true style of any artist cannot be computationally captured. Our goal is thus not to computationally model any artist's style, but to analytically highlight the shortcomings of the models that claim to mimic artists' styles. As the bias with respect to unobserved cognitive aspects such as emotions, memory, etc. can never be measured, we restrict our analysis to observable aspects. We discuss confounding biases that arise due to common causes that affect both the inputs and outputs of interest. In our setting, confounding biases can arise due to factors that affect both artists and artworks. Based on the assumptions encoded in the DAG, such confounders could include art movement, genre, art material, etc. A model that does not consider the influence of these confounders is prone to bias. For our analysis, we consider the DAG provided by (Srinivasan and Uchino 2020), shown in Figure 2. We will use this as a running example throughout the paper. Here, the variable X denotes the artist, Z denotes the artwork, G is the genre, M is the art material, and A denotes the art movement. In this setting, the problem of modeling an artist's style can be viewed as estimating the causal effect of X on Z. According to the assumptions encoded in this DAG, art material, genre, and art movement are confounders influencing both the artist and the artwork. Further, art movement influences the art material. Let us assume that all of the confounders are observable. Under these assumptions, in order to compute the causal effect of an artist on the artwork, we have to block the backdoor paths from X to Z, so as to remove confounding bias. In order to block all backdoor paths in Figure 2, one has to adjust for genre, art movement, and art material by conditioning on those variables. The following expression captures the causal effect of X on Z for the graph in Figure 2:

CE(X → Z) = P(Z | do(X)) = Σ_{g,a,m} P(Z | X, G = g, A = a, M = m) · P(G = g, A = a, M = m),    (1)

where CE denotes the causal effect of X on Z. The summation over g, a, m captures the adjustment across all possible genres, art movements, and art materials in which the artist has worked, in order to model their style. The implication of finding a sufficient set {A, G, M} is that stratifying on A, G, and M is guaranteed to remove all confounding bias relative to the causal effect of X on Z.
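As a toy numerical illustration of this adjustment (with made-up binary variables and a synthetic joint distribution of ours, not data from the paper), the snippet below computes the sum in eq. (1) directly from a probability table:

```python
import random
from itertools import product

# Hypothetical joint distribution P(X, Z, G, A, M) over binary
# variables, stored as {(x, z, g, a, m): probability}.
random.seed(0)
keys = list(product([0, 1], repeat=5))
weights = [random.random() for _ in keys]
joint = {k: w / sum(weights) for k, w in zip(keys, weights)}

def prob(predicate):
    """Total probability mass of outcomes satisfying the predicate."""
    return sum(v for k, v in joint.items() if predicate(k))

def causal_effect(x, z):
    """Backdoor adjustment, eq. (1):
    P(Z=z | do(X=x)) = sum_{g,a,m} P(z | x, g, a, m) * P(g, a, m)."""
    total = 0.0
    for g, a, m in product([0, 1], repeat=3):
        p_gam = prob(lambda k: k[2:] == (g, a, m))
        p_x_gam = prob(lambda k: k[0] == x and k[2:] == (g, a, m))
        p_xz_gam = prob(lambda k: (k[0], k[1]) == (x, z) and k[2:] == (g, a, m))
        if p_x_gam > 0:
            total += (p_xz_gam / p_x_gam) * p_gam
    return total

print(causal_effect(x=1, z=1))  # differs from naive P(Z=1 | X=1) under confounding
```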
The above instance depicted a DAG without any unobserved confounders. However, in reality, there are many unobserved confounders such as the artist's memory, beliefs, and emotions. In the presence of unobserved confounders, the causal effect of X on Z is not identifiable, implying that the true style of an artist cannot be modeled. The authors in (Srinivasan and Uchino 2020) illustrate this scenario with a DAG, as shown in Figure 3. For the purposes of this work, we will consider only observable confounders and demonstrate the confounding bias associated with (Zhu et al. 2017) in not considering the influence of confounders like art movement in modeling artists' styles. Art movements introduced techniques, materials, and themes unique to the culture, society, geographic region, and the times during which these movements gained prominence. Art movements were symbolic of the historical, religious, social, and political events of their times. Artists were heavily influenced by the style propagated by the art movement. By not considering the influence of art movement in modeling an artist's style, the social/cultural/religious/political significance associated with the artwork may be lost, and the intent of the artwork may be misrepresented. In the next section, we describe the proposed method for quantifying confounding bias.
Method Our goal is to quantify the confounding bias due to the lack of consideration of art movement's influence in modeling artists' styles. Thus, we first need to learn good representations of art movements.
Learning Representations of Art Movements The first step is to learn good representations of the images under study with respect to the art movements of interest. We will then use these representations to compute confounding bias (see Section 5.2). We use the RESNET50 architecture (He et al. 2015) to learn classifiers for distinguishing Impressionism from Post Impressionism. Then, we extract the learned features from the penultimate layer of the trained network to represent the art movements (see Section 6.1). In order to learn accurate representations of the art movements under study, we must ensure diversity in the artworks belonging to those art movements, i.e., we must consider artworks across genres and art materials belonging to each art movement, or else we will learn a biased representation of the art movement. In fact, as part of our experiments, we tried to learn art movements with the genre fixed, but this lowered the accuracy of the classifier; thus, in order to learn reliable representations of art movements, we need to consider all artworks (across genres, materials, etc.) belonging to the art movement. We use RESNET50 (He et al. 2015) to learn features representative of Impressionism and Post Impressionism. Confounding bias in modeling the styles of the artists Monet, Cezanne, and van Gogh is computed using these learned features across multiple genres such as landscapes, cityscapes, flower paintings, and still life. Next, we describe the procedure for the computation of confounding bias.
Bias Computation We fix genre and art material across all the images considered so that we only have to adjust for art movement as a confounder. However, this does not hurt the generalizability of the method: with multiple confounders, all the elements in the minimum adjustment set have to be adjusted, similar to the adjustment for art movement described below. We leverage the concept of covariate matching in order to adjust for confounders. In our problem setting, we want to estimate the causal effect of an artist, say X = i, in the presence of a confounder, namely, art movement. Suppose the set of real artworks of the artist is denoted by A_i and the set of generated images of the artist (by the cycleGAN model) is denoted by G_i. Specifically, let A_i = {a_i1, ..., a_iK}, where K is the number of real artworks of artist i; let G_i = {g_i1, ..., g_iL}, where L is the number of generated artworks of artist i; and let A_j = {a_j1, ..., a_jR} denote the R real artworks of an artist j belonging to the same art movement as i, and belonging to the same genre as considered in the analysis. Since there can be more than one artist belonging to the same art movement as i, assume there are J such artists, so j ∈ {1, 2, ..., J}, j ≠ i, denotes all these artists. Typically, artists are identified with specific art movements, and such information can be obtained from sources like (Wikiart 2020); the sets A_j can be constructed using this information. First, for each element g_il ∈ G_i, its nearest neighbor a_il-match in the set A_i is computed based on the values of the confounders, i.e., the features representative of the art movement obtained from (He et al. 2015). Note that each element in the sets A_i, G_i, and A_j is a 1000-dimensional feature vector. Next, for each element a_ik ∈ A_i, its nearest neighbor a_jk-match in the set A_j is computed. As all other potential confounders such as genre and art material are fixed to be the same across all the images considered, the difference in the corresponding matches between the sets A_i and G_i is a measure of the variation in (the lack of) modeling the art movement and in modeling the specific artist's style. Similarly, any difference between the corresponding matches between the sets A_i and A_j is a measure of the variation across artists' styles and art movements.
In an ideal scenario where a generative model is able to accurately learn the style of artist i considering the influence of art movement, the difference between the corresponding matches between the real and generated images, i.e., (A_i − G_i), should be close to 0. On the other hand, the difference between matches across artists should be significant compared to the difference between the real and generated images of an artist, i.e., (A_i − A_j) > (A_i − G_i); this is because different artists have distinct styles of their own, assuming they do not mimic one another. Using these intuitions, we propose the following metric to quantify confounding bias due to the lack of modeling art movement:

bias(i) = [ (1/L) Σ_{l=1}^{L} d(g_il, a_il-match) ] / [ (1/J) Σ_{j=1}^{J} (1/K) Σ_{k=1}^{K} d(a_ik, a_jk-match) ],    (5)

where d(·, ·) denotes the distance between the feature vectors of two images.
Description of the Metric The aforementioned metric captures the two intuitions just described. The numerator in the above equation captures the average difference between the real artworks and generated images of artist i, across all the generated images. The denominator captures the average difference between the real artworks of the artist under consideration and those of other artists belonging to the same art movement. The inner summation and averaging normalize with respect to artist i, considering all real artworks of i, and the outer summation and averaging normalize with respect to all J artists belonging to the same art movement as i. When the generated images are similar to the real artworks of i, the numerator is close to 0; this happens when the art movement's influence is modeled accurately (amongst other relevant factors), since we consider features representative of the art movement in capturing this difference. In a similar vein, the denominator of the above metric will be high when the specific artist's style is learned correctly. So, a low value of the above metric denotes low confounding bias with respect to art movement. Note that the value of the metric can be greater than 1, in which case we assume that there is considerable confounding bias.
Choice of Distance Measure We use the Euclidean distance to compute matches. We also tried other distance measures such as the Manhattan distance, Chebyshev distance, and Wasserstein distance. Across all distance measures, we observed that the relative order of the bias scores remained the same; thus, the metric is not sensitive to the choice of distance measure.
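A compact sketch of eq. (5), assuming the feature vectors have already been extracted (the function and variable names are ours); matching uses nearest neighbors under the Euclidean distance, as described above:

```python
import numpy as np
from scipy.spatial.distance import cdist

def nn_match_distances(queries, pool):
    """For each query vector, the Euclidean distance to its nearest
    neighbor in the pool. queries: (n, d), pool: (m, d)."""
    return cdist(queries, pool, metric="euclidean").min(axis=1)

def confounding_bias(A_i, G_i, A_js):
    """Eq. (5): the numerator matches generated images G_i to the real
    artworks A_i; the denominator matches A_i to each same-movement
    artist's real artworks A_j and averages over the J artists."""
    numerator = nn_match_distances(G_i, A_i).mean()
    denominator = np.mean([nn_match_distances(A_i, A_j).mean() for A_j in A_js])
    return numerator / denominator

# Hypothetical 1000-dimensional art-movement features
rng = np.random.default_rng(0)
A_i = rng.normal(size=(40, 1000))   # real artworks of artist i
G_i = rng.normal(size=(60, 1000))   # cycleGAN images for artist i
A_js = [rng.normal(size=(35, 1000)) for _ in range(3)]  # other artists
print(confounding_bias(A_i, G_i, A_js))
```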
Experiments In this section, we report results on computing confounding bias, along with an interpretation of the same. We begin by describing experiments on learning representations of art movements.
Learning Representations of Art Movements We train a RESNET50 (He et al. 2015) classifier to distinguish between Impressionism and Post-Impressionism, the prominent art movements that were characteristic of the artists considered in the cycleGAN model. Specifically, we start with the model pre-trained on the ImageNet dataset and fine-tune it on the art dataset under study. We then use the learned features from the penultimate layer of the trained model as representations of the art movement, resulting in a 1000-dimensional vector for each image. Note that any state-of-the-art architecture could be used in place of (He et al. 2015). In order to train the classifier to distinguish between Impressionism and Post Impressionism, we need to consider artworks across the artists belonging to those art movements. From (Wikiart 2020), we collected artworks belonging to artists who were identified as belonging to these art movements and the majority of whose works (> 50%) belonged to Impressionism or Post Impressionism. This ensured collecting artworks representative of the concerned art movements. We thus collected about 5083 images belonging to Impressionism and about 3495 images belonging to Post Impressionism by crawling images from Wikiart. The dataset consists of Impressionist artists like Berthe Morisot, Edgar Degas, Mary Cassatt, Childe Hassam, Antoine Blanchard, Claude Monet, Gustave Caillebotte, Sorolla Joaquin, and Konstantin Korovin, amongst others. Post Impressionist artists included in the dataset are Vincent van Gogh, Paul Cezanne, Samuel Peploe, Moise Kisling, Ion Pacea, Pyotr Konchalovsky, Maurice Prendergast, Maxime Maufra, etc. A sample illustration of the dataset is provided in Figure 1. We used 80% of the images for training and the rest for validation. We obtained the best validation accuracy of 72.1% with the Adam optimizer, learning rate = 0.0001, and batch size = 50. Additionally, we conducted the experiments with other models such as RESNET34, VGG16, and EfficientNet B0-3 to check for any performance improvement. Except for EfficientNet B0-3 being computationally faster, there was no significant improvement in validation accuracy, so we resorted to RESNET50 features. It is to be noted that Post Impressionism emerged as a reaction to Impressionism. Many artists such as Cezanne worked across these two art movements. Due to these factors, these art movements have subtle differences that are often hard to capture computationally. Quite intuitively, therefore, the validation accuracy is not very high. Nevertheless, these learned features serve as a baseline in capturing representations of art movements. Specifically, we used the features from the penultimate layer of the RESNET50.
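A minimal sketch of this feature-extraction step with torchvision (assuming torchvision >= 0.13; the naming and the placeholder path are ours, and the fine-tuning loop is omitted). One plausible reading of the 1000-dimensional penultimate features is that the full ResNet50, whose final layer emits 1000 values, is kept and a 2-way Impressionism/Post-Impressionism head is stacked on top:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Keep the full ResNet50 (1000-d output) and add a 2-way head for
# Impressionism vs. Post-Impressionism; fine-tune the whole stack.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
head = nn.Linear(1000, 2)
model = nn.Sequential(backbone, head)
# ... fine-tune `model` on the labeled artworks (Adam, lr=1e-4) ...

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def movement_features(image_path: str) -> torch.Tensor:
    """1000-dimensional art-movement representation of one image,
    taken from the layer feeding the 2-way classification head."""
    backbone.eval()
    with torch.no_grad():
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        return backbone(x).squeeze(0)

feats = movement_features("monet_landscape.jpg")  # placeholder path
print(feats.shape)  # torch.Size([1000])
```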
Quantifying Confounding Bias across Genres We considered various genres such as landscapes, cityscapes, flower paintings, and still life for our analysis. Landscapes depict outdoor sceneries; cityscapes are representations of houses, promenades, and prominent city structures; flower paintings represent a variety of flowers in vases, gardens, and ponds; still life consists of images of fruits, vegetables, and other food articles. The DAG in Figure 2 can be used to depict the influence of these genres on the artist and artworks, as all the relevant factors are encoded in the DAG. In Section 7, we describe why certain other genres such as portraits cannot be modeled using the DAG shown in Figure 2. For the artists under consideration, namely Paul Cezanne, Claude Monet, and Vincent van Gogh, we first obtained real artworks belonging to these genres from the Wikiart dataset. These images result in three sets A_i, where i ∈ {cezanne, gogh, monet}, corresponding to the three artists under consideration. In obtaining these images, we fixed the art material to oil painting so that we do not have to adjust for this factor as a confounder. Next, we used random images from existing datasets such as the Oxford flower dataset (Nilsback and Zisserman 2008), and additionally crawled Google images, to obtain images belonging to the various genres under consideration. We then used these images as test images to obtain the corresponding generated artworks in the styles of Cezanne, Monet, and van Gogh. These constituted the three sets G_cezanne, G_gogh, and G_monet. There were roughly 60 test images in each genre. Next, the sets A_j were constituted using images of other artists who belonged to the same art movement as the artist under consideration. As J, the number of such artists, increases, we can get more reliable indicators of art movements, and thus confounding bias due to art movement will become more evident. It is to be noted that not all of the J artists necessarily had an ample number of images in a particular genre. So, we only considered those artists who had more than 35 images in a particular genre and art movement for the analysis within genres. This is because using just a few images of a particular genre by an artist does not help in quantifying bias reliably. For the same reason, confounding bias in modeling the styles of artists who had too few images in a particular genre and art movement cannot be estimated. For example, there are only two landscapes by van Gogh in the Impressionism style, and none by Monet in Post Impressionism, so it is not possible to quantify confounding bias in landscapes with respect to Impressionism for van Gogh and Post Impressionism for Monet. So, we report results only for those scenarios in which there were at least 35 artworks of the artist in that particular genre. We then obtained feature descriptors (using the representations from the penultimate layer of the RESNET50 architecture) of the images in the sets A_i, G_i, and A_j using the learned representations of art movements. Confounding bias was then computed using eq. (5). Table 1 lists the values of this metric for various genres and artists. Blank entries denote cases where there were not ample instances to compute the metric.
Table 1: Bias scores computed across genres with respect to the Impressionism and Post Impressionism art movements. Blank entries denote cases in which there were not ample instances to compute the metric. Imp: Impressionism, Post: Post Impressionism, G: Genre, L: Landscape, C: Cityscapes, F: Flowers, S: Still life.
Observations The bias scores are mostly lower for Cezanne, who had worked across both Impressionism and Post Impressionism, whereas the scores are higher for van Gogh and Monet, who had largely worked in Post Impressionism and Impressionism, respectively. This observation suggests that bias scores vary across artists based on the number of art movements influencing them. To verify, we conducted statistical hypothesis testing. We set the null hypothesis as: the mean of the bias scores is the same for artists who had worked across art movements and artists who had worked largely in one art movement. Formally, we set the null hypothesis H_0 as M_s = M_m, where M_s denotes the mean of the confounding bias scores for artists who largely worked in a single art movement, and M_m denotes the mean of the confounding bias scores for artists who had worked across multiple art movements. The corresponding alternative hypothesis H_1 is set as M_s ≠ M_m. As there were very few observations at our disposal, we used the non-parametric Wilcoxon signed-rank test. The null hypothesis was rejected with a p-value of 0.033 (α = 0.05), thus showing that bias scores vary across artists based on the number of art movements influencing them.
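A sketch of this test with scipy, using made-up bias scores; the pairing of single-movement vs. multi-movement scores across genre/movement cells is our assumption about how the test was set up, not a detail stated in the paper.

```python
from scipy.stats import wilcoxon

# Hypothetical bias scores paired across genre/movement cells:
single_movement = [2.52, 2.96, 2.10, 2.75, 1.90, 2.30]  # e.g., Monet, van Gogh
multi_movement = [0.85, 0.92, 0.78, 0.88, 0.95, 0.70]   # e.g., Cezanne

stat, p_value = wilcoxon(single_movement, multi_movement)
print(p_value)         # 0.03125 for these made-up pairs
print(p_value < 0.05)  # True: reject H0 at alpha = 0.05
```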
Interpretation The aforementioned results can be interpreted as follows. If an artist had worked across art movements, then modeling the influence of art movement would be less crucial in generating artworks according to the artist's style. This is because there are artworks across art movements for such an artist, and thus there is a greater chance of a match between generated and real images due to the greater diversity and variation in the set of real images of the artist. On the contrary, if an artist had worked primarily in one art movement, then higher bias is likely to be observed if the influence of art movement is not considered. This is because the generated images have to match with respect to the specific art movement, or else they will exhibit greater dissimilarity. To elaborate further, let us consider the genre of landscapes. Figure 4 provides an illustration of real landscapes by Monet, van Gogh, and Cezanne, cycleGAN generated landscapes in the styles of these artists, and the corresponding photos used to generate the images.
Figure 4: Top: real artworks by the corresponding artists. Middle: artworks generated by cycleGAN for the corresponding artists and art movements. Bottom: corresponding photos used to generate the images in the middle row. Spontaneous and accurate depiction of light along with its changing quality, an important characteristic of Impressionism, is missing in the generated versions (row 2, columns 1 and 2). Expressive brushstrokes emphasizing geometric forms are missing in the generated image corresponding to the Post-Impressionist style of van Gogh.
There are about 250 landscapes by Monet in the Impressionism style but none corresponding to Post Impressionism. Most of van Gogh's landscapes were set in the Post Impressionism style, with just two in the Impressionism style; there are about 35 landscapes by Cezanne in Impressionism and 102 in Post Impressionism. Consider the photo in row 3, column 1. The corresponding cycleGAN generated image shown in row 2, column 1 does not exhibit the sharp colors of twilight shown in the photo, and alters the affect of the original photo. This is not in line with Impressionism, which was characterized by the spontaneous and accurate depiction of light with its changing colors. Also, the generated image perhaps does not do justice to the cognitive abilities of the artist; see the image in row 1, column 1, which corresponds to a real landscape by Monet illustrating twilight in the outdoors, with shades of red. In fact, the spontaneous and natural rendering of light and color was a distinct feature of Impressionism. In a similar vein, the generated images of van Gogh (row 2, columns 4 and 5) exhibit markedly different brushstrokes and texture compared to the Post Impressionist works of van Gogh, which were characterized by swirling brushstrokes emphasizing geometric forms for an expressive effect. From Table 1, the bias score with respect to Monet is 2.52 (Impressionism) and 2.96 with respect to van Gogh (Post Impressionism). On the contrary, the bias scores are lower than 1 for Cezanne, who had worked across Impressionism and Post Impressionism. Higher scores indicate greater bias, thus corroborating the fact that the bias is higher for artists who were influenced by a single art movement as compared to those who were influenced by multiple art movements.
Comparison with Outlier Detection Method In order to evaluate the effectiveness of the proposed metric, we compared it with a state-of-the-art outlier detection method (Shastry and Oore 2020). Specifically, the authors in (Shastry and Oore 2020) propose to detect outliers by identifying inconsistencies between the activity patterns of the neural network and the predicted class. They characterize activity patterns by Gram matrices and identify anomalies in Gram matrix values by comparing each value with its respective range observed over the training data. The method can be used with any pre-trained softmax classifier.
Furthermore, the method neither requires access to outlier data for fine-tuning hyperparameters, nor does it require access to out-of-distribution data for inferring parameters, and is hence appropriate for our comparison. First, we wanted to test if (Shastry and Oore 2020) can detect outliers with respect to real artworks belonging to different art movements. Across all genres, the best detection accuracy of (Shastry and Oore 2020) in identifying outliers with respect to Impressionism (i.e., in separating real Post Impressionism artworks from real Impressionism artworks) was just 58.107%. As Impressionism and Post Impressionism were similar in many aspects, we then tested if (Shastry and Oore 2020) can detect outliers across art movements with marked differences, such as in separating Romanticism and Realism from Impressionism and Post Impressionism. Even in this case, the best detection accuracy was 50%. Finally, the best detection accuracy of (Shastry and Oore 2020) in separating real artworks from generated artworks was 51.735%. Unlike (Shastry and Oore 2020), the proposed metric is more effective in capturing the influence of art movements in modeling artists' styles, since the bias scores corresponding to artists who had largely worked in a single art movement are significantly higher than those of artists who had worked across multiple art movements. In the next section, we also discuss the other benefits of the proposed bias metric.
Discussion In this section, we discuss a few other relevant questions in the context of the above results. 7.1 What happens if images across art movements are combined in analyzing confounding bias? The very goal of estimating confounding bias is to be able to capture the drawbacks due to the lack of modeling art movements. When images across art movements are combined, the fact that art movement is a potential confounder is ignored, thereby leading to biased representations. Thus, confounding bias has to be computed with respect to Impressionism and Post Impressionism separately. Computing bias across a combination of images from these two art movements is an illustration of "Simpson's paradox" (Pearl and Mackenzie 2018). Simpson's paradox is a trend that characterizes inconsistencies across different groups of the data. Specifically, an effect that appears in different subgroups of the data but is reversed when the groups are combined illustrates Simpson's paradox. In other words, Simpson's paradox refers to the effect that occurs when the association between two variables differs from the association between the same two variables after controlling for other variables. The correct result (i.e., whether to consider aggregated data or data corresponding to subgroups) depends on the causal graph characterizing the problem and the data. The authors in (Pearl and Mackenzie 2018) illustrate Simpson's paradox with several real examples. For example, the authors cite a study of thyroid disease published in 1995 where smokers had a higher survival rate than non-smokers. However, the non-smokers had a higher survival rate in six out of the seven age groups considered, and the difference was minimal in the seventh. Age was a confounder of smoking and survival, and hence it had to be adjusted for. The correct result corresponds to the one obtained after stratifying the data by age, and thus it was concluded that smoking had a negative impact on survival. Let us revisit our example. According to the DAG in Figure 2, art movement is a confounder that needs to be adjusted for.
If, however, we overlook this confounder by combining images across art movements, then the confounder is not adjusted for according to eq. (1). In fact, when we combined images across art movements, the resulting bias score was lower; however, this result is incorrect, thus illustrating the paradox. In cycleGAN (Zhu et al. 2017), the authors propose a cycle consistency loss such that the generated images, when mapped back to the original (real) images, are indistinguishable from the original images. This in turn implies that the generated images are as realistic as possible. Simpson's paradox elucidates why cycleGAN, which is trained on data combined across art movements and whose loss function intuitively appears sound, cannot capture the influence of art movements. Because the loss is being minimized across images from different art movements, it is not guaranteed to minimize the loss within each art movement. Results in Table 1 and Figure 4 illustrate this point further. Thus, in order to accurately model artists' styles, (Zhu et al. 2017) would have had to minimize the proposed loss within data stratified by art movement.

7.2 What about other genres such as portraits or genre art?

The computation of bias is based on the DAG provided. The DAG considered in the case study is not applicable to other genres such as portraits and genre art. Portraits, for example, involve many other factors in their creation. Characteristics such as gender, age, beauty, and other aesthetics play a prominent role in the way sitters are depicted. Also, factors characterizing the sitter's lineage/genealogy (e.g., race, family, cultural background, religion, etc.) can influence the rendition. The social standing of the sitter, such as their profession, political background, and power, could influence the artists in the way they depict the sitter. For example, it is possible that powerful people commanded the artists to depict them in a certain way, and the artists thereby had to exaggerate certain characteristics. Genre art depicted everyday aspects of ordinary people. These artworks encompassed a variety of socio-cultural themes such as cooking, harvesting, dancing, etc. Therefore, genre art involves many socio-cultural factors that the DAG considered in the case study does not entail. Thus, for computation of bias in other genres, appropriate DAGs have to be constructed in consultation with art historians, taking into account all relevant variables of interest.

7.3 What are some potential applications of the proposed metric?

As discussed in the previous sections, the proposed metric is useful in quantifying the confounding bias associated with generative AI methods that fail to consider art movements and other confounders in modeling artists' styles. Such an objective assessment of bias can also be useful in authenticating artworks, i.e., the computed bias scores can aid in verifying if an artwork was a genuine creation of a particular artist. This is because, if an artwork is not a real work of an artist, then the bias score associated with such a work is likely to be higher, and similarly, if the artwork is a genuine creation of the artist, then the bias score is likely to be lower. It is to be noted that we are not claiming that the bias score alone is sufficient to validate the authenticity of an artwork; instead, we believe it can be beneficial in assessing the authenticity of artworks along with other forms of evidence, including those of art historians.
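As a sketch of how such an authentication screen might look, the snippet below flags a candidate artwork whose bias score sits above a high quantile of the scores of known genuine works. The quantile cutoff and function name are our illustrative choices, not part of the proposed method, and a flag would only ever be one input alongside the other forms of evidence mentioned above.

```python
import numpy as np

def flag_for_review(candidate_score, genuine_scores, q=0.95):
    """Return True if the candidate's bias score exceeds the q-quantile of
    bias scores computed for known genuine works of the same artist."""
    threshold = float(np.quantile(genuine_scores, q))
    return candidate_score > threshold

# Example: bias scores of known genuine works (fabricated) vs. a candidate.
genuine = [0.7, 0.9, 1.1, 0.8, 1.0, 0.95]
print(flag_for_review(2.4, genuine))  # True -> route to human experts
```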
A related application of the proposed metric would be for price assessment of generative art. In other words, the computed bias scores can serve as a measure of the selling price/value of a generative artwork. If the bias score is high, then the value of the generative artwork is likely to be low, and vice versa. Finally, the proposed metric can also aid in the study of art history. The computed bias scores can provide an independent and complementary source of evidence to art historians to verify their assumptions or opinions regarding various topics of interest, such as understanding characteristics of art movements or studying the influence of specific art materials on artists. By considering different DAGs that encode the assumptions of different art historians, it is also possible to compare perspectives and understand if there are sources of bias that are common across the assumptions of different art historians. Such common bias sources will then serve as strong evidence for art historians in accepting or rejecting a viewpoint.

Conclusions

Art movements influenced the style of artists in many subtle ways. Overlooking the contribution of art movements in modeling artists' styles leads to confounding bias. In reality, there are several unobserved factors, such as emotions and beliefs, that characterize an artist's style. Thus, it may not be possible to fully model an artist's style computationally. In attempting to do so, generative art might be stereotyping artists based on a narrow metric such as color or brush strokes, and not doing justice to the artist's abilities. Furthermore, generated artworks might accentuate automation bias by conveying inaccurate information about socio-political-cultural aspects due to their inability to capture the nuances depicted in art movements. In this work, leveraging directed acyclic graphs, we proposed a simple metric to quantify the confounding bias caused by not modeling the influence of art movements in learning artists' styles. We analyzed this confounding bias across genres for artists considered in the cycleGAN model, and provided an intuitive interpretation of the bias scores. We hope our work triggers discussions related to the feasibility of modeling artists' styles, and more broadly raises issues related to the accountability of AI-simulated artists' styles.
GWAS of serum ALT and AST reveals an association of SLC30A10 Thr95Ile with hypermanganesemia symptoms

Understanding mechanisms of hepatocellular damage may lead to new treatments for liver disease, and genome-wide association studies (GWAS) of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) serum activities have proven useful for investigating liver biology. Here we report 100 loci associating with both enzymes, using GWAS across 411,048 subjects in the UK Biobank. The rare missense variant SLC30A10 Thr95Ile (rs188273166) associates with the largest elevation of both enzymes, and this association replicates in the DiscovEHR study. SLC30A10 excretes manganese from the liver to the bile duct, and rare homozygous loss of function causes the syndrome hypermanganesemia with dystonia-1 (HMNDYT1), which involves cirrhosis. Consistent with hematological symptoms of hypermanganesemia, SLC30A10 Thr95Ile carriers have increased hematocrit and risk of iron deficiency anemia. Carriers also have increased risk of extrahepatic bile duct cancer. These results suggest that genetic variation in SLC30A10 adversely affects more individuals than patients with diagnosed HMNDYT1.

Liver disease remains an area of high unmet medical need, causing 3.5% of deaths worldwide, and the burden of liver disease is rising rapidly, driven mainly by increasing rates of nonalcoholic fatty liver disease (NAFLD) 1,2 . Better characterizing the genetic determinants of liver disease may lead to new therapies 3 . In addition, liver injury is a common side effect of drugs, and is a frequent reason that drugs fail to progress through the development pipeline; understanding the molecular mechanisms of liver injury can aid in preclinical drug evaluation to anticipate and avoid off-target effects 4,5 . Combined GWAS of ALT and AST have previously revealed genetic associations providing potential therapeutic targets for liver disease such as PNPLA3 25 and HSD17B13 26 . To further study the genetics of hepatocellular damage, here we perform GWAS on serum activities of ALT and AST in 411,048 subjects, meta-analyzed across four ancestry groups in the UK Biobank (UKBB). We find 100 loci associated with both enzymes and show that the strongest effect is a rare missense variant in SLC30A10.

Results

Discovery of ALT- and AST-associated loci by GWAS. We performed a GWAS of ALT and AST in four sub-populations in the UKBB (demographic properties, Supplementary Table 1; sample sizes, number of variants tested, and λGC values, Supplementary Table 2; genome-wide significant associations, Supplementary Data 1; Manhattan and QQ plots for each enzyme and sub-population, Supplementary Figs. 1 and 2). After meta-analyzing across sub-populations to obtain a single set of genome-wide p-values for each enzyme (Manhattan plots, Fig. 1),
we found 244 and 277 independent loci associating at p < 5 × 10−8 with ALT and AST, respectively, defined by lead single nucleotide polymorphisms (SNPs) or indels separated by at least 500 kilobases and pairwise linkage disequilibrium (LD) r2 less than 0.2.

Fig. 1 Manhattan plots showing trans-ancestry GWAS results for ALT and AST. Red dots indicate lead variants for shared signals between the two GWAS; for clarity, the shared signals are marked only once, on the plot for the GWAS in which the more significant association is detected. Cis-pQTLs (at GPT and GOT2) are labeled in blue. Loci with shared signals are labeled (for clarity, only when p < 10−25 and only on the GWAS for which the association is most significant). Loci previously reported to associate with both ALT and AST are named in bold. SLC30A10, the main topic of this report, is labeled in red on both plots. Source data for this figure are available from NHGRI-EBI GWAS Catalog accessions GCST90013663 and GCST90013664.

Enzyme activities were strongly associated with coding variants in the genes encoding the enzymes, representing strong protein quantitative trait loci in cis (cis-pQTLs). For example, rs147998249, a missense variant Val452Leu in GPT (glutamic-pyruvic transaminase) encoding ALT, strongly associates with ALT (p < 10−300), and rs11076256, a missense variant Gly188Ser in GOT2 (glutamic-oxaloacetic transaminase 2) encoding the mitochondrial isoform of AST, strongly associates with AST (p = 6.3 × 10−62). While these strong cis-pQTL effects validated our ability to detect direct genetic influences on ALT and AST, the aim of this study was to detect genetic determinants of liver health that have downstream effects on both ALT and AST due to hepatocellular damage; therefore we focused the remainder of our analyses only on the variants associated with serum activity of both enzymes (labeled with black text on Fig. 1). Focusing only on loci with both ALT and AST GWAS signals (lead variants from either GWAS were identical or shared proxies with r2 ≥ 0.8), we found a total of 100 independent loci associated with both enzymes (Fig. 2, Supplementary Data 2). As expected, effect sizes on ALT and AST at these loci were highly correlated (r = 0.98), and at all 100 loci the direction of effect on ALT and AST was concordant. Of these 100 loci, six were coincident or in strong LD with a published ALT or AST variant in the EBI-NHGRI GWAS Catalog, and 15 were within 500 kb of a published ALT or AST variant; 33 of the loci harbored a missense or predicted protein-truncating variant; and of the remaining 67 entirely noncoding loci, 19 were coincident or in strong LD with the strongest eQTL for a gene in liver, muscle, or kidney, suggesting that effects on gene expression may drive their associations with ALT and AST. A majority (70 of the 100 loci) were shared with a distinct published association in the GWAS Catalog, suggesting pleiotropy with other traits. We observed significant heterogeneity in effects between sexes for both enzymes (Cochran's Q test p < 0.05/100 for both enzymes) for three of the lead variants: rs9663238 at HKDC1 (stronger effects in women), rs28929474 at SERPINA1 (stronger effects in men), and rs1890426 at FRK (stronger effects in men) (Supplementary Data 3).

We tested the 100 lead variants from the ALT and AST GWAS analysis for association with a broad liver disease phenotype (ICD10 codes K70-77; 14,143 cases and 416,066 controls), meta-analyzing liver disease association results across all four sub-populations (Supplementary Data 2). Of the 100 lead variants, 28 variants associate with liver disease with p < 0.05. As expected, variants associated with an increase in ALT and AST tend to be associated with a proportional increase in liver disease risk (across all lead variants, Pearson correlation of betas r = 0.82 for both enzymes; Fig. 3).
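The comparison behind Fig. 3 is a simple correlation of per-variant effect sizes. The snippet below shows the shape of that computation on synthetic arrays; the fabricated data exist only to make it runnable and do not reproduce the actual betas.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical per-variant effect sizes for 100 lead variants:
# beta on log10(ALT) and log-odds of liver disease (both fabricated).
alt_betas = rng.normal(0.0, 0.02, 100)
liver_logors = 0.8 * alt_betas + rng.normal(0.0, 0.01, 100)

r, p = pearsonr(alt_betas, liver_logors)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")
```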
Liver disease is found more frequently in our sample of carriers of SLC30A10 Thr95Ile (rs188273166), proportional with the observation of increased ALT and AST (OR = 1.47); however, owing to the small sample size of carriers and liver disease cases, we are underpowered to confidently determine whether this high point estimate is due to chance (although the 95% CI from the PLINK analysis used to estimate effects does not include OR = 1, the p-value from the SAIGE analysis, which more accurately controls for Type I error in highly unbalanced case-control studies, is 0.07; see "Methods"). Because SLC30A10 Thr95Ile had the strongest effect on ALT and AST of all of our lead variants and has not been reported as being associated with any phenotypes in the literature, we centered the following analyses on better understanding its function.

Validation of SLC30A10 Thr95Ile genotype. Because rare variants are especially prone to errors in array genotyping 36 , we sought to validate the array genotype calls for SLC30A10 Thr95Ile in a subset of 301,473 individuals who had also been exome sequenced (Supplementary Table 3). The only individual homozygous for the minor (alternate) allele by array was confirmed by exome sequencing; no further homozygotes were identified. Of 702 individuals called as heterozygous for Thr95Ile by array data who had exome data available, 699 (99.6%) were confirmed heterozygous by exome sequencing, while three were called homozygous reference by exome sequencing, suggesting an error either in the array typing or exome sequencing for these three individuals. Overall, these results demonstrate high concordance between array and exome sequencing, implying highly reliable genotyping.

Fig. 2 Unless otherwise noted with an asterisk, loci are named by the closest protein-coding gene. "Coding" indicates that one of the variants linked to the lead variant is predicted to have a moderate or high impact on a protein-coding gene. "Liver eQTL" and "Muscle or kidney eQTL" indicate that one of the variants linked to the lead variant is the strongest eQTL for a gene in those tissues by GTEx. Loci are further categorized as follows: asterisk, indicating the locus is named for a gene other than the closest to the lead variant, due to coding or eQTL annotation; gray, indicating the lead variant is over 100 kilobases from a protein-coding gene; bold, indicating a known GWAS Catalog ALT or AST locus; bold and underlined, indicating a known GWAS Catalog ALT and AST locus. Source data for this figure are in Supplementary Data 2.

Magnitude of ALT and AST elevation in SLC30A10 Thr95Ile carriers. After establishing the association between SLC30A10 Thr95Ile and ALT and AST, we sought to further explore the relationship between genotype and enzyme activity levels to understand clinical relevance. Carriers of Thr95Ile had a mean ALT of 27.37 U/L vs 23.54 U/L for noncarriers, and a mean AST of 28.85 U/L vs 26.22 U/L for noncarriers. Counting individuals with both ALT and AST elevated above 40 U/L, a commonly used value for the upper limit of normal (ULN) 7 , 5.6% of carriers vs 3.6% of noncarriers had both enzymes elevated at the time of their UK Biobank sample collection, an increased relative risk of 58% (Fisher's p = 8.1 × 10−4) (Supplementary Data 4).

Drinking behavior in SLC30A10 Thr95Ile carriers. SLC30A10 Thr95Ile has not been reported as associating with drinking behavior by any of the available studies in the GWAS Catalog.
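Both the enzyme-elevation comparison above and the drinking-status comparison below reduce to Fisher's exact test on a 2×2 table of carriers versus noncarriers. A minimal sketch follows; the counts are rough illustrations consistent with the reported percentages, not the actual UK Biobank tallies.

```python
from scipy.stats import fisher_exact

# Illustrative counts only: ~5.6% of ~1,000 carriers vs ~3.6% of ~420,000
# noncarriers with both ALT and AST above the 40 U/L ULN.
carriers_elevated, carriers_total = 56, 1000
noncarriers_elevated, noncarriers_total = 15_120, 420_000

table = [
    [carriers_elevated, noncarriers_elevated],
    [carriers_total - carriers_elevated,
     noncarriers_total - noncarriers_elevated],
]
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2g}")
```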
We used the drinking questionnaire taken by UK Biobank participants to assess drinking status at enrollment of SLC30A10 Thr95Ile carriers (current, former, or never drinkers). While the rate of current drinkers is higher among carriers vs. non-carriers in the entire biobank (93.7% vs. 91.7%, Fisher's p = 0.019) (Supplementary Data 4), this association is highly confounded by genetic ancestry and country of birth (Supplementary Table 4). Limiting to the White British sub-population and individuals born in England, the rate of current drinking is not detectably different among carriers (94.0% vs 93.4%, Fisher's p = 0.57), while the rate of individuals with elevation of both ALT and AST over the ULN remains significant (5.5% vs. 3.5%, Fisher's p = 4.6 × 10−3) (Supplementary Data 4).

Replication of ALT and AST associations. The initial association of rs188273166 with ALT and AST was identified in the White British population. To replicate this association in independent cohorts, we first identified groups besides the White British sub-population harboring the variant in the UKBB. The only two other populations with a substantial number of SLC30A10 Thr95Ile carriers were individuals identifying as Other White and as White Irish (Supplementary Table 4); we tested for association with ALT and AST in these sub-populations. We then tested the association in two independent cohorts from the DiscovEHR collaboration between the Regeneron Genetics Center and the Geisinger Health System 37 . Meta-analyzing the association results across these four groups (N = 132,992 and N = 131,646, respectively) confirmed the Thr95Ile association with increased ALT and AST (p = 6.5 × 10−5 and p = 5.4 × 10−6, respectively) (Supplementary Fig. 3, Supplementary Table 5). We also searched repositories of available complete summary statistics for ALT and AST GWAS and found two prior studies that reported associations 21,38 . Although these studies were underpowered to detect significant associations and were not reported in units that allowed their inclusion in our replication analysis, they were consistent with increases in both enzymes (Supplementary Table 6).

Independence of SLC30A10 Thr95Ile from neighboring ALT and AST associations. Because we applied distance and LD pruning to the results of the genome-wide scan to arrive at a set of lead variants, it was unclear how many independent association signals existed at the SLC30A10 locus. Revisiting trans-ancestry association results in a window including 1 Mb of flanking sequence upstream and downstream of SLC30A10 revealed 76 variants with genome-wide significant associations with both ALT and AST (Fig. 4). These 76 variants clustered into three loci: SLC30A10 (only Thr95Ile, rs188273166); MTARC1 (mitochondrial amidoxime reducing component 1, lead variant rs2642438 encoding missense Ala165Thr, previously reported to associate with liver disease and liver enzymes 39 , and six additional variants together spanning 68 kilobases); and LYPLAL1-ZC3H11B (the intergenic region between lysophospholipase like 1 and zinc finger CCCH-type containing 11B, with array-genotyped variant rs6541227 and 67 imputed variants spanning 46 kilobases), a locus previously reported to associate with non-alcoholic fatty liver disease (NAFLD) 40 . To test for independence between these three loci, we performed ALT and AST association tests for each of the three array-typed variants while including the genotype of either one or both of the others as covariates.
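The conditional analysis just described has a simple regression skeleton. The sketch below uses ordinary least squares on a hypothetical pandas data frame of dosages and covariates; the actual analyses were run in PLINK and SAIGE (which additionally model relatedness), so this only shows the conditioning logic, with column names assumed.

```python
import numpy as np
import statsmodels.api as sm

def conditional_assoc(df, enzyme, test_snp, condition_snps, covariates):
    """Regress log10(enzyme) on the test variant's allele dosage while
    conditioning on neighboring variants and standard covariates."""
    X = sm.add_constant(df[[test_snp] + condition_snps + covariates])
    y = np.log10(df[enzyme])
    fit = sm.OLS(y, X, missing="drop").fit()
    return fit.params[test_snp], fit.pvalues[test_snp]

# e.g., testing Thr95Ile conditional on the two neighboring lead variants:
# conditional_assoc(df, "ALT", "rs188273166", ["rs2642438", "rs6541227"],
#                   ["age", "sex", "BMI"] + [f"PC{i}" for i in range(1, 13)])
```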
Associations were similar in these conditional analyses, suggesting that each of these three associations is not confounded by linkage disequilibrium with the other regional association signals (Supplementary Table 7). Therefore, the SLC30A10 Thr95Ile association is statistically independent of the associations at neighboring loci. This statistical independence of the liver enzyme associations does not preclude a long-distance regulatory interaction between the three loci; for example, rs188273166, despite encoding an amino acid change in SLC30A10, could conceivably influence transcription of MTARC1, and rs6541227, despite being nearest to LYPLAL1 and ZC3H11B, may influence transcription of SLC30A10. However, these three variants are not detected as liver eQTLs for the genes at neighboring loci in published data 41 .

Linkage of Thr95Ile to GWAS variants at SLC30A10. A GWAS of circulating toxic metals 42 discovered an association between a common intronic variant in SLC30A10 (rs1776029; MAF in White British, 19.5%) and blood manganese levels, where the reference allele (which is the minor allele) is associated with increased circulating manganese. We calculated linkage disequilibrium statistics between rs1776029 and Thr95Ile and found that the minor allele of Thr95Ile (A) was in almost perfect linkage with the minor allele of rs1776029 (A) (r2 = 0.005, D′ = 0.98); Thr95Ile (rs188273166) is 154 times more frequent among carriers of at least one copy of the minor allele of common variant rs1776029 (95% CI = 84-325; Fisher's p < 2.2 × 10−16). These results suggest that the previously reported association of rs1776029 with circulating manganese may be partially or completely explained by linkage with Thr95Ile (Supplementary Table 8); however, genotypes of Thr95Ile in the manganese GWAS or manganese measurements in the UK Biobank would be needed in order to perform conditional analysis or directly measure association of Thr95Ile with serum manganese. We then systematically tested nearby variants reported in the GWAS Catalog for any phenotype for linkage to Thr95Ile, measured by high |D′|. Combining GWAS Catalog information and |D′| calculations, we find nearly perfect linkage (|D′| > 0.90) of rs188273166-A (rare missense Thr95Ile) with rs1776029-A (intronic), rs2275707-C (3′ UTR), and rs884127-G (intronic), all within the gene body of SLC30A10 (Supplementary Data 5). In addition to increased blood Mn 42 , these three common alleles have been associated with decreased magnesium/calcium ratio in urine 43 , decreased mean corpuscular hemoglobin (MCH) [44][45][46] , increased red blood cell distribution width [44][45][46] , and increased heel bone mineral density (BMD) [46][47][48][49] . A recent study, not yet in the GWAS Catalog, reported an association between another common intronic variant in SLC30A10 (rs759359281; MAF in White British, 5.6%) and liver MRI-derived iron-corrected T1 measures (cT1) 50 . However, the reported cT1-increasing allele of rs759359281, which is the minor allele, is in complete linkage (D′ = 1) with the major allele of Thr95Ile (rs188273166); in other words, the cT1-increasing allele and the Thr95Ile liver disease risk allele occur on different haplotypes, suggesting that the mechanism of this reported cT1 association is independent of Thr95Ile.
Phenome-wide associations of SLC30A10 Thr95Ile. To explore other phenotypes associated with SLC30A10 Thr95Ile, we tested for association with 135 quantitative traits and 4398 ICD10 diagnosis codes within the White British population (Supplementary Data 6 and 7). We were particularly interested in testing associations with phenotypes related to HMNDYT1, the known syndrome caused by homozygous loss of function of SLC30A10. Besides ALT and AST elevation, rs188273166 was associated with other indicators of hepatobiliary damage such as decreased HDL cholesterol and apolipoprotein A (ApoA) 51 , decreased albumin, and increased gamma glutamyltransferase (GGT). Other phenome-wide significant quantitative trait associations were increases in hemoglobin concentration and hematocrit (Table 1); increased hematocrit, or polycythemia, is a known symptom of HMNDYT1. Liver iron-corrected T1 by MRI (cT1), although only measured in seven carriers, was above the population median value in all seven (Supplementary Fig. 4). The only phenome-wide significant associations of diagnoses with SLC30A10 Thr95Ile were C24.0, extrahepatic bile duct carcinoma, and C22.1, intrahepatic bile duct carcinoma. There are eight Thr95Ile carriers with each type of cancer, and six carriers with both types of cancer, for a total of ten carriers (1% of the 1,001 total carriers in the White British population) with bile duct carcinoma. Strikingly, over 5% of individuals with extrahepatic bile duct carcinoma (8 in 148) carry Thr95Ile (Table 2, Supplementary Data 7). Among hematological manifestations of HMNDYT1, iron deficiency anemia was enriched among carriers (OR = 1.5; 95% CI, 1.1-1.9; p = 4.0 × 10−3). Searching for neurological manifestations similar to HMNDYT1, we find no association with Parkinson's disease or dystonia but note that, as with liver diseases, we are powered to exclude only strong effects because of the small case numbers for these traits (Supplementary Data 7). The top non-cancer hepatobiliary associations with SLC30A10 Thr95Ile were with K83.0, cholangitis; K83.1, obstruction of bile duct; and K81.0, acute cholecystitis. Because biliary diseases are risk factors for cholangiocarcinoma and co-occur with it in our data, we tested whether SLC30A10 Thr95Ile was still associated with these biliary diseases, and with the other selected quantitative traits and diagnoses, after removing the 148 individuals with extrahepatic bile duct cancer (Supplementary Table 9, Supplementary Table 10). All of the associations remained significant except for intrahepatic bile duct carcinoma. To test whether the association with extrahepatic bile duct cancer was driven by a nearby association, and to assess other risk variants and the potential for false positives given the extreme case-control imbalance, we performed a GWAS of the phenotype; remarkably, SLC30A10 Thr95Ile was the strongest association genome-wide, with minimal evidence for systematic inflation of p-values (λGC = 1.05; Fig. 5, Supplementary Fig. 5).

Replication of hematocrit association. A key result from the phenome-wide scan that was not related to hepatocellular damage was the association between SLC30A10 Thr95Ile and increased hematocrit. Polycythemia is a symptom of HMNDYT1 mechanistically related to manganese overload. We meta-analyzed hematocrit values from the Other White and White Irish populations, the DiscovEHR data, and a non-UKBB population (INTERVAL Study) from a published meta-analysis of hematocrit values 45 , and found that the association replicated (N = 179,689, p = 0.013; Supplementary Table 11).
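The replication meta-analyses here (and the GWAS meta-analyses in the Methods) use METAL's default scheme: per-cohort p-values are converted to signed z-scores and combined with weights proportional to the square root of each cohort's sample size. A self-contained sketch of that scheme:

```python
import numpy as np
from scipy.stats import norm

def sample_size_meta(p_values, directions, sample_sizes):
    """METAL-style meta-analysis: two-sided p-values -> signed z-scores,
    weighted by sqrt(N); returns the combined two-sided p-value."""
    p = np.asarray(p_values, dtype=float)
    z = norm.isf(p / 2) * np.sign(directions)        # signed per-cohort z
    w = np.sqrt(np.asarray(sample_sizes, dtype=float))
    z_meta = np.sum(w * z) / np.sqrt(np.sum(w ** 2))
    return 2 * norm.sf(abs(z_meta))

# Toy usage with fabricated cohort results, all in the same direction:
print(sample_size_meta([0.04, 0.10, 0.02], [+1, +1, +1],
                       [50_000, 60_000, 70_000]))
```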
SLC30A10 expression in liver cell subtypes. Across organs, SLC30A10 is transcribed at the highest level in liver according to data from the GTEx Project 52 . The association of Thr95Ile with bile duct cancer led us to query expression of SLC30A10 in specific cell types within the liver using data from three single-cell RNA sequencing studies of liver [53][54][55] . These data show very low expression of SLC30A10 message in individual cells, but all studies detect expression in both hepatocytes and cholangiocytes (Supplementary Table 12, Supplementary Fig. 7). Immunohistochemistry has established that SLC30A10 protein is present in hepatocytes and bile duct epithelial cells and localizes to the cholangiocyte plasma membrane, facing the lumen of the bile duct 32 .

Bioinformatic characterization of SLC30A10 Thr95Ile. To understand potential functional mechanisms of the Thr95Ile variant, we examined bioinformatic annotations of SLC30A10 Thr95Ile from a variety of databases. The UNIPROT database shows that Thr95Ile occurs in the third of six transmembrane domains and shares a domain with a variant known to cause HMNDYT1 (Supplementary Fig. 8). Several in silico algorithms predict that Thr95Ile is a damaging mutation. The CADD (Combined Annotation Dependent Depletion) algorithm, which combines a broad range of functional annotations, gives the variant a score of 23.9, placing it in the top 1% of deleteriousness scores for genome-wide potential variants. The algorithm SIFT, which uses sequence homology and physical properties of amino acids, predicts Thr95Ile as deleterious. The algorithm PolyPhen-2 gives Thr95Ile a HumDiv score of 0.996 (probably damaging), based on patterns of sequence divergence from close mammalian homologs, and a HumVar score of 0.900 (possibly damaging), based on similarity to known Mendelian mutations. Cross-species protein sequence alignment in PolyPhen-2 shows only threonine or serine at position 95 across animals. These properties suggest that the Thr95Ile substitution ought to affect the function of the SLC30A10 protein.

Characterization of SLC30A10 variants in vitro. To test the protein localization of SLC30A10 harboring Thr95Ile as well as other variants, we created constructs with Thr95Ile (rs188273166) and the HMNDYT1-causing variants Leu89Pro (rs281860284) and del105-107 (rs281860285) and transfected these constructs into HeLa cells. Immunofluorescence staining revealed membrane localization for wild-type (WT) SLC30A10, which was abolished by the two HMNDYT1 variants, consistent with previous reports which showed that the HMNDYT1 variant proteins are mislocalized in the endoplasmic reticulum (ER) 56 . In contrast, Thr95Ile showed membrane localization similar to WT, suggesting that Thr95Ile does not cause a deficit in protein trafficking to the membrane (Fig. 6).

Discussion

Expanded genetic landscape of risk for hepatocellular damage. Our trans-ancestry GWAS of ALT and AST reveals a broad genetic landscape of loci that modulate risk of hepatocellular damage or other diseases that cause increases in circulating ALT and AST, bringing the number of loci known to associate with serum activities of both enzymes from 10 (currently in the GWAS Catalog) to 100. Two loci had been previously reported in majority-European ancestry GWAS of ALT and AST as associating with both enzymes: PNPLA3 14 […]; we replicate four of these in our trans-ancestry GWAS (PANX1, ZNF827, EFHD1, and AKNA).
We are limited by the lack of diversity in the UK Biobank and expect that studies in more diverse populations will result in the discovery of new loci and alleles. Among the loci are many that had been previously identified as risk loci for liver disease, but had never been explicitly associated through GWAS of both ALT and AST, such as SERPINA1 (associated with alpha-1 antitrypsin deficiency 35 ), HFE (homeostatic iron regulator, associated with hemochromatosis 61 ), and TM6SF2 (transmembrane 6 superfamily member 2, associated with NAFLD 62-64 ). The MTARC1 lead variant was discovered in a GWAS of cirrhosis, and then found to associate with lower ALT and AST 39 . Others are known to associate with risk of gallstones (ABCG8, ANPEP, and HNF1B) 65,66 or increased GGT (EPHA2, CDH6, DLG5, CD276, DYNLRB2, and NEDD4L) 14,18 . Consistent with the fact that ALT and AST elevation can be caused by kidney or muscle damage, we detect an association with ANO5 (anoctamin 5), which has been implicated in several autosomal recessive muscular dystrophy syndromes 67,68 , and several loci associated with expression of genes in muscle or kidney but not liver (SHMT1, BRD3, DLG5, EYA1, IFT80, IL32, EIF2AK4, and SLC2A4). We expect only a subset of the loci from this screen to be directly causally implicated in hepatocellular damage; many may predispose to a condition where liver damage is secondary or where enzyme elevation originates in kidney or muscle, an important limitation of this approach. The significant sex heterogeneity we observe at the ALT- and AST-associated loci HKDC1, SERPINA1, and FRK warrants further investigation and is consistent with a prior study that found significant genotype-by-sex interactions in the genetic architecture of circulating liver enzymes 6 . HKDC1 has been associated with glucose metabolism and, notably, this effect is specific to pregnancy 69,70 . SLC30A10 had not been identified in prior GWAS of circulating liver enzymes. Because SLC30A10 Thr95Ile is so rare, it is not surprising that these scans were underpowered to detect its large effect, due either to insufficient study size, lack of inclusion on the genotyping arrays used, or lack of power to impute its genotype. For example, in the UK Household Longitudinal Study (N = 5458 and N = 5321, respectively), effects were reported that were consistent with a strong effect size but were not statistically significant (Supplementary Table 6).

Properties of SLC30A10 Thr95Ile. The variant with the strongest predicted effect on ALT and AST, SLC30A10 Thr95Ile (rs188273166), is a rare variant carried by 1117 of the 487,327 array-genotyped participants in the UK Biobank. While Thr95Ile is found in some individuals of non-European ancestry, it is at much higher frequency in European-ancestry populations, with carrier frequency in our sample by UK country of birth ranging from a minimum of 1 in 479 people born in Wales to a maximum of 1 in 276 people born in Scotland (Supplementary Table 4). The increased frequency we see in European-ancestry populations is not merely due to those populations' overrepresentation in the UK Biobank, but is also consistent with global allele frequency data cataloged in dbSNP 71 . The Thr95Ile variant occurs in the third of six transmembrane domains of the SLC30A10 protein 72 , the same domain affected by a previously reported loss-of-function variant causing HMNDYT1 (hypermanganesemia with dystonia 1), Leu89Pro (rs281860284) 56 (Fig. 6).
In vitro, Leu89Pro abolishes trafficking of SLC30A10 to the membrane 56 , and another study pointed to a functional role of polar or charged residues in the transmembrane domains of SLC30A10 for manganese transport function 73 . Bioinformatic analysis suggests that Thr95Ile should impact protein function. Our site-directed mutagenesis experiment on SLC30A10 shows that Thr95Ile, unlike reported HMNDYT1-causing variants, results in a protein that is properly trafficked to the cell membrane. Further biochemical studies will be required to investigate whether the Thr95Ile variant of SLC30A10 has reduced manganese efflux activity, or otherwise affects SLC30A10 stability, translation, or transcription.

Comparison of SLC30A10 Thr95Ile phenotypes to HMNDYT1 phenotypes. SLC30A10 (also known as ZNT10, and initially identified through sequence homology to zinc transporters 27 ) encodes a cation diffusion facilitator expressed in hepatocytes, the bile duct epithelium, enterocytes, and neurons 32 that is essential for excretion of manganese from the liver into the bile and intestine 28,32 . Homozygous loss of function of SLC30A10 was recently identified as the cause of the rare disease HMNDYT1, which in addition to hypermanganesemia and dystonia is characterized by liver cirrhosis, polycythemia, and Mn deposition in the brain [29][30][31][32][33][34]56 . Other hallmarks include iron depletion and hyperbilirubinemia. Mendelian disorders of SLC30A10 and the other hepatic Mn transporter genes SLC39A8 (solute carrier family 39 member 8, causing congenital disorder of glycosylation type IIn) 74 and SLC39A14 (solute carrier family 39 member 14, implicated in hypermanganesemia with dystonia 2) 75 , along with experiments in transgenic mice 76,77 , have confirmed the critical role of each of these genes in maintaining whole-body manganese homeostasis 78 . Notably, while all three of the genes have Mendelian syndromes with neurological manifestations, only SLC30A10 deficiency (HMNDYT1) is known to be associated with liver disease 78 . We detect two key aspects of HMNDYT1 (increased circulating liver enzymes and increased hematocrit) exceeding phenome-wide significance in heterozygous carriers of SLC30A10 Thr95Ile. Among other hepatic phenotypes that have been reported in HMNDYT1 cases, we also detect an association with anemia, but no evidence of hyperbilirubinemia. The neurological aspect of HMNDYT1, parkinsonism and dystonia, is not detectably enriched among Thr95Ile carriers; however, we have limited power and cannot exclude an enrichment. It is therefore intriguing to consider that carrier status of Thr95Ile may represent a very mild manifestation of HMNDYT1. The quantitative trait with the largest effect associated with SLC30A10 Thr95Ile is liver MRI cT1 (+1.2 SD; 95% CI, +0.5 to +2.0; p = 0.0032). Liver MRI cT1 has recently been explored as a non-invasive diagnostic of steatohepatitis and fibrosis 79,80 . However, MRI T1 signal has also been used to detect manganese deposition in the brain, and it is unclear to what extent hepatic manganese overload could confound the association of liver cT1 with liver damage 81 .

Comparison of Thr95Ile phenotypes to SLC30A10 common variant phenotypes. Apart from rare variants in SLC30A10 causing HMNDYT1, Thr95Ile can also be compared to common variants in SLC30A10 that have been associated with phenotypes by GWAS (Fig. 7).
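The linkage claims in this section rest on the two standard pairwise LD statistics, D′ and r². The sketch below computes both from a haplotype frequency and two allele frequencies; it also shows why a rare allele sitting almost exclusively on one common haplotype gives |D′| near 1 while r² stays tiny, exactly the pattern reported above for Thr95Ile and rs1776029 (r² = 0.005, D′ = 0.98). The input frequencies in the example are illustrative, not the measured UK Biobank values.

```python
def ld_stats(p_ab, p_a, p_b):
    """D' and r^2 for alleles A and B from the haplotype frequency p_ab
    and the marginal allele frequencies p_a and p_b."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max if d_max > 0 else 0.0
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

# A rare allele (0.1%) found almost only on a 19.5%-frequency haplotype:
print(ld_stats(p_ab=0.00098, p_a=0.001, p_b=0.195))  # D' ~ 0.98, r2 ~ 0.004
```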
We find that the minor allele of Thr95Ile is in almost complete linkage with a common intronic variant associated with increased blood manganese. Other GWAS variants in almost perfect linkage with Thr95Ile associate with decreased MCH, increased RBC distribution width, decreased magnesium/calcium ratio, and increased heel bone mineral density (BMD). Decreased MCH could reflect the anemia experienced by HMNDYT1 patients, caused by the closely linked homeostatic regulation of manganese and iron 28 . Increased BMD may reflect the protective role of manganese in bone maintenance 82,83 . Looking for the subset of these phenotypes available in our scan of Thr95Ile, we do find a nominally significant increase in BMD but no detectable increase in MCH or erythrocyte distribution width. By contrast, we find that a common intronic variant in SLC30A10 recently reported to associate with liver MRI cT1 50 is in complete linkage with the major allele of Thr95Ile, suggesting an independent genetic mechanism but also providing independent evidence of the role of SLC30A10 variants in liver health and/or hepatic manganese content. The linked GWAS variants may be interpreted through two mechanistic hypotheses: first, the associations may all be causally driven by Thr95Ile carriers in the studies, which the GWAS variants tag; alternatively, the associations may be driven by effects of the common variants themselves, which are noncoding but may influence SLC30A10 (or another gene in cis) by modulating expression or post-transcriptional regulation; or some combination of both. To distinguish between these, measurements of Mn would need to be available to perform conditional analyses. If the GWAS variants have an effect independent of Thr95Ile, SLC30A10 still seems likely (although not certain) to be the causal gene at the locus, due to the similarity in phenotypes to HMNDYT1 and Thr95Ile. A putative regulatory mechanism could act through transcriptional or post-transcriptional regulatory elements, as the haplotype includes a variant (rs2275707) overlapping both the 3′ UTR of SLC30A10 and regions of H3K4me1 histone modifications (characteristic of enhancers) active only in brain and liver 84 .

Clinical relevance: manganese homeostasis in health and disease. Manganese (Mn) is a trace element required in the diet for normal development and function, serving as a cofactor and regulator for many enzymes. However, it can be toxic in high doses; because Mn(II) and Mn(III) can mimic other cations such as iron, it can interfere with systemic iron homeostasis and disrupt other biochemical processes 85,86 ; at the cellular level, it is cytotoxic and poisons the mitochondria by interfering with the electron transport chain enzymes that rely on Fe-S clusters 87 . The hallmark of occupational exposure through inhalation is neurotoxicity manifesting as parkinsonism and dystonia (manganism, or Mn intoxication) 85,86 . Neurotoxicity is an aspect of the Mendelian syndromes caused by loss of function of all three of the hepatic manganese transporters; interestingly, GWAS has also identified a common missense variant in SLC39A8 as a risk factor for schizophrenia and many other diseases 88,89 ; altered function of glycosyltransferases due to manganese overload in the neurons is a proposed mechanism for the neurological manifestations of this variant 90 .
Because manganese is excreted through the liver into the bile, increased circulating manganese secondary to liver damage may be a contributing factor to the neurological manifestations of chronic acquired hepatocerebral degeneration (CAHD) [91][92][93] . However, liver toxicity is not a hallmark of environmental or occupational exposure. Importantly, of the Mendelian syndromes of genes encoding manganese transporters, only SLC30A10 (causing HMNDYT1) involves hepatic symptoms 78,94 . Hepatotoxicity in HMNDYT1 is thought to be due to cytotoxic manganese overload within hepatocytes; polycythemia is thought to be caused by upregulation of erythropoietin by manganese; and iron anemia by systemic dysregulation of iron homeostasis by excess manganese 94,95 . Our results suggest that polymorphism in SLC30A10 is a risk factor for manganese-induced hepatocellular damage, polycythemia, and iron anemia in a much broader population beyond the rare recessive syndrome HMNDYT1. The association of SLC30A10 Thr95Ile with extrahepatic bile duct cancer was unexpected, as this disease has not been described in conjunction with HMNDYT1. Bile duct cancer (cholangiocarcinoma) is a rare disease (age-adjusted incidence of 1-3 per 100,000 per year); cirrhosis, viral hepatitis, primary sclerosing cholangitis, and parasitic fluke infection have been identified as risk factors 96,97 . It is unclear whether low levels of manganese in the bile, or high levels of manganese in the hepatocytes and bile duct epithelial cholangiocytes, could be directly carcinogenic; manganese-dependent superoxide dismutase (MnSOD, or SOD2) is a tumor suppressor 98 . […] predisposes to cancer through similar mechanisms as other hepatobiliary risk factors. We do detect an association with cholangitis, but the effect of this association is weaker than the association with cholangiocarcinoma. To our knowledge, SLC30A10 Thr95Ile would be the strongest genetic cholangiocarcinoma risk factor identified to date, being carried by 5% of the extrahepatic bile duct cancer cases in the White British subset of the biobank. Because both SLC30A10 Thr95Ile and extrahepatic bile duct cancer are exceedingly rare, validation of this association in either another very large biobank or in a cohort of cholangiocarcinoma patients will be necessary.

Fig. 7 Relationship of Mendelian and common-variant GWAS phenotypes at the SLC30A10 locus. Phenotypes are summarized for HMNDYT1-causing variants, SLC30A10 Thr95Ile, and GWAS variants. (Recovered legend fragments: increased liver MRI cT1; |D′| > 0.9, minor allele with minor allele; |D′| = 1, minor allele with major allele.)

Clinical relevance: genome interpretation. Currently, SLC30A10 Thr95Ile (rs188273166) is listed as a variant of uncertain significance in the ClinVar database 99 . While the appropriate clinical management of carriers of SLC30A10 Thr95Ile is unclear and would require further studies to determine whether monitoring of hepatobiliary function is warranted, evidence from HMNDYT1 patients has demonstrated that chelation therapy combined with iron supplementation is effective at reversing the symptoms of SLC30A10 insufficiency 100 . Further studies will be needed to define whether other damaging missense variants or protein-truncating variants in SLC30A10, including the variants known to cause HMNDYT1, also predispose to liver disease in their heterozygous state.
Because we only observe one homozygous carrier of SLC30A10 Thr95Ile in our data, further study will also be needed to understand the inheritance model of this association; we cannot determine whether risk in homozygotes is stronger than risk in heterozygotes, unlike cases of HMNDYT1, where identified cases have all involved homozygous loss-of-function mutations. More broadly, the case of SLC30A10 fits a pattern of recent discoveries showing that recessive Mendelian disease symptoms can manifest in heterozygous carriers of deleterious variants, blurring the distinction between recessive and dominant disease genes and bridging the gap between common and rare disease genetics 101,102 . These discoveries are possible only by combining massive, biobank-scale genotype and phenotype datasets such as the UK Biobank.

Methods

Sub-population definition and PC calculation. Sub-populations for analysis were obtained through a combination of self-reported ethnicity and genetic principal components. First, the White British population was defined using the categorization performed previously by the UK Biobank (Field 22006 value "Caucasian"); briefly, this analysis selected the individuals who identify as White British (Field 21000), performed a series of subject-level QC steps (removing subjects with high heterozygosity or a missing rate over 5%, removing subjects with genetic and self-reported sex discrepancies and putative sex chromosome aneuploidies, and removing subjects with second- or first-degree relatives and an excess of third-degree relatives), performed Bayesian outlier detection using the R package aberrant 103 to remove ancestry outliers using principal components (PCs) 1 + 2, 3 + 4, and 5 + 6 (calculated from the global UK Biobank PCs stored in Field 22009), and selected a subset of variants in preparation for PCA by limiting to directly genotyped variants (info = 1) with missingness across individuals <2% and MAF > 1%, excluding regions of known long-range LD, and pruning to independent markers with pairwise LD < 0.1. Based on this procedure used by the UK Biobank to define the "White British" subset, we defined three additional populations, using other self-reported ancestry groups as starting points (Field 21000 values "Asian or Asian British", "Black or Black British", and "Chinese"). Principal components were estimated in PLINK using the unrelated subjects in each subgroup. We then projected all subjects onto the PCs. For the majority of downstream analyses (calculation of per-variant allele frequency and missingness thresholds, calculation of LD, and association analyses performed in PLINK), only the unrelated subset of people in each sub-population was used. The exception was association analyses performed in SAIGE 104 , a generalized mixed model method that allows inclusion of related individuals; for SAIGE, related individuals were retained in the sub-populations. For validation in an independent sub-population of the UK Biobank, two other self-reported ethnicity groups with a sufficient number of SLC30A10 Thr95Ile carriers were assembled, who were not included in "White British" (Field 21000 values "White" subgroup "Irish", and "White" subgroup "Any other white background" or no reported subgroup).

Array genotype data for association analysis. Data were obtained from the UK Biobank through application 26041. Genotypes were obtained through array typing and imputation as described previously.
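The estimate-on-unrelated, project-everyone PC strategy described above has a compact expression in code. The real pipeline used PLINK; the sketch below mirrors the logic with scikit-learn on an assumed (subjects × variants) dosage matrix.

```python
import numpy as np
from sklearn.decomposition import PCA

def project_pcs(dosages, unrelated_mask, n_pcs=12):
    """Fit PCs on unrelated subjects only, then project all subjects.
    `dosages` is a (subjects x variants) genotype dosage matrix."""
    g = np.asarray(dosages, dtype=float)
    # Center and scale each variant using the unrelated subset only.
    mu = g[unrelated_mask].mean(axis=0)
    sd = g[unrelated_mask].std(axis=0) + 1e-9
    g_std = (g - mu) / sd
    pca = PCA(n_components=n_pcs).fit(g_std[unrelated_mask])
    return pca.transform(g_std)  # PCs for every subject, related included
```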
For genome-wide association analysis, variants were filtered so that the imputation quality score (INFO) was greater than 0.8. Genotype missingness, Hardy-Weinberg equilibrium (HWE), and minor allele frequency (MAF) were then each calculated across the unrelated subset of individuals in each of the four sub-populations. For each sub-population, a set of variants for GWAS was then defined by filtering to missingness across individuals less than 2%, HWE p-value > 10−12, and MAF > 0.1%.

Phenotype data. For genome-wide analysis, blood biochemistry values were obtained for ALT (Field 30620) and AST (Field 30650) and log10 transformed, consistent with previous genetic studies 14,105 , resulting in an approximately normal distribution. For phenome-wide analysis, ICD10 codes were obtained from inpatient hospital diagnoses (Field 41270), causes of death (Fields 40001 and 40002), the cancer registry (Field 40006), and general practitioner (GP) clinical event records (Field 42040). A selection of 135 quantitative traits was obtained from other fields (Supplementary Data 6), encompassing anthropomorphic measurements, blood and urine biochemistry, smoking, exercise, sleep behavior, and liver MRI; all were inverse rank normalized using the RNOmni R package 106 . All quantitative traits and cancer registry diagnoses were downloaded from the UK Biobank Data Showcase on March 17, 2020. The GP clinical events, inpatient diagnoses, and death registry were available in more detail or in more recent updates than was available through the Data Showcase and were downloaded as separate tables; data for GP clinical records were downloaded on September 30, 2019, data from the death registry were downloaded on June 12, 2020, and data from hospital diagnoses were downloaded on July 15, 2020.

Genome-wide association studies of ALT and AST. Because of the high level of relatedness among UK Biobank participants 107 , to maximize power by retaining related individuals we used the SAIGE software package 104 to perform generalized mixed model analysis for GWAS. A genetic relatedness matrix (GRM) was calculated for each sub-population with a set of 100,000 LD-pruned variants selected from across the allele frequency spectrum. SAIGE was run on the filtered imputed variant set in each sub-population using the following covariates: age at recruitment, sex, BMI, and the first 12 principal components of genetic ancestry (learned within each sub-population as described above). Manhattan plots and Q-Q plots were created using the qqman R package 108 . The association results for each enzyme were meta-analyzed across the four populations using the METAL software package 109 using the default approach (p-values and direction of effect weighted according to sample size). To report p-value results, the default approach was used. To report effect sizes and standard errors, because the authors of the SAIGE method advise that parameter estimation may be poor especially for rare variants 110 , the PLINK software package v1.90 111 was run on lead variants in the unrelated subsets of each sub-population, and then the classical approach (using effect size estimates and standard errors) was used in METAL to meta-analyze the resulting betas and standard errors. All PLINK and SAIGE association tests were performed using the REVEAL/SciDB translational analytics platform from Paradigm4.
Identifying independent, linked association signals between the two GWAS. Meta-analysis results for each enzyme were LD clumped using the PLINK software package v1.90 111 with an r2 threshold of 0.2 and a distance limit of 10 megabases, to group the results into approximately independent signals. LD calculations were made using genotypes of the White British sub-population because of their predominance in the overall sample. Lead variants (the variants with the most significant p-values) from these "r2 > 0.2 LD blocks" were then searched for proxies using an r2 threshold of 0.8 and a distance limit of 250 kilobases, resulting in "r2 > 0.8 LD blocks" defining potentially causal variants at each locus. The "r2 > 0.8 LD blocks" for the ALT results were then compared to the "r2 > 0.8 LD blocks" for the AST results, and any cases where these blocks shared at least one variant between the two GWAS were treated as potentially colocalized association signals between the two GWAS. In these cases, a representative index variant was chosen to represent the results of both GWAS by choosing the index variant of the GWAS with the more significant p-value. Next, these putative colocalized association signals were distance pruned by iteratively removing neighboring index variants with less significant p-values within 500 kilobases of each index variant (the minimum p-value between the two GWAS was used for the distance pruning procedure). The Manhattan plot of METAL results with labeled colocalization signals was created using the CMplot R package 112 .

Annotation of associated loci and variants. Index variants and their corresponding strongly linked (r2 > 0.8) variants were annotated using the following resources: distance to the closest protein-coding genes as defined by ENSEMBL v98 using the BEDTools package 113 ; impact on protein-coding genes using the ENSEMBL Variant Effect Predictor (VEP) software package 114 with the LOFTEE plugin to additionally predict protein-truncating variants 115 ; eQTLs (only the most significant eQTL per gene-tissue combination) from GTEx v8 (obtained from the GTEx Portal) for liver, kidney cortex, and skeletal muscle 116 ; a published meta-analysis of four liver eQTL studies 41 ; the eQTLGen meta-analysis of blood eQTL studies 117 ; and GWAS results from the NHGRI-EBI GWAS Catalog (r2020-01-27) 118 , filtered to associations with p < 5 × 10−8.

Association of ALT- and AST-associated loci with liver disease. Index variants were tested for association with any liver disease using ICD10 codes K70-K77 in inpatient hospital diagnoses, causes of death, and GP clinical event records, using SAIGE, with the same covariates used for the liver enzymes (age, sex, and genetic PCs 1-12) plus a covariate for each of the following: whether the subject was recruited in Scotland, whether the subject was recruited in Wales, and whether the patient had GP clinical event records available. Association results were meta-analyzed across the four sub-populations using METAL using the default method (combining p-values) to obtain the final p-value. To obtain effect sizes and standard errors, the same procedure was performed but using PLINK (on the unrelated subset of each population) and using the classical method in METAL (combining effects and standard errors).
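To make the clumping-and-pruning procedure above concrete, here is a simplified greedy sketch. The real analysis was done with PLINK's clumping commands; `ld_r2` below is a placeholder for pairwise LD lookups from the White British reference genotypes, and the thresholds match those stated above for lead-variant selection.

```python
def greedy_prune(hits, ld_r2, window=500_000, r2_max=0.2):
    """Keep the most significant variant, discard any variant within
    `window` bp on the same chromosome or with LD r2 > r2_max to a kept
    variant, and repeat. `hits` is a list of (vid, chrom, pos, pvalue)."""
    kept = []

    def independent(v, k):
        if v[1] != k[1]:                     # different chromosomes
            return True
        return abs(v[2] - k[2]) > window and ld_r2(v[0], k[0]) <= r2_max

    for v in sorted(hits, key=lambda h: h[3]):   # ascending p-value
        if all(independent(v, k) for k in kept):
            kept.append(v)
    return kept
```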
Sequencing-based validation of rs188273166 array genotyping. Whole exome sequencing was available for 301,473 of the 487,327 array-genotyped samples. DNA was extracted from whole blood and was prepared and sequenced by the Regeneron Genetics Center (RGC). A complete protocol has been described elsewhere 119 . Briefly, the xGen exome capture was used and reads were sequenced using the Illumina NovaSeq 6000 platform. Reads were aligned to the GRCh38 reference genome using BWA-mem 120 . Duplicate reads were identified and excluded using the Picard MarkDuplicates tool (Broad Institute) 121 . Variant calling of SNVs and indels was done using the WeCall variant caller (Genomics Plc.) 122 to produce a GVCF file for each subject (GVCF files are files in the VCF Variant Call Format that are indexed for fast processing). GVCF files were combined using the GLnexus joint calling tool 123 . Post-variant-calling filtering was applied as described previously 119 .

Replication of SLC30A10 Thr95Ile associations. ALT and AST association tests were repeated as described for the genome-wide scans, using SAIGE and PLINK, in the "Other White" and "White Irish" populations, for the SLC30A10 Thr95Ile (rs188273166) variant. In the two DiscovEHR Geisinger Health Service (GHS) cohorts, association tests were performed using BOLT 124 with covariates for age, age squared, age × sex, sex, and the first ten principal components of genetic ancestry; ALT, AST, and hematocrit values were taken as the median of the lab values available. Results were meta-analyzed across the four populations. A forest plot was created using the forestplot package in R 125 .

Testing linkage of SLC30A10 Thr95Ile to common GWAS variants. To test linkage of SLC30A10 Thr95Ile (rs188273166) to common GWAS variants, the GWAS Catalog was searched for all results where "Mapped Gene" was assigned to SLC30A10; because of the very relevant phenotype, the blood Mn-associated variant rs1776029, an association that is not in the GWAS Catalog, was also included in the analysis, as well as the cT1-associated variant rs759359281. LD calculations were performed in PLINK, using the White British unrelated sub-population, between rs188273166 and the GWAS variants with the options --r2 dprime-signed in-phase with-freqs --ld-window 1000000 --ld-window-r2 0. For rs1776029, an additional Fisher's exact test was performed to determine the confidence interval of the enrichment of rs188273166 on the rs1776029 haplotype. The linked alleles from PLINK were then used in conjunction with the effect allele from the reported papers to determine the direction of effect. The GWAS Atlas website 46 (the PheWAS tool) was used to determine the direction of effect of the linked alleles from the original paper; in cases where the original paper from the GWAS Catalog did not report a direction of effect, other papers for the same phenotype and variant from GWAS Atlas were used to determine the direction of effect and cited accordingly (Supplementary Data 5). Reference epigenome information for the GWAS variants was obtained by searching for rs1776029 in HaploReg v4.1 126 .

Phenome-wide association study of SLC30A10 Thr95Ile. A phenome-wide association study of SLC30A10 Thr95Ile (rs188273166) was performed by running SAIGE and PLINK against a set of ICD10 diagnoses and quantitative traits, obtained as described above, and using the covariates described above for the test of association with liver disease.
ICD10 diagnoses were filtered to include only those at a three-character (category), four-character (category plus one additional numeral), or "block" level that were frequent enough to test in both subpopulations and without significant collinearity with the sex, GP availability, or country of recruitment covariates: at least 100 diagnoses overall, and at least one diagnosis in each of the following subgroups, to avoid collinearity with covariates while running SAIGE: the with- and without-GP-data subgroups, men, women, and each of the three recruitment countries. This resulted in 4397 ICD10 codes to test, serving as the multiple hypothesis burden. Bioinformatic analysis of SLC30A10 Thr95Ile. To visualize Thr95Ile on the protein sequence of SLC30A10, UNIPROT entry Q6XR72 (ZN10_HUMAN) was accessed 72 . In UNIPROT, natural variants causing HMNDYT1 32,34,56 and mutagenesis results 56,73,127 were collated from the literature and highlighted. CADD score v1.5 128 was downloaded from the authors' website. SIFT score was obtained from the authors' website using the "dbSNP rsIDs (SIFT4G predictions)" tool 129 . PolyPhen score and multiple species alignment were obtained from the authors' website using the PolyPhen-2 tool 130 . Ethical compliance. Ethics oversight for the UK Biobank is provided by an Ethics and Governance Council which obtained informed consent from all participants for health-related research. All research described was performed within the framework of Application 26041. Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Complete summary statistics from the genome-wide association studies of ALT, AST, and extrahepatic bile duct cancer have been submitted to the NHGRI-EBI GWAS catalog [https://www.ebi.ac.uk/gwas/] with accession numbers GCST90013663, GCST90013664, and GCST90013662, respectively. Individual-level genetic and phenotypic data from the UK Biobank are available to qualified researchers upon application [http://ukbiobank.ac.uk]. Individual-level genetic and phenotypic data from DiscovEHR are not available to outside researchers due to privacy restrictions. Source data are provided with this paper.
2021-01-27T20:22:01.083Z
2021-07-27T00:00:00.000
{ "year": 2021, "sha1": "ee9eae4adefc5d9e2abbcfedfa14c292d2576d5a", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-021-24563-1.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9e5c5902773536ea21f49c5a481f962996f4981d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
237936211
pes2o/s2orc
v3-fos-license
Transition from Seeds to Seedlings: Hormonal and Epigenetic Aspects Transition from seed to seedling is one of the critical developmental steps, dramatically affecting plant growth and viability. Before plants enter the vegetative phase of their ontogenesis, massive rearrangements of signaling pathways and switching of gene expression programs are required. This results in suppression of the genes controlling seed maturation and activation of those involved in regulation of vegetative growth. At the level of hormonal regulation, these events are controlled by the balance of abscisic acid and gibberellins, although ethylene, auxins, brassinosteroids, cytokinins, and jasmonates are also involved. The key players include the members of the LAFL network—the transcription factors LEAFY COTYLEDON1 and 2 (LEC 1 and 2), ABSCISIC ACID INSENSITIVE3 (ABI3), and FUSCA3 (FUS3), as well as DELAY OF GERMINATION1 (DOG1). They are the negative regulators of seed germination and need to be suppressed before seedling development can be initiated. This repressive signal is mediated by chromatin remodeling complexes—POLYCOMB REPRESSIVE COMPLEX 1 and 2 (PRC1 and PRC2), as well as PICKLE (PKL) and PICKLE-RELATED2 (PKR2) proteins. Finally, epigenetic methylation of cytosine residues in DNA, histone post-translational modifications, and post-transcriptional downregulation of seed maturation genes with miRNA are discussed. Here, we summarize recent updates in the study of hormonal and epigenetic switches involved in regulation of the transition from seed germination to the post-germination stage. Introduction Seed development is a critical step in the ontogenesis of higher plants. Obviously, it is crucially important in terms of plant survival and successful reproduction. Thereby, mature seeds are typically highly dehydrated and can be considered as units of dispersal and survival during the periods of unfavorable environmental conditions [1][2][3]. On the other hand, for successful propagation, germination of seeds needs to be associated with the periods of optimal water and temperature regime. To adjust growth of seedlings to environmental conditions, spermatophyte plants evolved an ability to control the time of germination [4]. This ability relies on the phenomenon of dormancy, i.e., a period of temporal inhibition of plant growth, which impacts on the prevention of germination under unfavorable conditions [5,6]. Thus, release from seed dormancy is controlled by such environmental factors as light, temperature, and duration of dry storage, whereas hormonal regulation, genetic, and epigenetic factors impact essentially on this phenomenon [7][8][9][10][11][12]. Phases I and II are critical for maintaining seed viability. Thus, enhanced respiration and water uptake result in dramatic upregulation of ROS production. To avoid accumulation of molecular damage, repair of DNA and proteins is enhanced during phases I and II [31]. Phase III is characterized by progressing division and elongation of radicle cells, degradation of endosperm, and radicle protrusion [2,30].
Thereby, loosening of the cell wall in the hypocotyl region (which underlies elongation of radicle cells) is mediated by ROS. The main source of the oxidative burst accompanying radicle elongation is the superoxide anion radical generated by NADPH oxidases of the plasma membrane [48,49]. The transition from phase II to phase III can be treated as the transition from seed to seedling. Figure 1. Time course of seed germination. The overall germination times vary from hours to weeks, depending on plant species and environmental conditions. Phase I is characterized by rapid water uptake, accompanied with enhanced hydration of macromolecules, activation of respiration, repair of membranes, mitochondria, and DNA. Phase II is characterized by mobilization of reserves, translation of stored mRNAs, transcription and translation of newly synthesized mRNAs, and activation of protein biosynthesis. The radicle protrusion is considered as the beginning of Phase III. Epigenetic changes (methylation of DNA, as well as trimethylation, ubiquitination, and acetylation of histones), occurring in this phase, result in silencing of the genes related to seed maturation and triggering expression of the genes responsible for vegetative growth of seedlings. "Window of DT (desiccation tolerance)" can be defined as the part of the overall germination period, when seeds can be dried back to their original water contents without a decrease of their viability. Transition from germination to the post-germination stage corresponds to loss of DT. With respect to desiccation tolerance (DT), the seeds of vascular plants can be classified into orthodox and recalcitrant types [34][35][36][37]. During the last stages of maturation, the seeds of orthodox type acquire DT, which allows them to sustain unfavorable germination conditions in the metabolically inactive dry state [2,11,38,39]. The inhibited metabolic processes are resumed during germination afterwards. This ultimately results in the loss of desiccation tolerance at the stage of radicle protrusion [40,41] (Figure 1), which can be considered as the beginning of phase III-post-germination. Importantly, the part of the overall germination process before this time point is usually referred to as the "window of DT", i.e., the period when germinating seeds can be dried to their original water contents without loss of seed viability and any deterioration of their quality [42,43]. Already in the last decade, Buitink et al. showed that Medicago truncatula L. seeds with a radicle length of up to 1 mm can sustain a short-term dehydration, whereas the DT of the seeds was completely dependent on the degree of applied osmotic stress when the radicle length achieved 2.7 mm [40]. This fact illustrates the effect of osmopriming, which can extend a species-specific DT window [40,44]. It is known that DT of germinated Arabidopsis thaliana (L.) Heynh. seeds was, to a large extent, dependent on the presence of ABA in the germination medium [43,45,46]. Similar to PEG, ABA can extend the period of DT. The corresponding time window, when seeds are responsive to the introduction of ABA in medium, was described by Lopez-Molina et al. [47].
Ethylene interferes with ABA- and GA-related signaling pathways, promoting seed germination in numerous species [8,83,91]. On the one hand, ethylene acts as an ABA antagonist by suppressing the regulation of ABA metabolism and signaling [6,8,92]. In some species (e.g., Brassicaceae), ethylene prevents the inhibitory effects of ABA by facilitating endosperm rupture of germinating seeds [6,60,92,93]. On the other hand, ethylene impacts the GA biosynthesis via modulation of GA3ox and GA20ox gene expression and GA signaling via DELLA proteins [94]. Cytokinins can also promote germination at the signaling level, acting as antagonists of ABA [86,95,96]. Specifically, ABA triggers downregulation of Arabidopsis Response Regulators (ARRs), a family of genes induced by cytokinins during seed germination and cotyledon greening [86]. Among the type-A ARR family members, expression of ARR6, ARR7, and ARR15 was reported to be upregulated in ABA-deficient mutants. In turn, ARR6, ARR7, and ARR15 attenuated the ABA-mediated inhibition of germination, proving to be negative regulators of this process. Application of exogenous ABA suppressed the type-A ARRs in Arabidopsis seeds and seedlings. ABSCISIC ACID-INSENSITIVE4 (ABI4) plays the key role in ABA and cytokinin signaling by inhibiting transcription of type-A ARRs [86]. The ABI4 is a crucial regulator of the ABA signaling pathway during seed development, providing functional interactions between ABA and other hormones [86,[97][98][99][100]. ABI4 modulates ABA and GA metabolism by targeting CYP707A1, CYP707A2, and GA2ox7. It is involved in the suppression of ethylene biosynthesis by targeting ACS4 and ACS8 [98,100]. A high level of ABA in dormant Arabidopsis seeds enhances the transcriptional activity of ABI4. In the presence of high ABA content, this factor blocks induction of ARR6/7/15, resulting in the suppression of cytokinin responses. After completion of germination, cytokinins stimulate accumulation of ARR4/5/6 [86]. Brassinosteroids (BRs) are ABA antagonists and, like GAs, can promote seed germination by enhancing the growth potential of the embryo [7,58,[101][102][103]. In Arabidopsis, the BR biosynthetic mutant det2-1 and the BR response mutant bri1-1 were shown to be more sensitive to inhibition by ABA than the wild type [101]. This observation indicates that the pathways of ABA and BR signaling might work as antagonistic regulators of seed germination. Recently, Sun et al. revealed that BR signaling represses the accumulation of PIN-LIKES (PILS) proteins at the endoplasmic reticulum, thereby increasing nuclear abundance and signaling of auxin [104]. Auxin is maintained at a high level from fertilization to seed maturation by PIN carriers [105]. Auxin transport from endosperm is regulated by AGAMOUS-LIKE62 (AGL62), which is specifically expressed in the endosperm [106]. Auxins have recently emerged as essential players which modulate (in concert with ABA) different cellular processes involved in seed development, dormancy, and longevity [107][108][109]. Thereby, ABI3 appeared to be critical for cross-talk between auxin and ABA signaling [107,109]. In developing Arabidopsis embryos, the longevity-associated genes with promoters enriched in IAA response elements and ABI3 were induced by auxin [109], but the effect of exogenous auxin treatment was abolished in abi3-1 mutants.
Recently, Hussain et al. showed that the auxin signaling repressor Aux/IAA8 accumulates and promotes seed germination. The IAA8 loss-of-function mutant iaa8-1 exhibited delayed seed germination. IAA8 was shown to suppress transcription of ABI3, a negative regulator of seed germination. Accumulation of IAA8 promotes seed germination by inhibiting AUXIN RESPONSE FACTOR (ARF) activity, which is accompanied by downregulating ABI3 gene expression [89]. Treatment of wheat (Triticum aestivum L.) with methyl jasmonate inhibited expression of the ABA biosynthesis-related gene, Ta9-cis-EPOXYCAROTENOID DIOXYGENASE1 (TaNCED1), which resulted in a decrease of seed ABA contents [110]. However, in Arabidopsis, jasmonate precursor (12-oxo-phytodienoic acid) inhibited seed germination, indicating that the role of jasmonates in dormancy varies between the species [111]. Xu et al. found that cold-induced germination of dormant embryos correlated with a drop of ABA contents and an increase of jasmonic acid (JA) levels, along with expressional enhancement of JA biosynthesis [90]. It was shown that the cold-induced increase in JA contents was required for the release of seed dormancy [90]. The increase of JA levels was, at least partly, mediated by the repression of two key ABA biosynthesis genes, 9-cis-EPOXYCAROTENOID DIOXYGENASE 1 and 2, in bread wheat Triticum aestivum L. (TaNCED1 and TaNCED2). These genes encode 9-cis-epoxycarotenoid dioxygenase, catalyzing oxidative cleavage of cis-epoxycarotenoids, a critical step in ABA biosynthesis in higher plants [112]. The Effects of Light and Temperature Light is a critical regulator of seed germination, especially for light-loving species with small seeds [113,114]. For most of the higher plants, seed germination is triggered by red and repressed by far red parts of the spectrum [54,115]. While far red light increases the tissue levels of ABA and suppresses GA biosynthesis, red light has the opposite effect [116,117]. Light is the key environmental signal, and phytochromes redundantly affect seed germination, with phytochrome B (PhyB) playing the major role in this process [114,116]. During the early stages of seed imbibition, PhyB mediates the R/FR photoreversible response to trigger germination. Phytochrome A (PhyA) is directly involved in irreversible photoinduction of seed germination via irradiation with low-fluence light in a broad spectral band from ultraviolet-A to the far red region of the spectrum [118,119]. The key element of the seed light-dependent signal transduction pathways is phytochrome-interacting factor 1 (PIF1), also known as PIF3-LIKE 5 (PIL5), which is known to strongly suppress seed germination in the dark via modulating the expression of GA- and ABA-related genes [114,120]. Indeed, PIF1 inhibits germination by suppressing GA biosynthesis and GA-related signaling, with a simultaneous activation of the ABA biosynthesis and signaling [116]. This inhibition is controlled by PhyB. Activation of this protein by red light leads to the degradation of PIF1. On the other hand, inactivation of PhyB by far red light results in stabilization of PIF1. Thus, light acts as a switch, affecting the balance between ABA and GA metabolism via a phytochrome-mediated mechanism, based on the PIF1 degradation and stabilization. Temperature is another critical environmental cue affecting seed dormancy and germination timing [9,121].
Thus, application of low temperatures during seed imbibition typically stimulates seed germination (so-called stratification), whereas high temperatures inhibit it [1,117]. Cold stratification was shown to interrupt seed dormancy and to enhance germination by modulation of the balance between ABA and GAs. Recently, Yamauchi et al. found that a subset of GA biosynthesis genes was upregulated in response to low-temperature treatment [122]. This resulted in higher transcript abundances of GA-inducible genes in imbibed Arabidopsis seeds and increased tissue levels of bioactive GAs. On the other hand, ABA metabolism and signaling also underlie the release of seed dormancy after cold stratification. During cold imbibition, ABA seed contents decrease and the expression of ABA-responsive genes changes [32]. Epigenetic Mechanisms of the Seed-to-Seedlings Transition All the major epigenetic mechanisms known in eukaryotes to date have been confirmed in plants [26,123,124]. Thus, DNA methylation, posttranslational modification of histones, and interaction with non-coding RNAs provide a multifactorial and robust basis for epigenetic regulation of plant development and adaptation [125][126][127][128]. Thereby, stable allelic epigenetic inheritance efficiently complements the hereditary role of DNA, representing an additional molecular mechanism underlying practically unlimited diversity [129]. DNA (de)Methylation Generally, DNA methylation represents a covalent modification of the cytosine base, which is typically associated with the dinucleotide consensus CG. However, in plants, in contrast to other organisms, DNA methylation can also occur at cytosines localized to CHG and CHH consensus sequences, where H is A, C, or T (Bird, 1986; Finnegan et al., 1998). In coding sequences, methylation most commonly occurs at CG sites, while non-CG methylation (CHG and CHH) is much less common [130]. In angiosperms, CG methylation accounts for more than 50% of the total cytosine methylation [131]. However, independently from the specific consensus sequence, the overall methylation patterns of genomic DNA vary essentially among plant species. Thereby, the heterogeneity of CHG and CHH methylation is higher in comparison to the modification patterns characteristic for the CG sites [131][132][133]. In plants, the occupation of potential methylation sites decreases upstream of the transcription start site (TSS) and around the transcription termination site (TTS), while the degree of methylation inversely correlates with gene expression levels in promoter regions [130,134]. On the other hand, moderately expressed genes are methylated in gene bodies [134,135]. Thus, in the Arabidopsis genome, 73% of DNA methylation sites are located in exons, whereas only 8% of them can be found in putative promoter regions, 3% in introns, and 16% in extragenic regions [51]. It is generally agreed that DNA methylation is an epigenetic modification underlying the silencing of transposable elements (TE) and directly involved in gene expression regulation. It plays a critical role in plant growth and development [130]. Accordingly, both seed development and germination are accompanied by dynamic reconfiguration of DNA methylation [136,137]. DNA methylation is represented by two forms: maintenance methylation and de novo methylation [138]. Maintenance methylation involves recognition of the methylation marks on the parental DNA strand and transfer of new methylation to the daughter strands after DNA replication.
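Because the three methylation contexts are defined purely by local sequence, they can be enumerated computationally. The short sketch below is an illustration not tied to any tool cited here: it classifies every forward-strand cytosine of a toy sequence as CG, CHG, or CHH, using the definition above (H = A, C, or T, i.e., any base except G).

```python
# Classify forward-strand cytosines into CG / CHG / CHH contexts,
# following the consensus definitions given in the text.

def cytosine_contexts(seq):
    seq = seq.upper()
    contexts = []
    for i, base in enumerate(seq):
        if base != "C":
            continue
        nxt = seq[i + 1:i + 3]  # up to two downstream bases
        if nxt[:1] == "G":
            contexts.append((i, "CG"))
        elif len(nxt) == 2 and nxt[1] == "G":
            contexts.append((i, "CHG"))
        elif len(nxt) == 2:
            contexts.append((i, "CHH"))
        # cytosines too close to the 3' end are left unclassified
    return contexts

print(cytosine_contexts("ACGTCAGCTTCAT"))
# -> [(1, 'CG'), (4, 'CHG'), (7, 'CHH'), (10, 'CHH')]
```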
During de novo methylation, transfer of methyl groups to cytosines of DNA occurs independently from their previous methylation by DRM2, with the participation of the RNA-directed DNA methylation (RdDM) pathway [139][140][141]. It is de novo methylation that is involved in the rearrangement of methylation patterns during differentiation processes. Several distinct DNA methyltransferases are involved in generation (de novo) and subsequent maintenance of DNA methylation at three sequence contexts. Recently, it was shown that precocious germination of Solanum lycopersicum L. seeds could be promoted by silencing of MET1 [151]. This was associated with a decrease in the contents of mRNAs encoding 9-cis-epoxycarotenoid dioxygenase, a key enzyme of ABA biosynthesis. As repression of TEs is required for stability of the plant genome, they are typically located in transcriptionally inactive regions [152]. Thus, potential methylation sites in long and gene-distal TEs are the typical targets for both CMT2 and CMT3 [153]. Maintenance of CHH methylation at short, gene-proximal TEs as well as at the edges of long TEs requires the mechanism of RNA-dependent DNA methylation (RdDM), which involves DNA-dependent RNA polymerases IV and V (Pol IV and Pol V) [152,154]. This pathway involves two main steps: an upstream small interference RNA (siRNA) biogenesis phase and a downstream methylation targeting phase. Pol IV produces short precursor RNAs that are processed into 24 nt small interfering RNAs (siRNAs) by Dicer-like endonuclease 3 (DCL3) and further loaded into ARGONAUTE 4 (AGO4), forming AGO4-siRNA complexes [155,156]. Pol V produces non-coding RNA transcripts that are proposed to act as a scaffold at sites of DNA methylation [157]. These scaffold transcripts are bound by Pol IV-dependent 24 nt siRNAs that recruit DRM1 and DRM2 to maintain DNA methylation [154,158]. DNA methylation patterns have a clearly dynamic character and are continuously changing during plant seed development [159][160][161][162]. Thus, the occupancy of the CHH methylation sites remarkably increases from the early to the late stages of seed development and gradually decreases later on during germination. Thereby, both RdDM and CMT2 are responsible for CHH methylation in developing seeds, although both these enzymes lose their activity upon germination [136,163]. In soybean (Glycine max (L.) Merr.), DNA methylation in the CHH context increased from 6% at the early stage of seed development to 11% at the late stage [161]. Thus, the dynamics of soybean and Arabidopsis seed methylomes were clearly similar, i.e., the levels of CHH methylation gradually increased during seed development from fertilization to onset of dormancy in all parts of the seeds [164]. In contrast to the CHH sites, the patterns of CG and CHG methylation remain, to a large extent, unchanged over the whole period of seed development (Figure 2A) [136,[161][162][163][164][165].
Later, during the seed maturation step, the levels of CHH methylation in TEs decrease drastically [163]. For example, it was shown for the Arabidopsis quadruple mutant ddcc (drm1 drm2 cmt2 cmt3), which was deficient in all methyltransferases required for all types of non-CG methylation [164]. The authors found that more than 100 TEs were transcriptionally derepressed in ddcc seeds. This might indicate reinforcement of TE silencing in developing seeds upon the upregulation of cytosine methylation in the CHH consensus sequence [164]. Thus, the proposed mechanism might underlie the constantly silent state of TEs, which, therefore, do not inactivate genes essential for seed development. Multiple genes involved in seed development and germination are located in hypomethylated regions of the genome, known as DNA methylation valleys (DMVs, [166]). The DNA methylation status of these regions remains unchanged during the whole period of seed development, from fertilization to germination. Indeed, several genes encoding the enzymes of hormone biosynthesis (e.g., gibberellic acid oxidase GmGA20Ox2, GmGA3Ox1, AtGm20Ox2, and AtGA3Ox1), storage proteins (e.g., GmGlycinin1, AtCruciferin1), and some transcriptional regulators, are located within hypomethylated regions of the soybean and Arabidopsis genomes [166]. DMVs constitute an important part of the soybean seed genome, which does not vary significantly in the context of methylation status during seed development and early germination [166]. Moreover, genome regions hypomethylated during the plant lifecycle are enriched in genes encoding TFs, as well as in the genes critically impacting on seed formation (LEC1, ABI3, and FUS3) [10,166]. Seed germination is accompanied by silencing of the genes involved in seed development and activation of those controlling vegetative growth, mostly associated with cell division and cell wall organization. These genes are typically methylated throughout seed development and are later demethylated during germination [136,163].
Hypermethylation of genes in germinating seeds is reprogrammed mainly by passive CHH demethylation. Demethylation of plant DNA can be either passive or active. Passive demethylation occurs when the new chain in the replicated DNA molecule is not involved in maintaining methylation. In this case, only the old (maternal) chain appears to be methylated [26]. In contrast, active demethylation relies on the activity of demethylases represented in Arabidopsis by four enzymes: DEMETER (DME), REPRESSOR OR SILENCING 1 (ROS1), DEMETER-LIKE 2 (DML2), and DEMETER-LIKE 3 (DML3) [167,168]. Modification of Histones Different types of histone post-translational modifications have been described to date in the context of epigenetic regulation of seed development and germination. Thus, alterations in chromatin structure leading to gene expression changes can be underlined, at least, by acetylation, methylation, phosphorylation, ubiquitination, and SUMOylation of histones [24][25][26]. These modifications play an important role in control of seed maturation, dormancy, and germination [10,62]. Thereby, the patterns of histone modifications (so-called histone code) serve as the marks for attachment of other proteins involved in remodeling of chromatin. Methylation and acetylation of lysine residues in histone H3 directly affect the expression of associated genes. Thus, the cycles of histone acetylation and deacetylation are important elements in the regulation of genome activity [169]. Acetylation of lysine side chains affects the overall positive charge of histones and charge distribution on their surface [170,171]. It ultimately affects the interaction of histones with negatively charged phosphate groups of DNA and results in de-condensation of chromatin. The resulting relaxed structure is associated with higher transcriptional activity. This relaxation can be reversed by the activity of deacetylases [172] (Figure 2B). Trimethylation at K4 and K27 of histone H3 (H3K4me3 and H3K27me3), leading to activation or suppression of gene expression, respectively, represents the most well-characterized example of site-specific post-translational modifications of histones (Figure 2B). H3K27me3 plays a critical role in the regulation of genes involved in plant developmental control [10,26]. In plants, H3K27me3 is found in transcriptionally inactive regions of promoters and in transcribed regions of individual genes, whereas H3K4me3 is an antagonistic modification of histones in transcriptionally active regions [173]. Multiple DNA methylation valley (DMV) genes contain H3K4me3, H3K27me3, or bivalent marks that fluctuate during development [25,174]. The Arabidopsis H3K4me3 methyltransferases are known as Arabidopsis trithorax (ATX) and Arabidopsis trithorax-related (ATXR) [175] (Table 1). In mature embryos, ABI3 and LEC2 are associated with H3K4me3, which marks these genes as transcriptionally active. However, upon germination, these modifications are replaced by antagonistic H3K27me3, which results in transcriptional deactivation of these genes [10]. Recently, Chen et al. revealed that the H3K27me3 demethylase RELATIVE OF EARLY FLOWERING6 (REF6) directly upregulates the expression of abscisic acid 8'-hydroxylase 1 and 3 (CYP707A1 and CYP707A3), involved in ABA catabolism in seeds [176] (Table 1). Polycomb proteins form chromatin-modifying complexes that implement transcriptional silencing in higher eukaryotes.
Thus, hundreds of genes can be silenced by Polycomb proteins, including dozens of those encoding crucial developmental regulators in organisms from plants to humans [177,178]. Gene suppression typically relies on the PRC1 and PRC2. Both PRC1 and PRC2 are represented by several families of related complexes, which target specific repressed regions [10]. Thus, PRC2 is responsible for the trimethylation at K27 of histone H3 [177,179], whereas PRC1 catalyzes mono-ubiquitination of K119 in histone H2A, yielding a transcriptionally inactive chromatin conformation. It is generally agreed that PRC2 is required for initial targeting of the genomic region to be silenced, whereas PRC1 impacts on stabilizing this silencing and underlies the cellular memory of the silenced region after cell differentiation. The activity of PRC2 in plants can be inhibited by treatment with 1,5-bis-(3-bromo-4-methoxyphenyl)-penta-1,4-dien-3-one, which affects both seed germination and radicle growth [179]. Although the PRC1 complexes differ significantly between animals and plants, some of their components, such as ring-finger proteins RING1 and BMI1, are rather conserved [180]. In Arabidopsis, the PRC1 core components, AtRING1 and AtBMI1, were shown to physically interact with the PHD domain H3K4me3-binding ALFIN1-like (AL) proteins. The loss of AL6 and AL7, shown by T-DNA insertion mutant analysis, as well as the loss of AtBMI1a and AtBMI1b, retards seed germination and causes transcriptional de-repression, accompanied by a switch of histone modification state from H3K4me3 to H3K27me3 [17]. Therefore, AL PHD-PRC1 complexes associated with histone H3 act as switchers from the H3K4me3-associated active to the H3K27me3-associated repressive transcription state of the genes involved in seed development. In the regulatory pathways involved in control of seed development, maturation, and germination, transcription factors containing the B3 DNA-binding domain (DBD) play the key role [181]. The DBD is a highly conserved domain consisting of 100-120 amino acid residues, designated as B3, that was originally identified as the third basic region in the ABI3 and VP1 proteins [182]. Among them, the LAFL network of transcription factors is directly involved in the activation of seed maturation, whereas VAL (VP1/ABI3-LIKE) proteins suppress LAFL-related effects, i.e., initiation of germination and vegetative development [16,183]. As was already mentioned above, chromatin remodeling complexes PRC1 and PRC2 [17,18], as well as the PKL and PKR2 proteins [21,22], are involved in repression of the LAFL network of transcription factors during seed germination [160]. For example, during seed germination, LAFL genes are repressed by the Polycomb machinery via its histone-modifying activities: the histone H3 K27 trimethyltransferase activity of the PRC2 and the histone H2A E3 monoubiquitin ligase activity of the PRC1 [184][185][186][187] (Figure 2B). Specifically, VAL proteins recruit histone deacetylase 19 (HDA19) and PRC1 to the chromatin regions which contain genes involved in regulation of development and dormancy (Table 1). Thereby, HDA19 removes histone acetylation marks, whereas PRC1 incorporates monoubiquitinated histone H2A (H2Aub) marks to initiate repression of the target gene [188]. Thus, the VAL proteins (which are required for the introduction of H2Aub marks in histone H2A molecules) appear to cause the initial repression of the seed development- and germination-related genes.
Later on, this repression is maintained by PRC2-mediated trimethylation at H3K27 [188]. It is important to note that VAL1 was shown to interact with HDA19 and to repress LAFL gene expression during germination [189]. Two other factors playing an important role in repression of the embryonic state were identified in Arabidopsis: PICKLE (PKL), encoding the putative chromatin-remodeling factor CHD3, and gibberellins. It was found that PKL acts throughout the seedling, repressing the expression of embryonic traits, and is required for GA-dependent responses in shoots [190]. miRNA-Target Modules It is well-known that the plant genome contains both protein-coding and non-coding sequences [191,192]. Non-coding sequences are represented by regulatory non-coding RNAs: microRNAs (miRNAs), long non-coding RNAs (lncRNAs), short interfering RNAs (siRNAs), and circular RNAs (circRNAs) [193,194]. Small non-coding RNAs (sRNAs) are known as important regulators of gene expression, affecting almost all stages of the plant lifecycle [195,196]. The regulatory RNAs of this type act at the transcriptional and post-transcriptional levels and essentially impact on seed development and germination [160,197,198]. The major class of plant sRNAs is represented by miRNAs, which are involved in regulation of plant development at the post-transcriptional level (Figure 2C). The biogenesis of miRNAs is a multistep process, including transcription of miRNA genes, processing of primary miRNAs, and loading of mature miRNAs into ARGONAUTE (AGO) proteins to form the miRNA-induced silencing complex (miRISC). Plant miRNAs are involved in multiple regulatory mechanisms, including mRNA cleavage, repression of translation, and DNA methylation [193,196,199,200]. The tissue contents of individual miRNAs change dynamically throughout all stages of seed development and germination. Thereby, their abundance correlates well with the phases of seed development, maturation, and germination [165,197]. Thus, miRNAs block the expression of the genes involved in control of development and dormancy via cleavage of mRNA by AGO1 proteins [193] (Figure 2C). Comprehensive analysis of miRNAs in canola seeds showed that miR156 is involved in regulation of the transition to germination [201] (Table 2). It was also shown that DOG1 affected the levels of miR156 and miR172 and could therefore regulate seed dormancy in lettuce [202]. Thereby, suppression of the DOG1 expression enabled seed germination at high temperatures. This effect was accompanied by a decrease in miR156 and an increase in miR172 levels. The small RNAs miR159 (targeting transcripts of the myeloblastosis family genes MYB33, MYB65, MYB101) and miR160 (targeting transcripts of the gene ARF10) also impact on seed germination (Table 2). Changes in the levels of these miRNAs or in the sensitivity of the target transcripts alter the response of germinating seed to suppression of ABA biosynthesis [198,203,204]. Five further miRNAs (ath-miR8176, ath-miR851-5p, ath-miR861-3p, ath-miR158a-5p, and ath-miR779.2) showed the highest expression level during germination. Among these RNAs, miR851 might target the pentatricopeptide repeat (PPR) gene family, which is also expressionally upregulated during germination [165]. As some predicted targets of miR858a (MYB13, MYB65, and MYB93) are known as the regulators of germination, this RNA might also be involved in germination [165].
To summarize, the epigenetic signals, such as the changes in DNA methylation, demethylation, histone post-translational modifications, and sRNA-related regulatory mechanisms, are the key modulators of seed development and the transition from seeds to seedlings. To date, the role of reversible DNA methylation and histone modifications accompanying seed germination is well-studied. However, specific miRNAs and their specific target genes are still mostly uncharacterized. Conclusions Due to complex temporal patterns of specific signals, deciphering the mechanisms behind the transition from seed to seedling represents a challenging task. Nevertheless, some exciting prospects for the future research in this area can be clearly seen. Thus, highly efficient comprehensive approaches to dissect these mechanisms at the epigenetic level will reveal gaps in our understanding of the transition from dormancy to germination. In this regard, the role of epigenetic modifications in the hormonal regulation of the transition from seed to seedling is of particular interest. Obviously, detailed studies addressing the loss of desiccation tolerance during seed germination and aiming at identification of the involved genes, transcripts, proteins, and metabolites by means of comprehensive post-genomic techniques are still to be accomplished. Dynamics of chromatin, i.e., the transitions between its active and repressed states, are also poorly characterized in the context of seed germination, and the underlying molecular mechanisms remain mostly unknown. Physiological diversity of the seed-to-seedling transition is another issue to be addressed in the near future. Indeed, the mechanisms of seed dormancy and germination are mostly characterized for Arabidopsis as a model plant, whereas crop plants remain largely insufficiently addressed in this context. Thus, a comprehensive comparison of the mechanisms underlying the transition from dormancy to germination and from seed to seedlings in different species is clearly needed. The state-of-the-art methods of epigenomics research, such as bisulfite sequencing and 5-methylcytosine sequencing, might help to gain a deeper insight into the role of epigenetic variability in the formation of crop plant phenotype [206]. On the whole, the methods of genomics and post-genomic research provide a versatile instrument to probe the regulatory mechanisms behind the traits, promising in crop improvement programs. Finally, locus-specific modification of DNA methylation patterns by epigenome-editing tools might facilitate molecular breeding (epibreeding) of valuable crop plants.
2021-09-28T05:18:26.808Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "ccae37c6874efd7a0e8c7b9094f1581061fb6882", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/10/9/1884/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ccae37c6874efd7a0e8c7b9094f1581061fb6882", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
86162706
pes2o/s2orc
v3-fos-license
Reproductive patterns of Aratus pisonii (Decapoda: Grapsidae) from an estuarine area of São Paulo Northern Coast, Brazil The tree crab Aratus pisonii (H. Milne Edwards 1837) is a tropical sesarmine species widely distributed in the western Atlantic from central Florida to Brazil, and in the eastern Pacific from Nicaragua to Peru (Melo 1996), being commonly found on the upper littoral zone in mangroves. Many aspects concerning the life cycle and population biology of A. pisonii have been studied in several environments (Hartnoll 1965, Warner 1967, Díaz and Conde 1989, Conde and Díaz 1989a, b). Those studies have revealed the large plasticity of the life history features shown by this species' populations inhabiting different environments. It is well known that reproductive periodicity may vary along a latitudinal gradient as a function of environmental changes. The relative importance of a given factor as cueing the initiation of reproductive activities varies among species and habitats. The identification of this factor can clarify the causes of annual abundance fluctuations in natural populations. The frequency of occurrence and the reproductive period of the tree mangrove crab A. pisonii in a subtropical estuarine mangrove system (23°29' S, 45°10' W) near the austral limit of its Western Atlantic distribution are analyzed in this study to provide information for further comparative work. MATERIALS AND METHODS Ubatuba is located in the northern coast of Sao Paulo State, Brazil. The climate in this region is tropical humid tending to subtropical. Rainfall is more intense from September to March (spring to summer). The "Comprido" and "Escuro" streams are the freshwater resource of the area and they drain into the Fortaleza Bay. The salinity varies widely due to the mixed tide regime in that region. Monthly samples were carried out from January 1993 to June 1994 in the fringe area of mangrove near to the water, where the abundance of A. pisonii had been previously observed to be higher than in inlet areas. The animals were captured by hand from the roots and branches of the mangrove trees. Samplings consisted of 1 hour catch-effort sessions conducted by two people during low tide. All crabs were sexed and presence or absence of eggs in the females was recorded. Their carapace width (CW) was measured to the nearest 0.1 mm using a Vernier caliper. The crabs were grouped into eleven size classes from 4.1 to 26.0 mm CW. Recruitment pulses were verified in the frequency of individuals measuring from 4.1 to 8.0 mm CW. Individuals smaller than 4.1 mm CW were not considered due to the difficulty of determining their sex accurately. The reproductive period was determined based on ovigerous female frequency in relation to mature females along the year. Since the smallest ovigerous females obtained measured 15.0 mm CW, only crabs grouped in the 14.1-16.0 mm size class or larger were regarded as mature. Reproductive periodicity was statistically examined by means of binomial proportion analyses (Goodman 1965). The sex ratio in each month and size class was examined by this analysis too. The significance level adopted for all analyses was 5%.
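To illustrate the kind of sex-ratio comparison described above, the snippet below runs an exact binomial test of an observed female proportion against the expected 1 : 1 ratio. It is a stand-in for the Goodman (1965) binomial proportion analysis actually used, shown only to make the logic concrete, and the monthly counts are hypothetical rather than taken from the tables.

```python
from scipy.stats import binomtest

# Hypothetical counts for a single month (NOT values from Tables 1-2).
males, females = 30, 45
n = males + females

# Exact two-sided binomial test of the female proportion against the
# expected 1:1 sex ratio (p = 0.5), at the 5% significance level.
result = binomtest(females, n, p=0.5, alternative="two-sided")
print(f"females: {females}/{n} = {females / n:.2f}, p = {result.pvalue:.3f}")
```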
Due to the terrestrial habits of this species, the association between the proportion of ovigerous females and two factors (monthly average air temperature and rainfall) was analyzed. Temperature and rainfall data were obtained in a nearby (4 km) meteorological station of the University of Sao Paulo. RESULTS During the study period, a total of 1 078 crabs were obtained; 489 males and 589 females, of which 131 were ovigerous. The size of the individuals ranged from 4.2 to 25.9 mm CW. The overall size frequency distribution presented a unimodal and slightly asymmetric shape skewed to the left (Fig. 1). Average size of both males and females was 16.8 mm CW. Overall sex ratio was 1 : 1.2, not differing statistically from the 1 : 1 ratio. Remarkable deviations from the 1 : 1 ratio were registered in January, February and May 1993 and in January 1994, favouring females. Only in May and June 1994 did males outnumber females (Table 1). Deviations were observed in some size classes, these differences being statistically significant in some cases (Table 2). Females were more numerous in the intermediate size classes, from 14.1 to 18.0 mm CW. Immature individuals from 4.1 to 14.0 mm CW did not present differences in sexual proportions. In large crabs from 22 to 24 mm CW, males were significantly more numerous than females (Goodman's test, p < 0.05). Only a few specimens larger than 24.0 mm were obtained. DISCUSSION The overall size frequency distribution of A. pisonii is unimodal and asymmetric, which usually reflects a population in equilibrium (Hartnoll and Bryant 1990). The modal variations in the size frequency distribution along the year were mainly caused by recruitment pulses, which were more evident in June 1993, May and June 1994 (late autumn and early winter). However, juveniles could also be found in other months, but in smaller frequencies. This conspicuous recruitment was probably a result of intense reproductive activity that occurred in March 1993 and February and March 1994 (summer). It can be supposed that the main period of megalopa settlement occurred from late summer to early autumn, since the larval development of this species includes four zoeal stages and a megalopa instar in approximately one month (Warner 1968). This reproduction and recruitment pattern could be a result of the geographical location of the studied area, a transitional faunistic region with tropical humid climate tending to subtropical. In this latitude, intensive temperature changes are likely to occur more often than in the tropics. In this study the temperature was strongly correlated with the ovigerous rate, as had been observed in other studies in which temperature was the main factor influencing reproduction (Fusaro 1980, Jones and Simons 1983, Siddiqui and Ahmed 1992). An extended breeding season with a peak of higher activity during the rainy season was observed in the present study, similar to the patterns obtained by Díaz and Conde (1989) and Conde and Díaz (1989a). Spawning in the rainy season may provide a selective advantage to intertidal decapod populations since periods of higher rainfall can cause changes in the salinity of water (Pillay and Nair 1971, DeVries et al. 1983) and also promote an increase of nutrient concentration, favouring the development of planktotrophic larvae (Siddiqui and Ahmed 1992), and an increase of primary productivity and seston availability in the estuary (Rodriguez and Conde 1989).
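The association between the monthly ovigerous proportion and air temperature or rainfall, mentioned in the Methods, can be probed with a simple correlation, as sketched below. The monthly values are made up for illustration; the original analysis and data are not reproduced here.

```python
from scipy.stats import pearsonr

# Hypothetical monthly values, for illustration only.
ovigerous_rate = [0.05, 0.10, 0.35, 0.20, 0.08, 0.02]  # ovigerous / mature females
air_temp_c     = [24.1, 25.3, 26.8, 25.9, 23.2, 20.5]  # mean air temperature (°C)
rainfall_mm    = [220, 260, 310, 180, 120, 60]         # monthly rainfall (mm)

for name, series in [("temperature", air_temp_c), ("rainfall", rainfall_mm)]:
    r, p = pearsonr(ovigerous_rate, series)
    print(f"ovigerous rate vs {name}: r = {r:.2f}, p = {p:.3f}")
```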
In the population studied herein, higher flow rate and seaward water velocity during the rainy season are conditions that probably prevent zoeae from stranding and osmoregulatory stress. In this study the sex ratio does not differ from 1 : 1 in the smallest classes, but it does differ in the intermediate size classes, favouring females, and also in larger crabs, favouring males, according to the "anomalous" pattern described by Wenner (1972). The highest female proportion in intermediate size classes (from 14 mm CW) matches the size at which an important part of the population is sexually active. For A. pisonii, Warner (1967) suggests differential growth rates in which males reach large sizes faster due to the higher reproductive effort of females, which do not molt when they are incubating eggs. According to Giesel (1972), variations in the sex ratio occur primarily during the main seasonal changes of environmental conditions. Warner (1967) reported that during the reproductive period of A. pisonii, females migrate from the interior of the mangrove to the water edge, increasing the relative female frequency. Peculiar characteristics of mangrove fringes, such as humidity conditions, provide more suitable grounds for egg development and larval release (Emmerson 1994). Furthermore, female migration towards the fringes possibly enhances the chances of mate encounters. Christy and Salmon (1984) assumed that sex ratio is associated with the mating system of each species, but Willson and Pianka (1963) pointed out that this is not a cause-effect relationship. It could be suggested that the female-biased sex ratio observed herein may be related to a polygynous mating system in which males could mate with different females, increasing the reproductive output of the population (Giesel 1972). However, further investigations are necessary to confirm the relation between sex ratio and mating system in A. pisonii as in other decapods. ACKNOWLEDGMENTS To the "Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq", which provided a fellowship for the first author. To Adilson Fransozo from UNESP, Brazil, for his suggestions. To the NEBECC members who helped us during the collections.
2019-03-30T13:08:30.962Z
2015-07-23T00:00:00.000
{ "year": 2015, "sha1": "6f25ef4106130c497d813fced9e61285954a8a54", "oa_license": "CCBY", "oa_url": "https://revistas.ucr.ac.cr/index.php/rbt/article/download/20130/20330", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "9365fac97b527e51148881efb74dd6c7ceade1d7", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
18342190
pes2o/s2orc
v3-fos-license
The power of coarse graining in biomolecular simulations Computational modeling of biological systems is challenging because of the multitude of spatial and temporal scales involved. Replacing atomistic detail with lower resolution, coarse grained (CG), beads has opened the way to simulate large-scale biomolecular processes on time scales inaccessible to all-atom models. We provide an overview of some of the more popular CG models used in biomolecular applications to date, focusing on models that retain chemical specificity. A few state-of-the-art examples of protein folding, membrane protein gating and self-assembly, DNA hybridization, and modeling of carbohydrate fibers are used to illustrate the power and diversity of current CG modeling. INTRODUCTION To unveil the driving forces governing biomolecular processes, computer simulations have become a remarkable tool, in particular, the molecular dynamics (MD) simulation technique. In this approach, the time evolution of a system of interacting particles is computed mainly based on pairwise forces between the atoms. MD simulations have advanced to such a level that nowadays one can talk about 'computational microscopy' as an added tool to experimental microscopy methods. 1,2 Traditional all-atom models are inadequate to simulate the large spatiotemporal scales involved in cellular processes. Coarse grained (CG) models have gained a lot of popularity lately as by neglecting some of the atomistic degrees of freedom (DOFs) they allow for a significant increase over both the spatial and temporal limitations of all-atom models. 3,4 One of the current challenges in the field of biomolecular modeling is to develop accurate and transferable CG force fields (FFs). Essentially, two different routes are followed. In bottom-up approaches (structure-based coarse graining), effective CG interactions are extracted from reference atomistic simulations. This can be done in a systematic way by using inverse Monte Carlo (IMC), 5 iterative Boltzmann inversion (IBI), 6 force matching (FM), 7,8 or related methods (see Box 1). In top-down approaches (thermodynamic-based coarse graining), the focus is on reproducing key experimental data, especially thermodynamic properties. Typically, simple analytical interaction potentials are used and the parameters are optimized in an iterative procedure. Although the bottom-up approaches are capable of capturing more of the fine details of the interaction, the top-down approach usually provides potentials that are more easily transferable. In practice, many CG FFs rely on a combination of these two routes. For a recent review on different approaches to coarse graining, see Brini et al. 9 Provided that one respects the limitations of the CG model at hand, CG modeling has five powerful advantages, namely: (1) enabling efficient simulations of huge system sizes, with simulation volumes up to 100 × 100 × 100 nm³ containing millions of particles; (2) allowing for the simulation of slow processes requiring time scales in the micro- to millisecond range; (3) enabling high-throughput studies, systematically exploring state conditions in thousands of parallel runs; (4) showing where details matter and where not when compared with higher resolution methods, thus providing insights into the physical nature of the fundamental driving forces; and (5) providing a computationally inexpensive testing ground for exploring novel generic biophysical pathways. BOX 1: STRUCTURE-BASED COARSE GRAINING In structure-based coarse graining, CG potentials are constructed in such a way that predefined target functions, which structurally characterize the system, are reproduced in the CG simulation. The target functions are mostly obtained from higher resolution atomistic simulations, but in more knowledge-based approaches, experimentally derived structural data can also be used. In the commonly used IBI method, 6 radial distribution functions, g_ref(r), are the target reference functions.
Through the simple Boltzmann inversion

V_{PMF}(r) = -k_B T \ln g_{ref}(r),

where k_B denotes the Boltzmann constant and T the temperature, the potential of mean force (PMF) V_{PMF} between pairs of CG particles as a function of their distance r can be obtained. Unfortunately, this PMF cannot be directly used as a pair potential in a CG model because it encloses multibody contributions from all the particles in the system. Therefore, an iterative procedure should be used to extract the intermolecular CG potential V^{CG}:

V^{CG}_{i+1}(r) = V^{CG}_{i}(r) + k_B T \ln \frac{g_i(r)}{g_{ref}(r)},

where g_i(r) is the radial distribution function obtained with the CG potential of iteration i. The procedure is initiated with V_{PMF} extracted from the simple Boltzmann inversion. The subscript i denotes the iteration number. According to the Henderson theorem, 13 the IBI method guarantees the theoretical uniqueness of the two-body CG interaction potential for the given g_{ref}(r). FM 7,8 is another popular technique for constructing CG potentials. FM does not rely on pair correlation functions, that is, pair potentials of mean force, but instead matches forces on the CG interaction sites as closely as possible with the forces at the atomistic level. Thus, it aims at reproducing the multibody PMF with a set of CG interaction functions. In order to determine FM CG potentials, first reference forces F^{ref}_i on CG beads are calculated as a sum of the associated atomistic forces f_\gamma:

F^{ref}_i = \sum_{\gamma \in i} f_\gamma.

Next, a model is constructed in which the CG FF depends linearly on a number of fitting parameters, the coefficients of cubic splines used to tabulate the CG forces. Subsequently, a fitting procedure is performed, which in essence involves a solution to the following set of N × L equations:

F^{CG}_i(g_1, \ldots, g_m) = F^{ref}_i, \qquad i = 1, \ldots, N \times L,

where g_1 . . . g_m are the fitting parameters, N is the number of CG beads, and L is the number of reference frames used for the coarse graining. L should be large enough to make the set of equations overdetermined. The calculation is usually repeated for a number of smaller parts of the trajectory and the final result is constructed as an average over the set of solutions. A number of other structure-based methods exist, for example, IMC, 5 minimization of relative entropy, 14 or conditional reversible work, 15 but so far IBI and FM have been most widely used in the development of effective CG potentials for biomolecular simulations. For an excellent overview, see the work of Noid. 16 It is important to realize that there is no unique method to construct CG potentials from higher resolution data. A full representation of higher order correlations requires multibody potentials, which are impractical and computationally expensive thereby defeating the purpose of coarse graining. Even when the pair correlations are well described, other system properties such as the pressure or energy cannot be matched at the same time. The art of coarse graining is in the compromise of assessing which level of detail needs to be included. In the end, the most suitable CG method depends on the type of questions asked.
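To make the IBI update of Box 1 concrete, the sketch below applies one iteration of the correction term to a tabulated pair potential. It is a bare-bones illustration assuming the RDFs are already computed on a common distance grid; a production implementation would additionally smooth the correction and handle regions where g(r) is near zero more carefully than the simple clipping used here.

```python
import numpy as np

KB = 0.0083145  # Boltzmann constant in kJ/(mol K)

def ibi_update(v_cg, g_i, g_ref, temperature=300.0, eps=1e-8):
    """One iterative Boltzmann inversion step:
    V_{i+1}(r) = V_i(r) + kB*T * ln( g_i(r) / g_ref(r) )."""
    g_i = np.clip(g_i, eps, None)    # avoid log(0) in depleted regions
    g_ref = np.clip(g_ref, eps, None)
    return v_cg + KB * temperature * np.log(g_i / g_ref)

# Toy example on a coarse distance grid (nm); values are illustrative.
r = np.linspace(0.3, 1.2, 10)
g_ref = np.array([0.0, 0.2, 0.9, 1.4, 1.2, 1.05, 1.0, 1.0, 1.0, 1.0])
g_i   = np.array([0.0, 0.3, 1.1, 1.3, 1.1, 1.00, 1.0, 1.0, 1.0, 1.0])

# Initial guess: the plain Boltzmann-inverted PMF of the reference RDF.
v0 = -KB * 300.0 * np.log(np.clip(g_ref, 1e-8, None))
print(ibi_update(v0, g_i, g_ref))
```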
Compared with all-atom models, CG models are easily two to three orders of magnitude faster, as a result of fewer DOFs combined with larger integration time steps and faster sampling due to a smoothed energy landscape (see Box 2). Here, we review some of the more successful methods that have been developed for large-scale simulation of biomolecular systems, as well as highlight some novel and promising approaches. To narrow the scope, we restrict ourselves to particle-based simulation methods and only describe models that retain chemical specificity. These models present the advantage of providing easy access to full atomistic detail using resolution transformation methods.10-12

This review is organized as follows. First we give a survey of CG models grouped according to class of biomolecule. We then provide a number of state-of-the-art examples to illustrate the power of CG simulations. We conclude with a short outlook.

BOX 2: WHY ARE CG MODELS SO FAST?

The major incentive to use CG models is the fast sampling they provide. There are four reasons for this speedup:

1. Reduced number of DOFs. For CG models that retain chemical specificity, the typical reduction factor in the number of particles, n, is between 3 and 5 with respect to united atom FFs, or around 10 compared with fully atomistic FFs that include explicit hydrogens. For models that map multiple water molecules into a single CG bead, the solvent DOFs are also greatly reduced. Fewer particles to compute and fewer neighbors to consider cause a speedup of the order of n². Taking the Martini model as an example, with n = 4 for most molecules compared to a united atom model and n = 12 for water, a speedup of 16-144 is obtained depending on overall water content. In models where the solvent is omitted entirely, the speedup is orders of magnitude larger.

2. Short-range interactions. Most CG models only compute short-range interactions, typically cut off at a distance around 1 nm. No expensive PME methods are needed as the electrostatic interactions are effectively captured in the short-range potentials. Furthermore, many CG models shift the potentials to zero at the cutoff, allowing for less frequent pairlist updates. Compared with a typical setup for atomistic simulations with full treatment of long-range electrostatics, an order of magnitude speedup is easily obtained.

3. Faster dynamics. Because of the loss of atomistic DOFs, the potential energy surface is smoothed out, leading to reduced friction. In the same simulation time, a CG system can therefore sample more of the phase space. The speedup factor is difficult to generalize because the amount of friction removed depends quite sensitively on the nature of the mapping. H-bonds, for instance, are likely to contribute much more to the friction than methylene groups do.

4. Larger integration time steps. The overall smoother energy surface permits the use of larger integration time steps. Typical time steps used are tens of femtoseconds for MD, and >100 femtoseconds for dissipative particle dynamics (DPD), compared with the 1-4 femtoseconds employed in all-atom MD simulations. However, CG models that use more detailed potentials obtained through IBI or FM, for instance, are limited to shorter time steps.

All together, the combined speedup factors are between 2 and 5 orders of magnitude for most of the CG models considered in this review. There is, however, no universal rule to predict the particular speedup of a CG model from the approximations and strategies it is based on. Consequently, the interpretation of time is problematic in CG models. The time scale is best calibrated by directly comparing with experimental data or dynamics from atomistic simulations for the system at hand.
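The factors listed in Box 2 combine multiplicatively. The short sketch below puts illustrative numbers on this; the inputs are assumptions for a Martini-like setup, not measured benchmarks.

```python
def cg_speedup_estimate(n_map=4.0, dt_ratio=10.0, sampling_gain=4.0):
    """Rough combined speedup of a CG model over an atomistic reference.

    n_map:         particle reduction factor (Box 2, point 1) -> ~n_map**2
    dt_ratio:      CG vs atomistic integration time step (point 4),
                   e.g. 20 fs vs 2 fs
    sampling_gain: effective dynamics gain from the smoother
                   energy landscape (point 3)
    All values are illustrative assumptions.
    """
    return n_map**2 * dt_ratio * sampling_gain

# ~640x here, i.e. within the 2-5 orders of magnitude quoted in Box 2
print(f"estimated speedup: ~{cg_speedup_estimate():.0f}x")
```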
CG MODELS

In this section, we discuss the most widely used CG models as well as some promising recent methods. For each model, a brief characterization is given of the main features and limitations. The models are grouped according to the class of biomolecule, starting with water, followed by lipids, proteins, nucleotides, and carbohydrates. We do not claim, nor aim, to be exhaustive and we apologize if we have overlooked important contributions.

CG Water Models

Water is present as solvent in all biological systems and as such it is an important molecule to consider when developing biomolecular FFs. In spite of water's simple molecular structure, it shows in many aspects very complex collective behavior, making it a challenging molecule to model at a CG level. As a result, CG models have employed widely differing approaches to map (Figure 1) and parameterize water. The water models are divided into three categories: implicit, explicit, and polarizable models. As the current work deals with biomolecules, we focus on water models that have been used with, or are suitable to be used in combination with, a CG biomolecular FF.

Implicit Models

Roughly two main strategies have been used to model the aqueous phase implicitly. In a conceptually simple strategy, the hydrophobic effect and charge screening are accounted for by adjusting the nonbonded interactions between nonsolvent molecules. This strategy was applied by top-down (lipid) models.23 In the second strategy, effective solvation terms depend on solute size and charges are explicitly screened using either Debye-Hückel theory or Generalized-Born models. Because of the emphasis on electrostatics, these methods have been popular in DNA models.27,28 Note that although most of the implicit solvent models use an implicit representation for ions, some models combine implicit solvent with explicit ions.29 Analogous to the screening of charges, some FFs scale the van der Waals (vdW) interactions using a term based on solvent accessible surface area.30

Explicit Models

An excellent overview of explicit CG water models is given in recent reviews by Hadley and McCabe31 and Darré et al.32 Explicit water models can be divided into models that have been parameterized using either structure- or thermodynamics-based methods. Structure-based models are constructed from atomistic simulations of water using methods such as IBI or FM (see Box 1). Several systematic studies looking at the properties and transferability of these water models are available.33-35 As these methods rely on atomistic simulation trajectories, they have traditionally mapped one water molecule to one CG bead to avoid the challenging issue of grouping water molecules. This 1:1 mapping has limited the speedup of these models but allowed quite accurate CG water models to be conceived. The grouping issue has recently been circumvented by assembling together water molecules in the atomistic trajectory using a K-means algorithm35 or the CUMULUS method.36 Thermodynamics-based models are parameterized by fitting to experimental solvent properties such as density, water-air surface tension, diffusion rates, or solvation free energies. These models use analytical potentials, most commonly Morse, Lennard-Jones (LJ), or Mie ones, and apply different mappings (Figure 1).
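The analytical forms just mentioned are easy to state explicitly. The sketch below gives the 12-6 LJ, Morse, and generalized n-m Mie potentials in Python; all parameter values in the demo calls are placeholders, not the fitted constants of any specific model.

```python
import numpy as np

def lennard_jones(r, eps, sigma):
    """12-6 LJ potential, the form used e.g. for Martini nonbonded terms."""
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

def morse(r, D, alpha, r0):
    """Morse potential with well depth D at r0, used by the M3B model
    and one Martini-compatible water model."""
    x = np.exp(-alpha * (r - r0))
    return D * (x**2 - 2.0 * x)

def mie(r, eps, sigma, n=12, m=4):
    """Generalized Mie (n-m) potential; Shinoda et al. use a 12-4 form,
    Shelley et al. a 6-4 form. The prefactor makes the well depth -eps."""
    C = (n / (n - m)) * (n / m)**(m / (n - m))
    return C * eps * ((sigma / r)**n - (sigma / r)**m)

r = np.linspace(0.4, 1.2, 5)  # distances in nm (illustrative)
print(lennard_jones(r, eps=5.0, sigma=0.47))
print(morse(r, D=5.0, alpha=6.0, r0=0.53))
print(mie(r, eps=4.0, sigma=0.45))
```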
He et al.37 performed a systematic study of the properties of different potentials. A popular example of a thermodynamics-based water model is the water model associated with the Martini FF17 (Figure 1a). The model represents four water molecules by a single CG bead (4:1 mapping) using a shifted LJ potential for the nonbonded interactions. It has been parameterized based on the density of pure water and the solubility of water in apolar solvents. An alternative water model compatible with the Martini FF uses a Morse potential for the nonbonded interactions.38 The M3B model,39 developed in connection with carbohydrates, is based on a 1:1 mapping and also uses a Morse potential. It was parameterized against the experimental density, intermolecular energy, and diffusion coefficient of water. The thermodynamics-based water models used by Shelley et al.40 and later by Shinoda et al.41 use a 3:1 mapping and a 6-4 or 12-4 Mie potential, respectively. The first one was parameterized against water density, whereas the latter used density, compressibility, and air-water surface tension. The monoatomic water (mW) model by Molinero et al.42 uses a nonbonded potential with two- and three-body terms, mapping one water to one bead. This model reproduces the tetrahedral organization of water molecules in addition to a range of other properties such as density and phase transition temperatures.

The lack of charges in these explicit water models prevents them from screening electrostatics. Charge-charge interactions are either ignored or implicit screening is used.42,43 For instance, the Martini model uses an implicit dielectric constant ε = 15. Ions, however, are typically included explicitly, and several of the CG water models have specific parameters available for ions.22,43,44 The models applying a coarser mapping (e.g., Martini and WT4, see below) represent an ion together with its first solvation shell in one bead.
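For models without explicit charges on water, the screening just described reduces to a uniform background dielectric. A minimal sketch, assuming GROMACS-style units (nm, kJ/mol, elementary charges); the function name is illustrative:

```python
def coulomb_uniform_eps(q1, q2, r, eps_r=15.0):
    """Coulomb energy with uniform dielectric screening,
    U(r) = f*q1*q2/(eps_r*r), as used implicitly by standard Martini
    (eps_r = 15, or 2.5 with the polarizable water model).
    f = 138.935 kJ mol^-1 nm e^-2 is the electric conversion factor."""
    f = 138.935
    return f * q1 * q2 / (eps_r * r)

# Two opposite elementary charges 0.5 nm apart, screened by eps_r = 15:
print(coulomb_uniform_eps(1.0, -1.0, 0.5))  # about -18.5 kJ/mol
```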
Polarizable Models

To alleviate the lack of proper electrostatic screening in the explicit water models discussed above, a number of polarizable CG water models have recently been developed. Different methods have been used to mimic the electrostatic screening of water molecules, which arises in large part from their orientational polarizability. The most common approach is the introduction of extra particles carrying a charge. Two recent water models specifically aimed to be compatible with the Martini FF have been developed. Yesylevskyy et al.19 proposed a model with two additional particles carrying a charge and bound to the LJ interaction site (Figure 1c). The relative rotation of the particles within a molecule, their polarization, is controlled by an angle potential and their interactions with the environment. Wu et al.20 introduced the big multipole water (BMW) model (Figure 1d), in which water consists of three sites connected in a rigid V-shape. All three sites carry a partial charge, whereas only the central site is involved in vdW interactions, via a modified Born-Mayer-Huggins potential, with other water beads. Several CG FFs model the polarizability of water using an induced point dipole. Examples are the ELBA FF parameterized by Orsi and Essex21 (Figure 1e) and the polarizable pseudo-particle (PPP) model obtained by Ha-Duong et al.,47 based on a (roughly) 1:1 mapping. For the PPP model, the induced dipoles are only susceptible to charges on other nonwater molecules.

Quite different in topology is the Wat Four (WT4) model22 (Figure 1f), which consists of four vdW spheres in a tetrahedral geometry that together map 11 water molecules. The beads interact via harmonic bonds and all four carry a charge. In addition to its original use in combination with a CG DNA model,22 WT4 has also been used in multiscale simulations.48 Other examples of CG water directed at multiscale simulations are the GROMOS CG water model18 (Figure 1b), to be combined with the atomistic GROMOS FFs, and the PPP model in combination with the atomistic polarizable TCPEp FF.49

CG Lipid Models

Lipid bilayer structure and function have been extensively studied using all scales of molecular simulations. Although a typical lipid is rather small, around a hundred atoms, the bulk material properties of a lipid bilayer depend on the collective behavior of hundreds, if not hundreds of thousands, of lipids, rendering atomistic lipid bilayer simulations computationally costly. The large time and length scales required to study many of the interesting membrane-associated processes, such as lipid domain formation, sorting and clustering of membrane proteins, vesicle fusion and fission, and so on, have spurred the development of a large number of CG lipid FFs.

[Figure 2, panel caption in part: the one-bead-per-lipid aggressively CG model of Ayton and Voth,46 showing the analytical Gay-Berne ellipsoid particle model combined with an in-plane potential systematically derived from atomistic simulations.]

The first CG lipid model dates back to 1990, by Smit et al.50 Today, CG lipid models range all the way from continuum or semi-continuum models to atomistic or united atom models. Here, we focus on models that retain chemical specificity and are therefore able to distinguish specific lipid types. These kinds of models usually group 3-6 heavy atoms per CG bead, reducing a typical lipid to around 8-14 beads. This CG lipid mapping (Figure 2) is quite common and gives a good reduction in the number of particles while still allowing enough flexibility for chemical specificity. Because of the large number of relevant CG methods available here, we only discuss recent models; for additional information please see recent reviews on CG lipid simulations.23,51-54

Klein Models

Klein and coworkers are among the pioneers in exploring CG lipid models. In Shelley et al.,40 they demonstrated the feasibility of constructing a specific CG lipid FF directly from atomistic simulation data using dimyristoylphosphatidylcholine (DMPC) as an example. Each DMPC lipid is represented by 13 CG beads (Figure 2a) linked together using harmonic bond and quartic angle potentials, each one fitted to the underlying atomistic simulation. The nonbonded interactions were based on the radial distribution functions of the corresponding atomistic groups and refined using IBI. The resulting CG model reproduced the structural details of the atomistic simulations quite accurately but has limited transferability. There have been quite a few refinements to this model and other CG models have also been introduced, including a promising new lipid model by Shinoda et al.55 This new model uses the same atomistic-to-CG mapping scheme, but CG particles representing the same atomistic group in different molecules are fitted jointly based on thermodynamic properties and multiple atomistic simulations. This strategy, combining bottom-up and top-down parameterization, resulted in improved transferability of the FF, both for different molecular structures and environmental conditions. Recent applications of this new model include studies of the phase behavior of lipid monolayers55 and membrane partitioning of fullerenes.56
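The bonded terms of such lipid models are simple analytical functions of the bead geometry. Below is a generic Python sketch of a harmonic bond and two common angle forms; the quartic coefficients would in practice come from Boltzmann-inverted atomistic distributions, and all symbols here are placeholders rather than published parameters.

```python
import numpy as np

def harmonic_bond(r, k, r0):
    """Standard CG bond term, V = 0.5*k*(r - r0)**2."""
    return 0.5 * k * (r - r0)**2

def cosine_harmonic_angle(theta, k, theta0):
    """Cosine-harmonic angle term, V = 0.5*k*(cos(theta) - cos(theta0))**2,
    a common CG choice (used, e.g., by Martini)."""
    return 0.5 * k * (np.cos(theta) - np.cos(theta0))**2

def quartic_angle(theta, theta0, c2, c3, c4):
    """Generic quartic angle fit, V = c2*d^2 + c3*d^3 + c4*d^4 with
    d = theta - theta0; Klein-type models fit polynomial terms of this
    kind to atomistic angle distributions (their exact form may differ)."""
    d = theta - theta0
    return c2 * d**2 + c3 * d**3 + c4 * d**4
```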
The Martini Model

The Martini FF was originally developed for lipids.17,43,45 The philosophy behind Martini was not to capture every detail of a given atomistic simulation, but rather to present an extendable CG model based on simple modular building blocks, using few parameters and standard interaction potentials to maximize applicability and transferability. Martini uses an approximate 4:1 mapping (Figure 2b), and in version 2.0,43 18 bead types were defined to represent different levels of polarity as well as charged groups. The CG beads have a fixed size and interact using an interaction map with 10 different strengths. Both vdW and electrostatic interactions are described using shifted potentials, and the electrostatics is screened with a relative dielectric constant ε = 15 using the standard Martini water or ε = 2.5 using the polarizable water model (see the water section). Bonds and bond angles are described with harmonic potentials. Parameters were tuned to match thermodynamic and structural data from experiments as well as atomistic simulations of a number of systems. Because of the modularity of Martini, a large set of different lipid types has been parameterized (e.g., Refs 43, 45, 57, and 58), with applications ranging from vesicle self-assembly45 to the formation of raft domains59 and membrane tethers,60 to name only a few.

The ELBA Model

The ELBA (electrostatics-based) CG lipid FF developed by Orsi and coworkers21,61 focuses on modeling lipid-water interactions and capturing important electrostatic contributions. The model represents each water molecule individually using soft sticky dipole potentials and incorporates electrostatics in the CG lipid beads as point charges or point dipoles, allowing for a relative dielectric constant ε = 1. A few lipid types have been parameterized21 by matching lipid properties such as volume and area per lipid, average segmental tail order parameter, spontaneous curvature, and dipole potential. Additionally, the ELBA FF was constructed with possible multiscale applications in mind.62 Applications of the ELBA FF have thus far been focused on lipid phase behavior21 and the permeation of drugs and other compounds across bilayers.62

Voth Models

Voth and coworkers have developed numerous CG lipid models (e.g., Refs 63-65) that, like the Klein model,40 build the FFs directly from atomistic simulations, but instead of matching average structural properties, they target the underlying forces at the atomistic scale (FM, see Box 1). A typical atomistic lipid is mapped onto 13-15 CG beads, similar to the other models discussed in this section. Depending on the model, electrostatic interactions are treated explicitly or implicitly, by combining them with the short-range nonbonded potentials. Different models also represent water differently: either explicitly, incorporating each water molecule in one CG bead,63 or implicitly, including the water contribution in the nonbonded potentials.65 These methods have been demonstrated on a number of lipids, for example, DMPC,63 cholesterol,64 dioleoylphosphatidylcholine (DOPC), and dioleoylphosphatidylethanolamine.65 In essence, this approach builds potentials that are difficult to transfer from one system to another.

Smit's DPD Model

Kranenburg et al.66 studied CG lipid models using soft interaction potentials in DPD simulations.
They modeled DMPC on two different CG scales: a fine scale (close to the united atom scale) with a CG bead volume of 30 Å³ and a 1:1 mapping for water, and a coarser scale (13 beads for a DMPC lipid) with a CG bead volume of 90 Å³ and a 3:1 mapping for water. In both models, the CG beads are connected with harmonic bonds and the bending potentials are adjusted to fit distributions obtained from atomistic simulations. The DPD repulsion parameter set was determined by testing parameter combinations from various related DPD studies. The coarser model is quite fast even compared with other CG lipid models because of the very soft nature of the DPD potentials. The model has been improved upon a number of times and has been shown to describe the phase behavior of a variety of phospholipids and cholesterol quite accurately.52,67-69
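The softness of the DPD interactions is easiest to see from the conservative force itself, sketched below in the standard Groot-Warren form; the a_ij value in the demo call is a placeholder for the bead-pair repulsion parameters such a model would tabulate.

```python
import numpy as np

def dpd_conservative_force(r_vec, a_ij, r_c=1.0):
    """Soft DPD repulsion F = a_ij*(1 - r/r_c)*r_hat for r < r_c, else 0.
    The force stays finite even at full overlap, which is what allows
    the large DPD time steps noted in Box 2."""
    r = np.linalg.norm(r_vec)
    if r >= r_c or r == 0.0:
        return np.zeros(3)
    return a_ij * (1.0 - r / r_c) * (r_vec / r)

# Two beads half a cutoff apart feel half the maximum repulsion:
print(dpd_conservative_force(np.array([0.5, 0.0, 0.0]), a_ij=25.0))
```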
Other Promising Models

A number of other models should be mentioned, in particular, recent attempts to parameterize solvent-free lipid models that retain chemical detail. Implicit solvent models gain considerably on computational cost but do need to incorporate the excluded hydrophilic-hydrophobic interactions into the effective potentials between the CG beads. The models of Lyubartsev,70 Wang and Deserno,71 Sodt and Head-Gordon,72 and Curtis and Hall73 use a similar number of CG beads per lipid (10-15) and derive their CG potentials from representative atomistic simulations. Wang and Deserno71 and Sodt and Head-Gordon72 add long-range attractive interactions on the lipid tails to mimic the hydrophobic effect, which they tune to fit experimental data. Curtis and Hall73 in their LIME (Lipid Intermediate resolution ModEl) FF use hard-sphere and square-well potentials in order to use discontinuous molecular dynamics and gain even greater speedup. Additionally, the Voth group introduced two supra-resolution solvent-free methods: the Ayton and Voth46 hybrid analytic-systematic (HAS) approach and the Srivastava and Voth74 hybrid CG (HCG) models. These methods are aimed at even larger time and length scales, with applications such as the modeling of large liposomes consisting of tens of thousands of lipids. The HAS approach was demonstrated for a model with one bead per lipid (Figure 2c) and the HCG with 3-4 beads per lipid. The neat feature of these models is that analytical potentials describing the generic behavior of the lipids are combined with detailed FM potentials that give the model chemical specificity.

CG Protein Models

CG models for proteins have a long history, with the pioneering models for protein folding introduced in the mid 70s75 and 80s.76 The motivation for such simple models was, as for most biological molecules, to address the issue of conformational sampling. Structure-based models have contributed to our understanding of the physicochemical forces governing the protein folding process77 and protein-protein interactions.78 However, these models often lack a proper description of the chemical specificity of amino-acid side chains (SCs) and are therefore not described here. Readers are referred to earlier reviews on such models.79,80 CG protein models that retain chemical specificity are diverse with respect to the level of representation (Figure 3) and the complexity of the associated interaction potentials, which are closely tied to the problem of interest. In particular, a detailed backbone (BB) is compulsory for exploring secondary structure formation, while SCs are more important for protein-protein interactions. These two sets of interactions may thus be parameterized separately and may differ in the level of mapping, varying from one to five CG sites per residue for both the BB and the SCs. Most protein CG models use a combination of physics-based and knowledge-based potentials, for which transferability is probably the most challenging aspect. Some models have been quite successful and contributed to the popularization of CG approaches as an alternative to atomistic models.

Bereau and Deserno Model

The protein CG FF developed by Bereau and Deserno81 uses an intermediate level of description with an emphasis on structure. The model has a quite detailed three-bead protein BB and one-bead SCs (Figure 3a). The bonded terms were derived from existing geometric parameters and given an approximate 5% flexibility around their reference values to account for thermal fluctuations. The BB phi/psi dihedral angles were used as parameters during the fitting and as an indicator of local structure and flexibility. The SC nonbonded interactions were based on the knowledge-based potential derived by Miyazawa and Jernigan85 from a statistical analysis of SC contacts in a protein structure database. Bereau and Deserno converted this energy scale into a two-body distance-dependent potential. Note that the model does not account explicitly for electrostatic interactions, but they are implicitly accounted for in several terms. The BB beads interact through a more complex combination of terms that have previously been identified as secondary structure determinants. These include local excluded volume, an explicit geometric H-bonding function, and a dipole-dipole interaction of neighboring residues. These terms were tuned to reproduce local protein structure (Ramachandran plot or dihedral BB distribution of GGG and GAG tripeptides) and global folding properties (folding of a three-helix bundle). The FF showed promising results in folding α-helical proteins,86 but fine-tuning is needed to stabilize β-sheet structures and proteins with a mixture of α-helices and β-sheets.

The OPEP Model

The OPEP (Optimized Potential for Efficient Protein structure prediction) CG model developed by Derreumaux and coworkers82,83 is a generic CG model. It has a detailed BB close to a full atomistic model (N, HN, Cα, C, and O atoms are represented) and uses a single CG bead for each SC, with the exception of the proline SC, which has three beads (Figure 3b). The position of the SC is defined by the BB conformation using an off-lattice (discrete) representation. The most recent OPEP potential (version 4.0) is a combination of generic bonded terms (derived from the all-atom AMBER FF) and nonbonded interactions consisting of vdW and H-bonding terms. The vdW interactions are knowledge based and combine BB-BB and BB-SC terms with 210 SC pair interactions. The H-bond potential combines a two-body geometry-dependent term with a four-body term to account for H-bond cooperativity. The effects of solvent are taken into account implicitly within the nonbonded terms. The OPEP potential was parameterized to maximize the energy gap between native and nonnative structures and to enforce the stability of native structures in MD simulations. OPEP was used successfully for protein folding,87 structure prediction,88 and aggregation studies.89,90
Potential drawbacks of OPEP models are the lack of SC specificity, crucial for an accurate description of protein-protein interactions, and the need for a 1.5 femtosecond integration time step (due to the detailed BB and H-bonds) in MD simulations, which reduces the amount of conformational sampling as compared with other CG models.

The Martini Model

As an extension of the Martini lipid FF (see above), the protein version84 has inherited the general 4:1 mapping used to define chemical groups as beads and the broad experience in using partitioning data for parameterization. Each amino acid is represented by one bead for the BB and from zero (Gly and Ala) to four (Trp) beads for the SC (Figure 3c). The bonded terms were extracted from a set of protein structures with the BB bead placed on the center of mass (COM) of the BB.84 An elastic network model (based on the Cα positions) was parameterized in conjunction with Martini to improve structural stability.91 Partitioning behavior of SC analogues between water and oil phases and at the water interface of a DOPC bilayer was originally used to parameterize the nonbonded interactions. Recently, a thorough examination of the binding of Wimley-White pentapeptides to a palmitoyloleoylphosphatidylcholine bilayer92 and SC analogue (self and cross) dimerization free energies93 were used to refine the nonbonded parameters.94 Notably, an explicit polarization term was added to polar SCs such as glutamine and asparagine. Coulomb interactions for charged SCs use the standard shift function used in Martini with a relative dielectric constant ε = 15 or ε = 2.5 when combined with the regular and polarizable water, respectively. The Martini FF does not allow dynamic secondary structure conformational flexibility and thus precludes folding studies. However, it has been successful in describing protein tertiary conformational changes,95 protein supramolecular organization,96-98 and their relation with the membrane environment.99,100
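The elastic network just mentioned is conceptually simple: harmonic springs between backbone beads that lie close together in the reference structure. A minimal sketch, assuming a cutoff of 0.9 nm and a force constant of 500 kJ mol⁻¹ nm⁻² (typical literature choices, quoted here as assumptions rather than the parameters of Ref 91):

```python
import numpy as np
from itertools import combinations

def build_elastic_network(bb_coords, r_cut=0.9, k=500.0):
    """Return harmonic restraints (i, j, r0, k) between backbone beads
    within r_cut nm of each other in the reference structure, so that
    V = 0.5*k*(r - r0)**2 keeps the tertiary fold stable."""
    bonds = []
    for i, j in combinations(range(len(bb_coords)), 2):
        if j - i < 2:  # skip pairs already connected by regular bonds
            continue
        r0 = np.linalg.norm(bb_coords[i] - bb_coords[j])
        if r0 < r_cut:
            bonds.append((i, j, r0, k))
    return bonds
```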
The UNRES Model

The UNRES (UNited RESidue) CG model developed by Liwo et al.24 models the BB by two CG beads, an interacting peptide group (P) and a noninteracting (CA) group, and the SC as a single ellipsoidal bead (Figure 3d). This FF has gone through numerous refinements101 to come close to a purely physics-based version, in contrast with its first appearance two decades ago as a strongly knowledge-based potential.102 The UNRES potential is a free energy function where all the DOFs (including those of the solvent) are averaged out into effective potentials, except for the ones describing the protein conformation. The bonded interactions include bonds, angles, and dihedrals for the BB, and a rotational potential defines the SC rotamer. The nonbonded interactions (vdW and Coulomb) include terms between SC-SC, SC-P, and P-P beads. All the nonbonded terms are derived from ab initio or semi-empirical calculations of small model systems and PMFs extracted from all-atom MD simulations of SC analogue pairs. Notably, the UNRES potential incorporates temperature-dependent correlation terms. UNRES now stands as a prototype for purely physics-based approaches to coarse graining, and it has been successfully and widely used over the last two decades to study protein folding,103 structure prediction,104 protein-protein binding,105 and mechanisms of protein fibrillation.106 However, there are a couple of caveats. First, in contrast to other CG models, the conversion from UNRES to an all-atom representation is not straightforward and may thus be inappropriate for multiscale approaches. Second, because of its emphasis on an accurate description of protein interactions, it will be difficult to extend UNRES to other biomolecules such as nucleic acids and lipids.

The SCORPION Model

The SCORPION protein CG FF107 was initially developed as a scoring function for protein-protein recognition, using one bead to model the BB and one or two beads for the SCs. The bead self- and cross-interactions were extracted by (1) fitting vdW SC pair interactions extracted from PMFs of SC association determined at atomistic resolution with vanishing charges, and (2) determining a set of point charges to reproduce the electrostatic potential, the total charge, and the permanent dipole as described by an atomistic model of the full protein. The parameterization was done in vacuum to allow, in principle, mixing with any solvent. In a recent study,108 the authors combined the protein potential with a compatible water model, which was validated against solvation free energies of peptides. The use of the combined protein and water model showed great promise for studying protein-protein recognition.108 The notoriously challenging barnase/barstar complex and two others were successfully predicted. The current main drawbacks of the model are the lack of bonded terms (an elastic network is used instead) and the use of a high temperature in the simulations, which affects the balance between the enthalpic and entropic contributions.

Other Promising Models

The PaLaCe model recently introduced by Lavery and coworkers109 to study the mechanics of proteins uses a two-tier representation of amino acids: one for bonded and another for nonbonded interactions. The FF consists of physics-based bonded and nonbonded terms combined with an implicit treatment of the solvent and a BB H-bonding potential (allowing secondary structure changes). These terms were collectively and iteratively parameterized against a large database of protein structures and MD simulations of their native state. The PaLaCe model was able to reproduce force-induced conformational changes for the immunoglobulin-like domain of the giant protein titin, originally observed in single-molecule experiments and all-atom simulations.

The PRIMO model proposed by Feig and coworkers110 represents the protein BB using three CG beads and one to five sites for the SCs. The mapping was carefully optimized to allow a high-resolution reconstruction of all-atom protein models,111 aiming for multiscale approaches. The interaction scheme is typical of a molecular mechanics potential, with bonded and nonbonded terms optimized against a diverse set of peptides and proteins described by the all-atom CHARMM FF. The model also includes an explicit treatment of BB H-bonds and an implicit solvent. The PRIMO FF was validated against the conformational sampling of alanine-based polypeptides and the folding of small peptides as observed in atomistic MD simulations and experimental data.

In the AWSEM (Associative memory, Water mediated, Structure, and Energy Model) FF, developed by Papoian and coworkers,112 the position and orientation of each amino acid residue is dictated by the positions of its Cα, Cβ, and O atoms.
AWSEM combines a large number of physical interactions, from BB terms to direct and water-mediated interactions and H-bonding, with structural biases that are local in sequence, based on the alignment of fragments of nine residues or fewer of the target protein to local segments found in a protein database. It has been successfully used to predict protein structures both de novo and using homology models,112 as well as dimeric protein interfaces.113 The dynamic properties of the model have yet to be characterized in detail.

CG Nucleic Acid Models

Nucleic acids (DNA and RNA) seem more tractable for modeling than proteins due to the smaller number of building blocks involved. Nevertheless, perhaps because of scarcer structural data, tight packing, and higher charge density, the development of CG nucleic acid models has progressed more slowly than for many other biomolecules. Despite the similarities between DNA and RNA, the challenges in modeling them differ greatly. DNA exists primarily as double-stranded structures (dsDNA) with only a limited number of well-defined conformations but forms extremely large-scale assemblies. For RNA, the challenge is to predict how single-stranded RNAs (ssRNAs) fold into their functional form. These differences have led the CG models for DNA and RNA to use disparate strategies in order to reach their objectives. Most CG RNA models are aimed at predicting structures (see a recent review114) and use structure-based potentials. Only a few CG RNA models can be used in MD simulations,115,116 in contrast to the numerous DNA CG models available, which span several orders of magnitude in the length scales they describe. Large-scale mechanical properties of DNA structures have been studied using very coarse models since the early 90s.117,118 The development of more detailed CG models that are able to describe, for example, DNA melting started later.119 We focus here on CG models that are detailed enough to describe sequence specificity. For an overview of coarser models, we refer the reader to previous reviews.120-122

3SPN Models

de Pablo and coworkers27 proposed a CG model for DNA, coined 3SPN.0, where phosphate, sugar, and base are each described using one bead (Figure 4a). Bonded interactions were derived from a canonical structure of B-DNA. Base stacking was implemented using intrastrand Gō potentials that act between the first and second neighbors. H-bonds were modeled using a potential that acts between complementary bases, and an excluded volume term is used between all other beads. Electrostatics is described using a Debye-Hückel approximation. The authors did note that these choices bias the model toward the B-form of DNA compared with other forms. The parameters were selected to reproduce experimental melting curves of DNA at a specific salt concentration, but the model also performed rather well at other salt concentrations, as well as reproducing the persistence lengths of dsDNA and ssDNA and bubble formation in dsDNA. The authors measured roughly three orders of magnitude speedup as compared with atomistic simulations for short dsDNAs. The model has been refined (3SPN.1)123 to describe DNA hybridization and to model ions explicitly.124
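Two ingredients of the 3SPN recipe can be sketched compactly: a Gō-type well keeping native stacking contacts, and Debye-Hückel screened electrostatics between the charged phosphates. The 10-12 form below is one common Gō choice; 3SPN's published functional forms may differ in detail, and all parameter values are placeholders.

```python
import numpy as np

def go_stacking(r, eps, r_native):
    """10-12 Go-type well with minimum -eps at the native separation:
    V(r) = eps*(5*(r0/r)**12 - 6*(r0/r)**10)."""
    s = r_native / r
    return eps * (5.0 * s**12 - 6.0 * s**10)

def debye_huckel(qi, qj, r, kappa, eps_r=78.0):
    """Screened phosphate-phosphate electrostatics,
    U(r) = f*qi*qj*exp(-kappa*r)/(eps_r*r), where kappa encodes the
    salt concentration (f = 138.935 in nm / kJ/mol / e units)."""
    f = 138.935
    return f * qi * qj * np.exp(-kappa * r) / (eps_r * r)

# At ~150 mM monovalent salt the Debye length is ~0.78 nm, i.e.
# kappa ~ 1.28 nm^-1 (an assumed textbook value, not from this review):
print(go_stacking(0.5, eps=2.0, r_native=0.5))      # -2.0 at the minimum
print(debye_huckel(-1.0, -1.0, 1.0, kappa=1.28))
```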
Ouldridge Model

Ouldridge et al.25,128 developed a model that represents the nucleotide as a rigid system of one BB site aligned with two sites for the base (Figure 4b). Neighboring nucleotides interact with excluded volume and angle-dependent stacking potentials, whereas the BB beads are connected with finite extensible nonlinear elastic (FENE) bonds. All other beads interact with excluded volume, cross-stacking, and H-bonding potentials. The latter two are also angle dependent. All potentials except for H-bonding are identical for different nucleotide pairs. The model does not include electrostatics, but it is parameterized at high salt concentration, where electrostatics plays a smaller role. Furthermore, solvent effects are included implicitly within the interaction potentials. The potential functions are parameterized to reproduce experimental properties of base stacking in ssDNA and of dsDNA melting. The parameterized model was shown to reproduce mechanical properties of both ssDNA and dsDNA as well as DNA hairpin formation. The stacking and H-bonding interactions were further refined129 to introduce sequence dependence and reproduce experimental melting temperatures of short duplexes. The model was also able to show sequence-dependent differences in the structure of ssDNA as well as in the opening of dsDNA ends.
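Unlike a harmonic bond, a FENE bond caps the strand extension at a maximum length. A generic sketch in the classic Kremer-Grest form, with an optional offset; the model's actual parameters are not reproduced here.

```python
import numpy as np

def fene_bond(r, k, R0, r0=0.0):
    """Finite extensible nonlinear elastic bond,
    V(r) = -0.5*k*R0**2 * ln(1 - ((r - r0)/R0)**2),
    which diverges as the extension approaches R0, preventing
    unphysical backbone stretching. r0 = 0 gives the classic form;
    a nonzero equilibrium offset is sometimes used."""
    x = (r - r0) / R0
    if abs(x) >= 1.0:
        raise ValueError("bond beyond maximum FENE extension")
    return -0.5 * k * R0**2 * np.log(1.0 - x**2)

print(fene_bond(0.3, k=30.0, R0=0.5))  # placeholder parameter values
```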
Dans Model

Dans et al.28 developed a DNA model for the SIRAH FF that describes each nucleotide with two interaction sites for the BB, one for the sugar, and three for the base (Figure 4c). This mapping allows for easy backmapping to atomistic detail. The bonded interactions were parameterized to reproduce the canonical B-DNA structure. The nonbonded interactions are described using LJ and Coulomb potentials and were parameterized to reproduce structural and dynamical properties of dsDNA as observed in atomistic simulations. A generalized Born model describes the solvent. The authors found qualitative agreement with experimental melting curves of dsDNA, the dsDNA transition from A- to B-DNA structures, and base pair opening dynamics as observed in long atomistic simulations. The model was roughly 600 times faster than fully atomistic simulations for short dsDNAs. The model was later adjusted to include the explicit solvent model WT4.22 The modified model reproduced qualitative effects such as the experimentally observed narrowing of the minor groove due to cations, but the increased detail rendered it roughly 30% slower than the implicit model.

Other Promising Models

The recent HiRE-RNA model by Pasquali131 has the same mapping as the 3SPN models but uses directional nonbonded potentials to describe both Watson-Crick and Hoogsteen base pairing and employs no dihedral potentials. The model shows improved accuracy in situations where Hoogsteen pairs are known to play a role but has difficulties with the handedness of dsDNA due to the omitted dihedrals. The model by Morriss-Andrews et al.132 describes each nucleotide with three beads and uses orientation-dependent base-base and base-BB potentials as well as an explicit hydrogen bond potential. Bonded and nonbonded parameters are derived from atomistic potentials and distributions from atomistic simulations. The model reproduces the structure and chirality of dsDNA as well as the persistence lengths of both ssDNA and dsDNA.

CG Carbohydrate Models

Carbohydrates are ubiquitous biomolecules involved in many biological processes. Because of their nature, carbohydrates encompass a huge degree of polymerization, making up a virtually infinite number of sequences, linkages, and degrees of branching.133 Unlike proteins, nucleic acids, and lipids, which tend to predominantly adopt a relatively well-defined (native) conformation under the conditions where they are biologically functional, carbohydrates typically feature a high degree of conformational freedom.134-136 As a result of this structural diversity, carbohydrates represent a very challenging class of biomolecules in terms of CG modeling. Consequently, current CG carbohydrate models often aim at the simulation of very specific systems.

The M3B Model

Pioneering efforts by Molinero and Goddard39 aimed at modeling hexopyranose glucose. This model, coined 'M3B', was the first attempt to build a robust reductive model for the simulation of carbohydrates. The hexopyranose ring is represented by three particles (Figure 5a) and was directly mapped from atomistic simulations, with bonded terms including bond, angle, and dihedral potentials derived using an IBI approach. A single nonbonded term was used for all interactions and rigorously parameterized against density, cohesive energy, and structural unit cell parameters. While the model was successfully applied to the simulation of disaccharides (maltose) and longer polysaccharides (amylose), the set of bonded parameters is state specific and thus not transferable to alternative glycosidic links. Following the same topological descriptors, Liu et al.137 developed a reductive model for the simulation of carbohydrates, but using nonbonded terms based on more versatile pretabulated potentials derived from atomistic simulations. This model can be combined with an explicit water model. So far, applications of the model have been restricted to the simulation of glucose and amylose.

The Martini Model

A more extendable and general approach is the one taken by López et al.138 based on the Martini CG FF. Each saccharide is mapped using three CG beads in such a manner that the underlying polar-nonpolar character of the ring is preserved (Figure 5b). Following the philosophy of the Martini model,43 several monosaccharides and disaccharides were parameterized combining bottom-up and top-down approaches. Atomistic trajectories were used to iteratively adjust the set of bond, angle, and dihedral potentials of the CG representation. Nonbonded terms were determined to reproduce experimental partitioning data. While the parameters are straightforwardly applicable to the simulation of mono-, di-, and polysaccharides in the colloidal state, application to the crystalline phase is rather problematic, as was shown by Wohlert and Berglund139 in the case of crystalline cellulose. A potential advantage of the Martini carbohydrate model is its compatibility with the ample set of different biomolecules, illustrated recently by the parameterization of a Martini FF for glycolipids140 and the application to cyclodextrin-cholesterol complex formation.141

Bellesia Model

Recently, Bellesia et al.142 developed a solvent-free CG model for the interconversion between cellulose Iβ and III. Based on a five-bead mapping of the ring, the model combines LJ terms with harmonic bonded potentials aimed to reproduce the crystalline phase of cellulose. The CG representation not only reproduces the torsional angles between glucose planes, but also the transitional rotameric states of the hydroxymethyl groups, thus effectively mimicking the changes in both intracrystalline hydrogen bonds and stacking interactions during the transition from cellulose Iβ to cellulose III.
The model has been shown to reproduce structural as well as thermomechanical properties of cellulose.

Bathe Model

The solvent-free model of Bathe et al.143 is aimed at modeling chondroitin (glycan chains forming a major component of the extracellular matrix). This model explicitly describes the DOFs associated with the torsional angle representing the glycosidic linkage. Each monosaccharide consists of three bead sites, plus two additional beads at the centers of charge and geometry, used to model the nonbonded electrostatic and steric interactions, respectively. All-atom resolution trajectories of isolated disaccharides are used to generate pretabulated PMFs for the glycosidic torsions. Electrostatic interactions are included between nonadjacent monosaccharides using a Debye-Hückel potential, assuming zero ionic radius. Steric interactions between nonadjacent monosaccharides were modeled using a LJ potential applied to the center of geometry. The model was able to reproduce the ionic strength-dependent persistence length, pH-dependent expansion factor, and titration behavior of chondroitin.

Srinivas Model

Aimed at studying the structure and dynamics of cellulosic biomass, the model developed by Srinivas et al.26 targets the intrinsic conformation of long Iβ cellulose macrofibrils. Its simplified representation makes use of a single bead for every monomeric glucose subunit (Figure 5c). The glucose COM was mapped from atomistic simulation trajectories, and every associated bond, angle, and torsion potential parameter was extracted from the same conformational ensemble. Nonbonded interactions were optimized using an IBI approach, with the distance distributions between individual monomers as target observables. The model has been used for studying the transition between the crystalline and amorphous phases at long time scales, as detailed in the Applications of CG Models section.

Other Promising Models

Recently, Sattelle et al.144 developed an interesting CG model for the prediction of hydrodynamic properties of heparin sulfate. The CG model potentials were carefully adjusted to reproduce the glycosidic linkage between consecutive monosaccharide subunits and the internal ring puckering observed in long unbiased all-atom simulations. The model is not only able to reproduce relative ring-ring orientations but also the internal energy landscape of different ring conformers. Markutsya et al.145 constructed four different CG models of cellulose, with potentials derived from FM using either one, two, three, or four sites per monomer. They found that the four-site CG model is most promising, as it is best at reproducing the glucose-glucose conformations observed in the all-atom simulation. The model underscores the importance of decoupling the pyranose ring from the oxygen atom in the glycosidic bond when developing all-atom to CG mapping schemes for polysaccharides.

APPLICATIONS OF CG MODELS

Applications of CG models range far and wide, and providing exhaustive coverage is far beyond the scope of this review. Instead we cherry-picked five state-of-the-art examples demonstrating successful and potentially inspiring use of CG models. We show different types of application with various aims and include a variety of biomolecule classes as discussed in the CG Models section.
We start with a description of typical applications of a protein CG FF whose development emphasized getting the native protein structure, followed by larger scale applications using a model that allows for the exploration of protein conformational changes and assembly in lipid bilayers, and end with coarser models of DNA hybridization and cellulose fibril stability.

Protein Folding

In spite of a few recent studies able to simulate the folding of a number of small proteins,146-148 the current state of atomistic protein folding simulations is mostly limited to small single-chain proteins using considerable dedicated computer resources. CG models are thus extremely attractive, but capturing the relevant DOFs has proven to be quite challenging.122 In that context, the UNRES24,101 model is unique. Its development over more than two decades has gone through many successive but consistent modifications. This laborious parameterization illustrates the extremely challenging aspect of developing a reliable CG FF when it comes to capturing a delicate mixture of complex structural and chemical contributions. UNRES successfully participated in the CASP exercise, in which success in ab initio prediction of new protein folds is almost exclusively reserved to knowledge-based potentials. Two UNRES predictions are shown in Figure 6a: the target T0215 (a three-helix bundle; PDBX9B) and T0281 (an α/β fold; PDB:1WHZ), predicted to 0.35 and 0.55 nm Cα RMSD from the native state.104 The latter represents one of the first successful predictions of an α/β structure with a physics-based potential. UNRES also described the folding pathways of several single- and multichain proteins. For example, UNRES folded ab initio (Figure 6b) the 48-residue LysM domain protein (PDB:1E0G) to a structure with a 0.39 nm Cα RMSD from the native structure.149 The folding proceeded by the initial formation of an almost all α-helical structure, followed by the unfolding and refolding of the C- and N-terminal regions into their native antiparallel β-sheet structure. Another example of ab initio folding with UNRES is of a multichain protein (PDB:1G6U) (Figure 6c). Individual chains folded independently to their native structures and later assembled into a structure at 0.24 nm Cα RMSD from the native state.150

Gating of Mechanosensitive Channels

The bacterial mechanosensitive channel of large conductance (MscL) serves as a last-resort emergency release valve, protecting bacteria from lysis upon acute osmotic downshock. Under induced membrane tension, MscL opens a large (∼3 nS), mostly unselective pore, releasing ions and small solutes and thereby relieving the cytoplasm of osmotic tension.151 Because of the high computational cost of atomistic simulations, MscL gating from its closed state152 has not been simulated at the atomistic level without strong biasing potentials.153-155 It was only recently that MscL could be gated in an unbiased way and in tractable computational time using the Martini CG FF.95,156,157 Those studies confirmed the iris-like opening mechanism of the channel and provided valuable insight into how changes in protein shape influence its preferred conformational equilibrium. To gate MscL in CG simulations, the channel was equilibrated in a solvent/bilayer environment for a few microseconds, after which tension was rapidly applied.
In the 10-100 nanoseconds following the application of lateral tension and thinning of the bilayer, the MscL transmembrane helices tilted, extending the extracellular cavity of the channel. The channel's hydrophobic gate takes an additional 0.2-2 microseconds before rapidly expanding and opening the channel (Figure 7). Such gating of MscL took 5-10 days on a 12-CPU computer at the CG level; it is not clear how many years it would take at the atomistic level, even using much faster computers. Ongoing work on MscL has shed new light on its gating mechanism and on how changes in bilayer properties can influence membrane protein gating. Moreover, this CG model of MscL provides a new tool for studying the mechanism of mechanosensation, the bilayer regulation of membrane proteins, and the rational design of future drug delivery systems.

Membrane Protein Self-Assembly

Biological membranes have a complex and dynamic supramolecular organization that has recently emerged as a potential major component in many fundamental processes. The transient nature of these processes has made their characterization by conventional experimental and computational approaches a great challenge. CG MD simulations using the Martini model have shown promise in elucidating the forces involved at the molecular level with close to atomistic resolution. Notably, a set of recent studies took advantage of the increased system sizes and length scales accessible to reveal a few significant protein/lipid interplays. First, a lipid membrane responds to the presence of a protein by an anisotropic deformation at the protein/bilayer interface in order to match the protein's hydrophobic surface.96 Second, the extent of membrane deformation determines the protein's propensity to self-organize.96 Third, the protein surface properties (sequence dependent) determine specific lipid binding sites100 and favor different protein/protein interfaces that may drive proteins to assemble into well-ordered and highly organized arrays.97 Fourth, protein sorting is mediated by lipid properties in multidomain membrane patches.159 A typical system used in these studies is pictured in Figure 8. It consists of 64 visual receptor rhodopsins embedded in a DOPC membrane bilayer at a 1/100 protein/lipid molar ratio.97 The 100 microsecond time scale reached in these simulations will soon become a standard, and the millisecond time scale is not far off.

DNA Hybridization

DNA hybridization is the assembly process of complementary ssDNAs forming dsDNA that is ubiquitous in, for example, transcription. A large number of systems and protocols in molecular biology and biotechnology also rely on DNA hybridization. Therefore, understanding the sensitivity of this process to the sequence of the ssDNA strands, as well as to the effect of environmental factors like temperature or salt concentration, is essential. MD simulations offer ways to bridge the gap between experimental observations and theoretical models and to elucidate the whole hybridization process by describing the hybridization pathways and kinetics. Such studies are particularly amenable to CG models that are able to reach the system sizes and time scales needed to study such pathways thoroughly. Sambriski et al.160 used the 3SPN.1 CG DNA model123 coupled with a transition path sampling technique to investigate the hybridization process of 14-30 base pair long ssDNAs.
These simulations revealed the presence of multiple and nonspecific hybridization pathways for highly repetitive sequences, whereas for random sequences, a nucleation site of specific complementary base pairs is formed to start the hybridization process (Figure 9). In the latter case, the pathway from initial nucleation is strongly restricted. In addition, they could pinpoint short repetitive sequences as the most probable nucleation sites because of the greater number of possible complementary base pairs. This work illustrates how CG simulations can be successfully applied to understand the mechanisms underlying experimental observations of a crucial cellular process.

Cellulose Fibrils

Considerable attention has been focused on the study of plant biomass, especially concerning the degradation of cellulose and its application to biofuels. In that respect, computational modeling has been applied widely to understand the structure and conformation of cellulose. Natural cellulose microfibrils are on average several micrometers in length. This degree of polymerization makes it virtually impossible to obtain reliable conformational data using atomistic models. Using a CG model, Srinivas et al.26 studied the structural differences between cellulose fibrils and amorphous cellulose. By tuning the LJ potential through a λ factor, a discrete transition between the fully crystalline (λ = 1) and the fully amorphous (λ = 0) systems was established. In line with experimental evidence, the model suggests that during the conformational transition, microfibril denaturation starts by an external uncoating mechanism that involves the outer cellulose fibrils (Figure 10). Because of the versatility of the model, different cellulose crystal allomorphs can be studied and characterized at permissible computational effort.

Why are CG Models Useful?

During the last decade, we have seen a thriving development of a large number of CG biomolecular models, of which we have only been able to provide a limited outline above. However, do we actually need any of these CG models? This may seem like a provocative question, but considering the continuous increase in availability of computational power (soon entering the exascale era), one may argue that most relevant (bio)materials can be studied at the atomistic level in the near future. Currently, the largest systems that can be handled by particle-based simulations are limited to 10⁷ interacting atoms and time scales up to 1 microsecond, but we can expect these limits to steadily increase (according to Moore's law, computational performance doubles every 2 years). Could we thus soon model a complete cell in atomistic resolution? Not really. Modeling a typical eukaryotic cell, for instance, with a diameter of 10 μm, requires about 10¹⁴ atoms. Given the relevant time scales on which cellular processes take place, microseconds to seconds and beyond, it is clear that simulations capturing the full complexity of a cell in atomistic detail are still far fetched. Even the kind of CG models described in this review will have a hard time coping with this size, but CG modeling of a bacterial cell with a much smaller diameter of approximately 0.5 μm, amounting to 10⁹ atoms, becomes tractable in the foreseeable future. Obviously, many cellular processes can be studied at smaller length and time scales, but even a simple case like simulating the undulation spectrum of a planar lipid bilayer requires CG modeling as soon as one increases the system size above 100 nm. This example was presented in an eloquent essay by Deserno,161 in which he argued that the computational effort needed to increase the system size of a typical membrane patch of length L scales as L⁶ due to the much longer relaxation times involved. Thus, increasing the typical size of an atomistic membrane patch (around 20 nm) to 200 nm would require a million times more work to fully equilibrate the system. Even assuming Moore's law continues to proceed apace, we would still need to wait about 40 years to accomplish this! Thus, to simulate collective effects such as the formation of multimeric membrane protein complexes, membrane patches of 100 nm size and beyond are necessary, which in the foreseeable future can only be accomplished at the CG level.
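The arithmetic behind these two estimates is worth making explicit; assuming the stated L⁶ scaling and a 2-year doubling time:

\[
\frac{W(200\,\mathrm{nm})}{W(20\,\mathrm{nm})} = \left(\frac{200}{20}\right)^{6} = 10^{6},
\qquad
2^{\,t/2\,\mathrm{yr}} = 10^{6}
\;\Rightarrow\;
t = 2\,\mathrm{yr}\cdot\log_{2}\!\bigl(10^{6}\bigr) \approx 40\ \mathrm{years}.
\]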
Current Challenges of CG Model Development

The current generation of CG models is still under active development, and numerous new models are continuously emerging. One may wonder, is there not an optimal CG FF that may eventually replace all current models? The answer is no. At a given state point, the pair potential that reproduces the pair structure is unique (see Box 1), implying that it is impossible to simultaneously represent the pair structure and additional key thermodynamic properties of the system with pair potentials. This is known as the representability problem.33 In practice, this means that a different model is required depending on the question asked. To choose the right model for each problem, it is necessary to know the limitations of the model. Therefore, in this review, we have tried to indicate the underlying assumptions of commonly used CG models. However, a number of limitations are pertinent to most CG models, and future efforts should be directed at improving on those aspects: (1) the model is too biased, that is, not transferable to other state conditions; (2) the model is only parameterized for a specific class of molecules, implying a lack of compatibility; and (3) the model is too coarse to capture certain behavior.

To improve transferability, systematic frameworks for obtaining accurate CG potentials from higher resolution data are being developed. For instance, CG potentials can be simultaneously optimized with respect to multiple reference simulations,162 and a variety of experimental data can be used as additional constraints in the optimization strategy,163 essentially combining top-down and bottom-up approaches. Automated workflows are being developed to generate large sets of converged interaction potentials.163,164 Methods have been developed to measure and minimize the information loss upon coarse graining14 and to optimize the mapping procedure.165 Additionally, ongoing improvements of atomistic FFs provide us with more accurate reference structures, and the steady increase in single-molecule experimental data (e.g., force spectroscopy of individual biopolymers, single-molecule particle tracking) allows for novel and direct ways to further calibrate and validate our models.

The issue of limited compatibility is related to the lack of transferability (and to the general representability problem), but there is a pressing need for compatible FFs that can be used for more than just a single class of biomolecules. Except for the generic Martini model, none of the CG FFs discussed above can describe the more complex setting of real biosystems, let alone handle the rich and growing diversity of bioinspired materials such as biofunctionalized nanoparticles, DNA-polymer hybrids, peptidosurfactants, and so on. Triggered by the development of more transferable models, we expect an increase in the number of compatible CG FFs in the near future.

The problem of a model being too coarse touches upon the very limits of coarse graining. No matter how hard we try, not every problem can be tackled with a CG model. Some atomistic details are notoriously hard to mimic at a CG level, for example, the directionality of H-bonding, and sometimes fine-grained resolution is required. In that respect, the active field of multiscaling166-171 shows a lot of promise. Multiscale methods treat part of the system, the region of interest, at high resolution and the surroundings at a lower level of resolution, thereby combining the advantages of atomistic and CG models. Multiscale methods can either use a static division, as in QM/MM, or allow particles to change resolution on the fly. The challenge is to achieve a realistic coupling between the atomistic and CG DOFs. One route is to specifically parameterize the cross-interactions, as demonstrated in a number of recent test systems.62,172,173 Although the results are encouraging, such methods are not easily transferable. A more generic approach is the AdResS (Adaptive Resolution Simulation) scheme, developed by Kremer and coworkers.174 In this method, a transition region allows molecules to pass from atomistic to CG resolution and vice versa as a function of the position of the molecule in the simulation box. The coupling of resolutions is achieved through the use of a thermodynamic force in the transition region that compensates for the chemical potential difference between the two resolutions. Another approach is the use of virtual sites to couple the two levels of resolution.175,176 With the help of these virtual sites, the interactions between CG and atomistic molecules are treated the same way as pure CG-CG interactions, and thus no need for additional parameters arises. For each of these methods, however, applications have so far been limited to simple test systems. The real benefit of multiscale methods has yet to come.
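A minimal sketch of the resolution coupling idea, based on the original AdResS force interpolation (the thermodynamic-force correction described above acts on top of this in more recent variants); the cos² ramp geometry and all numbers are illustrative assumptions:

```python
import numpy as np

def adress_weight(x, x_at, d_hy):
    """Resolution weight w(x): 1 in the atomistic zone, 0 in the CG zone,
    smoothly interpolated across a hybrid region of width d_hy."""
    if x < x_at:
        return 1.0
    if x > x_at + d_hy:
        return 0.0
    return np.cos(0.5 * np.pi * (x - x_at) / d_hy)**2

def adress_pair_force(w_i, w_j, f_atomistic, f_cg):
    """Force interpolation of the original AdResS scheme:
    F_ij = w_i*w_j*F_ij^AA + (1 - w_i*w_j)*F_ij^CG,
    so pairs deep in either zone feel a pure AA or pure CG force."""
    lam = w_i * w_j
    return lam * f_atomistic + (1.0 - lam) * f_cg

# A molecule halfway through a 1 nm hybrid region starting at x = 3 nm:
print(adress_weight(3.5, x_at=3.0, d_hy=1.0))  # 0.5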
Except for the generic Martini model, none of the CG FFs discussed above can describe the more complex setting of real biosystems, let alone handle the rich and growing diversity of bioinspired materials such as biofunctionalized nanoparticles, DNA-polymer hybrids, peptidosurfactants, and so on. Triggered by the development of more transferable models, we expect an increase in the number of compatible CG FFs in the near future. The problem of a model being too coarse touches upon the very limits of coarse graining. No matter how hard we try, not every problem can be tackled with a CG model. Some atomistic details are notoriously hard to mimic at a CG level, for example, the directionality of H-bonding, and sometimes fine-grained resolution is required. In that respect, the active field of multiscaling [166][167][168][169][170][171] shows a lot of promise. Multiscale methods treat part of the system, the region of interest, at high resolution and the surroundings at a lower level of resolution, thereby combining the advantages of atomistic and CG models. Multiscale methods can either use a static division as in QM/MM, or allow particles to change resolution on the fly. The challenge is to achieve a realistic coupling between the atomistic and CG DOFs. One route is to specifically parameterize the cross-interactions, as demonstrated in a number of recent test systems. 62,172,173 Although the results are encouraging, such methods are not easily transferable. A more generic approach is the AdResS (Adaptive Resolution Simulation) scheme, developed by Kremer and coworkers. 174 In this method, a transition region allows molecules to pass from atomistic to CG resolution and vice versa as a function of the position of the molecule in the simulation box. The coupling of resolutions is achieved through the use of a thermodynamic force in the transition region that compensates for the chemical potential difference between the two resolutions. Another approach is the use of virtual sites to couple the two levels of resolution. 175,176 With the help of these virtual sites, the interactions between CG and atomistic molecules are treated the same way as pure CG-CG interactions, and thus no need for additional parameters arises. For each of these methods, however, applications have so far been limited to simple test systems. The real benefit of multiscale methods has yet to come. CONCLUSIONS This review hopefully has provided the reader with a perspective on CG modeling of biosystems. Given the large variety of biomolecular processes, covering many length and time scales, CG models offer access to otherwise unreachable dimensions. However, the universal CG FF does not exist. The real challenge is to choose the right model for the right problem, and to know the inherent limitations. Keeping this in mind, we foresee a bright future for CG modeling, which will claim its place, bridging the microscopic and macroscopic worlds.
Proton Grid Therapy: A Proof-of-Concept Study In this work, we studied the possibility of merging proton therapy with grid therapy. We hypothesized that patients with larger targets containing solid tumor growth could benefit from being treated with this method, proton grid therapy. We performed treatment planning for 2 patients with abdominal cancer with the suggested proton grid therapy technique. The proton beam arrays were cross-fired over the target volume. Circular or rectangular beam element shapes (building up the beam grids) were evaluated in the planning. An optimization was performed to calculate the fluence from each beam grid element. The optimization objectives were set to create a homogeneous dose inside the target volume with the constraint of maintaining the grid structure of the dose distribution in the surrounding tissue. The proton beam elements constituting the grid remained narrow and parallel down to large depths in the tissue. The calculation results showed that it is possible to produce target doses ranging between 100% and 130% of the prescribed dose by cross-firing beam grids, incident from 4 directions. A sensitivity test showed that a small rotation or translation of one of the used grids, due to setup errors, had only a limited influence on the dose distribution produced in the target, if 4 beam arrays were used for the irradiation. Proton grid therapy is technically feasible at proton therapy centers equipped with spot scanning systems using existing tools. By cross-firing the proton beam grids, a low tissue dose in between the paths of the elemental beams can be maintained down to the vicinity of a deep-seated target. With proton grid therapy, it is possible to produce a dose distribution inside the target volume of similar uniformity as can be created with current clinical methods. Introduction For over a century, grid therapy has been carried out on a small scale at a few clinics around the world with the aim of reducing the size of large bulky tumors. 1,2 Historically, unidirectional (occasionally parallel opposing) photon beam grids have been used for grid therapy. The elemental beams, building up the grid, typically have had sizes of approximately 1 cm or larger at the patient surface and have been separated by a similar distance. The beam grid array has been used to irradiate the patients with a chessboard-shaped irradiation pattern. At the onset of grid therapy, the aim was to reduce the skin toxicity observed in the early days of radiotherapy. 3 Later, it was realized that not only would the skin benefit from leaving volumes of unirradiated cells in between the radiation beams but that the toxicity is also reduced for other organs located deep beneath the skin. 4 The dose delivered to the target in grid therapy has typically alternated between very high peak and lower valley doses. In recent years, a large number of patients have been treated for bulky head and neck, thoracic, and abdominal cancers using the grid technique, with impressive results. [5][6][7][8][9][10] Typically, a large dose, for example, 15 Gy (inside each of the small beams building up the grid), has been given to the targeted disease in a single fraction. Sometimes, it has been combined with other therapies. Grid therapy has been found to produce limited toxicity in the surrounding sensitive tissues, considering the high in-beam doses given.
Although certain subvolumes of the target (in between the beams) are given lower doses, significant reductions in the sizes of large tumors have been demonstrated. The high normal tissue tolerance to beam grids is closely related to the so-called dose-volume effect, which has been described for single beams. 11 Experiments with beam sizes in the millimeter to centimeter range with both protons and photons have demonstrated that the tolerance doses for certain biological endpoints rise with reduced beam sizes. 11,12 The migration of cells from unirradiated to irradiated volumes and an improved vascular repair if only a short segment of a vessel is irradiated have been stated as reasons for the improved tissue repair for smaller beam sizes. Experiments and preclinical radiotherapy trials with photon and ion beam grids, containing beam elements of widths in the micrometer to millimeter range, have more recently been carried out. [13][14][15][16] In this work, we calculated dose distributions, produced by proton beam grid irradiations, using real patient composition data. Using proton beams, instead of photons, enables better protection of sensitive risk organs located posterior to the target due to the protons' limited range in tissue. The aim of this work was to study whether it is possible to produce a dose distribution with a well-defined grid structure throughout the normal radiation-sensitive tissue while delivering a more uniform dose (with a high minimum dose) to a large, deep-seated target, containing solid cancer growth. For this purpose, we explored the use of cross-firing of proton beam grids over the target volume. Cross-firing allows for a larger separation between the beam elements incident from each direction than what would have been possible if only 1 beam grid would have been used to irradiate the whole target volume with a uniform dose. That, in turn, makes it possible to maintain a low dose in between the beam elements of the grid, which has been shown to be of importance to keep the toxicity at a low level for the grid therapy carried out in the past. Even though the dose-volume effect has only been systematically studied for a few organs and biological endpoints, we hypothesize that many of the organs traversed by the beam array exhibit an increased radiation tolerance if irradiated with grids containing small beams (width <1.5 cm) instead of with conventional beams, used clinically, of widths of several centimeters. 4 Modern proton therapy centers provide the possibility to perform so-called spot scanning, which makes it possible to scan the proton pencil beam in a grid-like pattern without added collimation. The divergence of the scanned proton beam is small. Therefore, the elemental beams, building up the grid, will be quasi-parallel and nonoverlapping. However, Coulomb scattering will widen the elemental proton beams with increasing depth in tissue. We regard this study as a first step in the development of a new grid therapy method. We expect the target dose to be more inhomogeneous than what is typically created when cross-firing uniform beams. Therefore, we suggest that proton grid therapy (PGT) with cross-firing could be used for treating solid tumor growth. If the minimum dose is high enough, the therapeutic objectives could be reached, despite a more fluctuating target dose. At the moment, we are mainly focused on the dosimetric possibilities offered by such a technique.
The dose prescription that should be used for this type of treatment and whether the suggested technique should be combined with other therapies are not dealt with in this study. Issues regarding organ motion, setup and range uncertainties, and the validity of radiobiological assumptions are only briefly discussed here and will be addressed separately in more detail at later stages of this project. The patient computed tomography (CT) data sets with delineated structures used in this study (from 1 patient with liver cancer and 1 patient with rectal cancer) have been selected based on the shape, size, and location within the body of the planning target volumes (PTVs). Whether these 2 specific cancer types are suitable to treat with PGT must be determined based on several other medical considerations than what is considered in this work. Irradiation Setup Two types of proton beam grids were evaluated in this work: a 1-dimensional (1-D) grid with narrow rectangular ('planar') beams (Figure 1A) and a 2-dimensional (2-D) grid with circular beam elements (Figure 1B). The total beam area in each array is different for these 2 cases. With the intention of producing a more homogeneous dose to the target volume, an approach with an interlaced cross-firing irradiation technique was attempted (Figure 2). This method has been developed in microbeam grid radiotherapy research but it has not yet been used clinically. [17][18][19] With this technique, the target is irradiated with grids of beams incident from different directions. By slightly shifting the grid position in a direction perpendicular to the direction of the beam grid propagation, the beam grids can be interlaced at a certain depth. Patient Data and Treatment Planning With prior permission, CT data with delineated structures from 2 anonymous Karolinska Hospital patients, previously treated with conventional photon radiotherapy for either liver or rectal cancer, were used for this study. All of the grid plans were prepared using the Varian Eclipse treatment planning system version 13 (Varian Medical Systems, Palo Alto, California). The proton beam data from the Skandion proton therapy center, with available energies from 70 to 235 MeV, were employed. The smallest beam size (full width at half maximum, FWHM) at the patient skin currently available at the Skandion clinic was used for the elemental beams in the grid. It varies from 7 to 10 mm, depending on the incident proton energy. A spot spacing of 8 mm was chosen to build the grid pattern since it was of similar size as the beam FWHM, which was considered suitable for the interlaced cross-firing. For the maximum proton kinetic energy considered in this work (235 MeV), the maximum kinetic energy of secondary electrons will be approximately 0.5 MeV. The maximum range in tissue for this electron energy is less than 2 mm, which means that the electrons produced near the beam edge will not reach midway between 2 neighboring elemental beams in the grid. However, the incident proton beams have a Gaussian spatial distribution, which means that a small fraction of the protons will be incident on the patient also in between the beam elements and produce dose there. The treatment fields (the proton beam grids) were created in a stepwise process. Multifield optimization, that is, intensity-modulated proton therapy (IMPT), of regular broad proton beams was first performed with a homogeneous dose objective set for the PTV.
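The low dose between the Gaussian beam elements noted above can be estimated directly. In the Python sketch below, one grid is modeled as a row of equal-weight Gaussian beams at the 16 mm center-to-center spacing used in the final grids (introduced in the next step); the widths standing in for scatter-broadened beams at larger depths are illustrative assumptions, not planning-system output.

```python
import numpy as np

def grid_profile(x_mm, fwhm_mm, spacing_mm=16.0, n_beams=9):
    """Lateral profile of a 1-D grid: sum of equal-weight Gaussian beam elements."""
    sigma = fwhm_mm / 2.3548  # FWHM = 2*sqrt(2*ln2)*sigma
    centers = (np.arange(n_beams) - n_beams // 2) * spacing_mm
    return sum(np.exp(-0.5 * ((x_mm - c) / sigma) ** 2) for c in centers)

x = np.linspace(-8.0, 8.0, 1601)  # one period around the central beam element
for fwhm in (8.0, 9.0, 14.0):     # skin value vs. assumed widths at larger depths
    dose = grid_profile(x, fwhm)
    print(f"FWHM {fwhm:4.1f} mm -> peak-to-valley ratio ~ {dose.max() / dose.min():.1f}")
```

With an 8 mm FWHM the peak-to-valley contrast is roughly 8, and it collapses rapidly once scattering widens the beams toward the grid period, which is the behavior reported in the results below.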
One hundred percent of the PTV should receive 100% of the prescribed dose (priority = 100). To prevent too large dose inhomogeneities inside the PTV, an additional restriction was included. Only 5% of the target volume was allowed to receive more than 120% of the prescribed dose (priority = 50). This first optimization resulted in 2 or 4 (depending on the number of grids used) rather uniform fields, without any grid pattern. In the next step, the 'edit spots' built-in functionality of Eclipse was used to delete selected spots and thereby build the grid pattern. For the 1-D grid irradiations, every second vertical line of spots was deleted, whereas for the 2-D grid irradiations, every second vertical and horizontal line of spots was deleted. After performing the spot deletions, the center-to-center distance between 2 beam elements inside a grid was 16 mm. To produce the interlacing, the spots that were deleted in 1 field were kept in the opposing field and vice versa. Finally, using the same objectives and priorities as used initially, but with the new spot maps, a second optimization was performed. The dose distributions were normalized to 100% at the point of PTV minimum dose. A few initial calculation tests showed that if the spot spacing was set to a larger value than the 8 mm chosen, for example, 10 mm (ie, 20 mm after the spot deletion), it was not possible for the optimization algorithm to reach the optimization objectives. The dose-volume histograms (DVHs) for the PTVs were then extracted and compared for the different grid plans prepared. Furthermore, the conformity index (CI), defined as the ratio between the volume receiving 100% of the prescribed dose and the PTV volume, was determined. Intensity-modulated proton therapy has been shown to be sensitive to organ motion and setup errors. 20 We also evaluated the robustness of the treatment against setup errors by varying the position or the rotation angle of one of the incident beam arrays. Results In Figure 3A and B, coronal views of the patient with rectal cancer are shown at a tissue depth of 7 cm (3 cm upstream from the target) with superimposed dose distributions produced by a 1-D grid or a 2-D grid irradiation, respectively. As shown in these figures, the dose distributions produced by the individual beam elements were well separated at this depth. The small divergence of the proton beams and the short range of the secondary electrons produced created a low dose in between the beam elements down to depths where the Coulomb scattering had widened the proton beams considerably. In Figure 4A to D, it can be observed that it was possible to maintain a grid-shaped dose pattern from the skin down to the proximity of the PTV, even though the beams are densely spaced. Despite the preserved grid pattern of the delivered dose close to the target surface, satisfying dose coverage of the target was achieved by combining the interlacing and cross-firing techniques. The choice of beam setup (1-D or 2-D grids and the number of grids used) has an important impact on the produced dose distributions. For the case depicted in Figure 4A, for which only 2 opposing 1-D grids were used, the target dose was rather homogeneous, ranging from 100% to 116%, with a mean dose of 111%. The maximum peak dose in the elemental beams outside the target was approximately 71% at a distance of 2 cm from the target.
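The spot-map bookkeeping described in the planning steps above can be sketched in a few lines of Python. This is a schematic rather than Eclipse functionality: the lattice size is arbitrary, and real fields carry per-spot fluences from the optimizer.

```python
# Full scanning lattice after the first IMPT optimization: 8 mm spot spacing.
SPACING_MM = 8.0
NX, NY = 12, 12  # illustrative number of spot columns/rows (assumption)

full_lattice = [(i, j) for i in range(NX) for j in range(NY)]

# 1-D grid: delete every second vertical line of spots; the opposing field
# keeps exactly the deleted lines, so the two grids interlace at depth.
field_a = [(i, j) for (i, j) in full_lattice if i % 2 == 0]
field_b = [(i, j) for (i, j) in full_lattice if i % 2 == 1]

assert set(field_a) | set(field_b) == set(full_lattice)  # complementary coverage
# Center-to-center distance within one field is now 2 * SPACING_MM = 16 mm.
print(len(field_a), len(field_b))  # 72 spots per field on this toy lattice
```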
As shown in Figure 4B, a higher level of homogeneity could be produced inside the target when using 2 × 2 opposing 1-D grids (mean dose = 107%; maximum dose = 113%). The maximum peak dose outside the PTV also decreased to 46%. Similar results were obtained for the rectal cancer case with this geometry (Figure 4C), with PTV mean and maximum doses of 114% and 125%, respectively. The use of 2 × 2 opposing 2-D grids (Figure 4D) created a mean dose of 120% and a maximum dose of 142% inside the target, but higher doses were also delivered in the beams outside the PTV. This tendency could be noted especially at the skin entrance where the relative maximum dose was approximately 82%. Figure 5 shows the differential PTV-DVHs for the liver and rectal cancer target irradiations for each of the beam setups studied in this work. The maximum peak and valley doses at the skin entrance and at a position 2 cm anterior (upstream) to the PTV are summarized in Table 1, as well as the mean and maximum doses inside the PTV. Since large fluctuations of the valley and peak doses could be observed for each irradiation setup (even within the same grid), only the maximum valley and peak doses at the different locations were recorded. Of all the cases studied, the 2 × 2 opposing 1-D grid geometry produced the highest dose homogeneity inside the target. On the other hand, the 2 × 2 opposing 2-D grid geometry produced the highest mean and maximum target doses, indicating that it is difficult to avoid cold spots in the target with this setup. Because a smaller total beam area is used with this configuration, the peak doses outside the PTV must be increased in order to deliver a satisfying minimum target dose. In Figure 6, the variation in the peak and valley doses with depth is shown inside one of the 1-D and one of the 2-D beam grids used for the irradiation of the rectal cancer target, as shown in Figure 4C and D, respectively. The doses produced inside the anterior treatment grids were studied and the dose profiles were recorded at the center of these grids. The peak-to-valley dose ratios (PVDRs) ranged from approximately 9 at the skin to approximately 5 at 2 cm anterior to the PTV, inside both of these grids. Finally, the PVDRs were approximately 1 or 1.1 inside the PTV for the 1-D or 2-D beam grids, respectively. The target dose homogeneity was somewhat lower for the 2-D beam grid irradiation setup. In Table 2, the doses received by 95% and 50% of the PTV are presented for both the liver and rectal cancer treatments for the 4 different grid irradiation techniques evaluated. The CI is also shown. Higher D95%, D50%, and CI values can be observed for the 2-D grid irradiation setups. A situation in which one of the grids used in the cross-firing was translated or rotated was also studied to determine the robustness of the proposed grid irradiation to realistic setup uncertainties. Gantry angular shifts of 1°, 2°, and 3° were considered as well as lateral translations (perpendicular to the beam propagation) of the patient position of 1 and 2 millimeters. In Table 3, the resulting minimum, maximum, and mean doses inside the target after the translations or shifts of one of the grids are presented. Due to the rather poor target dose uniformity obtained for the irradiation geometry with 2 opposing 2-D grids, it was excluded from this part of the study. The use of 4 beam grids instead of 2 increased the robustness of the treatment to small setup errors.
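The geometric reason why lateral shifts hurt more than small gantry rotations can be illustrated with two interlaced rows of Gaussian beams, shifting one row by the setup error. In this Python sketch, the beam width is representative of the broadened beams near the interlacing depth and is an assumption:

```python
import numpy as np

def row_of_beams(x_mm, centers_mm, fwhm_mm=8.0):
    """Sum of equal-weight Gaussian beam elements at the given centers."""
    sigma = fwhm_mm / 2.3548
    return sum(np.exp(-0.5 * ((x_mm - c) / sigma) ** 2) for c in centers_mm)

x = np.linspace(-24.0, 24.0, 4801)
grid_a = np.arange(-48.0, 49.0, 16.0)   # 16 mm period, as in the plans above
grid_b = grid_a + 8.0                   # opposing grid, offset by half a period
for shift_mm in (0.0, 1.0, 2.0):        # lateral setup error applied to grid B
    dose = row_of_beams(x, grid_a) + row_of_beams(x, grid_b + shift_mm)
    print(f"shift {shift_mm} mm: peak/valley = {dose.max() / dose.min():.2f}")
```

With a perfect interlace the combined profile is nearly flat; as the shift grows, the peaks of grid B drift toward the peaks of grid A instead of its valleys, and hot and cold spots appear, as the results below describe.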
In most cases, a shift in the gantry angle of 1 grid had only a small effect on the final dose distribution in the target. However, a lateral shift of one of the beam grids caused bigger fluctuations. In that case, the peaks of the shifted grid are getting closer to the peaks of the opposing grid instead of coinciding with the valleys. As a result, hot and cold spots appear. The mean target dose, on the other hand, was unaffected by setup errors, as can be expected. In small volumes containing normal tissue, located next to the target, the doses in some cases increased by up to 10% as a result of the setup errors. Since these hot spot volumes are small, we do not expect them to increase the treatment toxicity significantly. Further away from the target (at more than a few mm distance), the doses given to the skin and other normal tissues remained unchanged. Discussion The results of this work showed that it is possible to create a rather uniform dose (with a high minimum dose) inside the PTV by cross-firing proton beam grids. Similar dose homogeneity can be achieved with PGT as with more conventional treatment techniques, for example, stereotactic body radiation therapy (SBRT). 21 The choice of irradiation geometry, that is, beam grid type and the number of grids used, was shown to have an important impact on the dose homogeneity that could be achieved inside the PTV. Cross-firing of beam grids, containing either ion or photon beam elements, over a target volume has previously been suggested. [22][23][24][25] In previous studies of proton beam grid therapy, the aim has normally been to create a uniform target dose. 15,16,23 On the contrary, when photon beam grids have been considered for cross-firing, the aim has often been to produce a highly nonuniform target dose, reminiscent of what can be created in brachytherapy. 22,24,25 It is evident that it is possible to create nonuniform target doses also with proton beam grid irradiations. However, existing tumor control probability models indicate that an improved therapeutic effect can be obtained if the minimum target dose is sufficiently elevated. 26 We have shown in this work that a high minimum target dose can be produced with the proposed interlaced cross-firing PGT technique, without irradiating any risk organ from more than 1 direction. This will be more difficult to achieve with photon-based grid therapy with divergent beams. Furthermore, we have demonstrated that it is possible to preserve the grid structure of the dose distribution down to the depth of the target volume if cross-firing is used. This distinguishes this work from previous studies done on proton beam grid therapy, 15,16 in which a homogeneous dose has been produced for each incident beam grid. Keeping the grid structure of the dose distribution down to the direct vicinity of the target (with high doses restricted to small volumes) could be of importance to prevent side effects. In photon beam-based radiosurgery, the tissue volume immediately surrounding the target is normally given high doses. It is in these tissue volumes that the risk of radiation-induced side effects after radiosurgery has been shown to be the highest, for example, in the brain. 27 For extracranial SBRT, there is to our knowledge no detailed report describing which metric is the most appropriate for predicting the risk of side effects.
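For reference, the target metrics used above (D95%, D50%, and the CI) are simple functionals of the planned dose. A minimal sketch of how they can be computed, assuming a flattened array of PTV voxel doses and uniform voxel volumes (both placeholders):

```python
import numpy as np

def dose_at_volume(ptv_doses, volume_fraction):
    """D_v: dose received by at least `volume_fraction` of the PTV (e.g., D95%)."""
    return np.percentile(ptv_doses, 100.0 * (1.0 - volume_fraction))

def conformity_index(dose_grid, ptv_mask, prescription=100.0):
    """CI = (volume receiving >= 100% of prescription) / (PTV volume).
    Voxel counts stand in for volumes under the uniform-voxel assumption."""
    return np.count_nonzero(dose_grid >= prescription) / np.count_nonzero(ptv_mask)

rng = np.random.default_rng(1)
ptv = rng.normal(110.0, 6.0, size=10_000)  # toy PTV dose sample, % of prescription
print(f"D95% = {dose_at_volume(ptv, 0.95):.1f}%, D50% = {dose_at_volume(ptv, 0.50):.1f}%")
```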
The DVHs for the normal tissue extracted from the grid therapy planning (not shown in this article) give an indication of which dose levels are present in this tissue but cannot be used for a direct estimation of the negative side effects of the treatment. The reason is that the DVHs show a summary of the doses given to independent volumes regardless of whether spatial fractionation is used. The DVHs in that sense depend only on the total beam area used to irradiate a delineated volume. In this work, we have investigated the possibility to perform proton grid treatments with beam sizes and energies available at modern proton facilities using a commercial treatment planning system. However, substantial evidence from preclinical research indicates that the use of even smaller beams could improve the normal tissue tolerance to this type of treatment even further. 12,13,18 Further studies to determine suitable proton beam collimation techniques are required to develop a therapeutic method using beam widths of only 1 or a few millimeters. Moreover, the increase in the tolerance doses with decreasing beam sizes for different organs and endpoints must be more accurately determined to establish a clinical advantage of PGT. The prescription doses that should be administered and whether fractionation should be used must also be decided. The possibility of using this treatment as a boost will be evaluated later. In this work, the IMPT method was used to produce the treatment plans for the grid irradiations. The proton range uncertainties are important to consider in IMPT when the proton beam path goes through tissue with a density that varies with, for example, breathing motion. Intensity-modulated proton therapy has previously been suggested for the treatment of abdominal cancer by other authors. 28,29 We expect that the range uncertainties will have a similar impact on the delivered doses in PGT as for other types of proton beam treatments. The range uncertainties affect the dose distribution along the beam grid propagation direction. They will therefore not degrade the beam grid characteristics. We suggest irradiations from several angles (as was done in this work) to improve the target dose coverage and the treatment robustness, and to reduce the peak doses given to normal tissue. Furthermore, since we aim to irradiate solid tumors, minor fluctuations in the target dose will be less important than in situations when smaller targets with microscopic spread are treated, as long as the minimum dose is sufficiently high. If the cross-fired proton beam grids are misaligned, due to, for example, organ motion, the target dose will become more inhomogeneous and more similar to what is typically achieved with photon beam-based grid therapy. Methods such as image-guided radiotherapy, abdominal pressure, gating, and rescanning can of course also be used to further reduce the treatment delivery uncertainties. Conclusion The PGT method suggested in this work can be offered by proton therapy centers equipped with spot scanning capabilities. With PGT, a rather homogeneous and high dose can be produced in a deep-seated target, while the grid structure of the dose distribution can be maintained down to the vicinity of the target volume (with a low dose in between the beams). By cross-firing 1-D proton beam grids, a more uniform target dose, with a CI closer to 1.0, can be produced than with 2-D proton beam grids.
We anticipate that the high minimum dose given to the target by cross-firing the proton beam grids will translate into an increased tumor control probability, compared to what can be obtained with the highly nonhomogeneous target dose obtained with unidirectional photon beam grid irradiations. We also expect that a high degree of normal tissue sparing can be obtained because the normal tissue is only irradiated with grids of small beams down to large depths. A sensitivity test showed that treatments with 4 interlaced proton beam grids are reasonably robust against setup errors. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Financial support from the Cancer Research Funds of Radiumhemmet is gratefully acknowledged.
Dynamic Mechanism of Cerebral Venous Disruption: Longitudinal Evidence From a Community‐Based Cohort Background This study aims to investigate the temporal and spatial patterns of structural brain injury related to deep medullary veins (DMVs) damage. Methods and Results This is a longitudinal analysis of the population‐based Shunyi cohort study. Baseline DMVs numbers were identified on susceptibility‐weighted imaging. We assessed vertex‐wise cortex maps and diffusion maps at both baseline and follow‐up using FSL software and the longitudinal FreeSurfer analysis suite. We performed statistical analysis of global measurements and voxel/vertex‐wise analysis to explore the relationship between DMVs number and brain structural measurements. A total of 977 participants were included in the baseline, of whom 544 completed the follow‐up magnetic resonance imaging (age 54.97±7.83 years, 32% men, mean interval 5.56±0.47 years). A lower number of DMVs was associated with a faster disruption of white matter microstructural integrity, reflected by increased mean diffusivity and radial diffusivity (β=0.0001 and SE=0.0001 for both; P=0.04 and 0.03, respectively), in extensive deep white matter (threshold‐free cluster enhancement P<0.05, adjusted for age and sex). Of particular interest, we found a bidirectional trend association between DMVs number and change in brain volumes. Specifically, participants with mild DMVs disruption showed greater cortical enlargement, whereas those with severe disruption exhibited more significant brain atrophy, primarily involving clusters in the frontal and parietal lobes (multiple comparison corrected P<0.05, adjusted for age, sex, and total intracranial volume). Conclusions Our findings revealed the dynamic pattern of brain parenchymal lesions related to DMVs injury, shedding light on the interactions and chronological roles of various pathological mechanisms. In recent years, a growing body of evidence has shown an association between cerebral venules and white matter damage, as well as with neurodegenerative diseases, [1][2][3][4] raising the research interest in the association between brain venular lesions and brain parenchymal injury. Using the technology of high-field magnetic resonance imaging (MRI) and susceptibility-weighted imaging, 3 studies have shown that a lower number of deep medullary veins (DMVs), venules of 50 to 400 μm in diameter, was associated with larger white matter hyperintensity (WMH) volume, with a presumed mechanism of impaired venous reflux and white matter edema, 2,5 whereas there were inconsistent results. 3,6,7 Therefore, further research is needed to form a more comprehensive view of the impact of small venules on white matter damage. Diffusion tensor imaging (DTI) and skeleton-based analysis may provide a more sensitive imaging marker than WMH. On the other hand, the role of the perivenous space in the brain lymphatic system has been increasingly recognized, and the relationship between perivenous disease and brain atrophy has gathered attention. 8,9 Cross-sectional studies also showed that DMVs damage is significantly associated with brain atrophy, in both community-based populations 3,4 and in patients with cerebral small vessel disease (CSVD). 2
A hypothesis has been proposed that damage to DMVs may lead to hindered lymphatic drainage and impaired clearance of amyloid substances, resulting in gray matter atrophy and involvement in the processes of neurodegeneration and aging. However, whether and how CSVD leads to brain atrophy remains unclear at present, and its associated mechanisms require further exploration. We hypothesize that there might be a coexistence of venous drainage obstruction and lymphatic clearance impairment in the mechanism of cerebral small venule disease, leading to brain parenchymal injury and brain atrophy with spatial and temporal interactions. However, the cross-sectional setting in previous studies was unable to depict the chronological order among cerebral venular disease, white matter damage, and brain atrophy. Thus, this study aims to investigate the temporal and spatial patterns of DMV injury, white matter damage, and brain atrophy in a longitudinal community-based cohort. METHODS The data that support the findings of this study are available from the corresponding author upon reasonable request. Study Population All participants were from the Shunyi Study, an ongoing prospective community-based cohort study designed to investigate the risk factors and associated brain imaging of cardiovascular and age-related diseases. The design and methods of the Shunyi study have been described previously. 10 All residents aged 35 and older living in 5 villages in Shunyi, a suburban district of Beijing, were invited. Between June 2013 and April 2016, a total of 1586 participants underwent a baseline assessment that included structured questionnaires, physical examinations, and blood tests. After excluding those who refused or had MRI contraindications, 1257 participants completed the baseline brain MRI. Of all participants who underwent baseline MRI, we excluded those with a history of stroke or intracranial arterial stenosis greater than 50% (n=81) and those with poor imaging quality for structural segmentation or DMVs assessment (n=199). Finally, 977 participants were included in the baseline analysis. All participants were invited for MRI follow-up, and 650 of them completed it. Of the 650 participants who completed follow-up MRI, 12 patients with new stroke were excluded, and 94 participants were excluded due to poor imaging quality, leaving 544 subjects for the longitudinal analysis. The study flow chart is shown in Figure 1. All participants provided written informed consent. The study was approved by the Ethical Committee of Peking Union Medical College Hospital (reference number: B-160). MRI Protocol At baseline and follow-up, MRI was performed using the same scanner and protocol. [Sidebar: CLINICAL PERSPECTIVE. What Are the Clinical Implications? Our findings showed that DMV disruption is a potential predicting marker for widespread microstructural injury.] Assessment of the Number of DMVs The number of DMVs was quantified by visual inspection based on the protocol described by Ao et al. 3
Briefly, a fixed area of 6 mm thickness from the inferior to the superior margin of the corpus callosum (parallel to the anterior-posterior junction) was first selected on the median sagittal T1 image. The minimum density projection was derived from 4 consecutive slices of axial raw susceptibility-weighted imaging images within a fixed area to generate a 2-dimensional image. Rectangular (6 cm×1 cm) regions of interest parallel to the lateral ventricle were placed adjacent to the left and right ventricles, respectively, and the DMVs crossing within the regions of interest were counted. The number of DMVs was defined as the average count of veins in both regions of interest. A random sample of 50 individuals was used to assess the intrarater reliability, and there was a gap of more than 1 month between the first and second readings. A total of 50 individuals were separately rated by 2 trained and blinded researchers (Y.C.Z. and D.H.A.). The kappa values for intra- and interrater reliability were 0.76 and 0.79, respectively. The intraclass correlations for intra- and interrater reliability were 0.68 and 0.75, respectively. Assessment of CSVD WMH is defined as an abnormal signal in the white matter region of the brain that appears iso- to hypointense on T1WI sequences and hyperintense on T2WI and fluid-attenuated inversion recovery sequences. Quantitative WMH volumes for baseline and follow-up were calculated by the brain intensity abnormality classification algorithm 12 using the same processing methodology in the same server environment. WMH masks were manually segmented from the fluid-attenuated inversion recovery sequences of 25 subjects as training data for the brain intensity abnormality classification algorithm. The output of this algorithm is a WMH probability map for each individual, which was then thresholded at 0.95 to generate a binary map of the WMH, from which the WMH volume for each individual (mL) was extracted. The training set was independently validated by a neurologist (Z.F.F.) who was blinded to the clinical status of the subjects. Compared with manual segmentation, the Dice coefficient was 0.87 and the intraclass correlation coefficient was 0.94. The WMH volumes obtained were log-transformed because of their skewed distribution. The annual change of the WMH was calculated as (follow-up value − baseline value)/follow-up years. Assessment of lacunes is described in Data S1. DTI Data Processing Baseline and follow-up DTI images were processed using the FMRIB Software Library v6.0 (FSL, https://web.mit.edu/fsl_v5.0.10/fsl/doc/wiki/FSL.html) in the same server environment. The process includes skull stripping, eddy current and head motion correction, spatial alignment normalization, and calculation of diffusion parameter values. Fractional anisotropy, mean diffusivity, axial diffusivity, and radial diffusivity maps, which represent the microstructural integrity of the white matter, were generated for each participant. Subsequently, progression maps of fractional anisotropy, mean diffusivity, axial diffusivity, and radial diffusivity, calculated as the difference between the follow-up and baseline values divided by the follow-up interval (years) in each voxel, were obtained using the fslmaths function (http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/fslmaths)
of the FSL software. Global mean DTI metrics were calculated based on all voxels with fractional anisotropy thresholds >0.2 in the generated white matter masks, and the difference between baseline and follow-up was used as the amount of change in brain-wide DTI parameters. The annual change of the global mean fractional anisotropy, mean diffusivity, axial diffusivity, and radial diffusivity was calculated as the difference between the follow-up and baseline values divided by the follow-up interval (years). Assessment of Atrophy Measures To assess brain structure volumes (gray matter, white matter, and hippocampus), an automatic segmentation pipeline using the Freesurfer image analysis suite v6.0 was performed in the native space. 13 Brain parenchymal fraction (BPF) was calculated as the ratio of brain tissue volume (sum of gray matter and white matter volumes) to total intracranial volume. Gray matter fraction (GMF), white matter fraction, and hippocampus fraction were calculated as the gray matter, white matter, or bilateral hippocampus volume divided by total intracranial volume. The annual change of the BPF, GMF, and white matter fraction was calculated as the difference between the follow-up and baseline values divided by the follow-up interval (years). Assessment of Covariates Demographic and clinical information, including age, sex, smoking status, blood pressure, history of hypertension, diabetes, hyperlipidemia, and current medications, was collected using a structured questionnaire and physical examination. Blood pressure was measured 3 times, and the mean value was used. Definitions of these covariates are provided in Data S1. Statistical Analysis Descriptive analyses of demographic characteristics, neuroimaging characteristics, and their changes over time were conducted in the baseline and follow-up populations. Continuous variables were expressed as mean±SD or median with interquartile range, and categorical variables were expressed as frequencies and proportions. We used t tests for continuous variables and chi-square tests for categorical variables to analyze differences in baseline characteristics between follow-up participants and lost participants. The relationships between the number of DMVs and the rate of progression of WMH volume, DTI parameters, and brain structural volume were examined using general linear models with the number of DMVs as the dependent variable. Three models were used to adjust for confounders in a stepwise manner. Model 1 was univariate. Model 2 was adjusted for age and sex. Model 3, based on model 2, was additionally adjusted for hypertension, diabetes, and hyperlipidemia. In analyzing the correlation between DMVs numbers and the annual rate of progression of brain volume parameters, the linear, quadratic, and cubic DMVs numbers were used as independent variables, and the best fit (with the largest R²) was taken as the final fitted model.
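A stripped-down sketch of these longitudinal quantities and of the polynomial model selection, in Python; it uses plain least squares and omits the age, sex, and vascular risk factor adjustments of models 2 and 3, so it illustrates the logic rather than reproducing the SAS models.

```python
import numpy as np

def annual_change(follow_up, baseline, interval_years):
    """Annualized progression rate, as defined in the methods above."""
    return (np.asarray(follow_up) - np.asarray(baseline)) / np.asarray(interval_years)

def best_polynomial_fit(dmv_count, outcome_rate, max_degree=3):
    """Fit degree 1-3 polynomials of DMV number; keep the fit with the largest R^2."""
    x = np.asarray(dmv_count, float)
    y = np.asarray(outcome_rate, float)
    best = None
    for degree in range(1, max_degree + 1):
        coefs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coefs, x)
        r2 = 1.0 - residuals.var() / y.var()
        if best is None or r2 > best[0]:
            best = (r2, degree, coefs)
    return best  # (R^2, chosen degree, polynomial coefficients)
```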
All analyses were conducted using SAS 9.4 (SAS Institute, Cary, NC), and 2-sided P values of <0.05 were considered statistically significant. [Table 1 footnote: The annual change for continuous variables was calculated as (follow-up value − baseline value)/follow-up years; the annual change of lacunes' number was calculated as (number of new-onset lacunes)/follow-up years. BPF indicates brain parenchymal fraction, the sum of gray matter and white matter volumes/total intracranial volume; GMF, gray matter fraction, gray matter volume/total intracranial volume; WMH, white matter hyperintensity; and WMF, white matter fraction, white matter volume/total intracranial volume.] For the voxel-wise analysis of DTI metrics and DMVs numbers, we performed tract-based spatial statistics (http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/TBSS). 14 The diffusion progression images for each individual were skeletonized and then merged into 4-dimensional images using the FSL Merge function. General linear model analysis was performed using a permutation-based statistical inference tool for a nonparametric approach ('randomise'). The number of permutation tests was set at 5000, and the threshold for significance was set at threshold-free cluster enhancement 15,16 corrected at P<0.05. The relationship between the number of DMVs and the rate of change of cortical volume was calculated using surface-based group analysis. Two-stage models were applied for longitudinal analysis, where the percentage of change in the vertex-wise maps was calculated according to the scan interval. We used generalized linear models to derive the spatial distribution of cortical atrophy rates associated with DMVs number. Significant clusters were determined by the permutation method (permutation number=5000) at a cluster-corrected level of P<0.05 (cluster-forming vertex threshold was set at P<0.001) across both hemispheres. This analysis was performed in 4 subgroups of DMV count quartiles to characterize the progression of cortical atrophy at different stages of DMV injury. Demographics The baseline characteristics of the cohort (n=977) are presented in Table S1. Five hundred forty-four participants (55.7%) underwent follow-up brain MRI after a mean interval of 5.56±0.47 years, and their baseline and follow-up demographic characteristics are shown in Table 1. The mean number of DMVs was 18.96±1.67 at baseline. The initial median volume of WMH was 0.26 mL, with an interquartile range from 0.17 to 0.42 mL. The median annual increase in WMH volume during follow-up was 0.028 mL, ranging from 0.013 to 0.05 mL. The mean BPF at baseline was 75.78±2.57%, with an annual decrease of 0.38±0.26%. Those lost to MRI follow-up were older, more likely to be men, and had a higher prevalence of cardiovascular risk factors, a heavier burden of CSVD, and more severe brain atrophy than those who were followed (Table S2).
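The permutation logic behind the nonparametric voxel-wise inference can be illustrated for a single association, in Python; the actual TBSS analysis permutes the design matrix across thousands of skeleton voxels and applies threshold-free cluster enhancement, so this is only the one-variable skeleton of the idea.

```python
import numpy as np

def permutation_p_value(x, y, n_perm=5000, seed=0):
    """Two-sided nonparametric p-value for the slope of y on x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float)

    def slope(yy):
        # Least-squares slope of yy on centered x.
        return float(x @ (yy - yy.mean())) / float(x @ x)

    observed = slope(y)
    # Break the x-y pairing by shuffling y; recompute the slope each time.
    null = np.array([slope(rng.permutation(y)) for _ in range(n_perm)])
    return (np.sum(np.abs(null) >= abs(observed)) + 1.0) / (n_perm + 1.0)
```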
Correlation of DMVs With the Progression of White Matter Damage The linear correlations between DMVs numbers at baseline and longitudinal changes in DTI parameters are shown in Table 2. [Table 2 footnote: annual change rates for continuous variables were calculated as (follow-up value − baseline value)/follow-up years; WMH volumes were log-transformed because of their skewed distribution; *P<0.05.] After adjustment for age, sex, and vascular risk factors, those with lower DMVs numbers had greater increases in the mean diffusivity rate (β=0.0001, SE=0.0001, P=0.04) and radial diffusivity rate (β=0.0001, SE=0.0001, P=0.029) during follow-up, indicating diffuse white matter microstructural damage. No correlation was found between the DMVs number and the rate of WMH progression. As shown in Figure 2, tract-based spatial statistics analysis further revealed the spatial pattern of white matter microstructural integrity disruption related to the reduction in baseline DMVs. The progression of white matter microstructural damage was mainly located in the right internal and external capsule, corona radiata, as well as a portion of the bilateral inferior and superior longitudinal fasciculus (P<0.05 after correction for multiple comparisons, adjusted for age and sex). Correlation of DMVs With the Progression of Brain Atrophy In the linear correlation analysis, we observed no statistically significant association between DMVs numbers and the annual change of BPF across all models (Table 2). To explore potential nonlinear relationships, we stratified DMVs numbers by quartiles and identified a linear association between the DMVs quadratic terms and GMF progression exclusively within the first quartile population (Table S3). Baseline characteristics of the DMV quartile subgroups are shown in Table S4. Consequently, for our fitted model analysis, we incorporated quadratic terms of DMVs numbers and found a bidirectional trend of brain atrophy along with the number of DMVs (Figure 3). Specifically, there is a statistically significant relationship between the quadratic term of DMVs numbers and the annual rate of decline in GMF (β=0.006, SE=0.003, P=0.043), independent of age and sex (Table S5). The spatial distribution of brain atrophy in relation to DMVs is shown in Figure 4. In the first quartile subgroup (13<DMVs number≤18), we found that fewer DMVs were significantly associated with the progression of cortical volume shrinkage in the left frontal lobe and right cingulate gyrus (cluster-wise P<0.05, corrected for multiple comparisons, adjusted for age, sex, and total intracranial volume). No significant associations were found in the other subgroups, except in the third quartile, where a reduction in DMVs number was associated with slower progression of supramarginal gyrus atrophy. Details of the voxel clusters with significant correlations are shown in Figure 4C.
We performed mediation analyses of diffusion metrics for the association between DMVs number and brain atrophy (Figure S1); the results are shown in Tables S6 and S7. DISCUSSION In this longitudinal study, we found that fewer DMVs at baseline were associated with a more severe progression of white matter microstructural integrity loss, mainly located in the right internal and external capsule, corona radiata, as well as a portion of the bilateral inferior and superior longitudinal fasciculus. Of more interest, we found a bidirectional trend association between the number of DMVs and the progression of brain atrophy on follow-up MRI. Specifically, participants with mild DMVs disruption had greater cortical enlargement, whereas those with severe disruption exhibited more significant brain atrophy, primarily involving clusters in the frontal and parietal lobes, independent of age, sex, and total intracranial volume. Our results suggest that DMV impairment is associated with the progression of white matter microstructural damage, represented by DTI parameters, independent of age, sex, and vascular risk factors, but not with the progression of WMH volume. This is consistent with our previously published cross-sectional findings. 3,4 WMH on conventional imaging typically appears late in the course of white matter damage, 17 whereas DTI metrics recognize mild white matter damage at an early stage before WMH appears on imaging. 17,18 Because this study was conducted in a relatively healthy community-based population, the overall extent of white matter damage was less severe, which may explain the inconsistency with the results of the WMH correlation in diseased populations. 6,7,19 Further analysis of the spatial pattern revealed that the progression of white matter microstructural disruption was mainly located in the deep white matter, overlapping with the region of DMVs drainage. Therefore, we hypothesized that early cerebral white matter microstructural damage may be due to impaired reflux in the small cerebral veins. Cerebral venous lesions lead to increased intercellular fluid and edema in deep white matter areas. 20,21 The increase in intercellular fluid is likely accompanied by an accumulation of toxic metabolic waste, which may trigger an inflammatory response in the myelin. 22 Additionally, the stenosis or occlusion of venules might increase vascular resistance and exacerbate the hypoperfusion of white matter. Further pathologic and experimental studies are needed to explore the specific mechanisms and molecular pathways linking venule disruption to white matter damage.
More interestingly, we found a bidirectional association between DMVs injury and brain volume. To be specific, participants with mild DMVs disruption showed greater cortical enlargement along with fewer DMVs numbers, whereas those with severe disruption exhibited more significant brain atrophy. This bidirectional correlation was significant mainly in the GMF, and a similar trend was observed in the WMF and BPF but did not reach statistical significance, which needs to be further validated in a longer follow-up or a larger population. We assumed that impaired venous reflux and brain tissue swelling in the early stages are followed by a transition to neurodegeneration and atrophy in the later stages. There is also some evidence of this from previous studies. For example, Yao et al 23 found that the brain volume of patients with cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy showed edema in the temporal pole followed by regional atrophy, suggesting that patients with CSVD may have impaired intracerebral reflux in the early course of the disease. Another study 20 found that DMV disruption is correlated with increased interstitial fluid. 3,4 However, this may be due to participants at different stages of the disease being included in different studies. Through longitudinal settings and nonlinear analysis, our findings enhance the understanding of the intricate and multifaceted pathogenic mechanisms of cerebral venular disease. Multiple mechanisms might participate in the brain atrophy associated with DMV reduction. First, venule disruption contributes to microstructural integrity loss of subcortical white matter, which leads to secondary cortical atrophy. Also, DMV damage in the lateral paraventricular area may be a localized manifestation of whole-brain venous pathology, and cortical drainage dysfunction may also have contributed to local structural alterations. If feasible, this could be further validated in the future by direct observation of the cortical veins. In the population with the most disrupted DMVs, we found the progression of atrophy primarily located in the frontal and parietal lobes, which is similar to the spatial pattern identified in normal aging. 24 This indicates a role of venule disruption in the neurodegenerative process. We surmised that the potential mechanism was impeded perivenous glymphatic efflux, which led to inadequate drainage, toxic accumulation, and structural injury in the cortex. 25,26 Data from our autopsy study (unpublished) showed an association between venular collagenosis and increased deposition of amyloid beta. However, our assumption remains to be verified in in vivo human studies.
The strengths of our study include the large sample size, the longitudinal community cohort design, and high-quality clinical and MRI data, in contrast to most previous studies, which used cross-sectional settings and overlooked the spatial pattern of brain parenchymal alterations. However, several limitations should be considered. First, the longitudinal study design introduced attrition bias. The lost individuals were older, with more vascular risk factors and a more severe CSVD burden, so the progression of structural brain damage may be underestimated. Participants with previous stroke, severe intracranial stenosis, and poor image quality were excluded at baseline, which may also introduce selection bias. Second, the mild degree of WMH volume progression (median annual change=0.028 mL) and brain atrophy progression (average annual change=0.38%) in the study population may reduce the power of the regression analysis. Third, there may be differences between the 2 MRI scans at baseline and follow-up due to head position, head movement, and magnetic field interference, resulting in inconsistent anatomic segmentation between the 2 scans. We used Freesurfer to construct within-subject template corrections to minimize this error and identified subjects with automated segmentation errors by manual inspection. Fourth, we selected the major vascular risk factors for adjustment, but other relevant risk variables, such as education level, may also cause confounding. CONCLUSIONS Our findings revealed the dynamic pattern of brain parenchymal lesions related to DMVs injury and shed light on the interactions and chronological roles of various pathological mechanisms. In future studies related to neurodegenerative processes, the pathological changes and specific pathogenic pathways of cerebral venules deserve more attention and in-depth investigation.
Figure 2. Distribution of progression of white matter damage correlated to DMVs number. The blue map and red-yellow map show locations on the white matter skeleton where a faster FA decrease, or a faster increase in MD, AD, and RD, is correlated with lower DMVs number, respectively (threshold-free cluster enhancement corrected P<0.05, adjusted for age and sex). AD indicates axial diffusivity; DMVs, deep medullary veins; FA, fractional anisotropy; MD, mean diffusivity; and RD, radial diffusivity.
Figure 3. Fitted curves for the quadratic term of DMVs number and brain volume change. The β and SE indicate the estimated effect of the quadratic term of DMV number on the annual change rate of gray matter volumes. DMVs indicates deep medullary veins; and GMF, gray matter fraction.
Figure 4. Distribution of brain atrophy with significant correlation to DMVs number in the first quartile subgroup. A, Linear fit of DMVs number with the annual change in GMF in the first quartile subgroup (13<DMVs number≤18). B, Clusters of cortical volume changes significantly associated with DMVs number, corrected for CWP<0.05 and adjusted for age, sex, and total intracranial volume. The red color bar indicates the log-transformed P value. C, Listing of cluster information for regions of brain atrophy related to fewer DMVs number. DMVs indicates deep medullary veins; GMF, gray matter fraction; lh, left hemisphere; MNI, Montreal Neurological Institute; and rh, right hemisphere.
Table 1. Demographic Characteristics of the Analyzed Population (n=544) at Baseline and Follow-Up
Table 2.
Linear Correlations Between DMVs and Annual Change Rate of Brain Structural Parameters. Model 1 denotes unadjusted; model 2, adjusted for age and sex; and model 3, additionally adjusted for hypertension, diabetes, and hyperlipidemia.
The Mobility of the Cap Domain Is Essential for the Substrate Promiscuity of a Family IV Esterase from Sorghum Rhizosphere Microbiome

ABSTRACT Metagenomics offers the possibility to screen for versatile biocatalysts. In this study, the microbial community of the Sorghum bicolor rhizosphere was spiked with technical cashew nut shell liquid, and after incubation, the environmental DNA (eDNA) was extracted and subsequently used to build a metagenomic library. We report the biochemical features and crystal structure of a novel esterase from family IV, EH0, retrieved from an uncultured sphingomonad after a functional screen on tributyrin agar plates. EH0 (optimum temperature [Topt], 50°C; melting temperature [Tm], 55.7°C; optimum pH [pHopt], 9.5) was stable in the presence of 10 to 20% (vol/vol) organic solvents and exhibited hydrolytic activity against p-nitrophenyl esters from acetate to palmitate, preferably butyrate (496 U mg⁻¹), and a large battery of 69 structurally different esters (up to 30.2 U mg⁻¹), including bis(2-hydroxyethyl)-terephthalate (0.16 ± 0.06 U mg⁻¹). This broad substrate specificity contrasts with the fact that EH0 showed a long and narrow catalytic tunnel, whose access appears to be hindered by a tight folding of its cap domain. We propose that this cap domain is a highly flexible structure whose opening is mediated by unique structural elements, one of which is the presence of two contiguous proline residues likely acting as possible hinges, which together allow for the entrance of the substrates. Therefore, this work provides a new role for the cap domain, which until now was thought to be an immobile element that contained hydrophobic patches involved in substrate prerecognition and, in turn, substrate specificity within family IV esterases.

IMPORTANCE A better understanding of structure-function relationships of enzymes allows revelation of key structural motifs or elements. Here, we studied the structural basis of the substrate promiscuity of EH0, a family IV esterase, isolated from a sample of the Sorghum bicolor rhizosphere microbiome exposed to technical cashew nut shell liquid. The analysis of EH0 revealed the potential of the sorghum rhizosphere microbiome as a source of enzymes with interesting properties, such as pH and solvent tolerance and remarkably broad substrate promiscuity. Its structure resembled those of homologous proteins from mesophilic Parvibaculum and Erythrobacter spp. and hyperthermophilic Pyrobaculum and Sulfolobus spp. and had a very narrow, single-entry access tunnel to the active site, with access controlled by a capping domain that includes a number of nonconserved proline residues. These structural markers, distinct from those of other substrate-promiscuous esterases, can help in tuning substrate profiles beyond tunnel and active site engineering.

Screening was carried out by applying a metagenomic functional screening approach. The strategy involved the use of LB agar plates containing tributyrin (22). The pooling of 40 clones per well dispensed into 96-well microtiter plates (approximately 3,800 clones in one plate) facilitated colony screening at high throughput. After screening of more than 100,000 clones, 1 hit from the SorRhizCNSL3 W library showed lipase/esterase activity. The fosmid insert was extracted using the Qiagen Large-Construct kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol and digested with restriction enzymes for insert size estimation, and the insert was sequenced by Illumina technology.
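A quick back-of-the-envelope check of the screening throughput quoted above (the plate and pool sizes come from the text; everything else is simple arithmetic):

```python
# Back-of-the-envelope check of the pooled-screening throughput.
clones_per_well = 40          # pooled clones per well
wells_per_plate = 96          # 96-well microtiter plate
clones_per_plate = clones_per_well * wells_per_plate
print(clones_per_plate)       # 3840, i.e. "approximately 3,800 clones in one plate"

clones_screened = 100_000     # "more than 100,000 clones"
plates_needed = -(-clones_screened // clones_per_plate)  # ceiling division
print(plates_needed)          # ~27 plates for one lipase/esterase hit
```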
Upon the completion of sequencing, the reads were quality filtered and assembled to generate nonredundant metasequences, and genes were predicted and annotated as described previously (12). One gene encoding a predicted carboxylic ester hydrolase was identified. The gene was amplified with specific primers, cloned into the p15TV-L vector, and transformed into Escherichia coli BL21(DE3) cells for expression of the N-terminal His₆-tagged protein. The deduced amino acid sequence of the enzyme (324 amino acids long) was used for homology searches for taxonomic and functional assignment. A database search indicated that EH0 shows 99% identity with the α/β hydrolase from Sphingomonas pruni (protein identifier [ID] WP_066587239), both of which are classified as members of the α/β hydrolase-3 family (PF07859). The typical H-G-G-G motif and the G-X-S-X-G catalytic motif are conserved, and EH0 clustered together with family IV esterases.

Biochemical characterization. The recombinant protein was successfully expressed in soluble form and purified by nickel affinity chromatography. The purified protein was desalted by ultrafiltration, and its enzymatic activity was assessed. Four model p-nitrophenyl (p-NP) ester substrates with different chain lengths were used to determine the substrate specificity of the enzyme and therefore to establish whether the enzyme is in fact a true lipase or an esterase. Lipases hydrolyze ester bonds of long-chain triglycerides more efficiently than esterases, which instead exhibit the highest activity toward water-soluble esters with short fatty acid chains (23). The substrates used for the hydrolytic test were p-NP acetate (C2), p-NP butyrate (C4), p-NP dodecanoate (C12), and p-NP palmitate (C16). The hydrolytic activity was recorded under standard assay conditions (Fig. 1).

FIG 1 Specific activities (mean ± standard deviation from triplicates) are shown. Reaction mixtures contained 1 mM concentrations of the corresponding p-NP esters, and reactions were conducted in the presence of 1 to 5% DMSO-acetonitrile (see Materials and Methods) under the standard conditions described in Materials and Methods. At the solvent concentrations used, the enzyme showed 100% of its activity compared to a control without solvent (see Table S2 in the supplemental material).

EH0 showed a specific activity of 496.5 U mg⁻¹ for p-NP butyrate, which was the best substrate. Lower levels of activity were observed with longer-chain esters (C ≥ 12). The esterase followed Michaelis-Menten kinetics, and its kinetic parameters are given in Table 1.

Table 1, footnote a: Assays were performed in the presence of 5.0 to 7.5% DMSO-acetonitrile (see Materials and Methods), concentrations at which the enzyme showed 100% of its activity compared to a control without solvent (see Table S2 in the supplemental material).

A comparison of the catalytic efficiency values (kcat/Km) indicated a high reactivity toward p-NP butyrate, followed by p-NP acetate. Its voluminous (volume of the active site cavity, 5,133 Å³) but poorly exposed (solvent-accessible surface area [SASA], 5.07 on a 0-to-100 dimensionless percentage scale) active site allows hydrolysis of a broad range of 68 out of 96 structurally and chemically diverse esters (see Table S1 in the supplemental material), as determined by a pH indicator assay (pH 8.0, 30°C). Phenyl acetate (30.23 U mg⁻¹) and glyceryl tripropionate (29.43 U mg⁻¹) were the best substrates. We also found that EH0 efficiently hydrolyzed bis(2-hydroxyethyl)-terephthalate (BHET; 163.6 ± 6.2 U g⁻¹), an intermediate in the degradation of polyethylene terephthalate (PET) (24); high-performance liquid chromatography (HPLC) analysis (Fig. S1), performed as described previously (25), confirmed the hydrolysis of BHET to mono-(2-hydroxyethyl)-terephthalic acid (MHET) but not to terephthalic acid (TA). However, using previously described conditions (25), we found that the enzyme did not hydrolyze large plastic materials such as amorphous and crystalline PET film and PET nanoparticles from Goodfellow. According to the number of hydrolyzed esters, EH0 can thus be considered an esterase with a wide substrate specificity, similar to other enzymes of family IV (19, 20). EH0 showed maximal activity at 50°C, retaining more than 80% of the maximum activity at 40 to 55°C (Fig. 2A), suggesting that it is moderately thermostable. This was confirmed by circular dichroism (CD) analysis, which revealed a denaturing temperature of 55.7 ± 0.2°C (Fig. 2B). Its optimal pH for activity was 9.5 (Fig. 2C).

FIG 2 Optimal parameters for the activity and stability of purified EH0. (A) Temperature profile. (B) Thermal denaturation curve of EH0 at pH 7.0, measured by ellipticity changes at 220 nm obtained at different temperatures. (C) pH profile. The maximal activity was defined as 100%, and the relative activity is shown as the percentage of maximal activity (mean ± standard deviation from triplicates) determined under standard reaction conditions with p-NP butyrate as the substrate. Graphics were created with SigmaPlot version 14.0 (the data were not fitted to any model).

The effect of organic solvents at different concentrations on the enzymatic activity was evaluated (Table S2). An activation effect was observed for EH0 when 10% methanol (60% activity increase) and 10 to 20% dimethyl sulfoxide (DMSO) (22 to 40% increase) were added to the reaction mixture. The presence of bivalent and trivalent cations did not have a remarkable positive effect on the activity of the enzyme, which showed, in some cases, tolerance to high concentrations of cations (Table S3). A prominent inhibiting effect was shown for all cations except magnesium, which was well tolerated at 1 to 10 mM (<5% inhibition).

EH0 presents tight folding of its cap domain. The crystal structure of wild-type EH0 was obtained at 2.01-Å resolution, in the P2₁2₁2₁ space group with two crystallographically independent molecules in the asymmetric unit. Molecular replacement was performed using Est8 as a template (PDB code 4YPV) (26), and the final model was refined to a crystallographic R factor of 0.1717 and an Rfree of 0.2019 (Table S4). As with other reported family IV esterases, EH0 has an α/β hydrolase fold with two different domains, a cap domain (residues 1 to 43 and 208 to 229) and a catalytic domain (residues 44 to 207 and 230 to 324), constituted by a total of 9 α-helices and 8 β-strands (Fig. 3A). The catalytic domain was composed of a central β-sheet of eight β-strands, all parallel (β1 and β3 to β8) except β2, which was antiparallel, surrounded by five α-helices (α3, α4, α5, α8, and α9). The cap domain involved four α-helices (α1, α2, α6, and α7) (Fig. 3B). There were two cis peptides, Ala122-Pro123 and Trp127-Pro128, located at the β4-α4 turn within the catalytic domain.
Analysis of the EH0 fold using the DALI server (27) was employed to search for homologous proteins. The closest homologs are E53, isolated from Erythrobacter longus, with 46% identity and a root mean square deviation (RMSD) of 2.3 Å over 296 Cα atoms (PDB code 7W8N) (28); Est8, isolated from Parvibaculum, with 38% identity and an RMSD of 2.5 Å over 291 Cα atoms (PDB code 4YPV) (26); PestE, isolated from Pyrobaculum calidifontis, with 33% identity and an RMSD of 2.6 Å over 294 Cα atoms (PDB code 3ZWQ) (29); and EstA, isolated from Sulfolobus islandicus REY15A, with 31% identity and an RMSD of 2.6 Å over 281 Cα atoms (PDB code 5LK6). The structural superimposition of these proteins reveals a high conservation of the corresponding catalytic domains and also a common spatial arrangement of the helices of the cap domains in all the proteins except EH0, where α2 and the long α7 are visibly shifted very close to the EH0 active site and apparently impede the entrance of substrates (Fig. 3C). This was unexpected considering the broad substrate specificity of esterase EH0, which approaches that of the most promiscuous ones (19). Thus, withdrawal of the cap domain seems a necessary requirement for allowing access of bulky substrates to the EH0 catalytic site. In agreement with this assumption, a substantial rearrangement of the cap domain was previously described in the homologous esterase EST2 (31% sequence identity), whose M211S/R215L double variant was trapped in the crystal in a conformation resembling the open form of lipases (30). However, the authors did not assign any biological relevance to this observation, considering this state an artifact derived from crystal packing. In our study, some flexibility in this region is evident from the two conformations adopted by loop α1-α2 observed in subunits A and B of EH0 (Fig. 3D). In addition, inspection of the residues within the cap domain showed a high number of proline residues (Pro8, Pro21, Pro23, Pro31, Pro46, and Pro47) (Fig. 3B) that are mostly nonconserved and could confer flexibility on the N-terminal cap domain, allowing entrance of the substrate. This feature will be discussed below. EH0 is a dimeric enzyme. While E53 and Est8 are monomeric enzymes and EstA is a tetramer, EH0 presents as a biological homodimer with approximate dimensions of 6.3 by 4.9 by 2.7 nm, assembled in a twofold-axis symmetry arrangement (Fig. 4A) that buries 5.6% of its total surface area. Hydrogen bonds mainly involve β8 and α8, while salt bridges involve motifs β8 and α9 (Fig. 4B and Table S5). As in EH0's dimeric homolog PestE, oligomerization occurs through a tight interaction of the β8 strands from both subunits. However, while only the subsequent helix α12 is involved in the PestE interface (29) (Fig. 4C), both the preceding α8 and the subsequent α9 helices make up the EH0 interface (Fig. 4D). These different interactions observed in PestE and EH0 are reflected in a different orientation between the monomers, which nevertheless present a similar distance between the catalytic serines of 35 to 38 Å and an equivalent disposition of the tunnels giving access to the catalytic site at two edges of the dimer (Fig. 4E and F). Therefore, as seen in Fig. 4A, the two cap domains are far from the interface and project out from the dimer, revealing that dimerization does not affect cap function. The peculiar EH0 active site. EH0 has a long catalytic tunnel with a very narrow entrance (approximately 16.8-Å depth) (Fig. 5A).
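The DALI comparisons above are reported as RMSD over aligned Cα atoms. As a rough illustration of what that number means, here is a minimal NumPy sketch of optimal superposition (the Kabsch algorithm) and RMSD for two already-aligned Cα coordinate arrays; DALI itself additionally optimizes the residue alignment, which this sketch does not attempt:

```python
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between two (N, 3) Calpha coordinate sets after optimal superposition."""
    # Center both coordinate sets on their centroids.
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    # Kabsch: the optimal rotation comes from the SVD of the covariance matrix.
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))     # correct for an improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    P_rot = P @ R                          # rotate P onto Q
    return float(np.sqrt(((P_rot - Q) ** 2).sum() / len(P)))

# Toy usage: a rigidly rotated copy of a point cloud superposes to RMSD ~ 0.
rng = np.random.default_rng(0)
P = rng.normal(size=(296, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
print(kabsch_rmsd(P, P @ Rz.T))  # ~0.0
```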
The catalytic triad of EH0 is formed by Ser161 (in the conserved motif 159-G-D-S-A-G-163 at the nucleophilic elbow), Asp253 (in the conserved motif 253-D-P-I-R-D-258), and His283 (Fig. 5B). To analyze the active site, a series of soaking and cocrystallization experiments were performed with different suicide inhibitors, all of which were unsuccessful. A deep inspection of the active site showed that the nucleophilic Ser161 is hydrogen bonded to Glu226 from the nearby α7 and suggested a movement of loop β3-α3, which includes the oxyanion hole in the conserved motif. The variant EH0 E226A, generated by directed mutagenesis to remove this bond, hydrolyzed large and voluminous esters such as dodecanoyl acetate, pentadecyl acetate, vinyl laurate, methyl-2,5-dihydroxycinnamate, and ethyl-2-chlorobenzoate, which were not accepted by the wild-type enzyme (Table S1). This apparently supports the idea that removal of the Ser161-Glu226 hydrogen bond increases cap flexibility and enhances the enzymatic efficiency toward large esters at either the acyl or the alcohol site. Indeed, it appears clear that helix α7 must retract from the catalytic site to allow substrate entrance, but this helix seems fixed by many atomic interactions. Close to the Ser161-Glu226 hydrogen bond, Tyr223 makes additional hydrogen bonds to the Asp194 and Lys197 main chains, from loop β6-α6, and to the side chain of Gln258 from α8 (Fig. 5C). A second variant, EH0 Y223A, was generated by directed mutagenesis and found, unlike the EH0 E226A variant, to show little effect on conversion rates compared to the wild-type enzyme. However, this mutation allowed the hydrolysis of large and voluminous esters, such as dodecanoyl acetate, pentadecyl acetate, methyl-2,5-dihydroxycinnamate, and ethyl-2-chlorobenzoate, which were also hydrolyzed by the EH0 E226A variant. Unlike EH0 E226A, the EH0 Y223A variant was unable to hydrolyze vinyl laurate, but it was able to hydrolyze methyl-3-hydroxybenzoate, which was not accepted by the EH0 E226A variant. This suggests that Tyr223 may play a role in accepting large esters, particularly at the acyl side, but may also play an additional role in substrate specificity different from that of Glu226. Additionally, at the beginning of α7, Trp218 lies within a hydrophobic pocket surrounded by residues Phe13, Ile17, Leu28, and Phe219 from the cap domain and Phe91 from the catalytic domain, and it is also anchored by interaction with loop α1-α2 and Pro23 (Fig. 5D). Thus, to depict how this tight molecular packing may be disrupted by the proposed cap motion, molecular dynamics were applied to crystallographic refinement through the ensemble refinement strategy, which has been shown to model the intrinsic disorder of macromolecules, giving more accurate structures. The ensemble models obtained for molecules A and B within the asymmetric unit are shown in Fig. 5E and H, respectively. The analysis of the molecule A conformers (Fig. 5E) revealed that the region comprising α1 and α2 shows a wide spectrum of possible pathways from more "open" to more "closed" conformations. At one extreme, Pro23 is in an extended α1-α2 loop far from Trp218, thereby releasing α7, which consequently could retract from the catalytic pocket ("open-like conformation") (Fig. 5F). In fact, the ensemble refinement models a very flexible conformation, unstructured even in the regions corresponding to α1 and α2. At the other extreme, the second scenario is the entrapment of Trp218 by Pro23 in loop α1-α2, hindering substrate entrance ("closed-like conformation") (Fig. 5G). This last scenario is equivalent to the three-dimensional (3D) structure captured by crystallography (Fig. 5D). Furthermore, three prolines can be found within this α1-α2 loop, Pro21 (at the end of α1), Pro23 (in the middle), and Pro31 (at the beginning of α2), all of them unique to EH0, which are probably behind the two different conformations observed at this loop in the two subunits within the asymmetric unit (Fig. 3B and D) and explain the ensemble of conformers modeled for molecule B (Fig. 5H). Furthermore, as seen in Fig. 3B, Pro46 and Pro47 are potential hinges that would confer flexibility on a larger region of the EH0 cap domain, including the whole N-terminal peptide chain up to the end of α2. This is consistent with the more "open" conformations resulting from the ensemble refinement shown in Fig. 5F. In fact, sequence comparison of EH0 with its closest homologs reveals that only EH0 has two sequential prolines in this region and that Pro46 is unique to EH0 (Fig. 6), which could be a reason behind the high promiscuity of EH0. Interestingly, EST2 also presents two contiguous Pro residues (Pro38 and Pro39), which may likewise confer high mobility on its cap domain and facilitate the "open-like" conformation captured in the crystal mentioned above (30). Therefore, the variant EH0 P46A was generated by directed mutagenesis and submitted to crystallization experiments to investigate the putative role of Pro46. However, the crystals grown from this variant failed to diffract, suggesting that removal of Pro46 introduces some structural instability into the polypeptide chain, resulting in crystal disorder. Moreover, analysis of the activity profile showed that Pro46 is a critical residue for the entry and hydrolysis of bulky substrates, as its mutation to Ala extends the substrate range from 68 to 84 esters. Additionally, the hydrolytic rate increased from 1.2- to 18,000-fold (average, 335-fold) for most esters. This variant was also able to hydrolyze the large glyceryl trioctanoate and 2,4-dichlorophenyl 2,4-dichlorobenzoate, which were not hydrolyzed by the wild-type enzyme or the EH0 E226A and EH0 Y223A variants. Consequently, although the proposed role of Pro46 and Pro47 as putative hinges enabling the opening of the cap domain seems appealing, other mechanisms promoting the plasticity of EH0 toward bulky substrates may also operate. Furthermore, as seen below, it should be noted that the two proline residues are located at the entrance of the narrow tunnel giving access to the active site, which ascribes a prominent role to both residues in binding activity and specificity. Structural details of the EH0 active site and the assignment of the acyl and alcohol moieties were explored by comparison with the homologs E53 complexed with 4-nitrophenyl hexanoate (PDB code 6KEU) (28) and EH1AB1 in complex with a derivative of methyl 4-nitrophenylhexylphosphonate (PDB code 6RB0) (31). However, the acyl and alcohol moieties of these complexes are located at opposite sites (Fig. 5I and J). Therefore, as we could not obtain complexes of EH0, the activity experiments were crucial to correctly assign the acyl/alcohol moieties. As mentioned above, experimental evidence demonstrated that Tyr223 produces a steric hindrance at the acyl moiety, and consequently the acyl and alcohol sites correspond to those observed in EH1AB1 (Fig. 5J).
On the basis of this assumption, the acyl binding site seems to be a small cavity bordered by the Tyr199 and Ile255 side chains, which produce steric hindrance for substrates with large acyl moieties. The long and narrow alcohol binding site is surrounded by both hydrophobic (Phe13, Phe91, Val92, and Leu288) and hydrophilic (Asp45, His99, Asp160, Tyr190, and Thr287) residues, with Pro46 and Pro47 at the entrance of the tunnel (Fig. 5K). Most residues at the acyl and alcohol moieties are conserved among the EH0 homologs, with the exception of Met40, Pro46, Tyr199, Glu226, and Thr287. Remarkably, the bulky Met40 and Tyr199 residues are replaced by smaller residues in the EH0 homologs. As previously mentioned, Pro46 is unique to EH0, while Glu226 is replaced by a conserved Asp, and, finally, most homologs show an Asn residue instead of Thr287 (Fig. 6). Therefore, as retraction of the cap domain must occur to allow entrance of the substrate, residues Phe91, Tyr190, and Tyr199 from the catalytic domain, which are located close to the catalytic triad, seem essential for substrate specificity (Fig. 5K).

DISCUSSION

In this study, the microbial community of the Sorghum bicolor rhizosphere was exposed to a chemical treatment prior to environmental DNA (eDNA) extraction to construct a metagenomic library. Plant roots can secrete exudates composed of a large variety of compounds into the soil, some of which may play important roles in the rhizosphere (32, 33) and have effects that involve multiple targets, including soil microorganisms. This is why an amendment of the soil with technical cashew nut shell liquid (tCNSL), containing a mixture of phenolic compounds with long aliphatic side chains (up to C22:0), was carried out directly in the rhizosphere and, later, for 3 weeks under controlled laboratory conditions, as we were interested in screening for lipolysis-like activity. By applying metagenomics techniques, we retrieved an esterase, EH0, highly similar (99% identity) to the predicted α/β hydrolase from the genome of S. pruni (accession no. WP_066587239). The most homologous functionally characterized protein is actually P95125.1, a carboxylic ester hydrolase, LipN, from Mycobacterium tuberculosis H37Rv, which shows only 41% amino acid sequence identity with EH0. That said, given the high identity of WP_066587239 and EH0, we expect both hydrolases to have similar properties, although this is yet to be experimentally confirmed. Indeed, we observed that there are only three changes between their sequences, which are located on the outside and in loops away from the key residues and the dimerization interface (see Fig. S2 in the supplemental material). EH0 was classified within the previously described hormone-sensitive lipase (HSL) family IV, which is one of at least 35 families and 11 true lipase subfamilies known to date (10, 23, 34). This family is reported to contain ester hydrolases with relative SASA values ranging from 0% to 10% and high levels of substrate specificity (19). Note that SASA, computed as a (dimensionless) percentage (0 to 1 or 0 to 100) of the ligand SASA in solution (19), is a parameter that describes the solvent exposure of the cavity containing the catalytic triad and the capacity of a cavity to retain/stabilize a substrate (19).
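To make the relative SASA notion concrete, here is a tiny illustrative calculation. EH0's 5.07% comes from the text; the free-ligand and in-cavity areas below are invented numbers chosen only so the ratio lands near that value:

```python
# Illustrative only: relative SASA expressed as a percentage of the ligand's
# SASA when free in solution. The 5.07% figure for EH0 is from the text; the
# two surface areas below are made-up example values.
def relative_sasa(ligand_sasa_in_cavity: float, ligand_sasa_free: float) -> float:
    """Return relative SASA as a 0-100 percentage."""
    return 100.0 * ligand_sasa_in_cavity / ligand_sasa_free

# A ligand exposing ~25 A^2 of a ~495 A^2 free-solution surface inside the
# cavity has a relative SASA of ~5%, i.e. an almost fully occluded active
# site of the kind EH0 presents (5.07%).
print(round(relative_sasa(25.0, 495.0), 2))  # 5.05
```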
For example, a SASA of 40% (of 100%) implies that 40% of the ligand surface is accessible to the solvent, which facilitates the escape of the substrate to the bulk solvent; this is the case for enzymes with an active site on the surface, where the catalytic triad is highly exposed. In contrast, enzymes that have a larger but almost fully occluded site, which can better maintain and stabilize the substrate inside the cavity, are characterized by relative SASA values of approximately 0 to 10%. This is the case for EH0, which has a SASA of 5.07% because of a large but almost fully occluded active site, an architecture known to better maintain and stabilize a higher number of substrates inside the cavity (19). Indeed, this enzyme houses a very long and narrow catalytic pocket, where helix α7 lies very close to the catalytic triad, with residue Glu226 making a direct hydrogen bond to the catalytic nucleophile Ser161. Therefore, it appeared clear that the cap domain must retract to allow entrance of the substrate into the active site. The movement of this domain was modeled by combining X-ray diffraction data with molecular dynamics simulation through the ensemble refinement procedure. This strategy showed a broad range of putative conformations of the cap domain, with Pro46 and Pro47 likely acting as hinges conferring high plasticity on the N-terminal region of the cap. Remarkably, the presence of a number of prolines in this region, particularly the two sequential prolines, is a unique feature of EH0 compared to its homologs and other substrate-promiscuous members of family IV (19). Mutational analysis confirmed the role of one of these prolines in the access and hydrolysis of large and voluminous substrates and thus in the increase in substrate promiscuity. The above structural features differ from those of other substrate-promiscuous family IV esterases tested over the same set of ester substrates, namely, EH1AB1 (31) and EH3 (35, 36), which are capable of hydrolyzing a similar number of esters. The comparison of EH1AB1 and EH3 with EH0 shows major differences related to the lid (Fig. 7A). Whereas EH1AB1 and EH3 show large and wide catalytic pockets with two possible points of access to the binding site (Fig. 7B and C), EH0 has a unique, narrow, and long entrance to the catalytic site, as noted above (Fig. 7D). Therefore, in the case of EH0, only a structural rearrangement of the cap domain would allow its adaptation to all the different substrates, which likely implies that the cap domain of EH0 exhibits more flexibility than those of EH1AB1 and EH3. This is consistent with the fact that of the 80 esters that the three enzymes together are able to hydrolyze, 61 (or 76%) were common to all three, and EH0 was the only one able to hydrolyze such bulky substrates as 2,4-dichlorobenzyl 2,4-dichlorobenzoate or diethyl 2,6-dimethyl-4-phenyl-1,4-dihydropyridine-3,5-dicarboxylate. Biochemical characterization of the novel esterase also revealed that the activity of EH0 was in most cases stimulated in the presence of 10% organic solvents, particularly 10% methanol and DMSO. Such activation is also characteristic of some lipases. For example, analysis of the lipase from Thermus thermophilus revealed that although the overall structure was kept stable with or without a polar organic solvent, the lid region was more flexible in the presence of the latter.
The flexible lid facilitates the substrate's access to the catalytic site inside the lipase, and the lipase displays enhanced activity in the presence of a polar organic solvent (37). The use of organic solvents offers advantages over canonical aqueous biocatalysis for various reasons: higher solubility of hydrophobic substrates, lower risk of contamination, and higher thermal stability (38-40). EH0 has a potential advantage in applications that require alkaline conditions due to its ability to act at an optimal pH of 9.5. Temperature-controlled tests indicated a mesophilic/slightly thermophilic profile of the esterase, as expected from its original habitat at moderate temperatures. In addition, the structure of EH0 is similar (31 to 33% identity) to those of extremophiles, namely, Pyrobaculum (3ZWQ) and Sulfolobus (5LK6) species. In summary, the present study evaluated the conformational plasticity of the cap domain in members of family IV and the role of several nonconserved prolines as putative structural factors underlying their broader substrate specificity compared with other members of the 35 families and 11 true lipase subfamilies reported so far (10). This high molecular flexibility is markedly different from that found in other family IV esterases and in a family VIII β-lactamase fold hydrolase (EH7), which has recently been shown to be highly substrate promiscuous. In that case, the broad substrate specificity is conferred by a more open and exposed S1 site with no steric hindrance to the entrance of substrates into the active site and by more flexible R1, R2, and R3 regions allowing the binding of a wide spectrum of substrates.

Conclusions. An activity-based metagenomics approach was used to study the microbial enzyme diversity in the rhizosphere soil of Sorghum plants amended with tCNSL. A novel esterase was found that possessed broad substrate promiscuity in combination with significant pH and solvent tolerance. This work is crucial for deciphering the structural markers responsible for the outstandingly broad substrate specificity of EH0. Indeed, it further provides important insights into the role of cap domains and their contribution to the diverse selectivity profiles, and thus the versatility, of family IV esterases/lipases toward the conversion of multiple substrates.

MATERIALS AND METHODS

Plant material and outdoor seed germination. Seeds of Sorghum bicolor genotype BTx623 were obtained as a gift from the Agricultural Research Service of the United States Department of Agriculture (USDA) (Georgia, USA). Field soil was sampled from the Henfaes Research Centre (53°14′21.0″N, 4°01′06.5″W, Gwynedd, Wales) in September 2014. The soil sample was composed of a mixture of five topsoil samples collected from randomly selected positions in the field. The soil was air dried, mixed thoroughly, and stored at room temperature for use in subsequent experiments. Two-liter pots were filled with soil, and two seeds of S. bicolor BTx623 were planted per pot. Plants were cultivated in a greenhouse at 20°C, and the soil moisture content was maintained with tap water.

Enrichment with tCNSL. Three grams of technical cashew nut shell liquid (tCNSL) dissolved in 70% ethanol was added to a pot of 20-day-old plants and thoroughly mixed with the soil. After 60 days, plants were pulled out of the pot, and the soil was shaken off; samples of rhizosphere soil attached to the plant roots were then brushed off and collected.
tCNSL was provided by the BioComposites Centre at Bangor University (Wales, UK). Three biological replicates of laboratory microcosm enrichment were set up in conical 1-L Erlenmeyer flasks by mixing 10 g of the collected rhizosphere soil with 300 mL of sterile Murashige-Skoog basal medium (Sigma) and 10 mg/L cycloheximide. tCNSL was dissolved in 70% ethanol and added to the medium to a final concentration of 0.1 g/L; flasks were incubated at 20°C in an orbital shaker. Fifty grams of soil slurry was sampled every 7 days, and fresh tCNSL-containing (0.1 g/kg) medium was added to replace the sampled volume. Extraction of DNA and generation of the metagenomic library. Samples collected after 3 weeks of flask microcosm enrichment were used for the construction of fosmid metagenomic libraries. Environmental DNA was extracted using the Meta-G-Nome DNA isolation kit (Epicentre Biotechnologies, WI, USA) according to the manufacturer's instructions. Briefly, 50 mL of the soil suspension from the flask enrichment was centrifuged at 400 × g for 5 min. The supernatant was filtered through 0.45-μm and 0.22-μm membrane filters. This procedure was repeated with the initial soil sample four times: the remaining soil was resuspended in phosphate-buffered saline (PBS) and centrifuged, and the supernatant was filtered as before. Filters were combined, and the sediment on the filters was resuspended in extraction buffer and collected. DNA extraction was carried out according to the protocol described by the manufacturer. The quality of the extracted DNA was evaluated on an agarose gel, and the DNA was quantified with the Quant-iT double-stranded DNA (dsDNA) assay kit (Invitrogen) on a Cary Eclipse fluorimeter (Varian/Agilent) according to the manufacturer's instructions. The extracted metagenomic DNA was used to prepare two different metagenomic fosmid libraries using the CopyControl fosmid library production kit (Epicentre). DNA was end repaired to generate blunt-ended, 5′-phosphorylated double-stranded DNA using reagents included in the kit according to the manufacturer's instructions. Subsequently, fragments of 30 to 40 kb were selected by electrophoresis and recovered from a low-melting-point agarose gel using GELase 50× buffer and the GELase enzyme preparation, also included in the kit. Nucleic acid fragments were then ligated into the linearized CopyControl pCC2FOS vector in a ligation reaction performed at room temperature for 4 h, according to the manufacturer's instructions. After in vitro packaging into phage lambda (MaxPlax lambda packaging extract; Epicentre), the transfected phage T1-resistant EPI300-T1R E. coli cells were spread on Luria-Bertani (LB) agar medium (hereinafter, unless mentioned otherwise, the agar content was 1.5% [wt/vol]) containing 12.5 μg/mL chloramphenicol and incubated at 37°C overnight to determine the titer of the phage particles. The resulting library, SorRhizCNSL3 W, has an estimated titer of 1.5 × 10⁶ clones. For long-term storage, the library was plated onto solid LB medium with 12.5 μg/mL chloramphenicol; after overnight growth, colonies were washed off the agar surface using LB broth with 20% (vol/vol) sterile glycerol, and aliquots were stored at −80°C. Screening metagenomic libraries: agar-based methods. Fosmid clones obtained by plating the constructed libraries on LB agar plates were arrayed in 384-well microtiter plates (1 clone/well) or, alternatively, in 96-well microtiter plates (pools of approximately 40 clones/well) containing LB medium and chloramphenicol (12.5 μg/mL).
The plates were incubated at 37°C overnight; the following day, replica plates were produced and used in the screening assay. Glycerol (20% [vol/vol], final concentration) was added to the original plates, which were stored at −80°C. Gel diffusion and colorimetric assays were adapted for the screening of the desired activities. The detection of lipase/esterase activity was carried out on LB agar supplemented with chloramphenicol (12.5 μg/mL), fosmid autoinduction solution (2 mL/L) (Epicentre), and 0.3% (vol/vol) tributyrin emulsified with gum arabic (2:1, vol/vol) by sonication. The previously prepared microtiter plates were printed onto the surface of large (22.5 cm by 22.5 cm) LB agar plates using 384-pin polypropylene replicators and incubated for 18 to 48 h at 37°C. Lipolytic activity was identified as a clear zone around colonies where tributyrin was hydrolyzed (12). Extraction of fosmids, DNA sequencing, and annotation. The fosmid DNA of the positive clone was extracted using the Qiagen plasmid purification kit (Qiagen). To reduce contamination with host chromosomal E. coli DNA, the sample was treated with ATP-dependent exonuclease (Epicentre). The purity and approximate size of the cloned fragment were assessed by agarose gel electrophoresis after simultaneous endonuclease digestion with BamHI and XbaI (New England Biolabs; in NEBuffer 3.1 at 37°C for 1 h using 1 U of enzyme per 1 μg DNA). The DNA concentration was quantified using the Quant-iT dsDNA assay kit (Invitrogen), and DNA sequencing was then outsourced to Fidelity Systems (NJ, USA) for shotgun sequencing using the Illumina MiSeq platform. GeneMark software (41) was employed to predict protein coding regions from the sequences of each assembled contig, and deduced amino acid sequences were annotated via BLASTP and the PSI-BLAST tool (42). Cloning, expression, and purification of proteins. The selected nucleotide sequence was amplified by PCR using Herculase II fusion enzyme (Agilent, USA) with specific oligonucleotide primer pairs incorporating p15TV-L adapters. The corresponding fosmid was used as a template to amplify the target gene. The primers used to amplify the esterase gene characterized in this study were as follows: EH0 F, TTGTATTTCCAGGGCATGACCGAGCTCTTCGTCCGC; EH0 R, CAAGCTTCGTCATCATGCCGCCGCCTGTGCCATC. PCR products were visualized on a 1% Tris-acetate-EDTA (TAE) agarose gel and purified using the NucleoSpin PCR cleanup kit (Macherey-Nagel) following the manufacturer's instructions. Purified PCR products were cloned into the p15TV-L vector, transformed into E. coli NovaBlue GigaSingles competent cells (Novagen, Germany), and plated on LB agar with 100 μg/mL ampicillin. The correctness of the DNA sequence was verified by Sanger sequencing at Macrogen Ltd. (Amsterdam, The Netherlands). 3D models of the proteins were generated by Phyre2, whose intensive mode attempts to create a complete full-length model of a sequence through a combination of multiple-template modeling and a simplified ab initio folding simulation (43). The nucleotide and amino acid sequences of the selected genes are available in GenBank under accession no. MK791218. For recombinant protein expression, the plasmids were transformed into E. coli BL21(DE3) cells and subsequently plated on LB agar with 100 μg/mL ampicillin. To confirm the esterase activity of the recombinant proteins, E. coli clones harboring the recombinant plasmid were streaked onto LB agar plates containing 0.5% (vol/vol) tributyrin and 0.5 mM isopropyl-β-D-thiogalactopyranoside (IPTG), or purified enzymes were spotted directly on the agar. The plates were then incubated at 37°C overnight and visually inspected for signs of substrate degradation. E. coli clones were grown at 37°C to an absorbance of 0.8 at 600 nm, induced with 0.5 mM IPTG, and allowed to grow overnight at 20°C with shaking. Cells were harvested by centrifugation at 5,000 × g for 30 min at 4°C. For purification of the recombinant protein, the following protocol was applied. Cell pellets were resuspended in cold binding buffer (50 mM HEPES, pH 7.5, 400 mM NaCl, 5% glycerol, 0.5% Triton X-100, 6 mM imidazole, pH 7.5, 1 mM β-mercaptoethanol, 0.5 mM phenylmethylsulfonyl fluoride [PMSF]) and extracted by sonication. The lysates were then centrifuged at 22,000 × g for 30 min at 4°C, and the supernatant was purified by affinity chromatography using nickel-nitrilotriacetic acid (Ni-NTA) His-bind resin (Novagen). The column packed with the resin was equilibrated with binding buffer, and after addition of the supernatant, it was washed with 6 volumes of wash buffer (50 mM HEPES, pH 7.5, 400 mM NaCl, 5% glycerol, 0.5% Triton X-100, 26 mM imidazole, pH 7.5, 1 mM β-mercaptoethanol, 0.5 mM PMSF) to remove nonspecifically bound proteins. His-tagged proteins were then eluted with elution buffer (50 mM HEPES, pH 7.5, 400 mM NaCl, 5% glycerol, 0.5% Triton X-100, 266 mM imidazole, pH 7.5, 1 mM β-mercaptoethanol, 0.5 mM PMSF). The size and purity of the proteins were estimated by SDS-PAGE. Protein solutions were desalted with an Amicon Ultra-15 10K centrifugal filter device. Protein concentrations were determined using the Bradford reagent (Sigma) and a BioMate 3S spectrophotometer (Thermo Scientific, USA). Biochemical assays. Hydrolytic activity was determined by measuring the amount of p-nitrophenol released by catalytic hydrolysis of p-nitrophenyl (p-NP) esters through a modified method of Gupta et al. (44). Stock solutions of p-NP esters (100 mM p-NP acetate, 100 mM p-NP butyrate, 20 mM p-NP dodecanoate, and 20 mM p-NP palmitate) were prepared in DMSO-acetonitrile (1:1, vol/vol). Unless stated otherwise, the enzymatic assay was performed under standard conditions in a 1-mL reaction mixture (50 mM potassium phosphate buffer, pH 7.0, 0.3% [vol/vol] Triton X-100, 1 mM substrate) under agitation in a water bath at 50°C until complete substrate solubilization (note that solvents were present in the assays at a concentration of 1% for the 100 mM stocks of p-NP butyrate and acetate or 5% for the 20 mM stocks of p-NP dodecanoate and palmitate). After that, the multititer plate was preincubated at 30°C for 5 min in a BioMate 3S spectrophotometer (Thermo Scientific, USA) set to this temperature, and then an appropriate volume of purified enzyme containing 0.25 μg was added to start the reaction. The reaction mixture was incubated at 30°C for 5 min, and the absorbance was then measured at 410 nm in the BioMate 3S spectrophotometer. The incubation time for p-NP dodecanoate and p-NP palmitate was extended to 15 min. All experiments were performed in triplicate, and a blank with denatured enzyme was included. The concentration of product was calculated by the linear regression equation obtained from a standard curve of the reference compound p-nitrophenol (Sigma).
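As a rough illustration of how specific activity (U mg⁻¹) falls out of such an endpoint assay, here is a minimal sketch. The standard-curve slope and example absorbance are illustrative assumptions, not values from the paper, which used its own p-nitrophenol standard curve:

```python
# Minimal sketch: convert a blank-corrected A410 endpoint into specific
# activity. Assumptions (not from the paper): a standard-curve slope of
# 1.0 A410 per (umol/mL) p-nitrophenol, a 1-mL reaction, a 5-min
# incubation, and 0.25 ug of enzyme.
def specific_activity(delta_a410: float,
                      std_curve_slope: float,   # A410 per (umol/mL) of p-NP
                      volume_ml: float = 1.0,
                      minutes: float = 5.0,
                      enzyme_mg: float = 0.25e-3) -> float:
    """Return U/mg, with 1 U = 1 umol p-nitrophenol released per minute."""
    umol_pnp = (delta_a410 / std_curve_slope) * volume_ml
    return umol_pnp / minutes / enzyme_mg

# Example: delta A410 of 0.62 under these assumptions gives ~496 U/mg,
# the order of EH0's reported p-NP butyrate activity.
print(round(specific_activity(0.62, 1.0), 1))
```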
One unit of enzyme activity was defined as 1 μmol of p-nitrophenol produced per minute under the assay conditions. Kinetic parameters were determined under standard conditions and calculated by nonlinear regression analysis of the raw data fit to the Michaelis-Menten function using GraphPad Prism software (version 6.0). For the kinetics with p-NP butyrate and acetate, the concentrations were set at 0.1, 0.5, 1, 2, and 10 mM, using stock concentrations of 100 mM in DMSO-acetonitrile (a maximum concentration of 5% DMSO-acetonitrile was used in the assay). For p-NP dodecanoate, concentrations of 0.05, 0.1, 0.5, 1, 2, and 3 mM were used, with a stock concentration of 20 mM substrate (a maximum concentration of 7.5% DMSO-acetonitrile was used in the assay). Raw data and information about precision and calculations are provided in Table S6 in the supplemental material. The optimal pH for enzyme activity was evaluated with p-NP butyrate by performing the assay in different buffers, specifically 20 mM sodium acetate buffer (pH 4.0), sodium citrate buffer (pH 5.5), potassium phosphate buffer (pH 7.0), and Tris-HCl (pH 8.0 to 9.0). The enzyme reactions were stopped by adding 1 mL of cold stop solution (100 mM sodium phosphate buffer, pH 7.0, 10 mM EDTA) to neutralize the pH and avoid changes in the equilibrium between p-nitrophenol and its deprotonated form, p-nitrophenoxide, which would result in a decrease in absorption at the applied wavelength of 410 nm (45). The optimal temperature was investigated with p-NP butyrate by performing the hydrolytic assay at different temperatures under standard conditions (see above). To determine the denaturation temperature, circular dichroism (CD) spectra were acquired between 190 and 270 nm with a Jasco J-720 spectropolarimeter equipped with a Peltier temperature controller in a 0.1-mm cell at 25°C. The spectra were analyzed, and melting temperature (Tm) values were determined at 220 nm between 10 and 85°C at a rate of 30°C per hour in 40 mM HEPES buffer at pH 7.0. CD measurements were performed at pH 7.0 and not at the optimal pH (8.5 to 9.0) to ensure protein stability. A protein concentration of 0.5 mg·mL⁻¹ was used. Tm (and the standard deviation of the fit) was calculated by fitting the ellipticity (millidegrees [mdeg]) at 220 nm at each of the different temperatures using a 5-parameter sigmoid fit with SigmaPlot 13.0. Stability in organic solvents was assayed with p-NP butyrate under standard conditions in the presence of 10, 20, 40, and 60% (vol/vol) of the water-miscible organic solvents ethanol, methanol, isopropanol, acetonitrile, and DMSO and a mixture of acetonitrile-DMSO (50% each). The effect of cations was investigated with p-NP butyrate under standard conditions by the addition of MgCl₂, CuCl₂, FeCl₃, CoCl₂, CaCl₂, MnCl₂, and ZnSO₄ at concentrations in the range of 1 to 10 mM. In all cases, the measured values were expressed as the relative activity in comparison to a control reaction performed under standard conditions. The hydrolysis of esters other than p-NP esters, including bis(2-hydroxyethyl)-terephthalate (BHET), was assayed using a pH indicator assay in 384-well plates at 30°C and pH 8.0 in a Synergy HT multimode microplate reader in continuous mode at 550 nm over 24 h (extinction coefficient of phenol red, 8,450 M⁻¹ cm⁻¹). The acid produced after ester bond cleavage by the hydrolytic enzyme induced a color change in the pH indicator that was measured at 550 nm.
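The paper fits the raw rate data to the Michaelis-Menten function in GraphPad Prism; an equivalent nonlinear fit in Python looks like the sketch below. The rates are made-up example data, not values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

# Substrate concentrations (mM) as used for the p-NP butyrate/acetate
# kinetics; the rates are invented example data (e.g., U/mg).
s = np.array([0.1, 0.5, 1.0, 2.0, 10.0])
v = np.array([120.0, 310.0, 390.0, 440.0, 490.0])

(vmax, km), pcov = curve_fit(michaelis_menten, s, v, p0=[v.max(), 1.0])
vmax_se, km_se = np.sqrt(np.diag(pcov))  # standard errors from the covariance
print(f"Vmax = {vmax:.1f} +/- {vmax_se:.1f}, Km = {km:.2f} +/- {km_se:.2f} mM")
# kcat/Km then follows from Vmax and the molar enzyme concentration.
```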
The experimental conditions were as detailed previously (35), with activity scored only when the signal was at least 2-fold above background. Briefly, the reaction conditions for the 384-well plates (catalog no. 781162; Greiner Bio-One GmbH, Kremsmünster, Austria) were as follows: protein, 0.2 to 2.0 μg per well; ester, 20 mM; temperature, 30°C; pH 8.0 (5 mM 4-(2-hydroxyethyl)-1-piperazinepropanesulfonic acid [EPPS] buffer plus 0.45 mM phenol red); reaction volume, 40 μL. The reactions were performed in triplicate, and data sets were collected on a Synergy HT multimode microplate reader with Gen5 2.00 software (BioTek). One unit of enzyme activity was defined as the amount of enzyme required to transform 1 μmol of substrate in 1 min under the assay conditions. Raw data and information about precision and calculations are provided in Table S7. Crystallization and X-ray structure determination of EH0. Initial crystallization conditions were explored by high-throughput techniques with a NanoDrop robot (Innovadyne Technologies) using a 24 mg·mL⁻¹ protein concentration in HEPES (40 mM, pH 7.0, 50 mM NaCl), protein:reservoir ratios of 1:1, 1.5:1, and 2:1, and the commercial screens Crystal Screen I and II, SaltRx, and Index (Hampton Research) and JBScreen Classic, JBScreen JCSG, and JBScreen PACT (Jena Bioscience). Further optimizations were carried out, and bar-shaped crystals of EH0 were grown after 1 day by mixing 1.2 μL of a mixture of protein (1 μL, 24 mg·mL⁻¹) and seeds (0.2 μL, 1:100) with guanidine hydrochloride (0.2 μL, 0.1 M) and reservoir solution (0.5 μL; 11% polyethylene glycol 8000, 100 mM Bis-Tris, pH 5.5, 100 mM ammonium acetate). For data collection, crystals were transferred to a cryoprotectant solution consisting of mother liquor and glycerol (20% [vol/vol]) before being cooled in liquid nitrogen. Diffraction data were collected using synchrotron radiation on the XALOC beamline at ALBA (Cerdanyola del Vallès, Spain). Diffraction images were processed with XDS (46) and merged using AIMLESS from the CCP4 package (47). The crystal was indexed in the P2₁2₁2₁ space group, with two molecules in the asymmetric unit and 62% solvent content within the unit cell. The data collection statistics are given in Table S4. The structure of EH0 was solved by molecular replacement with MOLREP (48) using the coordinates from Est8 as a template (PDB code 4YPV). Crystallographic refinement was performed using the program REFMAC (49) within the CCP4 suite, with medium NCS (noncrystallographic symmetry) restraints, excluding residues 17 to 36. The free R factor was calculated using a subset of 5% randomly selected structure-factor amplitudes that were excluded from automated refinement. Subsequently, heteroatoms were manually built into the electron density maps with Coot (50), and water molecules were included in the model, which, combined with further rounds of restrained refinement, reached the R factors listed in Table S4. The figures were generated with PyMOL. The crystallographic statistics of EH0 are listed in Table S4. To extract dynamical details from the X-ray data, the coordinates of EH0 were first refined using PHENIX (51) and then used as input models for a time-averaged molecular dynamics refinement as implemented in the phenix.ensemble_refinement routine, which was performed as described previously (52). Data availability. The sequence encoding EH0 was deposited in GenBank with the accession number MK791218.
The atomic coordinates and structure factors for the EH0 structure have been deposited in the RCSB Protein Data Bank with accession code 7ZR3.
Vacuum polarization energy losses of high energy cosmic rays

The process of vacuum polarization energy losses of high energy cosmic rays propagating in extragalactic space is considered. The process is due to the polarization of the Cosmic Background Radiation by a moving charged particle. To describe the process, the photon mass, refractive indices, and permittivity function for low and high energy photons are found. Calculations show a rather noticeable level of energy losses for propagating protons with energies above 10⁶ to 10⁷ GeV. The influence of the polarization energy losses on the propagation of cosmic rays is discussed.

1 Introduction

The problems of the origin and propagation of high energy cosmic rays have been widely discussed in recent years (see, for example, [1,2,3] and the literature therein). The photoproduction of mesons and e±-pairs on the Cosmic Background Radiation was considered the main source of the energy losses of charged particles propagating in extragalactic space [2,4,5]. However, simulations of the spectral distributions of cosmic rays on this basis do not provide close agreement with the observed data [2], and some peculiarities of the spectrum, such as the "knee" [6] and the GZK cutoff [7,8], have no generally recognized explanation. In this connection, the identification of other sources of energy loss is important and interesting. In this paper we consider a possible mechanism by which cosmic rays lose energy in extragalactic space: the polarization energy losses of a charged particle moving in the electromagnetic vacuum. In the presence of an external electromagnetic field, the polarization of the vacuum was first considered in the pioneering papers [9]. Vacuum polarization leads to various effects [10], such as the nonlinearity of the Maxwell equations, the appearance of a nonzero photon mass, the birefringence of light, etc. Descriptions of the various methods in the QED of the vacuum can be found in the literature (see [11] and references therein). In parallel with other methods, the traditional method based on the introduction of a permittivity tensor may be used for the effective description of different phenomena in the electromagnetic vacuum. In this case the vacuum is described by Maxwell equations similar to the equations of classical electrodynamics for continuous media [12,13]. As an example, one can point to the well known low energy permittivity and permeability tensors for constant electric and magnetic fields [10]. In paper [14] the connection between the polarization and permittivity tensors is established. The use of the permittivity tensor is convenient because this approach allows one to treat the QED vacuum and matter from a unified point of view.

2 The mass of a photon propagating in an isotropic photon gas

The photon mass m_γ in an isotropic medium is defined by the refractive index ñ of monochromatic photons propagating in this medium, where E_γ is the photon energy. From this equation one can see the equivalence of finding the photon mass and finding the refractive index. Note that in the general case the refractive index is a complex quantity; however, in a transparent medium its imaginary part is negligibly small. Our calculations of the photon mass in an isotropic photon gas are based on the results of papers [15,16], where the propagation of γ-quanta in the field of monochromatic and dichromatic laser waves was considered. In these papers the permittivity tensor was derived for such media.
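The display equation defining m_γ was lost in extraction. From the stated definitions (photon momentum p = ñE_γ/c together with the relativistic dispersion relation), the intended relation is presumably

$$
m_\gamma^2 c^4 \;=\; E_\gamma^2 - p^2 c^2 \;=\; E_\gamma^2\left(1-\tilde{n}^2\right),
\qquad p = \frac{\tilde{n}\,E_\gamma}{c},
$$

so that determining m_γ and determining ñ are indeed equivalent. With the low energy value Re ñ − 1 ≈ 5.1 × 10⁻⁴³ quoted below, |m_γ c²| ≈ E_γ √(2|ñ − 1|) ~ 10⁻²¹ E_γ, an utterly negligible effective mass for any realistic photon energy.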
Knowing this tensor, one can find the refractive indices of the photon and other characteristics of the propagation process. The imaginary components of the tensor are connected with the energy losses due to e±-pair production in the laser wave. The real components of the permittivity tensor are derived with the help of dispersion relations. In the case of an isotropic photon gas the energy losses can be described by the following relation (see, for example, [17]):

$$
L_{\gamma\gamma} \;=\; \frac{1}{8E_\gamma^2}\int_{\epsilon_{min}}^{\infty}\frac{n(\epsilon)}{\epsilon^2}\,d\epsilon
\int_{s_{min}}^{s_{max}} s\,\sigma_{\gamma\gamma}(s)\,ds ,
\tag{2}
$$

where L_γγ is the reciprocal of the mean free path for collisions, n(ε) is the differential photon gas number density at photon energy ε, s is the center-of-momentum frame energy squared, σ_γγ(s) is the total cross section of e±-pair production, E_γ is the energy of the photon propagating in the medium, s_min = 4m²c⁴, s_max = 4εE_γ, ε_min = s_min/4E_γ, m is the electron mass, and c is the speed of light in vacuum. All relations in this paper are valid in the laboratory coordinate system, defined as the system in which the mean momentum of the background photons is equal to zero. From Eq. (2) one can find the imaginary part of the refractive index, ñ″ = ħc L_γγ/(2E_γ). The explicit form of this relation, in terms of the classical electron radius r_e and the Compton wavelength ƛ_e, uses the variables z = 4m²c⁴/s and z_m = 4m²c⁴/s_max = m²c⁴/(εE_γ). The corresponding real part of the photon refractive index is expressed through a function F_re(z), which is in turn determined by the functions L₋ and L₊. The relations obtained here for the refractive indices are valid for both low and high energy photons propagating in a photon gas. Now one can calculate the refractive indices of a photon propagating in extragalactic space. As a first approximation of this medium one can take the model of space filled by the Cosmic Background Radiation. In this case the number density is given by the well known Planck distribution,

$$
n(\epsilon) \;=\; \frac{\epsilon^2}{\pi^2\hbar^3 c^3}\,\frac{1}{e^{\epsilon/kT}-1},
$$

where T is the temperature. For low energy photons one can then find an explicit relation for the real part of the refractive index. The real and imaginary parts of the refractive index are connected by the dispersion (Kramers-Kronig) relation. At the present temperature T = 2.726 K, the real part of the refractive index for low energy photons is 5.1 × 10⁻⁴³. These results are in agreement with the similar calculations [18,19,20] of the low energy refractive indices in an isotropic photon gas. Fig. 1 illustrates the results of the calculation of the photon refractive indices in the simplest model of the extragalactic medium (the Cosmic Background Radiation). One can see that the real part is nearly constant over the wide energy range from 0 to 2 × 10⁵ GeV, with a flat maximum (≈ 6 × 10⁻⁴³) at E_γ ≈ 3 × 10⁵ GeV. At energies above 2.5 × 10⁶ GeV the real part of the index is negative. The imaginary part has a maximum value of ≈ 4 × 10⁻⁴³ at E_γ = 8 × 10⁵ GeV. Knowing the refractive indices, one can find the energy dependence of the permittivity function of a photon gas. However, this function is not uniquely defined: it depends on the relation between the magnetic induction vector B and the magnetic field intensity H (see [12,13]). In [15] the Maxwell equations are written with H = B, and in this case the permittivity function is ε = ñ².

3 Energy losses of a particle moving in a medium

Propagating in the Cosmic Background Radiation, charged cosmic rays polarize this medium. As a result, the cosmic rays lose their initial energy. This process is similar in many respects to the ionization energy losses of charged particles moving in matter.
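As a sanity check on the Planck distribution quoted above, its integral gives the familiar total CMB photon number density at T = 2.726 K. The short calculation below is a standard textbook check, not a number quoted in the paper:

```python
import math

# Total CMB photon number density from the Planck distribution at T = 2.726 K:
# n = (2 * zeta(3) / pi^2) * (kT / (hbar*c))^3  ~ 411 photons per cm^3.
hbar_c = 1.97327e-5          # hbar*c in eV*cm
kT = 8.617333262e-5 * 2.726  # Boltzmann constant (eV/K) times temperature
zeta3 = 1.2020569031595943   # Riemann zeta(3)

n = (2 * zeta3 / math.pi**2) * (kT / hbar_c) ** 3
print(f"{n:.0f} photons per cm^3")  # ~411
```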
It is clear that vacuum polarization is a far weaker process; however, it may become noticeable over the large distances traversed by a particle in such a specific medium as extragalactic space. Our consideration is based on the Landau theory of the ionization energy losses of relativistic particles in matter [12], with a small adaptation made in order to eliminate divergences in the final result. The Maxwell equations in the medium are written in their standard form for a medium with permittivity operator ε̂ (see [12]; a sketch of these equations is given below), where E is the intensity of the electric field, B is the magnetic induction vector, ρ and j are the charge and current densities, and t is the time. The charge and current distributions are taken in the form of a charge distribution moving with velocity v, where e is the particle charge. The field potentials are written in the usual form, subject to the further gauge condition on the potentials adopted in accordance with [12]. Substituting Eq. (17) into Eq. (15) yields relations for the potentials; for the Fourier components these equations involve F_k = ∫ ρ(r) exp(−ikr) dr. One can see that the functions A_k and φ_k depend on time as exp(−i(k·v)t). As shown in [12], the result of the action of the ε̂-operator on an exp(−iωt) function is its multiplication by the function ε(ω). From Eqs. (21)-(22) one can then obtain the Fourier component of the intensity of the electric field. Now we find the damping force acting on the extended charge. For this purpose we use the well known relation for the work done by the field, where D is the electric induction vector. After integration over some volume V, a surface integral over the bounding surface σ appears; it is clear that as r → ∞ this surface integral tends to zero. The damping force then takes the form of an integral over the Fourier components of the field, where Φ(k) is a known function (see Eqs. (23)-(25)). Obviously, the damping force is directed against the velocity of the particle. Let us set k_x v = ω and q = (k_y² + k_z²)^{1/2}, where the x axis is along the direction of motion and ω is the photon frequency. One can then obtain an equation, Eq. (30), for the absolute value of the damping force. One can see that this relation differs from the similar relation in [12] by the multiplier F²(k). This multiplier is the charge distribution of the moving particle in k-space (axial symmetry of the charge distribution around the x axis is assumed). Next we write ε ≈ 1 + Δε′ + iΔε″, with |Δε′| ≪ 1 and Δε″ ≪ 1, and extract the real and imaginary parts of the integrand in Eq. (30). Thus we obtain the sum of two integrals, I₁ + iI₂. It must also be taken into account that Δε′ and Δε″ are even and odd functions of the frequency, respectively. One can show that I₂ = 0, and therefore the damping force is given by Eq. (31), where γ is the Lorentz factor of the particle and F is the charge form factor of the particle (i.e., the charge distribution in k-space in the particle rest frame). Eq. (31) for the damping force is final. The two integrals in this equation converge: the integral over q converges due to the form factor of the particle, and the integral over ω converges due to the relation ε → 1 as ω → ∞.

4 Energy losses of cosmic rays

For cosmic rays, Eq. (31) can be simplified. It is well known that the maximum observed energy of cosmic rays is less than 10¹² GeV. One can see that the energy losses depend on the real and imaginary parts of the permittivity function.
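The display equations of this derivation were lost in extraction. For orientation, here is a sketch of what the starting equations presumably were, in Gaussian units following the Landau treatment the text cites [12]; the point-charge source shown is the simplest case, which the paper then generalizes through the form factor F(k):

$$
\nabla\cdot(\hat{\varepsilon}\,\mathbf{E}) = 4\pi\rho,\qquad
\nabla\times\mathbf{E} = -\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t},\qquad
\nabla\cdot\mathbf{B} = 0,\qquad
\nabla\times\mathbf{B} = \frac{4\pi}{c}\,\mathbf{j} + \frac{1}{c}\frac{\partial(\hat{\varepsilon}\,\mathbf{E})}{\partial t},
$$

$$
\rho(\mathbf{r},t) = e\,\delta(\mathbf{r}-\mathbf{v}t),\qquad
\mathbf{j}(\mathbf{r},t) = e\,\mathbf{v}\,\delta(\mathbf{r}-\mathbf{v}t).
$$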
Energy losses of cosmic rays For cosmic rays, Eq.(31) can be simplified. It is well known that the maximum observed energy of cosmic rays is less than 10¹² GeV. One can see that the energy losses depend on the real and imaginary parts of the permittivity function. However, one can neglect this dependence in the denominator, since indeed γ⁻² ≫ |Δε′|, Δε″ (see Fig.1 and [21]). One can then write the damping force for cosmic rays in the following form: where q̃ = ħq and E_γ = ħω. Here we use the relation ε″ = Δε″. For the subsequent calculations we take the empirical electric proton form factor [22]: where Q is the three-dimensional transferred momentum and the empirical constant is M_V² = (0.84 GeV/c)². Evaluating the integral over q̃, we obtain the following relation: It is helpful to obtain an inexact but simple estimate of the damping force; this is possible for the large γ-factors (> 10⁸) of the cosmic rays. The relation is: where ⟨E_γ⟩ is a value in the γ-quantum energy range where the integrand has noticeable magnitude, and E_γ,b is the boundary value of the energy range, satisfying the condition E_γ,b ≫ γM_V v. For example, one can define Ē_γ by the following equation: According to the calculations, ⟨E_γ⟩ ≈ 8×10⁷ GeV and the value of the integral in the numerator of Eq.(39) is ≈ 10⁻²⁸ GeV² at E_γ,b ∼ 10¹³ GeV; choosing E_γ,b anywhere in the range from 10¹⁰ to 10¹⁴ GeV changes the result only weakly. We then obtain the following estimate: where Z is the particle charge in units of the elementary charge. From here one can see that the energy losses of high-energy cosmic rays (protons) amount to several hundred GeV per light year. This value is of the same order as the losses to e±-pair production in the Cosmic Background Radiation. Fig.2 illustrates the calculation of the vacuum polarization energy losses of protons in accordance with Eq.(34).
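The convergence of the q̃-integral above rests entirely on the dipole fall-off of the proton form factor, Eq.(35). A minimal numerical sketch of that factor (M_V is the empirical constant quoted above; the sample Q² values are illustrative):

```python
M_V2 = 0.84 ** 2   # empirical dipole constant M_V^2 [(GeV/c)^2], from [22]

def proton_ff(Q2):
    """Empirical electric proton form factor: F = (1 + Q^2 / M_V^2)^-2."""
    return (1.0 + Q2 / M_V2) ** -2

# F^2 enters the damping-force integrand and falls off as (M_V^2/Q^2)^4:
for Q2 in (0.01, 0.1, 1.0, 10.0):
    print(f"Q^2 = {Q2:5.2f} (GeV/c)^2  ->  F^2 = {proton_ff(Q2)**2:.3e}")
```

For iron the text adopts M_V = 0.13 GeV/c, so the cutoff sets in at much smaller momentum transfer, which is what suppresses the iron losses relative to the naive Z² scaling at low energies.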
Discussion The results of the calculations of the vacuum polarization energy losses of high-energy cosmic rays show that the losses are reasonably large, and because of this it is necessary to take this process into account for a correct description of particle propagation in extragalactic space. Fig.3 illustrates the relative polarization energy losses of protons and iron nuclei in extragalactic space. The results of the calculation [2] of the energy losses due to the photoproduction processes (p + γ → p + e⁺ + e⁻, p + γ → p + π⁰, ...) are also shown in the figure. One can note the following peculiarities of the considered process: i) the spectral distribution of the polarization energy losses is rather broad, and the flat maximum of the relative losses is at E_p,max ≈ 4×10⁷ GeV for protons and E_Fe,max ≈ 1.4×10¹⁰ GeV for iron nuclei; ii) the relative value of the polarization energy losses is of the same order as the losses due to the photoproduction processes (at proton energies in the range 10⁷-10¹⁰ GeV); iii) a strong drop of the polarization losses takes place for particle energies ≪ E_p,max, E_Fe,max; iv) the spectral behavior of the proton and iron polarization losses is the same, but there is a sizable energy shift between the curves in Figs.2-3; as a result, at energies < 10⁹ GeV the polarization losses of protons exceed those of iron nuclei, and the reverse situation is observed at energies > 10⁹ GeV; v) our calculations show that points of inflection exist for both curves in Fig.2 (where d²F/dE² = 0), at particle energies of 1.5×10⁷ GeV for protons and 1.4×10¹⁰ GeV for iron nuclei. The behavior of the energy losses for iron nuclei can be understood by noting that the losses are proportional to the square of the charge, while the form factor cuts off the damping force more strongly for iron than for the proton (the constant is M_V = 0.13 GeV/c). We now attempt to explain some experimental features of the observed cosmic ray spectrum. The absence of a clear photoabsorption threshold in the spectrum [2,5] can be explained by the generally continuous character of the summary energy losses: indeed, from Fig.3 we see that the polarization energy losses merge smoothly with the photoabsorption losses, so a clear photoproduction threshold cannot be detected against this background. The "knee" effect is another poorly understood feature of the cosmic ray spectrum. One can suggest that the "knee" is a point where the character of cosmic ray propagation changes: at energies above the "knee" level the particles lose a relatively large fraction of their energy, while after passing the "knee" point the energy losses decrease and an accumulation of particles in this region takes place. Some remarks concerning the composition of cosmic rays. The composition is dominated by protons at the lowest energies, and then the fraction of light nuclei increases with energy [6,23,24]; however, at energies > 5×10⁹ GeV protons dominate again. We can see in Fig.2 that at low energies the polarization losses of protons exceed by many times the corresponding losses for iron nuclei, while at energies > 5×10⁹ GeV the polarization losses for iron nuclei are large. In particular, the composition of cosmic rays is determined by the lifespan over which a cosmic particle keeps its energy. Note that the "knee" is observed only for the proton fraction, and it is absent for iron nuclei with energies ≤ 10⁹ GeV; from our point of view this fact is natural. We believe that the iron "knee" lies at energies above 5×10⁹ GeV, near the point of inflection of the iron energy-loss curve. If the mechanism of vacuum polarization energy losses considered here is correct, the following statement holds: the initial number (or production rate) of cosmic rays with energies above the "knee" point is larger than is commonly supposed. These conclusions are qualitative in character; however, they may no doubt be tested by simulation of the propagation process of cosmic rays. In conclusion, we touch on the nature of the considered phenomenon. The use of the methods of the electrodynamics of continuous media allows a mathematical description of the mechanism of the vacuum polarization energy losses of relativistic particles without a detailed description of the primary processes responsible for this phenomenon. From this standpoint, the moving charged particles polarize the medium; this requires some energy, which the particles give up to the medium. In the case of the electromagnetic vacuum this energy goes into the creation of virtual e±-pairs. According to Eq.(39), the spectrum of the virtual pairs is rather broad and its upper bound is near the energy of the particles. Although the values of the permittivity function ε″ are rather small, a noticeable energy loss is generated due to the broad, high-energy spectrum of virtual pairs in the laboratory coordinate system. It is obvious that these pairs are of low energy in the rest frame of the cosmic charged particle.
Then the real photons of the Cosmic Background Radiation and the virtual ones can interact effectively with each other under the condition εE_γv > m²c⁴, where E_γv is the energy of the virtual state. It should be noted that we do not consider the influence of the redshift on the energy losses; this must be done when the propagation of cosmic rays over long distances is investigated. Besides, we think that a contribution to the vacuum polarization energy losses of cosmic rays from other background fields is possible. Conclusion On the basis of the determination of such characteristics as the photon mass, the refractive indices, and the permittivity function in the Cosmic Background Radiation, the vacuum polarization losses of high-energy cosmic rays are considered. The calculations show a high level of these losses for protons with energies above ≈ 10⁷ GeV. The proposed mechanism of losses leads to a revision of the existing propagation models of cosmic rays. From our point of view, the propagation of high-energy cosmic rays in extragalactic space is a dynamic process to a greater extent than is usually expected. Experimental and theoretical investigations of these processes will help in understanding the nature and origin of cosmic rays in the Universe. The author would like to thank H. Zaraket for critical questions, remarks, and useful references. [Fig.3 caption: The relative vacuum polarization energy losses of protons and iron nuclei propagating in extragalactic space as a function of energy; the corresponding photoproduction energy losses (from [2]) are shown on the right for comparison.]
2014-10-01T00:00:00.000Z
2002-06-20T00:00:00.000
{ "year": 2002, "sha1": "8d5728f51bcb65c8ba498817ce8a28f6d9426cfa", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8d5728f51bcb65c8ba498817ce8a28f6d9426cfa", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119224605
pes2o/s2orc
v3-fos-license
Light Curves and Period Changes of Type II Cepheids in the Globular Clusters M3 and M5 Light curves in the B, V, and I_c passbands have been obtained for the type II Cepheids V154 in M3 and V42 and V84 in M5. Alternating-cycle behavior, similar to that seen among RV Tauri variables, is confirmed for V84. Old and new observations, spanning more than a century, show that V154 has increased in period while V42 has decreased in period. V84, on the other hand, has shown large, erratic changes in period that do not appear to reflect the long-term evolution of V84 through the HR diagram. Introduction Type II Cepheids were among the first variable stars discovered within the globular clusters M3 (NGC 5272) and M5 (NGC 5904) (Pickering 1889; Packer 1890). Their initial discovery came long before the realization that Cepheids were pulsating stars (Shapley 1914), and even longer before Baade's (1956) discovery that Cepheids could be divided into two population groups. Until recently, the vast majority of observations available for globular cluster Cepheids were obtained photographically. This paper presents new B, V, and Cousins I_c band CCD light curves of the type II Cepheids V154 in M3 and V42 and V84 in M5, as well as some new photographic data. These variable stars were first observed more than a century ago, raising the possibility that their long-term period changes may indicate the direction and speed of their evolution through the instability strip. Our new light curves, together with earlier observations and, in the case of V42, observations from the All Sky Automated Survey (ASAS) (Pojmanski 2002), are used to rediscuss the long-term period changes of these variables. New CCD Data Images of M3 and M5 covering 10 × 10 arcmin were taken with the 0.6-m reflector on the campus of Michigan State University during 2003-2005 using an Apogee Ap47p CCD camera. Additional Michigan State University observations of M5 were obtained in 2006 utilizing an Apogee Alta U47 CCD camera. Supplemental images were also obtained with two 0.4-m reflectors: in 2003 at the Brooks Astronomical Observatory of Central Michigan University with a Photometrics Star-1 camera, and in 2004 with an SBIG ST-8 CCD on the Macalester College telescope. As expected for observations from low-altitude midwestern sites, seeing was usually not good, often being around 3 seconds of arc and sometimes worse. Images were obtained with Johnson B, V, and Cousins I_c filters. Exposure times ranged from one minute to six minutes depending on telescope and filter. Bias, dark, and flat-field corrections were applied using standard techniques. The data were then reduced using Peter Stetson's DAOPHOT profile-fitting reduction package (Stetson 1987, 1994). The resulting instrumental b, v, and i magnitudes were reduced to the standard system using linear transformation equations with color terms (Eqs. 1 and 2). Color terms were determined from observations of Landolt standard stars and stars within the open cluster M67 (Schild 1983, 1985; Landolt 1992). Occasionally, observations were not made through all three filters, so that the usual means of obtaining color corrections could not be applied. In those cases, the color at the time of observation was determined from the typical color at the appropriate phase in the light curve. The color terms are sufficiently small and the light curves are sufficiently well established that this is not expected to be an important source of uncertainty.
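Since Eqs. (1) and (2) are not reproduced above, the sketch below shows the generic linear form such CCD-to-standard transformations take. The coefficients and zero points here are placeholders, not the values actually fitted to the Landolt and M67 standards:

```python
def to_standard_V(v, b, c_v=0.05, zp_v=0.0):
    """V = v + c_v * (b - v) + zp_v : instrumental v with a (b - v) color term."""
    return v + c_v * (b - v) + zp_v

def to_standard_B(b, v, c_b=0.10, zp_b=0.0):
    """B = b + c_b * (b - v) + zp_b : instrumental b with the same color index."""
    return b + c_b * (b - v) + zp_b

# Example for a hypothetical instrumental pair (b, v) = (16.80, 16.20):
print(to_standard_B(16.80, 16.20), to_standard_V(16.20, 16.80))
```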
Zero-point corrections to the standard system were determined from 5-6 uncrowded local standard stars within each cluster for which magnitudes had been determined by Stetson (2000). The type II Cepheids tend to be bluer than the other bright, uncrowded stars within the fields of our CCD images, and thus the local Stetson standards are redder (by 0.5 to 0.8 in B − V) than the target variables in both M3 and M5. The redder colors of the local standards will introduce error in the standard magnitudes for the Cepheids if there is a significant error in the coefficients of the color transformation equations. We have avoided extremely red local standards, and the transformation coefficients were determined separately for the Michigan State University, Central Michigan University, and Macalester College observations, which should minimize uncertainties from this source. Nonetheless, systematic errors of about 0.01 magnitude are possible. The photometry of the three Cepheids is listed in Table 1. The filter, variable star identification, heliocentric Julian date, observed magnitude, uncertainty, and the source of the individual observations are listed in columns 1 through 6, respectively. In column 6, MSU is used for observations obtained at the Michigan State University observatory, CMU for observations obtained at the Central Michigan University observatory, and MAC for observations obtained at the observatory of Macalester College. The uncertainties listed in Table 1 reflect the formal uncertainties in the photometry as propagated through the DAOPHOT program and the application of the transformation equations. Heliocentric Julian Dates were determined from the UTC of mid-exposure without application of the, in these cases, insignificant ΔT correction for the difference between UTC and uniform ephemeris time (Sterken 2005). New Photographic Data Because relatively few observations of V42 and V84 in M5 had been published between 1976 and 2003, one of us (W.O.) searched the plate vault of the Yerkes Observatory for plates of this cluster taken during this time period. Fourteen useful plates, all taken in 1982-83, were located. These are IIa-O plates taken with a BG1 filter, so that magnitudes estimated from these plates approximate those in the B photometric bandpass. W.O. made eye estimates of V42 and V84 on the plates, interpolating among standard stars A, B, E, and F in Coutts Clement & Sawyer Hogg (1977), adopting the B magnitudes for these stars given in Arp (1962). The estimated uncertainty of each estimate is about 0.15 magnitude. The resultant photometry is listed in Table 2. ASAS Data V42 is sufficiently far from the center of M5 that V-band photometry for it has been obtained by ASAS (Pojmanski 2002). We have extracted 353 V-band observations labeled as quality A (the best quality) from the photometry list for the ASAS object designated 151825+0202.9. Because the ASAS pixels are relatively big (15 arc seconds), we have used only data from the smallest ASAS aperture (2 pixels in diameter). The listed uncertainties on the magnitudes are typically 0.03 to 0.05 magnitudes. Periods and Light Curves The best periods for the variables were determined using the Phase Dispersion Minimization routine (Stellingwerf 1978) as implemented in IRAF, the period04 program (Lenz 2004), and a discrete Fourier transform as implemented in the Peranso software suite. These methods all yielded consistent results and provided periods which were then used to construct phased light curves.
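As an illustration of this kind of period search, the sketch below runs a Lomb-Scargle periodogram, a close relative of the discrete Fourier transform used above, via astropy rather than IRAF, period04, or Peranso. The input file and its layout are hypothetical, and the frequency window brackets the roughly 25-27 day periods of the M5 Cepheids:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Columns: heliocentric JD, magnitude, uncertainty (a hypothetical file in
# the layout of Table 1, for one star and one filter at a time).
t, mag, err = np.loadtxt("v42_V.txt", unpack=True)

freq, power = LombScargle(t, mag, err).autopower(
    minimum_frequency=1.0 / 30.0,   # exclude periods longer than 30 d
    maximum_frequency=1.0 / 20.0)   # exclude periods shorter than 20 d
period = 1.0 / freq[np.argmax(power)]

phase = ((t - t.min()) / period) % 1.0   # phases for the folded light curve
print(f"best period: {period:.3f} d")
```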
The results for each star are presented below. The adopted uncertainties in the derived periods are a combination of the uncertainties given by the period04 routine and from visual inspection of the light curves. V154 in M3 The type II Cepheid V154 in M3, discovered in 1889, was the first periodic variable star to be identified within a globular cluster (Bidelman 1990). However, because of its proximity to the center of the cluster, it has been omitted in many photometric studies of the variable stars in M3. The best period for our 2003-2004 data was found to be 15.29 ± 0.02 days, close to the period of 15.2842 days used by Arp (1955) and adopted in the O-C diagram of Hopp (1980). The B, V , and I c phased light curves of V154 in M3 are shown in Figure 1. The scatter about the mean light curves in Figure 1 is larger than the formal photometric uncertainties (typically 0.02 or 0.03 mag). V154 is close enough to the center of the cluster that there can be some blending with neighboring stars, especially on our nights of poorer seeing, and that is very likely responsible for some of the scatter in the light curves despite our use of a profile-fitting photometry technique. Bakos et al. (2000) note that in their observations the image of V154 is blended with that of an RR Lyrae star, V268, which would also be the case with our observations. Nonetheless, it is possible that some of the scatter reflects real changes in the variability of V154. In a study of the light curve of the field type II Cepheid W Virginis (main period = 17.27 days), Templeton & Henden (2007) noted that the light curve of that star showed a scatter of about 0.1 magnitude, much larger than the 0.01 mag expected from observational error alone. They concluded that the light curve of W Vir could not be completely described by a single periodicity, and were able to identify two additional periodicities that contributed to the observed light curve. Our data for V154 are less extensive than the W Vir data obtained by Templeton & Henden (2007), and are not adequate to reveal any secondary periodicity for V154 comparable to those found in W Vir. However, the possibility that some of the scatter in our light curves of V154 may reflect real cycle to cycle differences should be kept in mind. V42 in M5 In contrast to V154, V42 in M5 is relatively uncrowded. The derived period for V42 from our 2003-2006 observations is 25.735 ± 0.015 days, and this has been used to produce the light curves shown in Figure 2. The scatter in the light curve of V42, though smaller than that of V154, is still slightly larger than expected from the formal observational errors. Though some of this may reflect sources of observational uncertainty not included in the formal error analysis, we cannot exclude real fluctuations in the light curve as a source of scatter. For comparison, the period given by Coutts Clement & Sawyer Hogg (1977) is 25.738 days. The maxima, minima, and mean magnitudes derived from our B and V light curves are in good agreement with those from the more sparsely covered B and V light curves observed photoelectrically by Arp (1957). The light curves of V42 from the Yerkes and ASAS data are shown in Figures 3 and 4, respectively. The Yerkes data are plotted using the same 25.735 d period adopted for the CCD light curves. While the points are sparse and have large uncertainties, the maximum and minimum are consistent with the CCD photometry for B. 
The period that best fits the ASAS observations from JD 2451930 until 2455057 (2001-2009) is 25.720 ± 0.003 days, and this has been used to construct the ASAS light curve. The V amplitude from the ASAS data is smaller than seen in our observations or in those of Arp (1957), and the mean magnitude is brighter. This probably occurs because, even with the smallest aperture, the ASAS pixels are too big to eliminate the contribution of neighboring stars to the aperture photometry. Nonetheless, the large number of data points and the long interval of time coverage make the ASAS observations of V42 very useful for the discussion of its period changes. V84 in M5 V84 is in a more crowded field than V42, and its photometry undoubtedly suffers from that circumstance. The period determination for V84 also turned out to be more complicated than for V42. We first consider only our observations from 2003 through 2005. The V84 photometry for these years can be approximately fit with a period of 26.93 ± 0.02 days. Figures 5 and 6 show the light curves for this period for the CCD and Yerkes data, respectively. This period, however, leaves larger than expected scatter in the CCD light curves. It is also significantly longer than the period of 26.42 days used by Coutts Clement & Sawyer Hogg (1977). Arp (1955) suggested that, while a period of 26.5 days described the main pulsation of V84, the light curve of V84 might be better described with a period twice as long. We carried out a period search in the vicinity of twice 26.9 days and obtained a best fit with a period of 53.95 ± 0.03 days. Figure 7 shows the 2003-2005 light curve of V84 plotted with this period, which is approximately but not exactly twice 26.93 (53.86 d). The scatter in the observed curves is reduced, although there are some gaps in phase coverage. The light curves show some evidence of alternating deep and shallow minima. RV Tauri stars are known to sometimes show alternating deep and shallow minima, e.g., Gillet (1992), and the light curve of V84 might therefore be indicative of low-level RV Tauri type behavior. More complications arise when the observations from 2006 are included. The 2006 observations cannot be well-phased with the earlier data with either the 26.93 day or the 53.95 day periods, in both cases showing a significant phase shift between the two data sets (see Figure 8). In fact, no single period can provide a light curve for V84 that does not show large scatter when observations from 2003 through 2006 are combined. As we discuss below, abrupt changes in the period, and hence phase shifts, of V84 have been noted before, and the 2006 observations likely indicate another such jump. While the period solutions for V42 in M5 have shown little change over time, that is not the case for V84. This will be discussed in detail in §4.2, but we note that our period of 26.93 days is significantly longer than the 26.42 days used by Coutts Clement & Sawyer Hogg (1977) in plotting their light curve. In any particular observing season, lasting 3 or 4 months, the difference between a light curve with a 26.42 day period and a 26.93 day period is not large (less than about 0.05 in phase). However, over the span of two observing seasons, i.e. about a year, the difference can amount to a fifth of a cycle. The smaller number of observations in 2006 makes it impossible to determine the exact period during that year, and we cannot distinguish between a period of 26.4 and 26.9 days from the 2006 data alone. 
M3 The long term period behavior of V154 in M3 was studied by Hopp (1980), using observations made between Julian dates 2416604 and 2442862, a span of 72 years from 1904 to 1976. We have expanded the interval of the period study to just over a century. In Table 3, we list the heliocentric Julian date (HJD) representing the epoch of maximum as determined from a given set of observations, the phase of maximum light calculated from the ephemeris of Arp (1955) (JD max = 2424627.55d + 15.2854E, where JD max is the date of maximum light and E the cycle number), the estimated uncertainty of the phase of maximum, and the source of the data. We have combined the three closely spaced maxima reported in Arp (1955) into a single representative point. We have chosen to be conservative in assessing the accuracy of the times of maximum, and thus in some instances assign larger uncertainties than were used by Hopp (1980). Unpublished observations of V154 from the study of M3 variables by Strader et al. (2002) were used to add an additional epoch of maximum. Figure 9 shows a possible increase in period for V154 early in the observational record and, with more certainty, an increase in period after JD 2,450,000. A period of 15.2854 days adequately represents the observations made between JD 2,420,000 and JD 2,450,000. The sudden increase in phase for the more recent observations indicates an increase in period, but the gap in the observations before JD 2451256 makes it difficult to tell exactly when that period increase happened. If we assume an abrupt increase in period near JD 2,450,000, then we find that the period increased to about 15.296 days. That period is larger than the value of 15.29 ± 0.02 days found from the 2003-2005 observations alone, but it is within the estimated one sigma error bar of our period. M5 Coutts Clement & Sawyer Hogg (1977) studied the long term period behavior of V42 and V84 using mainly photographic data spanning the 87-year interval from 1889 to 1976. To their compilation of data we can add the results from our observations, plus several additional results from observations made by others since 1976, which extend the studied time interval to about 120 years for V42 and 109 years for V84. In discussing the period changes of V42, we have adopted the fiducial period (25.738 days) and epoch of zero-phase (HJD 2441102.7) used in Coutts Clement & Sawyer Hogg (1977). Additional epochs of maximum for V42 not included in Coutts Clement & Sawyer Hogg (1977) are listed in Table 4 which includes an epoch of maximum determined from unpublished photometry of V42 provided by T. M. Corwin (private comm.). Coutts Clement & Sawyer Hogg (1977) concluded that the period of V42 had been relatively stable with a period near 25.738 days since 1889, but that there had been a small period decrease of about 0.007 day. They noted that the change could have been occurring continuously, or that there could have been an abrupt period decrease in the 1940s. In the phase diagram for V42 shown in Figure 10, a more dramatic decrease in period is apparent. Until JD 2,435,000, the phase diagram is well described by a period of 25.738 ± 0.004 days. As found by Coutts Clement & Sawyer Hogg (1977), there is evidence for a slight decrease in period after that date. Between JD 2,435,000 and JD 2,441,000 the phase diagram can be well fit with a period of 25.731 ± 0.004 days. Between JD 2,441,000 and JD 2,455,000, the phase diagram is well fit with a period of 25.720 ± 0.003 days. 
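The phase-shift bookkeeping behind Tables 3-5 and Figures 9-10 amounts to folding each observed epoch of maximum on a fiducial ephemeris; for V154 that is the Arp (1955) ephemeris quoted above. A minimal sketch (the example epoch is hypothetical, not an entry from Table 3):

```python
def phase_shift(hjd_max, t0=2424627.55, period=15.2854):
    """Fractional phase of an observed epoch of maximum relative to the
    fiducial ephemeris JD_max = t0 + period * E (result in -0.5..0.5)."""
    cycles = (hjd_max - t0) / period
    return cycles - round(cycles)

# e.g. a maximum observed near JD 2451256 (illustrative value only):
print(phase_shift(2451256.0))
```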
As in the study of Coutts Clement & Sawyer Hogg (1977), the V42 phase diagram does not let us determine whether the period changes are actually abrupt. However, the phase diagram in Figure 10 is slightly better fit by three straight-line segments than by the parabola that would indicate a constant rate of period change. Burwell et al. (1995) in their abstract reported a period of 25.725 days based upon their unpublished photometry of V42, consistent with the decline in period found here. In addressing the period-change behavior of V84, we adopted a fiducial period of 26.42 days and an epoch of zero-phase of HJD 2441129.6, again consistent with those used by Coutts Clement & Sawyer Hogg (1977). Additional epochs and phase shifts, beyond those given in Table V of Coutts Clement & Sawyer Hogg (1977), are listed in Table 5. Following Coutts Clement & Sawyer Hogg (1977), we use only the shorter period for V84, and not the 53-day double period, in discussing the period-change behavior. The observations in the literature are often not adequate for addressing phase shifts using the longer period, but its neglect may introduce some extra scatter into the phase-shift diagram. The shifts in phase versus Julian Date are shown in Figure 12. In that figure we are faced with a much more confusing situation than was evident in Figures 9 or 10, a circumstance to which Coutts Clement & Sawyer Hogg (1977) have already called attention. The scatter in the phase shifts implies changes in period (or jumps in phase), and the changes are sufficiently large that it is not always clear whether the count of cycles between observed epochs is correct. Barnard (1898) determined the period of V84 to be 26.2 days based upon visual observations obtained with the Yerkes 1-m telescope. Arp (1955) obtained photographic observations of the M5 Cepheids and later reported additional photoelectric observations for the pair (Arp 1957), confirming his earlier report of the existence of alternating-cycle behavior for V84. The periods given in Arp (1957) are 26.62 ± 0.03 days for the shorter cycle and 53.24 ± 0.2 days for the doubled cycle. Wallerstein (1958) used his observations and those of Arp (1955) to determine a period of 26.54 days but also found evidence for alternating minima. By far the most extensive previous study of the period of V84 is that of Coutts Clement & Sawyer Hogg (1977). They determined that during the 1930s and 1940s the period of V84 remained nearly constant at 26.42 days. They found that during the 1950s the period increased by about 0.2 days before decreasing again. There may have been another period jump in 1970, but by 1971 the period had settled again at 26.42 days. Our light curves of V84, and the phase shifts in Figure 12, indicate that this erratic period behavior has continued. Location in the Color-Magnitude Diagram Long-period type II Cepheids such as V154 and V42 (and also V84, if one includes variables with stronger RV Tauri-like behavior) tend to be brighter than expected from an extrapolation of the period-luminosity relation as determined from shorter-period type II Cepheids (see, for example, Figure 5 in Bono et al. (1997)). In order to place V154, V42, and V84 onto a color-magnitude diagram, we derive their intensity-weighted mean V magnitudes and magnitude-weighted colors, (B − V) and (V − I). These values and their uncertainties are listed in Table 6. Absolute V magnitudes, as derived below, are also listed. The estimated uncertainty for <V>_int is about 0.02 for V42 and 0.03 for V84 and V154.
The estimated uncertainties for the mean colors are about 0.03 for V42 and 0.04 for V84 and V154. Arp (1957) derived a mean V magnitude of 11.22 for V42 with a mean B − V color of 0.60, in good agreement with our results. Arp (1957) did not obtain a large enough number of observations to derive mean magnitudes and colors for V84. Carney et al. (1998) refer to unpublished BV photometry of V42 which gave <V> = 11.15, slightly brighter than our value. We have not found in the literature any complete light curves of V154 on the BVI_c system, but partial light curves are given in Benkő et al. (2006). It is not possible to calculate mean magnitudes from the Benkő et al. (2006) observations, and their observations show significant scatter at a given phase (as do ours), but their magnitudes may be slightly brighter than ours. To determine absolute V magnitudes (M_V) for the Cepheids, we referenced their brightnesses to the RR Lyrae variables in M3 and M5. The mean <V>_int magnitudes for RR Lyrae stars in M3 and M5 are about 15.64 and 15.07, respectively (Cacciari et al. 2005; Reid 1996; Storm et al. 1991). We convert these to absolute magnitudes using an absolute magnitude of 0.59 for RR Lyrae stars at [Fe/H] = −1.5 and ΔM_V/Δ[Fe/H] = 0.214 (Cacciari & Clementini 2003). Assuming [Fe/H] = −1.5 for M3 and −1.2 for M5 (Zinn & West 1984; Cacciari et al. 2005; Yong et al. 2008), we obtain the absolute magnitudes shown in Table 6. The resultant locations of the variables in the color-magnitude diagram are shown in Figure 13, adopting E(B − V) = 0.01 for M3 and 0.03 for M5 (Cacciari et al. 2005; Reid 1996; Storm et al. 1991). Also plotted in Figure 13 are type II Cepheids in globular clusters from the tabulation in Nemec et al. (1994). Following Table 4 of Nemec et al. (1994), a few variables with slightly discordant measures by different observers are plotted more than once. All three of our variables, but especially V42 and V84, fall near the upper bound of the type II Cepheid instability strip, near the transition to RV Tauri behavior (see also Wallerstein & Cox (1984) and Bono et al. (1997)).
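The RR Lyrae-referenced calibration just described can be made concrete with the published numbers for V42 in M5. This is a back-of-the-envelope check (reddening corrections are neglected for brevity, and <V> = 11.22 is Arp's value rather than our Table 6 entry):

```python
# M_V(RR) = 0.59 at [Fe/H] = -1.5, slope 0.214 mag/dex (Cacciari & Clementini)
MV_RR  = 0.59 + 0.214 * (-1.2 - (-1.5))   # ~ 0.65 at M5's [Fe/H] = -1.2
mu_M5  = 15.07 - MV_RR                    # apparent distance modulus ~ 14.42
MV_V42 = 11.22 - mu_M5                    # ~ -3.2
print(MV_RR, mu_M5, MV_V42)
```

This places V42 several magnitudes above the horizontal branch, as expected for a 25.7-day type II Cepheid near the RV Tauri boundary.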
Discussion of the Period Changes Theory predicts that long-period type II Cepheids enter the instability strip either while undergoing blueward instability loops from the asymptotic giant branch as a consequence of helium shell flashes, or during final blueward evolution as the hydrogen-burning shell nears the surface of the star (Schwarzschild & Harm 1970; Mengel 1973; Gingold 1985; Clement et al. 1988; Bono et al. 1997). Bono et al. (1997) found that, for the lower-mass but brighter Cepheids, the instability strip could be crossed two or three times as a consequence of thermal pulses. The period of a pulsating star is linked to its density via Ritter's pulsation equation, Q = P√ρ̄, where Q is the pulsation constant, P is the period, and ρ̄ is the mean stellar density. The pulsation period of a Cepheid is often its most accurately known property, and, as noted long ago by Eddington (1918), a small change in the structure of a Cepheid will reveal itself as a change in pulsation period before it can be recognized in any other measured quantity. Each of our three stars showed long-term period changes, but the changes are different in each case. V154 showed a modest increase in period, consistent with movement to the red in the instability strip. V42 showed a decrease in period, consistent with movement to the blue. If these period changes indicate the long-term evolution of these stars, V154 could be interpreted as being on the redward-evolving, and V42 on the blueward-evolving, portion of the instability loops predicted by theory during shell helium burning. Alternatively, blueward-moving V42 might be in the final blueward evolutionary phase. In neither case, however, is a parabolic fit to the phase diagram, implying a constant rate of period change, significantly better than the assumption of abrupt period changes. Using the theoretical timescales of Gingold (1976), Clement et al. (1988) found that one might expect a rate of period decrease of P⁻¹ dP/dt = −0.0005 to −0.002 cycles per 100 years during the final quiescent blueward-evolving stage. The observed decrease in the period of V42 is about ΔP/P = −0.0007 over 120 years, consistent with that expectation. Although V42 and V84 are both near the luminosity dividing type II Cepheid and RV Tauri behavior in the HR diagram (Wallerstein & Cox 1984; Bono et al. 1997), V84 showed period changes much more erratic than those of V42. V84 is not, however, the only type II Cepheid exhibiting period fluctuations. Clement et al. (1988) found that period fluctuations were not unusual among type II Cepheids in globular clusters, although rarely do they seem to reach the extent exhibited by V84. Apparently random period fluctuations have also been observed in the O-C diagrams of other variable stars (Turner et al. 2009). V1, a 15.5-day-period Cepheid in the globular cluster M12, does perhaps show jumps in the phase-shift diagram on a scale similar to that of V84 (Clement et al. 1988). V84 shows evidence of RV Tauri behavior, and strong cycle-to-cycle period fluctuations have been observed for RV Tauri stars in the field, e.g., Percy & Coffey (2005). However, Clement et al. (1988) do not report RV Tauri-type behavior for V1 in M12, so these irregular periods appear not to be limited to long-period variables near the RV Tauri domain in the HR diagram. Acknowledgements We thank the National Science Foundation for partial support of this work under grants AST0440061, AST0607249, AST0707756, and PHY0754541, and the Yerkes Observatory for granting access to their extensive archive of photographic plates. We thank Mike Corwin for providing unpublished observations of V154 and V42. We thank Christine Clement for helpful comments on a draft of this paper. [Table and figure notes: additional epochs and phase shifts follow Table V of Coutts Clement & Sawyer Hogg (1977); uncertainties are unknown for a couple of the points. The comparison sample of cluster type II Cepheids is from Nemec et al. (1994); filled circles represent our results for V154, V42, and V84.]
2010-03-30T20:54:51.000Z
2010-03-30T00:00:00.000
{ "year": 2010, "sha1": "8f97ac6f09d2507290b0c4508cbaaf4521d42f38", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1003.5924", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8f97ac6f09d2507290b0c4508cbaaf4521d42f38", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
39470002
pes2o/s2orc
v3-fos-license
Correlates of Antiretroviral Therapy Adherence among HIV-Infected Older Adults Background: Despite the success of antiretroviral therapy (ART), HIV-infected older African Americans experience higher mortality rates compared to their white counterparts. This disparity may be partly attributable to the differences in ART adherence by different racial and gender groups. The purpose of this study was to describe demographic, psychosocial, and HIV disease-related factors that influence ART adherence and to determine whether race and gender impact ART adherence among HIV-infected adults aged 50 years and older. Methods: This descriptive study involved a secondary analysis of baseline data from 426 participants in “PRIME,” a telephone-based ART adherence and quality-of-life intervention trial. Logistic regression was used to examine the association between independent variables and ART adherence. Results: Higher annual income and increased self-efficacy were associated with being ≥95% ART adherent. Race and gender were not associated with ART adherence. Conclusion: These findings indicated that improvements in self-efficacy for taking ART may be an effective strategy to improve adherence regardless of race or gender. Introduction Over the course of the last 30 years, the face of HIV has changed. 1,2 In the early years of the epidemic, HIV was considered to be a disease that primarily affected young, homosexual males. 3 However, with each passing decade, the prevalence of HIV infection among adults aged 50 and older (from here on referred to as "older adults") has continually increased to the extent that by 2004, the prevalence of HIV infection in older adults had exceeded the rates previously found in those younger than age 50. 1,4 In fact, between 2007 and 2009, the highest increase in HIV prevalence occurred among 80-to 84-year-old individuals. 5 In 2011, older adults accounted for approximately 17% of the individuals newly diagnosed with HIV, 19% of the people living with HIV (PLWH), 6 24% of the AIDS diagnoses, and over 50% of the deaths among PLWH. 5,6 By the year 2020, approximately 70% of the PLWH in the United States will be older than the age of 50. 7 In the United States, African Americans have been disproportionately affected by the HIV pandemic. 8 African Americans represented 46% of the new HIV diagnoses in 2010, despite the fact that they comprised only 12% of the US population. 8 Moreover, in 2013, it was reported that HIV-infected older African Americans were 11 times more likely to be diagnosed with HIV, and they were 7 times more likely than their white counterparts to die from the disease. 5 The disparity in the HIV prevalence rates may be related to the high proportion of African Americans living in environments where stigma, discrimination, and poverty contribute to the conditions conducive to the spread of HIV. 9,10 Antiretroviral therapy (ART) has been shown to be vital to increase the life expectancy and decrease mortality among PLWH. 11 When ART adherence is maintained at or above 90% to 95% of the prescribed regimen, it has been shown to be effective in preventing virologic failure and virologic resistance. 11,12 Despite these benefits and the ready availability of ART in the United States, 13 some groups continue to experience adverse outcomes. Older Americans have higher HIV/AIDS-related morbidity and mortality compared to their HIVinfected younger American counterparts, 12 and women have been shown, albeit inconsistently, to have suboptimal ART adherence. 
14,15 It has also been reported that being of African-American race predicted suboptimal levels of ART adherence among HIV-infected older adults, although some studies do not support this finding. [16][17][18][19] Thus, it is not certain whether particular populations are at greater risk of poor adherence. In addition to demographic characteristics, the virus and its symptoms may also influence ART adherence. HIV-related symptoms commonly occur among adults aging with HIV. 20 The symptoms reported most often include pain, fatigue, depression, difficulty sleeping, and memory impairments. 20,21 Studies examining the symptom experiences of HIV-infected individuals have shown that HIV symptomatology increases with age, 22 that women often experience more HIV-related symptoms than men, 20 and that the number of symptoms reported ranged between 9 and 21. 20,21 Therefore, as people live longer with HIV, they will likely experience an increase in the number of HIV symptoms and in symptom severity. 22 These symptoms, in turn, have the potential to negatively impact overall ART adherence among individuals aging with HIV. 23,24 Depression is also common in this population: studies have shown that approximately 20% to 50% of HIV-infected older participants have depression symptoms. 26,27 High levels of depression negatively impact ART adherence, engagement in care, symptom management, and quality of life. 28 However, whether depression's effect on ART adherence differs by race/gender group remains inconclusive. 29 Medication adherence self-efficacy (the confidence in one's ability to take one's ART as prescribed, even under difficult circumstances) is an important psychosocial factor that has been shown to improve ART adherence among HIV-infected individuals; treatment adherence self-efficacy has a direct influence on ART adherence, clinical outcomes, and health-care costs. [30][31][32] Few studies have evaluated the role of ART adherence self-efficacy among older adults, although it has been shown to be an important indicator of adherence behaviors among younger persons living with HIV/AIDS. 33 Similar to depression, self-efficacy may have a direct impact on medication adherence behaviors. 34 As more persons living with HIV/AIDS move into older age, its effect on adherence among older adults warrants greater study. Given that HIV is now considered to be a chronic health condition, it is important to understand the factors that pose a risk for poor ART adherence among older persons living with HIV. The purpose of this study was to examine, among older persons living with HIV/AIDS, the common challenges of ART adherence and to test whether race and gender contribute to lower medication adherence rates when other challenges were also considered. Participants Participants were recruited as part of PRIME, a study to test a telephone-delivered ART adherence and quality-of-life intervention for HIV-infected persons aged 50 and older. 24 Recruitment for PRIME occurred from 2007 to 2012 in 10 different community-based AIDS Service Organizations from 9 states across the United States (Arizona, California, Illinois, Massachusetts, Michigan, Pennsylvania, Texas, Washington, and Wisconsin) through posters, flyers, and direct mailings. PRIME study participants were eligible if they met the following inclusion criteria: (1) aged 50 years or older, (2) HIV-positive serostatus, (3) currently prescribed ART, (4) nonadherence to ART in the past 30 days, and (5) <95% adherence in the past 30 days.
Exclusion criteria for PRIME were (1) hearing problems, (2) suspected dementia (≥3 errors on a brief cognitive assessment 35), and (3) acute psychosis. The institutional review board of Group Health Research Institute approved this study, and each study participant provided written informed consent prior to participating in the research activities. Substance use history: The 3-item Alcohol Use Disorders Identification Test 36 was used to evaluate the recent history of alcohol use. Scores ranged from 0 to 12, with higher scores indicating more alcohol use. In addition, history of intravenous drug use (IVDU) was assessed by asking participants two questions: "Have you ever injected any drug on your own, without medical supervision?" and "Have you injected any drug in the past 3 months?" These questions were answered as either yes or no. Depression: Depression was measured using the Patient Health Questionnaire (PHQ-8), an 8-item modified version of the PHQ-9 depression scale 37 that omits the suicidal ideation item. The PHQ-8 measures the frequency with which major depression symptoms were experienced over a 2-week period. 38 The scale ranged from 0 to 24, and clinical depression was defined as a score of 10 or more on the PHQ-8. 38 Treatment adherence self-efficacy: Participants were asked to indicate their level of confidence in taking their HIV medications under various circumstances on a scale from 0 (you think you cannot do it at all) to 10 (you are certain you can do it). 30 Example items include sticking to the treatment plan when daily routines changed, when they were busy, or when they were experiencing side effects. 30 This variable was analyzed as a continuous variable representing the sum of the 13 items on the treatment adherence self-efficacy scale; scores ranged from 0 to 130. HIV-related medical information: Relevant HIV disease-related medical information included duration of HIV disease, perceived HIV symptom severity, whether the participant had ever received a diagnosis of AIDS, and length of time on antiretroviral treatment. Perceived HIV symptom severity was evaluated using a 23-item checklist of current HIV symptoms. 39 Participants rated each symptom on a 5-point scale from 1 (not present) to 5 (very severe), which produced a symptom severity score (range 23-115). Higher scores indicated higher perceived HIV symptom severity. Antiretroviral therapy adherence: Adherence to ART was assessed using the Golin Self-Report of Medication Non-Adherence questionnaire. 40 Participants were asked to estimate the amount of HIV medication that they had missed in the 30 days prior to the baseline interview by answering the following question: "What is your best guess about how much of your prescribed HIV medication you have taken in the last month? We would be surprised if this were 100% for most people. For example, 0% means you have taken no medication; 50% means you have taken half your medication; 100% means you have taken every single dose of your medication (percentage of medication taken)." This adherence estimate was then dichotomized at >95% for analysis.
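The scoring and dichotomization rules above translate directly into code. The sketch below is illustrative only; the function and variable names are ours, not those of the PRIME dataset:

```python
def phq8_score(items):
    """PHQ-8 total (eight items, each 0-3); clinical depression is >= 10."""
    total = sum(items)
    return total, total >= 10

def self_efficacy_score(items):
    """Sum of the 13 confidence ratings (0-10 each); range 0-130."""
    assert len(items) == 13
    return sum(items)

def adherent(self_report_pct):
    """Dichotomize the Golin self-reported percent of doses at the 95% cutoff."""
    return 1 if self_report_pct > 95 else 0
```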
Statistical Analysis Descriptive statistics were used to determine the distributions of each of the study variables. Due to low responses in some categories, several independent variables (sexual orientation [lesbian, gay, bisexual, and transgender or heterosexual], relationship status [married/partnered or single], educational level [less than high school or at least high school], and employment status [unemployed/disabled or working]) were dichotomized during the analyses in order to have adequate cell sizes. Spearman correlation analysis was used to test the relationship between the dichotomously measured 95% adherence and the independent variables. Only those independent variables associated with ART adherence at P < .10 were entered into the logistic regression model. Race and gender, as well as their interaction, were also included as independent variables. All analyses were conducted using SPSS for Windows version 21. Sociodemographic Characteristics of the Study Population A total of 426 participants completed the PRIME baseline study procedures and were included in the present study. Approximately 60% were African American, 27% were female, and 55% self-identified as heterosexual. Participant racial and gender groups comprised African-American women (n = 87), white women (n = 30), white men (n = 141), and African-American men (n = 168). Participants ranged in age from 50 to 75, and the mean age was 54 years. The majority of participants had graduated from high school, 88% considered themselves disabled, and 83% earned less than US$20,000 per year. On average, the overall scores on the psychosocial measures indicated low depression (PHQ-8; mean = 7) and relatively high treatment adherence self-efficacy (mean = 108). The mean time since HIV diagnosis was 15 years, and participants had been taking ART for an average of 12 years. The overall mean level of ART adherence was 88%. There were no significant differences in age (P = .21), employment status (P = .90), or past history of IVDU (P = .10) between the 4 racial/gender groups. Nor were there any statistically significant group differences in reported levels of depression (P = .17), HIV symptom severity (P = .39), or 95% ART adherence (P = .27). The characteristics of the study participants are presented in Table 1. Logistic Regression Analysis of Factors Associated With ART Adherence Logistic regression was utilized to determine whether the statistically significant variables identified in the correlational analyses were associated with >95% ART adherence. Regression results indicated that the overall model was statistically reliable in predicting >95% ART adherence, χ²(11, N = 408) = 108.766, P < .0001 (Table 3). Demographic factors: Annual income (P = .019) was the only demographic factor that was significantly associated with >95% ART adherence. No statistically significant associations were noted between race (P = .61), gender (P = .74), or education (P = .07) and ≥95% ART adherence. Psychosocial factors: Higher treatment adherence self-efficacy was positively and significantly associated with ≥95% adherence (P < .0001). For every unit increase in self-efficacy, there was about a 4% increase in the odds of being >95% ART adherent. Depression (P = .14) was not significantly associated with >95% ART adherence. HIV disease-related factors: A potential trend was found for higher HIV symptom severity to be associated with lower ART adherence; however, the effect did not meet our statistical significance cutoff (P = .08). Neither the age at diagnosis (P = .51) nor the duration of ART (P = .14) was associated with >95% adherence.
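For readers who want to reproduce this style of model outside SPSS, the analysis described above can be sketched as a logistic regression with a race-by-gender interaction. The file and column names are hypothetical (0/1 indicator coding is assumed for race and gender), and, if the data behaved as reported, the self-efficacy coefficient would correspond to an odds ratio near 1.04 per unit:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("prime_baseline.csv")            # hypothetical file
df["race_x_gender"] = df["race"] * df["gender"]   # interaction term

X = sm.add_constant(df[["income", "education", "self_efficacy", "depression",
                        "symptom_severity", "race", "gender", "race_x_gender"]])
model = sm.Logit(df["adherent_95"], X).fit()

print(model.summary())
print(np.exp(model.params))   # odds ratios per unit change in each predictor
```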
Discussion The current study examined the association of demographic, psychosocial, and HIV disease-related factors with ART adherence, and whether race or gender influenced ART adherence, in a regionally diverse sample of HIV-infected older adults. The study participants were mainly African American, heterosexual, single, and in their mid-50s. The mean ART adherence levels reported across the racial/gender groups were between 84% and 92% at baseline, higher than the levels reported in other studies that included HIV-infected older African Americans. 41 Some previous reports have indicated that, within the subpopulation of HIV-infected older adults, African Americans and older women were less adherent to ART than their HIV-infected white male counterparts. 13 However, among the participants in this study (more than a quarter of whom were women), we did not find any statistically significant differences in the mean level of ART adherence across the racial/gender groups, nor did we find that race or gender was associated with ART adherence. Annual income was associated with ART adherence. Similar trends between income and ART adherence have been found by others, but their results were not statistically significant. 42 This is an important factor for HIV-infected older adults with limited incomes. Depressive symptoms can negatively impact ART adherence among HIV-infected older adults. 28 Given that depression was initially associated with ART adherence in our bivariate analysis, it is possible that other constructs, such as HIV symptom severity, carry similar weight in actual adherence behaviors and thus minimize the effects of each within multivariate regression models. Moreover, unlike other studies that attributed the suboptimal levels of ART adherence demonstrated by racial minorities 41,43 and women 43 to differences in rates of depressive symptoms, the present study did not find that depressive symptoms differed by race or gender. However, the manner in which participants were enrolled in the current study may have attracted persons who were "higher functioning" overall (i.e., they had the energy, motivation, and interest to engage in study activities), making it difficult to detect subgroups who may be at greater risk for depression. Treatment adherence self-efficacy was the only psychosocial variable that emerged as significantly associated with ART adherence in the regression model. This finding is consistent with other reports of a positive association between higher self-efficacy and higher ART adherence among HIV-infected older adults. 32 Across each racial/gender group in the current study, the mean levels of self-efficacy for taking ART were relatively high, with scores ranging from 105 to 111. Interestingly, African-American women and white American men reported the same mean treatment adherence self-efficacy levels (mean = 111). However, their self-reported levels of adherence were not significantly different from those of the other racial/gender groups. As greater numbers of these individuals age into their 70s and 80s, it will be important to monitor the nature of any self-efficacy changes.
As HIV-infected older adults live longer, they will likely experience an increasing number of HIV-related symptoms and symptoms related to other comorbid conditions, which may impact their adherence to ART. While our unadjusted analysis showed that greater symptom severity was associated with lower ART adherence, its effect was no longer evident in the multivariate model. These results are inconsistent with other studies and may be due to the relatively low symptom burden reported by participants in this study. Study Limitations The study findings should be interpreted in light of its limitations. Participants were recruited as a convenience sample and thus were limited to individuals who were willing to volunteer for an intervention study focused on "living well with HIV as you age." The majority of HIV-infected adults who participated in this study lived in the Northeastern, Southwestern, Midwestern, or Western United States. Therefore, these results may not be generalizable to older HIV-infected adults living in the Southeastern United States, the area with the highest prevalence of HIV-infected older adults. Another limitation of the study is that all of the data were collected by self-report. Although previous studies have shown high correlations between self-reported ART adherence and measured viral loads, self-reports are known to overestimate medication adherence. 44 In addition, self-reported adherence may be influenced by social desirability bias or recall bias. 45 Despite these limitations, the present study furthers our understanding of the influence of race and gender on ART adherence. The finding that there were no statistically significant differences in the demographic characteristics between the racial/gender groups related to ART adherence may suggest that older adults are more homogeneous in their ART adherence practices than first thought, and that a different mechanism may be driving the higher mortality rates in older HIV-infected African Americans and older women. Implications for Future Research Over the next few years, the number of older adults who are either aging with previously diagnosed HIV or newly infected will continue to increase. 6 It has been estimated that by the year 2020, approximately 70% of the people infected with HIV will be older than 50. 6 Therefore, it is of vital importance that further research be conducted to identify and address factors influencing mortality in HIV-infected older adults from various racial backgrounds and gender groups. These investigations should also consider the complexity surrounding the synergistic effects of multimorbid conditions, the longevity of ART use, and the normal aging process on mortality rates in older adults living longer with HIV. Moreover, findings from this study specifically suggest that future research should focus on developing effective ways to improve treatment adherence self-efficacy among older adults living with HIV.
2018-04-03T04:34:36.803Z
2016-04-12T00:00:00.000
{ "year": 2016, "sha1": "15e0ff302c6e42c4e642dde332caca039f5291ec", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2325957416642019", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "61c70b2372ba3b856d5269ae880366feb08c23ac", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
84182164
pes2o/s2orc
v3-fos-license
Ice-sheet surface elevation change from crossover of ENVISAT data Understanding the current state of the polar ice sheets is critical for determining their contribution to sea-level rise and predicting their response to climate change. Surface elevation time series in particular can be used to study ice-sheet dynamics and the mass or volume balance of the ice sheets, which are relevant to global climate change and sea-level rise. During the last two decades, satellite radar altimetry and airborne laser altimetry have achieved accuracies an order of magnitude better than traditional airborne barometric altimetry, which has a precision of typically several tens of meters at best and only limited coverage. The widest coverage comes from satellites, especially from ERS-1/2 and ENVISAT, which extend to 81.5° of latitude, covering almost all of Greenland and most of Antarctica. In this paper, an algorithm for time series analysis based on crossovers was used to obtain 4-year (September 2002–March 2007) ice-sheet elevation changes from ENVISAT data. The height of the whole Antarctic ice sheet declined by about 0.4 ± 0.43 cm from September 2002 to March 2007. The time series clearly shows a seasonal and annual signal, with the ice sheet thickening in March. Introduction Over the last 100 years, the global sea level has risen by about 10-25 cm. The rate of mean global sea-level rise was 1.8 mm/yr (1). It is likely that much of the rise in sea level is related to the concurrent rise in global temperature over the last 100 years. On this time scale, warming and the consequent thermal expansion of the oceans may account for about 2-7 cm of the observed sea-level rise, while the observed retreat of glaciers and ice caps may account for about 2-5 cm. The rate of observed sea-level rise suggests that there has been a net positive contribution from the huge ice sheets of Greenland and Antarctica. The polar ice sheets in Greenland and Antarctica have sufficient volume that very small changes can significantly alter global sea level. For example, a 30 cm change in sea level, corresponding to only a 0.4% change in total ice-sheet volume (2), would have a substantial and serious social and economic impact on coastal areas. A recent study (3) from aircraft and satellite laser altimeter surveys of the Amundsen Sea sector of west Antarctica shows that local glaciers are discharging about 250 km³/yr of ice to the ocean, almost 60% more than is accumulated within their catchment basins. This discharge is sufficient to raise sea level by more than 0.2 mm/yr. Satellite altimetry (microwave radar or lidar) now allows the ice-sheet height to be measured more accurately, thus yielding meaningful quantitative estimates. Satellite radar altimeter measurements show that the average elevation of the Antarctic ice-sheet interior fell by 0.9 ± 0.5 cm/yr from 1992 to 1996 (4). Davis et al. (2) analyzed Antarctic ice-sheet elevation change (dH/dt) from 1995 to 2000 using 123 million elevation-change measurements from ERS ice-mode satellite radar altimeter data. They found the average values in east Antarctica to be within 3.0 cm/yr, whereas drainage basins in west Antarctica had substantial spatial variability, with average values ranging between −11 and +12 cm/yr. The east Antarctic ice sheet had a five-year trend of 1 ± 0.6 cm/yr.
The west Antarctic ice sheet had a five-year trend of −3.6 ± 1.0 cm/yr, due largely to strong negative trends of around 10 cm/yr for basins in Marie Byrd Land along the Pacific sector of the Antarctic coast. The continent as a whole had a five-year trend of 0.4 ± 0.4 cm/yr. In addition, ice-sheet heights have also been used to study the mass balance of the Antarctic ice sheet using ERS1/2 altimeter data from 1992 to 2003 (1). Time series of ice-sheet surface elevation changes enable determination of the present-day mass balance of the ice sheets, study of associations between observed ice changes and polar climate, and estimation of the present and future contributions of the ice sheets to global sea-level rise. Short-term changes in surface elevation occur both seasonally and interannually, while longer-term elevation changes are linked to climate change and global sea level. The most common technique for analyzing satellite altimetry data to study ice-sheet height changes is time-series analysis (dH/dt) based on crossover geometry. Davis et al. (8) presented a new method to derive the seasonal elevation signal from a continuous time series of altimeter crossover data. The method uses the complete set of intra-satellite crossover data to provide fine temporal resolution and significantly reduces random measurement errors. Nguyen et al. analyzed ERS crossover data and ICESat crossover data using a Kalman filter and block kriging to study height changes in east Antarctica; combining a Kalman filter with block kriging also enables modeling surface heights as random fields (5, 6).

Data and processing

ENVISAT, launched in March 2002 by the European Space Agency (ESA), has a repeat period of 35 days and coverage to ±81.5°. It is an advanced polar-orbiting Earth observation satellite that provides a global-scale collection of radar echoes over ocean, land, and ice to measure ocean topography, water-level variation over large river basins and lakes, and land surface elevation, and to monitor sea ice and the polar ice sheets. The RA-2 instrument carried by ENVISAT is a nadir-looking pulse-limited radar operating at a main nominal frequency of 13.575 GHz (Ku band). A secondary channel at a nominal frequency of 3.2 GHz (S band) is also operated to compute corrections to the measured range that compensate for the ionospheric delay. The RA-2 telemetry provides 18 range measurements per second, corresponding to an along-track sampling interval of about 400 m. Over the ocean, it is common to average 20 of these measurements to yield a sampling interval of 1.1 s, or about 8 km. The data-set used in this paper is the ENVISAT SGDR from September 2002 to March 2007, from cycles 9 to 56 (excluding cycles 16, 17, 30, 50, and 55). Radar altimetry has been used to measure sea-level and ice-sheet changes since 1978. However, the large radar footprint of 3-8 km creates great uncertainty in height measurements, in the range of several tens of centimeters, due to slope-induced errors over the ice sheet. Generally, the altimeter antenna boresight is pointed to the nadir. The antenna has a field of view of about 1.3° (half-power). The first part of the reflected echo comes from the part of the surface within the field of view that is closest to the satellite. Over flat surfaces, the closest point on the surface is at nadir; over sloping terrain, this will not be the case.
When the echo waveform is retracked, the resulting range measurement is a slant range to a point offset from the nadir. To use this measurement, it must be corrected for the slant and relocated to the point of first return offset from nadir. Over topographic surfaces, an onboard radar altimeter tracking system is unable to maintain the echo waveform at the nominal tracking position in the filter bank because of rapid range variation. This results in an error in the telemetered range known as the tracker offset. Retracking is a term used to describe a group of nonlinear ground-processing estimation techniques that attempt to determine the tracker offset from the telemetered echoes and thereby estimate the range to the point of closest approach on the surface. To calculate precise ice surface elevations and elevation changes, the altimeter range measurements need to be referenced to a common datum and corrected for tracker misalignment of the waveform due to inter-pulse variations, atmospheric delays, solid Earth and ocean tides, and slope-induced errors. The corrected altimeter range measurement is then subtracted from a precise orbit to calculate a corrected surface height. Furthermore, for an unknown reason, a change in the behavior of the Ultra Stable Oscillator (USO) clock frequency occurred in February 2006 and affected the quality of the RA-2 range parameter. Part of cycles 44 and 45 and most of cycle 46 are affected by this anomaly. Since that time, except for the RA-2 B-side data, all ENVISAT altimetry data are affected by the USO anomaly. When the anomaly occurs, the USO period increases rapidly over several hours to reach about 12500.090 picoseconds and then starts to oscillate with a 0.005 ps amplitude. This change of frequency has a direct impact on the altimetric range measurement in both Ku and S bands. The increase of the period implies a range value shorter by about 5.6 m than the nominal value; thus, the sea-level anomalies are higher than the nominal value by 5.6 m (7). The data preprocessing in this paper includes the corrections mentioned previously: transmission medium (tropospheric refraction, ionospheric refraction, and tides), slope correction, retracking correction, and USO clock correction. The latter is applied to cycles 46-56.

Davis' algorithm for time series

Davis' algorithm for time series (8) is based on crossover-point heights. Generally, the tracks of altimetry satellites have one ascending pass and one descending pass. Thus, the ice-sheet surface elevation differences (dH) are computed from surface heights (H) at the intersection, or "crossover", between an ascending satellite track (H_A) and a descending satellite track (H_D) using the following equations:

dH_AD(dt) = H_D(t_2) − H_A(t_1) + (B_A − B_D) + ΔH_S(t),   (1)
dH_DA(dt) = H_A(t_2) − H_D(t_1) + (B_D − B_A) + ΔH_S(t),   (2)

where subscripts A and D represent the ascending and descending tracks, respectively. The B_A and B_D terms represent possible time-invariant biases in the altimeter measurement that result from directional dependencies in the orbit error and/or the scattering characteristics of the ice sheet. The ΔH_S(t) term represents spurious change in surface elevation caused by a temporal variation in backscattered power from the ice sheet. If t_A < t_D, Equation (1) is used to compute the elevation differences dH; when t_D < t_A, Equation (2) is used. Therefore, the time interval dt = t_2 − t_1 between the ascending and the descending tracks is always larger than 0, i.e. dt > 0. The repeat cycle of ENVISAT is 35 days.
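To make Equations (1) and (2) concrete, a minimal Python sketch follows. The function and variable names are ours, and the bias terms B_A, B_D and the backscatter term ΔH_S are deliberately omitted, as in the processing described later; this is an illustration, not the authors' code.

```python
def crossover_dh(h_asc, t_asc, h_desc, t_desc):
    """Elevation difference at one crossover (Equations (1)/(2)),
    ordered so that dt = t2 - t1 > 0. Bias terms B_A, B_D and the
    backscatter term dH_S are ignored in this sketch."""
    if t_asc < t_desc:                        # Equation (1): dH_AD
        return h_desc - h_asc, t_desc - t_asc
    return h_asc - h_desc, t_asc - t_desc     # Equation (2): dH_DA
```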
We can obtain an N × N matrix (Equation (3)) if each cycle data-set is crossed and compared:

dH = [dH_{i×j}],  i = 1, …, N,  j = i, …, N  (upper triangular),   (3)

where row i represents the crossover data computed from cycle j (i ≤ j ≤ N) with respect to cycle i. Each matrix element represents the mean elevation change between a fixed reference cycle i (row) and a given cycle j (column), including the ascending track in cycle i crossed with the descending track in cycle j and the descending track in cycle i crossed with the ascending track in cycle j. The average elevation change is computed by combining the mean dH_AD and dH_DA values in the following manner:

dH_{i×j} = (n_AD · dH̄_AD + n_DA · dH̄_DA) / n_T,   (4)

where n_T = n_AD + n_DA is the total number of crossovers for a given matrix element, and n_AD and n_DA are the numbers of crossovers of each type. This is an unbiased average. dH̄_AD and dH̄_DA are the mean change differences:

dH̄_AD = (1/n_AD) Σ dH_AD,   dH̄_DA = (1/n_DA) Σ dH_DA.   (5)

The corresponding standard error of the mean value is

SE_{i×j} = σ_{i×j} / √n_T,   (6)

where σ_{i×j} is the standard deviation of the crossover differences for that element. Using Equations (1)-(6), however, we can compute many time series. For example, from row 1 we can obtain a time series whose fixed reference cycle is cycle 1; from row 2, a time series whose fixed reference cycle is cycle 2; and so on. In order to use the complete set of crossover points in the matrix dH (Equation (3)), we need to bring the time series to the same fixed reference cycle by adding the dH_{1×j} values in row 1, for j = 2 to N, to all elements of the corresponding rows i = j:

dH'_{i×j} = dH_{i×j} + dH_{1×i},  for i > 1, j = i + 1, …, N.   (7)

For example, the dH_{1×2} element is added to all elements in row i = 2 for j > 2 (e.g. dH_{2×3}, dH_{2×4}, etc.). This ensures that all rows of the dH matrix are properly referenced to the same initial time period. Therefore, the resulting standard error for each matrix element, for all rows with i > 1, is

SE'_{i×j} = sqrt( SE_{i×j}² + SE_{1×i}² ).   (8)

The total number of crossovers in element i × j becomes

n'_{i×j} = n_{i×j} + n_{1×i},   (9)

and the total number of crossovers in each column of the dH matrix is then

n_j = Σ_i n'_{i×j}.   (10)

We let the weight for each matrix element be w_{i×j} = n'_{i×j} / n_j, so that the final weighted column average for cycle j is

dH̄_j = Σ_i w_{i×j} · dH'_{i×j},   (11)

and the final SE is

SE_j = sqrt( Σ_i w_{i×j}² · SE'_{i×j}² ).   (12)

Results and analysis

Davis' algorithm (8) for time series utilizes the complete set of crossover data to provide the height change over the studied area together with the errors of that height change. Figure 1(a) shows time series computed from the dH matrix (Equation (3)); their fixed reference cycles are not the same cycle. For example, using the first three rows of dH, three time series can be obtained (Figure 1(b)). When the fixed cycle is unified and referenced to the first cycle (September 2002) based on Equations (7)-(11), the final weighted column-average height change is obtained. Figure 2 gives the final height change over the Antarctic ice sheet from the crossovers of the ENVISAT data-set; it also plots the height change of east Antarctica (Figure 2(b)) and west Antarctica (Figure 2(c)), respectively. Generally speaking, the seasonal height-change cycle can be affected by snow precipitation/accumulation, melting snow, snow densification (compaction), ice flow, and possibly temporal variations in the backscattered power from the ice sheet (8). In our processing, we ignore ΔH_S(t) in Equations (1) and (2), the spurious change in surface elevation caused by penetration of the radar signal (backscattered power) beneath the ice-sheet surface.
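A minimal NumPy sketch of the re-referencing and weighted-averaging steps (Equations (7)-(12)) is given below. It assumes the per-element means dH, crossover counts n, and standard errors se of Equations (3)-(6) have already been computed, and it is an illustrative reimplementation rather than the authors' processing code.

```python
import numpy as np

def reference_and_average(dH, n, se):
    """Re-reference all rows of the crossover matrix to cycle 1 and form the
    weighted column-average time series (Equations (7)-(12), schematic)."""
    N = dH.shape[0]
    dH = dH.copy()
    var = se.copy() ** 2
    n_eff = n.astype(float).copy()
    # Equation (7): add dH[0, i] (change from cycle 1 to cycle i+1) to row i;
    # Equations (8)-(9): combine variances in quadrature and crossover counts.
    for i in range(1, N):
        dH[i, i + 1:] += dH[0, i]
        var[i, i + 1:] += var[0, i]
        n_eff[i, i + 1:] += n[0, i]
    # Equations (10)-(12): weights w = n' / n_j, weighted column average and SE.
    series = np.zeros(N)
    series_se = np.zeros(N)
    for j in range(1, N):
        rows = np.arange(0, j)                     # elements above the diagonal
        w = n_eff[rows, j] / n_eff[rows, j].sum()
        series[j] = np.sum(w * dH[rows, j])
        series_se[j] = np.sqrt(np.sum(w ** 2 * var[rows, j]))
    return series, series_se                       # change relative to cycle 1
```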
Conclusion

Sea levels are expected to rise as one consequence of melting ice sheets (global warming), with adverse effects on many people living in coastal areas. Ice-sheet height-change time series are therefore important for studying global climate. Analysis of the crossover data using Davis' method (8) provides such a time series, with formal error estimates, from the ENVISAT data-set.

Notes on contributors

Yonghai Chu is an associate professor with a major in geodesy, especially satellite altimetric applications. Jiangcheng Li is a professor with a major in geodesy, especially determination of the geoid.
Practical approaches to improving translatability and reproducibility in preclinical pain research

Pain research continues to face the challenge of poor translatability of pre-clinical studies. In this short primer, we summarize the possible causes, with an emphasis on practical and constructive solutions. In particular, we stress the importance of increased heterogeneity in animal studies; formal or informal pre-registration to combat publication bias; and increased statistical training, to help pre-clinical scientists appreciate the usefulness of available experimental design and reporting guidelines.

Introduction

Chronic pain remains one of the most common and urgent health issues, with low back pain and headache disorders being the 4th and 5th leading causes of disability in 25-49 year-olds worldwide (Vos et al., 2020). Over the past 60 years, research has made great strides in improving our understanding of the underlying biology of chronic pain, with inflammation and dysfunctional neuro-immune interactions thought to be significant driving factors (Calvo et al., 2019; Fiore et al., 2023; Hore and Denk, 2019). And yet, surprisingly little of this work has translated into novel drug treatments. In fact, the two most widely used classes of painkillers, opioids and non-steroidal anti-inflammatory drugs (NSAIDs), have been available since the 19th century: morphine was first isolated by Friedrich Sertürner in 1804 (Trang et al., 2015) and aspirin discovered by Bayer in 1897, with "newer" NSAIDs, like ibuprofen, first marketed in 1969 (Brune and Hinz, 2004; Jones, 2001). There have been a few new additions, but they are usually drugs that were moved across indications, such as gabapentin, an anti-epileptic which was approved for the treatment of neuropathic pain in 2002 (Wiffen et al., 2017). Arguably the most successful drug discovery efforts have been made in the field of headache disorders, with triptans developed in the 1980s and treatments based on calcitonin gene-related peptide (CGRP) approved just a few years ago (Ogunlaja and Goadsby, 2022). It is interesting to note that these targets were not initially discovered through work with animal models (Lassen et al., 2002), and that their mechanisms of action, especially those of anti-CGRP, are still not fully understood (Goadsby et al., 2017). In contrast, many drug trials which started with mechanistic evidence derived from animals, typically rodent studies, have failed. This includes efforts to interfere with the function of neurokinin-1, glycine receptors, nerve growth factor and the sodium channel subunit Nav1.7 (Mogil, 2009).

Some have argued that drug development efforts should de-prioritize target-focused strategies in favor of phenotype-based screening approaches (Swinney and Anthony, 2011). However, with a complex condition like pain, where biomarkers are lacking, this is easier said than done. Moreover, biologics, one of the most successful drug classes of the past decade, are entirely based on identifying a specific target. The question therefore remains: can we identify and address the root causes that account for the limited translational value of animal models in the pain field? There are likely to be several reasons, as already eloquently discussed in previous articles (Klinck et al., 2017; Mogil, 2009; Sadler et al., 2022; Scannell and Bosley, 2016). Obviously, there are outright species differences, e.g.
in sensory neuron gene expression (Jung et al., 2023). Assuming these differences have been taken into account, reasons for the failure in translation can be broadly divided into four categories. Firstly, it is difficult to make compounds that have good target engagement and favorable pharmacokinetics. This was, for example, what hampered the success of the Nav1.7 blocker trialed by Pfizer (McDonnell et al., 2018; Mulcahy et al., 2019). Secondly, there are significant complexities associated with clinical trial design. For instance, the choice of patient group can be critical, as has been demonstrated in the case of oxcarbazepine, which appears to work much better as an analgesic in a particular sub-type of neuropathic pain (Demant et al., 2014). Thirdly, even if a drug has good efficacy, side effects can often be dose-limiting to the point of making the approach unviable, as was the case with the anti-NGF antibody tanezumab (FDA Briefing Document, 2021). Finally, there are significant issues with the quality of the pre-clinical work that is feeding into our drug development pipelines. Animal models have proved indispensable for toxicity screening and for elucidating basic causal pain mechanisms, but they have been less successful at forward-translating novel drug targets. This might be due to three persistent challenges that we face in pre-clinical pain research: lack of face validity, significant publication bias, and poor reproducibility. In the following, we will discuss each of these problems in turn and propose possible solutions that could help improve pre-clinical pain research (Fig. 1).

Face validity of animal models

Several reviews, including large systematic meta-analyses (Soliman et al., 2021; Zhang et al., 2022) as well as examinations of historical data (Sadler et al., 2022), have demonstrated that animals used in pain research, largely rodents like mice and rats, have traditionally been young, inbred and male. This is in contrast to human populations living with chronic pain, who tend to be older and majority female. Moreover, the paradigms by which we induce pain, most commonly via traumatic nerve injury or injection of complete Freund's adjuvant (Sadler et al., 2022), are only very poor mechanistic approximations of the conditions they intend to model, e.g. entrapment neuropathy or inflammatory arthritis. They are also usually only studied in the short term, while in most people chronic pain persists over months and years. Finally, the outcome measures we use in the pain field are limited and heavily skewed towards assessing evoked mechanical and thermal sensitivity thresholds. These are of poor clinical relevance and have more utility as stratification tools (Rice et al., 2018).
The predominant use of homogeneous rodent models is an extreme example of complexity reduction, designed to achieve the high control required to establish a causal link between a single variable and an outcome. However, it can have the unintended consequence of decreasing reproducibility and reducing the chance of finding robust and generalizable effects. Increasing heterogeneity is a potential solution to this problem (Voelkl et al., 2020). We can do so within a laboratory, by actively incorporating biological variation into study designs (Festing, 2014; von Kortzfleisch et al., 2020); or we can conduct multi-centre experiments which capitalize on natural differences between labs (Voelkl et al., 2018). For example, Wodarski and colleagues (Wodarski et al., 2016) demonstrated that the multi-centre approach to evaluating suppressed burrowing is robust and reproducible, and therefore supported the use of burrowing as an outcome to infer the global effect of pain on rodents.

How can we improve on the homogeneous nature of past animal work? We should strive to increase heterogeneity on many different levels, as has been widely suggested (Currie et al., 2019; Sadler et al., 2022; Soliman et al., 2021). However, until we have done more of this work, it remains unclear which variations will have the best cost-benefit ratio, or indeed which will have the most impact on a particular phenotype; e.g. there are reports that mouse inter-individual variability can have a much greater effect on exploratory behaviours than sex or estrous cycle (Levy et al., 2023). To identify the variables that matter, we will have to spend time working with more heterogeneous rodent cohorts, varying age, sex and strain; we should diversify the species we use for pain research (Klinck et al., 2017), especially for conditions in which non-rodent models offer better face validity, e.g. sheep for osteoarthritis; we should model longer time courses and/or use aging models where feasible; and finally, we should diversify our outcome measures to include assessments of complex behaviors that can serve as proxy measures of spontaneous, non-evoked pain (Eisenach and Rice, 2022).

The latter is not an easy task. There have been many suggestions of novel behavioral testing paradigms, such as conditioned place preference, machine-vision paradigms and species-typical behaviors like burrowing and cage-lid hanging. However, so far, none of these have clearly 'won out', in that they are not yet widely adopted as standard measures within pain research. Given that the most popular behavioral tests, like von Frey or Hargreaves, involve significant subjective judgement from an individual experimenter, it seems obvious that more automated analysis methods that might mitigate observer bias will be very helpful. On the other hand, care must be taken that more data, and thus an increase in the number of analyzable variables, does not inadvertently cause a multiple comparison problem.

There is also an additional challenge, specifically faced by those interested in neuro-immune interactions. Pain is a cardinal sign of inflammation. It is therefore crucial that we fully investigate and understand the local peripheral inflammatory environment in chronic pain conditions. In diseases like osteo- and rheumatoid arthritis, where human tissue is more readily available, we can then use this knowledge to inform our pre-clinical animal models. However, in many other chronic pain conditions, like back pain or diabetic neuropathy, there is very limited access to relevant human tissues. Animal models have therefore often been developed 'blind' to neuro-immune interactions and instead focused solely on features of neuronal hypersensitivity. This is a significant weakness that needs to be addressed through future interdisciplinary collaborations (Renthal et al., 2021). For instance, immunologically, a neuropathy arising from external surgical trauma (as induced in animal models) is not at all comparable to one arising from sterile nerve entrapment. Consequently, the local immune environment that peripheral nerves find themselves in is likely to be very different in these two examples.

Publication bias

Once an animal experiment has been conducted, it unfortunately often ends up in a file drawer rather than in a scientific journal. This leads to significant publication bias in the pre-clinical sciences, as demonstrated in various meta-analyses which indicate that data confirming the null hypothesis are oddly absent from the literature (van der Worp et al., 2010). For instance, out of 525 preclinical papers on stroke, only 1.2% reported no significant findings (Sena et al., 2010). Moreover, a systematic review of pre-clinical studies of pregabalin reported that the literature might overestimate its analgesic effects by 27% due to publication bias (Federico et al., 2020). This figure is very similar to one identified by Currie and colleagues, whose work on chemotherapy-induced peripheral neuropathy indicated that missing experiments might have decreased the estimate of pre-clinical intervention effects by 28% (Currie et al., 2019). However, not all preclinical pain systematic reviews have been able to identify and quantify potential publication bias (Soliman et al., 2021), likely due to methodological limitations (Zwetsloot et al., 2017).
While it is not always easy to measure, publication bias is likely to have a substantial adverse impact on our ability to interpret past preclinical results. This risk has been recognized since the 1970s, with psychologist Anthony Greenwald noting on the same subject: "…the research-publication system may be regarded as a device for systematically generating and propagating anecdotal information." (Greenwald, 1975). Unfortunately, modelling data confirm Greenwald's fears: Nissen and colleagues convincingly demonstrate that publication of data confirming the null hypothesis is essential for rejecting false facts and preventing them from being canonized as true (Nissen et al., 2016). There are powerful real-life illustrations of how long and how strongly false-positive results can prevail in a world biased towards significant effects. For example, it is estimated that there are at least 400+ studies on the association between the serotonin transporter gene SLC6A4 and depression, many of which reported significant results (Border et al., 2019). Critical voices appeared early on, questioning the methodology of the typical candidate-gene analyses that led to these results (Colhoun et al., 2003); but it has taken nearly 20 years for studies to emerge that specifically refute the link between SLC6A4 and depression, using large, well-powered cohorts or meta-analysis methods. In the meantime, any casual reader of this literature would have assumed that there is a lot of evidence in favor of the connection.

How can we combat publication and outcome-reporting bias? One obvious solution is pre-registration. It provides transparency and enables comparison of the completed study with what was initially planned. It also has the additional benefit of helping to avoid duplication. When pre-registration was made a legal requirement for clinical trials, it immediately affected the publication landscape: before 2000, when trials were not routinely logged on clinicaltrials.gov, 57% of studies claimed that their primary outcome was significant; after 2000, this figure dropped to 8% (Kaplan and Irvin, 2015). Similar benefits of pre-registration have been shown in psychology and neuroscience research (Soderberg et al., 2021). With pre-clinical work, it may seem unnecessarily laborious to pre-register, but we would argue that it is an important ethical matter: its benefits to the scientific community far outweigh the inconvenience to individuals. Systematic reviews and hypothesis-testing pre-clinical studies, e.g. those elucidating the effect of analgesic compound X on pain-like behaviors, should be registered. Indeed, there are now dedicated repositories for just this purpose, e.g. on the Open Science Framework (OSF), PreclinicalTrials.eu, or Prospero. Many journals also offer the new format of "Registered Reports", where you submit your study design and analysis plan for review before any data are generated. Leading pain journals, as well as Brain Behavior and Immunity, have yet to introduce this category and are thus presented with an easy opportunity to support the fight against reporting bias.
For those who are conducting hypothesis-generating work, it is still very useful to set up a scientific analysis and statistical plan, which can then be registered informally, either by uploading it into a private repository on OSF, or simply by time-stamping it as a pdf. This practice will help scientists to more clearly identify the model they are investigating, think about the size of the effect they are likely to observe (see below) and identify whether there could already be a hypothesis-testing element within a larger hypothesis-generating project.

Beyond this, we all need to make a concerted effort to publish as many of our experimental results as possible, whether they support the null hypothesis or not (Andrews et al., 2016). This is a goal that has also received increasing traction from funding agencies over the past several years. For example, several UK funders now have their own repositories, like Wellcome Open Research or the NC3Rs Gateway, that encourage deposition of null data. Ultimately, however, in our capacity as peer reviewers, we all have to work together to achieve a cultural shift: basing our recommendations for publication not on apparent novelty, but rather on the rigor and quality of experimental design, execution and reporting practices.

Reproducibility

A final significant issue that limits pre-clinical translatability is that many of the published 'positive' results are not fully reproducible. This problem has been discussed specifically in the context of pain research, but clearly spreads far beyond this narrow specialty to include general neuroscience (Button et al., 2013), cancer research and, of course, psychology (Open Science Collaboration, 2015), where some have even gone as far as arguing that it should no longer be viewed as a quantitative discipline (Yarkoni, 2022).

Reasons for poor reproducibility include a lack of pre-defined hypotheses, statistical shortcomings, poor experimental design (e.g. lack of blinding, inadequate controls, failure to verify the success of drug delivery in pharmacological studies) and poor reporting. In an interdisciplinary space, another common source of irreproducibility can be the imperfect transfer of techniques from one field to another. For example, knowledge of what a high-quality flow cytometry experiment looks like is still somewhat limited in the neuroscience community, while immunologists understandably may have a hard time assessing neuroscience results, e.g. immunostaining of cortical regions.

What are some possible solutions? In terms of improving the reproducibility of interdisciplinary science, cross-field reviewing and research collaborations across field-specific boundaries are absolutely essential and should be supported by funders. Indeed, agencies are starting to recognize this, with interdisciplinary calls becoming a lot more common.

In terms of more generic barriers to reproducibility, a lot of prior work has gone into making practical suggestions, including the generation of the ARRIVE 2.0 reporting guidelines (Percie du Sert et al., 2020), the PREPARE checklist for planning animal research (Smith et al., 2018) and a recently published formal framework for Enhancing Quality in Preclinical Data (EQIPD). EQIPD makes suggestions on broad experimental principles, e.g. the need to pre-define a hypothesis and statistical analysis plan, as well as the utmost importance of randomization and blinding; a minimal sketch of such a randomization step is shown below.
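The following Python sketch illustrates one way a randomized, blinded allocation could be generated. The group names, animal IDs and helper function are hypothetical illustrations of the general principle, not part of EQIPD itself.

```python
import random

def blinded_allocation(animal_ids, groups=("vehicle", "drug"), seed=42):
    """Randomly assign animals to groups; the experimenter only ever sees
    coded IDs, while the unblinding key is held by a third party."""
    rng = random.Random(seed)          # fixed seed so the allocation is documented
    shuffled = animal_ids[:]
    rng.shuffle(shuffled)
    per_group = len(shuffled) // len(groups)   # any remainder is left unassigned
    key = {}
    for g_idx, group in enumerate(groups):
        for animal in shuffled[g_idx * per_group:(g_idx + 1) * per_group]:
            key[animal] = group
    return key

# Example: 12 animals split across two groups.
unblinding_key = blinded_allocation([f"mouse{i:02d}" for i in range(1, 13)])
```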
Discussion and a call for better training

It seems, therefore, that we are not short of solutions. We are short of people who implement them. For example, the ARRIVE guidelines, originally developed in 2010, are the most comprehensive checklist for the conduct and reporting of animal experiments. However, a randomized controlled trial found that journal-requested completion of an ARRIVE checklist did not improve compliance with the guidelines, suggesting that editorial policy alone is not sufficient to improve comprehensive and transparent reporting (Hair et al., 2019). One barrier to uptake may be education: training in, and the availability and accessibility of, other tools designed to improve research quality are likely to facilitate implementation of the ARRIVE guidelines and ultimately improve the value of pre-clinical research over time (Vollert et al., 2020).

Pre-clinical scientists often assume that since they are conducting hypothesis-generating work, pre-planning of experimental design and statistics is not all that relevant for them. Many are simply unaware of how much a fluid and flexible design will affect the confidence level that one can have in any given result. In fact, a typical pre-clinical paper tends to be seen as reliable if it tests a model with a great variety of complex techniques, e.g. behavior, electrophysiology, immunostaining and Western blotting. If these varied sets of data are reasonably well executed and support a particular narrative, we are often quick to accept this: a new promising 'target X' is born, pursued in future studies and drug development pipelines, and highlighted in reviews and press releases.

However, this is a very risky approach. Given current norms around data reporting and statistics in hypothesis-generating studies, it is often unclear how often an experiment was repeated within a single modality. That is: how many times did the authors obtain a particular Western blot result? How often was a behavioral study conducted? This information is crucial. Without it, we are left with 1) significant selection bias, i.e. it is easy to forget or explain away when one blot in a series of three repeats does not quite support what we hope to see; and 2) a high risk of false positives in each modality, due to the small sample sizes we all use. We therefore risk ending up with a series of underpowered and biased experiments which, as we know from meta-analyses, do not suddenly add up to a large, confident whole.

There are many other examples of failures in statistical training, ranging from incorrect selection of parametric tests to the omission of multiple comparison corrections when they are required, e.g. when conducting 10 different behavioural tests or interrogating 7 different immune cell populations via flow cytometry. But there are even more basic failures, for instance when we consider the typical level of understanding of effect sizes in the pre-clinical space. The crucial distinction between 'observed' and 'true' effects is often entirely lost, with people basing their power calculations on a single observed effect derived from one small pilot experiment. This is a terrible practice, for reasons that are well explained elsewhere (Albers and Lakens, 2018), as it risks leading to an endless series of underpowered experiments.
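To make these two failure modes concrete, here is a minimal Python sketch of multiple-comparison correction across ten behavioural tests; the p-values are invented for illustration, and the choice of methods (Bonferroni and Benjamini-Hochberg) is ours, not a recommendation from any specific guideline.

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values from ten behavioural tests on the same cohort.
pvals = [0.004, 0.012, 0.049, 0.051, 0.21, 0.33, 0.44, 0.58, 0.71, 0.93]

for method in ("bonferroni", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in p_adj], list(reject))
```

And a minimal a-priori power calculation, using hypothetical effect sizes in the range discussed below (a large primary effect of d = 1.3 versus a smaller modulatory effect of d = 0.5):

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for d in (1.3, 0.5):
    n = power.solve_power(effect_size=d, alpha=0.05, power=0.8,
                          alternative="two-sided")
    print(f"d = {d}: ~{n:.0f} animals per group")
```

Under these assumptions, roughly 10-11 animals per group suffice for d = 1.3, but around 64 per group are needed for d = 0.5, which is exactly the gap between primary and modulatory effects discussed next.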
Worse still, pre-clinical scientists often fail to consider whether their experimental design is suitable for the kinds of effect sizes they are likely to observe. Sex differences are a great example of this. There is no doubt that there will be sex differences in neuro-immune interactions; after all, we know from decades of work in immunology that women have a stronger adaptive immune response (Klein and Flanagan, 2016). However, how big do we expect this effect to be? Presumably, in most cases sex is a smaller modulatory factor that influences a larger effect, e.g. an inflammogen within a joint will cause significant pain, but this large effect might increase or decrease slightly in size depending on sex. Continuing with this example, most pre-clinical pain studies are well powered to observe only very large effects, such as a Cohen's d of 1.3 or above. Let's assume that the effect of a pain-inducing injury or of a putative analgesic drug is indeed truly this large. In most cases, we would then expect the modulatory effect of sex on this injury or analgesic drug to be smaller, i.e. to require a larger sample size. And yet, the pre-clinical literature abounds with very small-scale studies (n = 6-12) on sex differences in rodents. Their results are frequently debated at length, especially when their conclusions are conflicting. In reality, it is quite likely that none of these studies are powered to reliably detect sex differences in neuro-immune interactions with our current pre-clinical experimental designs. If so, we may be spending time and money debating spurious statistical noise.

How can we improve on current training? Statistical educational material has greatly improved over the past several years, with many authors making a very complex and difficult branch of mathematics conceptually accessible to biologists. There are online textbooks (Lakens, 2022), lecture series, and introductory books for the general public (Spiegelhalter, 2019). Getting pre-clinical scientists to engage with these materials may be one way to help motivate them to adopt the many wonderful guidelines for experimental design and reporting that have been developed over the past decade, such as EQIPD or ARRIVE 2.0.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
FAM19A4 and hsa-miR124-2 Double Methylation as Screening for ASC-H- and CIN1 HPV-Positive Women

The DNA methylation levels of host cell genes increase with the severity of the cervical intraepithelial neoplasia (CIN) grade and are very high in cervical cancer. Our study aims to evaluate FAM19A4 and hsa-miR124-2 methylation in Atypical Squamous cells with high-grade squamous intraepithelial lesions (ASC-H) and in CIN1, defined as low-grade squamous intraepithelial lesions (LSILs) by the Bethesda classification, as possible early warning biomarkers for managing women with high-risk HPV infections (hrHPV). FAM19A4 and hsa-miR124-2 methylation tests were conducted on fifty-six cervical screening samples from a subset of women aged 30–64 years old. Specimens were collected into ThinPrep PreservCyt Solution. Their hrHPV genotype and cytology diagnosis were known. A QIAsure assay (Qiagen) was used for FAM19A4 and hsa-miR124-2 methylation testing on bisulfite-converted DNA, according to the manufacturer's specifications. The reported results were hypermethylation-positive or -negative. We found that FAM19A4 and hsa-miR124-2 methylation was detected in 75% of ASC-H cases with a persistent infection of hrHPV. A total of 60% of CIN1 lesions were found to be positive for methylation, and 83.3% were when the cytology was CIN2/3. In addition, as a novelty of this pilot study, we found that combined FAM19A4 and hsa-miR124-2 methylation positivity rates (both methylated) were associated with the HPV genotypes 16, 18, and 59 and covered 22 and 25% of ASC-H and CIN1 cases, respectively. The methylation of these two genes, in combination with HPV genotyping, can be used as an early warning biomarker in the management and follow-up of women with ASC-H and CIN1 to avoid their progression to cervical cancer.
Introduction

The epidemiological surveillance of human papillomavirus (HPV) infection and its related diseases is crucial for monitoring and evaluating the currently available antiviral prophylactic vaccines [1]. HPV infection is the primary cause of cervical cancer among women [2]. HPV infection positivity occurs in more than 80% of cervical cancer cases worldwide [3]. Among women with normal cervical cytology, the highest HPV prevalences were found in Oceania (21.8%, estimated to be 30.9% in 2019) and Africa (21.1%), followed by Europe (14.2%), America (11.5%), and Asia (9.4%) [4]. In addition, HPV infection rates are higher in developing regions (42.2%) than in developed regions (22.6%) [5][6][7]. Nevertheless, its prevalence is quite high in Eastern Europe (21.4%). Adolescent girls and women under 25 were the most infected. However, in the African (East and West Africa) and American (Central and Southern America) regions, there was a rebound in HPV infections in adults over 45 years old [4,8]. In Italy, HPV infection data emphasize the importance of the 9-valent vaccine as well as the national screening program for cervical cancer; 2918 new cases of cervical cancer show the burden of disease attributable to HPV in Italy, where there are 2065 cases of neck cancer in both genders and about 100 new cases of penile cancer per year in men [9]. HPV infections are associated with cervical intraepithelial neoplasia (CIN), which can be divided into low- and high-risk grades. A low-grade CIN1 lesion can spontaneously resolve; in fact, it is also referred to as a low-grade squamous intraepithelial lesion (LSIL). CIN2 refers to abnormal changes in the epithelial cervical layer that form a grey zone, since 50% of these lesions can regress, especially in young women [10]. The last grade is CIN3, or high-grade squamous intraepithelial lesions (HSILs), which is the most severe form, for which surgical treatment is needed. Over the past few years, it has become evident that epigenetic events, and in particular differential HPV gene methylation events, substantially contribute to the regulation of the papillomavirus's life cycle [11,12] and, therefore, to infection progression. It is well recognized and robustly proven that silencing tumor suppressor genes through the local hypermethylation of CpG-rich promoter regions contributes to cancer development [13,14]. DNA methylation, at the 5′ position of a cytosine molecule in CpG dinucleotides [15], is a biochemical mechanism that induces the covalent binding of a methyl group to this region. Methylation analysis is a promising triage tool for high-risk HPV (hrHPV)-positive women [16]. Studies have shown that the DNA methylation levels of host cell genes increase with the severity of the CIN grade and are very high in cervical cancer [17,18]. The functional relevance of methylation-mediated gene silencing during HPV-induced carcinogenesis in the host has been demonstrated for a subset of the currently known methylation gene targets, including CADM1, MAL, PAX1, FAM19A4, and hsa-miR124-2 [18,19]. It is worth mentioning that the absence of both FAM19A4 and hsa-miR124-2 methylation is associated with the high regression rate of CIN2 lesions, as was reported in the CONCERVE Study [20], which also highlights the regression of Atypical Squamous cells with high-grade squamous intraepithelial lesions (ASC-H) when negative for methylation. CIN1 and ASC-H represent a clinical dilemma, since a variable percentage, from 5% to 20%, of this type of lesion progresses to HSILs and cancer. Our aim in
this study was to evaluate the FAM19A4 and hsa-miR124-2 methylation grades, in combination and alone, in ASC-H cytologies and in CIN1, also defined as low-grade squamous intraepithelial lesions (LSILs), in women with high-risk HPV infections (noting that CIN2/3 is defined as high-grade squamous intraepithelial lesions, HSILs, by the Bethesda classification [21]), as well as in negative samples, as we speculate that it may be women with double FAM19A4 and hsa-miR124-2 methylation who progress to cervical cancer (Figure 1).

Materials and Methods

From June until August 2023, in the central area of Catanzaro in the Calabria region, following a primary HPV screening, we collected data from 491 women aged from 30 to 64; 11% were at their one-year follow-up. Specimens were collected into ThinPrep PreservCyt Solution (Hologic, Marlborough, MA, USA). An explicative flowchart is in Scheme 1. Data were analyzed retrospectively. Multiplex real-time PCR utilizing Dual Priming Oligonucleotide (DPO) and Tagging Oligonucleotide Cleavage and Extension (TOCE) technologies (Anyplex II HPV HR Detection, Seegene, distributed in Italy by Arrow Diagnostics) was used to detect 14 high-risk HPV genotypes simultaneously in a single reaction tube. A QIAsure assay (Qiagen) was used for FAM19A4 and hsa-miR124-2 methylation testing on bisulfite-converted DNA, according to the manufacturer's specifications. Based on a three-step reaction, this technique involves treating DNA with bisulfite, which converts unmethylated cytosines into uracil; methylated cytosines remain unchanged during this treatment. Once converted, the methylation profile of the DNA can be determined by PCR amplification (Zymo Research, Irvine, CA, USA) using an input sample of 2.5 µL of bisulfite-converted DNA and the PCR instrument. Additional quality assurance was employed using the housekeeping gene β-actin (ACTB) as a reference for successful bisulfite conversion, sample quality, and signal normalization. According to the manufacturer's instructions, the software runs the assays, followed by automatic quality assurance and data analysis, resulting in ΔCt and ΔΔCt value thresholds for FAM19A4 and/or hsa-miR124-2. Briefly, the ΔCt values were calculated as the difference between the Ct value of the FAM19A4 or hsa-miR124-2 target and the Ct value of the reference (ACTB). For normalization, the ΔCt value of a calibrator sample (standardized low-copy plasmid DNA sample) is subtracted from the ΔCt of the FAM19A4 or hsa-miR124-2 target, resulting in a ΔΔCt value. The reported results were methylation-positive for FAM19A4 and -negative for hsa-miR124-2, methylation-negative for FAM19A4 and -positive for hsa-miR124-2, or methylation-positive for both FAM19A4 and hsa-miR124-2 (see Supplementary Materials).
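As an illustration of the ΔCt/ΔΔCt logic just described, a minimal Python sketch follows. This is not Qiagen's QIAsure software; the cut-off value and decision rule are placeholders rather than the manufacturer-defined thresholds.

```python
# Hypothetical decision threshold; the real assay uses manufacturer-defined
# cut-offs applied automatically by the instrument software.
DELTA_DELTA_CT_CUTOFF = 0.0

def delta_delta_ct(ct_target, ct_actb, ct_target_cal, ct_actb_cal):
    """dCt = Ct(target) - Ct(ACTB); ddCt = dCt(sample) - dCt(calibrator)."""
    d_sample = ct_target - ct_actb
    d_calibrator = ct_target_cal - ct_actb_cal
    return d_sample - d_calibrator

def methylation_call(ddct_fam19a4, ddct_mir124):
    """Classify a sample into the three reported result categories.
    The '<= cutoff means positive' rule is a placeholder assumption."""
    fam = ddct_fam19a4 <= DELTA_DELTA_CT_CUTOFF
    mir = ddct_mir124 <= DELTA_DELTA_CT_CUTOFF
    return {"FAM19A4": fam, "hsa-miR124-2": mir, "double_positive": fam and mir}
```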
Bioinformatics

We used two bioinformatics tools: the MethHC version 2.0 platform, released in 2021, for assessing methylation levels, and the Kyoto Encyclopedia of Genes and Genomes (KEGG) to identify the biochemical pathways related to FAM19A4 and hsa-miR124-2.
Results

Using a retrospective approach, we included a subset of cervical samples with a 1-year follow-up within the screened cohort, resulting in a percentage (11%) of women with positive hrHPV infections, with a mean age of 43 ± 9.4 (range: 30-64 years), being included in the study. As a control reference, we included in the trials eight samples negative for intraepithelial lesion malignancies (NILMs) and negative for hrHPV DNA (mean age 37 ± 5.18), as shown in Table 1. Twenty-four women had ASC-H (mean age 42 ± 8.96), with a higher prevalence of the hrHPV genotypes 16, 18, and 31 (highlighted in bold) and three coinfections. Twenty were diagnosed with CIN1 (mean age 40 ± 8.25), with a higher prevalence of the hrHPV genotypes 18, 33, and 56 and one coinfection, and twelve with CIN2/3 (mean age 49 ± 10.09), with a higher prevalence of the hrHPV genotypes 16, 39, and 59 and two coinfections, as highlighted in bold. The median time between the baseline clinically collected sample and baseline cytology was 28 days. Table 1 shows the methylation positivity rates, stratified by cytology grade and hrHPV positivity. We found that FAM19A4 and hsa-miR124-2 methylation detects the CIN2/3 lesions at the highest risk of progression to cervical cancer and that, with a long-lasting HPV infection, positivity increases with age. Concerning ASC-H, 75% of cases were recognized as positive for methylation, with a persistent hrHPV infection. Of the CIN1 lesions, 60% were found to be positive in the methylation analysis, as shown in Table 2. FAM19A4 and/or hsa-miR124-2 methylation positivity was highest, equal to 83.3%, when the cytology was CIN2/3. The hrHPV genotypes for the cytology grades are also reported in the note regarding the asterisks. Given the small sample size analyzed here, Fisher's exact test was conducted on contingency tables. The data show that the association between ASC-H and NILM methylated-gene groups is extremely statistically significant, with a two-tailed p-value of 0.0003; it is very statistically significant between CIN1 and NILM, with a two-tailed p-value of 0.0084; and between NILM and CIN2/3 methylated genes it is extremely statistically significant, with a p-value of 0.0007, as shown in Figure 2, panels A, B, and C, respectively.

Table 3 shows the FAM19A4 and hsa-miR124-2 methylation results, in combination and alone, linked to the hrHPV genotyping positivity rates. We found that the combined FAM19A4 and hsa-miR124-2 methylation positivity rate was associated with the HPV genotypes 16, 18, and 59, as highlighted in bold. In addition, we found that the positivity rate of FAM19A4 methylation alone was associated with the HPV genotypes 16, 18, 39, and 66, as highlighted in bold, and the positivity rate of hsa-miR124-2 methylation alone was associated with the HPV genotypes 16, 18, 31, 39, and 58, as highlighted in bold. Table 4 shows the FAM19A4 and hsa-miR124-2 methylation results in combination and alone. We found that the methylation of both the FAM19A4 and hsa-miR124-2 genes in ASC-H was 22.2%, while in CIN1 it was 25%, and in CIN2/3 20%. In addition, the methylation positivity results for FAM19A4 alone show the following percentage values: ASC-H, 33.4%; CIN1, 50%; and CIN2/3, 50%. Those of hsa-miR124-2 show the following percentage values: ASC-H, 44.4%; CIN1, 25%; and CIN2/3, 30%.

Table 3. hrHPV genotyping results stratified by FAM19A4 and hsa-miR124-2 methylation positivity rates, both in combination and alone.
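The Fisher's exact comparisons reported above can be reproduced in outline as follows. The cell counts here are illustrative placeholders consistent with the reported percentages (18/24 ASC-H methylation-positive, 0/8 NILM controls positive), not the study's raw data table.

```python
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = cytology group (ASC-H vs NILM),
# columns = methylation-positive vs methylation-negative.
table = [[18, 6],   # ASC-H: ~75% of 24 cases methylation-positive
         [0, 8]]    # NILM controls: assumed none methylation-positive
_, p_two_tailed = fisher_exact(table, alternative="two-sided")
print(f"two-tailed p = {p_two_tailed:.4f}")
```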
The methylation of Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC) for FAM19A4 and hsa-miR124-2 was examined using the MethHC version 2.0 platform, released in 2021, which counts 27,190 DNA methylation data points, 1732 expression data points, and 11,196 microRNA data points across 33 different types of tumors. The data on the methylation and expression of hsa-miR124-2 in CESC are shown in Figure 3; the plot shows its methylation on the y axis and its expression on the x axis. The level of hsa-miR-124-2 expression in normal tissue is the baseline, and it increases up to 0.2 in CESC. The KEGG analysis of hsa-miR-124-2, which highlighted its importance in cancer pathways, is presented in the Supplementary Materials (Figure S1). No data are reported for FAM19A4, even when searching for it in the MethHC version 2.0 platform using the alias TATA4, or in KEGG.
Discussion

Human health and disease are not only maintained by the DNA code, but also by the precise regulation of gene transcription and its epigenetic biochemical regulatory apparatus. Non-coding RNAs, either long or microRNA, are also part of this regulatory mechanism, in which the methylation patterns of normal cell physiology are often disturbed by aberrant cell growth or viral infections [22]. DNA methylation is the most extensively studied epigenetic change in HPV-related cancers. Although cervical cytology, also known as Pap test screening, is an effective tool for identifying premalignant changes in the cervical epithelium, a clinical debate is still ongoing with respect to the management of low-grade cervical abnormalities, known as ASC-H and CIN1. In this study, using a retrospective approach, we analyzed the methylation of FAM19A4 and hsa-miR124-2 in a subset of cervical samples with persistent HPV infections. A similar study was conducted in a large cohort of hrHPV specimens from several European countries; in any case, data from Italy were not present in that study [23], and the double methylation of FAM19A4 and hsa-miR124-2 was not reported, as we pointed out here (see Table 3). The characteristics of the clinical patients in our subset of samples with positive hrHPV infections are in line with the literature. Indeed, globally, the most common HPV viral genotypes infecting women with a normal cytology are HPV 16, 31, 52, and 53. Similar data were found in a retrospective study of the genotype distributions in our region, with a prevalence of 16.9% for HPV 16, followed by 9.1% for HPV 31 [24]. In cervical cancer, the major viral genotypes are HPV 16 and 18, associated with over 55% and 14% of cervical cancers, respectively [4]. The overall mean age of the women was 43 ± 9.4. The ASC-H group had a mean age of 42 ± 8.96, with the occurrence of the hrHPV genotypes 16/18/31. The CIN1 group had a mean age of 40 ± 8.25 and was positive for the occurrence of the genotypes hrHPV 18 and 33, while the CIN2/3 group had a mean age of 49 ± 10.09 and was positive for the occurrence of the hrHPV genotypes 16, 39, and 59. The methylation of FAM19A4 and hsa-miR124-2 was highlighted in our results. In particular, the total methylation percentage found in ASC-H was 75%, while it was 60% for CIN1. The methylation was higher in CIN2/3, equal to 83.3%, as expected, since DNA methylation is a well-established method for regulating gene expression. In ASC-H, methylation was already evidenced in the PAX1 gene [25], and other methylation markers such as ASCL1, LHX8, ST6GALNAC5, GHSR, ZIC1, and SST were found in CIN1. To assess their significance, Fisher's exact test was conducted. The contingency table shows that, for ASC-H and NILM, these methylated genes are considered extremely significant, as their difference had a two-tailed p-value of 0.0003; the findings from our study support the need for
regular cytological follow-ups of women with ASC-H, as recommended so far by the American Society for Colposcopy and Cervical Pathology's 2006 Consensus Guidelines [26]. The difference in CIN1's methylated genes compared with NILM's has a two-tailed p-value of 0.0084. As expected, CIN2/3 shows the same statistical trend, with a p-value of 0.0007. The methylation of hsa-miR124-2 is linked to Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC). In our subset of samples, we found that hsa-miR124-2 shows the following positive percentage values: in ASC-H, 44.4%; in CIN1, 25%; and in CIN2/3, 30%. The methylation rate of both the hsa-miR-124-2 gene and the FAM19A4 gene (double methylation) in ASC-H was 22.2%, in CIN1 it was 25%, and in CIN2/3 20%. In addition, FAM19A4 methylation positivity alone shows the following percentage values: ASC-H, 33.4%; CIN1, 50%; and CIN2/3, 50%. It is important to remember that CIN2/3 lesions with an HPV infection have the highest risk of progression to cervical cancer, and that this risk increases with age. Dovnik and Poljak also reported these results in a recent review: the authors showed that DNA methylation accurately predicts disease progression and is a valid triage tool for HPV-positive women, performing better than triage cytology for CIN2 [27]. A negative methylation result for both FAM19A4 and hsa-miR-124-2 was associated with a low long-term risk of cervical cancer in a Dutch longitudinal study [28]. We found that the genotypes 16 and 18 were frequently associated with FAM19A4 and/or hsa-miR-124-2 (even both) methylation positivity in hrHPV infections.

In our opinion, this class of double-positive results for FAM19A4 and hsa-miR-124-2 methylation should be considered more closely, by applying an extended genotyping screening test instead of a partial one (only 16 and 18, or others). Methylation tests such as FAM19A4 and hsa-miR-124-2, in combination with cytology or HPV genotyping, can be used as an early warning biomarker in the management of women with CIN1 or ASC-H. Both represent a clinical dilemma in terms of their management, since from 5.2% to 18.8% of these types of lesion progress to HSILs and cancer [29,30]. The test is proposed as a substitute for, or addition to, cytology for the reflex testing of HPV screen-positive women. The worldwide prevalence of cervical HPV infections is estimated to be 11.7% (95% CI: 11.6-11.7%) in women with normal cytological findings and is significantly higher for women with an abnormal cytology, even after surgery and vaccination [31][32][33]. The advent of FDA-approved molecular testing for diagnosing HPV infections has led to a dramatic shift from cytology testing alone to a combination of cytology and molecular testing for primary HPV screening. This screening practice should also be considered for host-gene methylation in hrHPV infections, as was recently pointed out by Salsa and co-workers [34]. In fact, their recent systematic review and meta-analysis confirmed that DNA methylation-based markers constitute a promising tool for assessing hrHPV positivity and decreasing its overdiagnosis and overtreatment. With this type of screening program, the pressure on colposcopy units could decrease, improving waiting times and health costs. The advancement of cervical cancer screening programs over the decades has been constant, searching for the optimal balance between reliability and effectiveness. Methylation markers may be the next advancement, improving women's adherence to screening,
cost-effectiveness, and quality of life. In line with these conclusions, and pioneering this vision, in July 2019 the governments of the Netherlands and Turkey were the only ones in Europe to have fully implemented national HPV-based cervical cancer screening. Sweden, Finland, and Italy have implemented HPV-based screening in several regions (not yet in Calabria), and several other countries are at various stages of implementation [35]. Nevertheless, a limitation of the present study is the presence of HPV-66 in the diagnostic panel; according to the scientific evidence, it could also be classified as an intermediate-risk type or one associated with HSIL [36]. HPV-66 was removed from the high-risk group by the International Agency for Research on Cancer (IARC) because it is more prevalent in normal cytologies than in invasive cervical cancer and, therefore, it is wrong to still include it in HPV screening tests [37].

Table 1. Baseline cytology grades; mean age ± standard deviation; number of hrHPV genotypes; and their percentages.

Table 4. FAM19A4 and hsa-miR124-2 methylation positivity, in combination and alone, stratified by cytology grade and percentage.
Multiview Pseudo-Labeling for Semi-supervised Learning from Video
We present a multiview pseudo-labeling approach to video learning, a novel framework that uses complementary views in the form of appearance and motion information for semi-supervised learning in video. The complementary views help obtain more reliable pseudo-labels on unlabeled video, to learn stronger video representations than from purely supervised data. Though our method capitalizes on multiple views, it nonetheless trains a model that is shared across appearance and motion input and thus, by design, incurs no additional computation overhead at inference time. On multiple video recognition datasets, our method substantially outperforms its supervised counterpart, and compares favorably to previous work on standard benchmarks in self-supervised video representation learning.

Introduction
3D convolutional neural networks (CNNs) [54,7,55,16] have shown steady progress for video recognition, and particularly human action classification, over recent years. This progress also came with a shift from traditionally small-scale datasets to large amounts of labeled data [30,5,6] to learn strong spatiotemporal feature representations. Notably, as 3D CNNs are data hungry, their performance has never been able to reach the level of hand-crafted features [56] when trained from scratch on smaller-scale datasets [48]. However, collecting a large-scale annotated video dataset [20,6] for the task at hand is expensive and tedious, as it often involves designing and implementing annotation platforms at scale and hiring crowd workers to collect annotations. For example, a previous study [47] suggests it takes at least one dollar to annotate a single video with 157 human activities. Furthermore, the expensive annotation process needs to be repeated for each task of interest or when the label space needs to be expanded. Finally, another dilemma that emerges with datasets collected from the web is that they vanish over time as users delete their uploads, and therefore need to be replenished in a recurring fashion [49].

Figure 1: MvPL draws on complementary views of a video and uses a shared model to perform semi-supervised learning; after training, a single RGB view is used for inference.

The goal of this work is semi-supervised learning in video, to learn from both labeled and unlabeled data and thereby reduce the amount of annotated data required for training. Scaling video models with unlabeled data is a setting of high practical interest, since collecting large amounts of unlabeled video requires minimal human effort. Still, thus far this area has received far less attention than fully supervised learning from video. Most prior advances in semi-supervised learning in computer vision focus on the problem of image recognition. "Pseudo-labeling" [37,63,60,50] is a popular approach to utilizing unlabeled images. The idea is to use the predictions from a model as target labels and gradually add the unlabeled images (with their inferred labels) to the training set. Compared to image recognition, semi-supervised video recognition presents its own challenges and opportunities. On the one hand, the temporal dimension introduces some ambiguity: given a video clip with an activity label, the activity may occur at any temporal location. On the other hand, video can also provide a valuable, complementary signal for recognition in the way objects move in space-time; e.g. the actions 'sit-down' vs. 'stand-up' cannot be discriminated without using the temporal signal.
More specifically, video adds information about how actors, objects, and the environment change over time. Therefore, directly applying semi-supervised learning algorithms designed for images to video can be sub-optimal (we verify this point in our experiments), as image-based algorithms only consider appearance information and ignore the potentially rich dynamic structure captured by video. To address this challenge, we introduce multiview pseudo-labeling (MvPL), a novel framework for semi-supervised learning designed for video. Unlike traditional 3D CNNs that implicitly learn spatiotemporal features from appearance, our key idea is to explicitly force a single model to learn appearance and motion features by ingesting multiple complementary views¹ that augment the labeled data. We consider visual-only semi-supervised learning, and all the views are computed from RGB frames. Therefore our method does not require any additional modalities, nor does it require any change to the model architecture to accommodate the additional views. Our proposed multiview pseudo-labeling is general and can serve as a drop-in replacement for any pseudo-labeling based algorithm [37,63,60,50] that currently operates only on appearance, namely by augmenting the model with multiple views and our ensembling approach to infer pseudo-labels.

Our method rests on two key technical insights: 1) a single model that nonetheless benefits from multiview data; and 2) an ensemble approach to infer pseudo-labels. First, we convert both optical flow and temporal differences to the same input format as RGB frames so that all the views can share the same 3D CNN model. The 3D CNN takes only one view at a time and treats optical flow and temporal gradients as if they were RGB frames. The advantage is that we directly encode appearance and motion in the input space and distribute the information through multiple views, to the benefit of the 3D CNN. Second, when predicting pseudo-labels for unlabeled data, we use an ensemble of all the views. We show that predicting pseudo-labels from all the views is more effective than predicting from a single view alone. Our method uses a single model that can seamlessly accommodate different views as input for video recognition. See Figure 1 for an overview of our approach.

¹We use the term view to refer to different input types (RGB frames, optical flow, or RGB temporal gradients), as opposed to camera viewpoints.

In summary, this paper makes the following contributions:
• This work represents an exploration in semi-supervised learning for video understanding, an area that is heavily researched in image understanding [9,21,27,50,59]. Our evaluation establishes semi-supervised baselines on Kinetics-400 (1% and 10% label cases) and UCF101, mirroring the image domain, which uses 1% and 10% of labels on ImageNet [9,27,50,59].
• Our technical contribution is a novel multiview pseudo-labeling framework for general application in semi-supervised learning from video, which delivers consistent improvements in accuracy across multiple pseudo-labeling algorithms.
• On several challenging video recognition benchmarks, our method substantially improves on its single-view counterpart. We obtain state-of-the-art performance on UCF101 [51] and HMDB-51 [34] when using Kinetics-400 [30] as unlabeled data, and outperform video self-supervised methods in this setting.
Related Work
Semi-supervised learning in images. Most prior advances in semi-supervised learning in computer vision focus on image recognition. Regularization on unlabeled data is a common strategy for semi-supervised learning. Entropy regularization [21] minimizes the conditional entropy of class probabilities for unlabeled data. Consistency regularization forces the model representations to be similar when augmentations are applied to unlabeled data [46]. VAT [41] uses adversarial perturbations, while UDA [59] applies RandAugment [10] for augmentations. Pseudo-labeling [37,63], or self-training [60], is another common strategy for semi-supervised learning, where predictions from a model are used as pseudo-labels for unlabeled data. Pseudo-labels can be generated using a consensus of previous model checkpoints [36] or an exponential moving average of model parameters [53]. FixMatch [50] predicts pseudo-labels from weak augmentations to guide learning on strong augmentations generated with RandAugment [10]. Unlike any of the prior work above, we consider video, and our method leverages multiple complementary views.

Semi-supervised learning in videos. Compared to images, semi-supervised learning for video has received much less attention. The work of [65] applies an encoder-decoder framework to minimize a reconstruction loss. The work in [29] combines pseudo-labeling and distillation [18] from a 2D image classifier to assist video recognition. However, none of the prior semi-supervised work capitalizes on the rich views (appearance, motion, and temporal gradients) in videos. To the best of our knowledge, we are the first to explore multiple complementary views for semi-supervised video recognition. Co-training [3] is a seminal work on semi-supervised learning with two views, first introduced for the web page classification problem. Co-training learns separate models for each view, whereas we share a single model across all views. Our idea has the key advantage that a single model can directly leverage the complementary sources of information from all the views. Our experiments demonstrate that our design outperforms co-training in this video learning setting.

Self-supervised learning. Another common direction for leveraging unlabeled video data is self-supervised learning. Self-supervised learning first learns feature representations from a pretext task (e.g., audio-video synchronization [33], clustering [1], clip order [62], and instance discrimination [45]), where the labels are generated from the data itself, and then fine-tunes the model on downstream tasks with labeled data. Self-supervised learning in video can leverage multiple modalities by learning the correspondence between visual and audio cues [44] or between video and text [67,40]. Appearance and motion [22] can be used to boost performance in a contrastive learning framework or to address domain adaptation [43]. Self-supervised training learns task-agnostic features, whereas semi-supervised learning is task-specific. As suggested in [65], semi-supervised learning can also leverage a self-supervised task as pre-training, i.e. the two ideas are not exclusive, as we will also show in the results.

Multi-modal video recognition. Supervised video recognition can benefit from multi-modal inputs. Two-stream networks [48,17] leverage both appearance and motion. Temporal gradients [57,66] can be used in parallel with appearance and motion to improve video recognition. Beyond visual cues, audio signals [58,31] can also assist video recognition.
We consider visual-only semi-supervised learning for video recognition. Like [57,66], we use appearance, motion, and temporal gradients, but unlike any of these prior models, our approach addresses semi-supervised learning.

Multiview Pseudo-Labeling (MvPL)
We focus on semi-supervised learning for videos, and our objective is to train a model using both labeled and unlabeled data. Our main idea is to capitalize on the complementarity of appearance and motion views for semi-supervised learning from video. We first describe how we extract multiple views from video (§3.1), followed by how we use a single model to seamlessly accommodate all the views for multiview learning and how we obtain pseudo-labels with a multiview ensemble approach (§3.2). Subsequently, §3.3 outlines three concrete instantiations of our approach and §3.4 provides implementation specifics.

Multiple views of appearance and dynamics
Many video understanding methods only consider a single view (i.e., RGB frames), thereby possibly failing to model the rich dynamics in videos. Our goal is to use three complementary views, in the form of RGB frames, optical flow, and RGB temporal gradients, to investigate this. Our motivation is that: (i) RGB frames ($V$) record the static appearance at each time point but do not directly provide contextual information about object/scene motion; (ii) optical flow ($F$) explicitly captures motion by describing the instantaneous image velocities along both horizontal and vertical axes; (iii) temporal gradients ($\partial V / \partial t$) between two consecutive RGB frames encode appearance change and correspond to dynamic information that deviates from a purely static scene. Compared to optical flow, temporal gradients accentuate changes at the boundaries of moving objects. All three views are related to, and can be estimated from, each other by solving for optical flow using the brightness constancy equation [26],

$$\nabla V \cdot F + \frac{\partial V}{\partial t} = 0,$$

with $\nabla \equiv \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}\right)$, $F$ the point-wise velocity vector of the video brightness $V$, and $\frac{\partial V}{\partial t}$ the temporal gradient at a single position in space, $\mathbf{x} = (x, y)$, and time $t$. Even so, we find empirically that all three views expose complementary sources of information about appearance and motion that are useful for video recognition. This finding is related to the complementarity of hand-crafted space-time descriptors that have been successful in the past (e.g. histograms of space/time gradients [11,32,14] and optical flow [12,56]).
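To make the quantities in the brightness constancy equation concrete, the following sketch (our illustration, not the authors' code) evaluates the equation's residual for a pair of grayscale frames and a candidate flow field, using simple finite differences.

```python
import torch

def brightness_constancy_residual(V: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Residual of the brightness constancy equation for two frames.

    V:    (2, H, W) tensor with consecutive grayscale frames.
    flow: (2, H, W) tensor with per-pixel (u, v) velocities.
    Returns an (H-1, W-1) residual of  dV/dx * u + dV/dy * v + dV/dt;
    a perfect flow field on ideal data would drive this toward zero.
    """
    # Spatial gradients of the first frame via finite differences.
    dVdx = V[0, :, 1:] - V[0, :, :-1]        # (H, W-1)
    dVdy = V[0, 1:, :] - V[0, :-1, :]        # (H-1, W)
    # Crop everything to a common (H-1, W-1) grid.
    dVdx = dVdx[:-1, :]
    dVdy = dVdy[:, :-1]
    dVdt = (V[1] - V[0])[:-1, :-1]           # temporal gradient
    u, v = flow[0, :-1, :-1], flow[1, :-1, :-1]
    return dVdx * u + dVdy * v + dVdt
```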
Learning a single model from multiple views
One way to accommodate multiple views for learning is to train a separate model for each view and co-train their parameters [3]. However, each view then only implicitly interacts with the other views, through predictions on unlabeled data. Another alternative is to use multiple network streams [48,17]. Here, however, the number of model parameters and the inference time grow roughly linearly with the number of streams, and during testing each stream has to be processed. Instead, we propose to train a single model for all the complementary views by converting all the views to the same input format (i.e. we train a single model $f$, and it can take any view as input). By sharing the same model, the complementary views serve as additional data augmentation to learn stronger representations. Compared to training separate models, our model can directly benefit from all the views instead of splitting knowledge between multiple models. Further, this technique does not incur any additional computation overhead after learning, as only a single view is used for inference.

Formally, given a collection of labeled video data $X = \{(x_i, y_i)\}_{i=1}^{N_l}$, where $y_i$ is the label for video instance $x_i$, $N_l$ is the total number of labeled videos, and $M$ is the total number of views (with $x_i^m$ denoting view $m$ of instance $x_i$), and a collection of unlabeled video data $U = \{u_i\}_{i=1}^{N_u}$, our goal is to learn a classifier $f$ by leveraging both labeled and unlabeled data. We use a supervised cross-entropy loss $\ell_s$ for labeled data and another cross-entropy loss $\ell_u$ for unlabeled data. For our training batches, we assume $N_u = \mu N_l$, where $\mu$ is a hyperparameter that balances the ratio of labeled and unlabeled samples $N_l$ and $N_u$, respectively.

Supervised loss. For labeled data, we extend the supervised cross-entropy loss $H$ to all the views:

$$\ell_s = \frac{1}{M N_l} \sum_{i=1}^{N_l} \sum_{m=1}^{M} H\big(y_i, f(A(x_i^m))\big),$$

where $y_i$ is the label, and $A$ is a family of augmentations (e.g. cropping, resizing) applied to input $x_i^m$ of view $m$.

Pseudo-label generation from multiple views. For the unlabeled data, we use an ensembling approach to obtain pseudo-labels. Given an unlabeled video with a view $u_i^m$, let $s_i^m$ denote the pseudo-label class distribution, which is required because some of the instantiations we consider in the next section filter out samples if the prediction is not confident. The predicted class distribution is $q_i^m = f(A(u_i^m))$, where $A$ again corresponds to the family of augmentations applied to input $u_i^m$, and the class with the highest probability is $\hat{q}_i^m = \arg\max(q_i^m)$. We explore the following variants to obtain the pseudo-label $\hat{s}_i^m$ given the class distribution predictions from all the views.

(i) Self-supervision. For each $u_i^m$, we directly use its own most confident prediction as the pseudo-label, i.e. $\hat{s}_i^m = \hat{q}_i^m$. This is the most straightforward way to generate pseudo-labels; however, each view only supervises itself and does not benefit from the predictions of the other views.

(ii) Random-supervision. For each $u_i^m$, we randomly pick another view $n \in (1, \ldots, M)$ and use the prediction of that view as the pseudo-label, i.e. $\hat{s}_i^m = \hat{q}_i^n$.

(iii) Cross-supervision. We first build a bijection $b(m)$ over the views such that each view is deterministically mapped to another view and never to itself. Then, for each $u_i^m$, we use $\hat{s}_i^m = \hat{q}_i^{b(m)}$. This is similar to co-training [3] in the two-view case.

(iv) Aggregated-supervision. For each unlabeled video, we obtain pseudo-labels by taking a weighted average of the predictions from all the views, $s_i^m = \sum_{n=1}^{M} w_n q_i^n$, and then set $\hat{s}_i^m = \arg\max(s_i^m)$. Note that in this case all the views of video $u_i$ share the same pseudo-label, which thus contains information from all the views. We specify how to obtain the weight for each view in the implementation details.

Unsupervised loss. Learning from the pseudo-labels is driven by

$$\ell_u = \frac{1}{M N_u} \sum_{i=1}^{N_u} \sum_{m=1}^{M} \mathbb{1}\big[\max(s_i^m) \geq \tau\big]\, H\big(\hat{s}_i^m, f(\hat{A}(u_i^m))\big), \qquad (4)$$

where $\tau$ is a threshold used to filter out unlabeled data if the prediction is not confident, and $\hat{A}$ is a second family of augmentations whose role depends on the instantiation. The total loss is $\ell = \ell_s + \lambda_u \ell_u$, where $\lambda_u$ controls the weight of the unlabeled data. Sec. 3.4 provides implementation details on the specific augmentations used.

MvPL instantiations
Our MvPL framework is generally applicable to semi-supervised learning algorithms that are based on pseudo-labels. We instantiate our approach by unifying multiple methods in the same framework and analyze the commonality across methods. In this paper we concentrate on Pseudo-Label [37], FixMatch [50], and UDA [59]. At a high level, these methods differ only in their utilization of unsupervised data in Eq. (4).
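A minimal sketch of aggregated-supervision combined with the thresholded loss of Eq. (4) is given below. It assumes a classifier `model` that maps a single-view clip to logits; the function and variable names are ours, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def unlabeled_loss(model, views_weak, views_strong, weights, tau=0.3):
    """views_weak / views_strong: lists of M tensors (B, C, T, H, W),
    one per view (e.g. RGB, flow, temporal gradients) under weak and
    strong augmentation. weights: list of M floats summing to 1."""
    with torch.no_grad():
        # Aggregated-supervision: a weighted average of per-view
        # predictions on weakly augmented inputs yields one shared
        # class distribution per clip.
        probs = sum(w * F.softmax(model(x), dim=1)
                    for w, x in zip(weights, views_weak))
        conf, pseudo = probs.max(dim=1)      # confidence and hard label
        mask = (conf >= tau).float()         # Eq. (4) confidence filter

    loss = 0.0
    for x in views_strong:
        # Every strongly augmented view is trained toward the shared
        # pseudo-label; unconfident clips are masked out of the loss.
        per_clip = F.cross_entropy(model(x), pseudo, reduction="none")
        loss = loss + (per_clip * mask).mean()
    return loss / len(views_strong)
```

In a FixMatch-style instantiation, `views_weak` would carry flips and crops and `views_strong` the RandAugment outputs described in §3.4.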
We summarize our instantiations next.

Pseudo-Label. Pseudo-Label [37] uses the prediction from a sample itself as supervision. To apply our framework with Pseudo-Label, we simply use the same family of augmentations for obtaining pseudo-labels and for learning from them, i.e. $\hat{A} = A$.

FixMatch. The main idea of FixMatch [50] is to predict pseudo-labels from weakly augmented data and then use the pseudo-label as the learning target for a strongly augmented version of the same data. Given an unlabeled image, weakly augmented data is obtained by applying standard data augmentation strategies, $A$, that include flipping and cropping. Strongly augmented data is obtained by applying a family of augmentation operations $\hat{A}$, such as rotation, contrast, and sharpness adjustments, using RandAugment [10], that significantly alter the appearance of the unlabeled data.

Unsupervised Data Augmentation (UDA). Similar to FixMatch [50], UDA [59] also uses weak and strong augmentations, enforcing consistency between them in the form of predicted class distributions. To extend UDA with MvPL, we first sharpen the predicted class distribution $s_i^m$ to obtain $\tilde{s}_i^m$. We then replace the hard label $\hat{s}_i^m$ in Eq. (4) with $\tilde{s}_i^m$. Strictly speaking, UDA is not a pseudo-labeling algorithm per se, because it uses soft labels (predicted class distributions with sharpening) as the learning signal. We show an illustration of how to apply our method with strong augmentations in Figure 2.

Implementation Details
Model network architecture. As a backbone we use R-50 [25], following the Slow pathway in [16], with clips of T = 8 frames sampled with stride τ = 8 from 64 raw frames of video. This is a 3D ResNet-50 [25] without temporal pooling in the convolutional features. The input to the network is thus a clip of 8 frames with a sampling stride of 8, covering 64 frames of the raw video. The spatial input size is 224 × 224.

Inference. We follow the test protocol in [16]. The video model takes only RGB frames as input at inference time. For each video, we uniformly sample 10 clips along its temporal dimension. For each clip, we scale the shorter spatial side to 256 pixels and take 3 crops of 256 × 256. Finally, we obtain the prediction by averaging the softmax scores.

Converting optical flow and temporal gradients. We precompute (unsupervised) optical flow using the software package of [38], which implements a coarse-to-fine algorithm [4]. We convert both the raw optical flow and the RGB temporal gradients into 3-channel inputs that are in the same range as RGB frames (see the sketch below). For optical flow, the first two channels correspond to displacements in the horizontal and vertical directions, respectively, and the third channel corresponds to the magnitude of the flow; all three channels are then normalized to the range 0 to 255. We obtain temporal gradients by subtracting the next RGB frame from the current RGB frame, and normalize them to the RGB range by adding 255 and dividing by 2.
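The conversion recipe just described can be sketched in NumPy as follows. This is an illustration under our reading of the text, not the released implementation; in particular, the exact rescaling of the flow channels is our assumption.

```python
import numpy as np

def flow_to_rgb_range(flow_uv: np.ndarray) -> np.ndarray:
    """flow_uv: (T, H, W, 2) raw optical flow. Returns (T, H, W, 3)
    uint8 frames whose channels are u, v and the flow magnitude, each
    rescaled to [0, 255] so the clip can be fed to the model like RGB."""
    mag = np.linalg.norm(flow_uv, axis=-1, keepdims=True)
    stacked = np.concatenate([flow_uv, mag], axis=-1)
    lo = stacked.min(axis=(0, 1, 2), keepdims=True)   # per-channel min
    hi = stacked.max(axis=(0, 1, 2), keepdims=True)   # per-channel max
    return (255 * (stacked - lo) / (hi - lo + 1e-8)).astype(np.uint8)

def temporal_gradients_to_rgb_range(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, 3) uint8 RGB. Subtract the next frame from the
    current one, then map [-255, 255] to [0, 255] via (x + 255) / 2."""
    tg = frames[:-1].astype(np.int16) - frames[1:].astype(np.int16)
    return ((tg + 255) / 2).astype(np.uint8)
```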
Video augmentations. For weak augmentation, we use default video classification augmentations [16]. In particular, given a video clip, we first randomly flip it horizontally with 50% probability, and then crop 224 × 224 pixels from the clip, with the shorter side randomly sampled between 256 and 320 pixels. As strong augmentation, we apply RandAugment [10] followed by Cutout [13] (we randomly cut a 128 × 128 patch from the same location across all frames in a video clip). RandAugment [10] includes a collection of image transformation operations (e.g., rotation, color inversion, translation, contrast adjustment, etc.) and randomly selects a small set of transformations to apply to the data. RandAugment contains a hyperparameter that controls the severity of all operations; we draw a random magnitude from 1 to 10 at each training step. When applying RandAugment to video clips, we keep the spatial transformations temporally consistent across all frames in a clip, as sketched after this section.

Curriculum learning. We find it useful to first warm up training in the first few epochs with only the labeled data and then start training with both labeled and unlabeled data.

Training details. We implement our model with PySlowFast [15]. We adopt synchronized SGD training on 64 GPUs following the recipe in [19], and we found its accuracy to be as good as typical training on one 8-GPU machine. We follow the learning rate schedule used in [16], which combines a half-period cosine schedule [39] of learning rate decay with a linear warm-up strategy [19].

Table 1: Ablation study on UCF101 split-1. We use only 10% of its training labels and the entire training set as unlabeled data. We report top-1 accuracy on the validation set. Backbone: R-50, Slow pathway [16], T × τ = 8 × 8.

We use momentum of 0.9 and weight decay of 10⁻⁴. Dropout [52] of 0.5 is used before the final classifier layer. Please see the supplementary material for additional details. For MvPL with FixMatch, we set the threshold τ to 0.3 (used for filtering training samples if the prediction is not confident). We set the ratio µ (which balances the number of labeled and unlabeled data) to 3 for Kinetics-400 [30] and to 4 for UCF101 [51] and HMDB51 [35]. For aggregated-supervision (iv), we assign each view the same weight $w_m$, as we found this works well in practice. §A provides further specifics.
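The temporally consistent augmentation and clip-level Cutout described above can be sketched as follows. This is our illustration: `augment_frame` stands in for a hypothetical single-frame RandAugment transform that draws its random parameters from Python's global `random` state, and is not an actual API of [10] or [15].

```python
import random
import numpy as np

def augment_clip_consistently(frames, augment_frame):
    """frames: list of per-frame images for one clip. Re-seeding with
    the same value before each frame makes every frame receive the
    same randomly drawn spatial transform, keeping the clip
    temporally consistent."""
    seed = random.randint(0, 2**31 - 1)   # sample parameters once per clip
    out = []
    for img in frames:
        random.seed(seed)                 # replay the same random draws
        out.append(augment_frame(img))
    return out

def cutout_clip(clip, size=128):
    """clip: (T, H, W, C) array. Zeroes one size x size patch at the
    same location in every frame, mirroring the Cutout step above."""
    t, h, w, _ = clip.shape
    y = np.random.randint(0, h - size + 1)
    x = np.random.randint(0, w - size + 1)
    clip = clip.copy()
    clip[:, y:y + size, x:x + size, :] = 0
    return clip
```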
Experiments
We validate our approach for semi-supervised learning from video. First, we present ablation studies to validate our design choices in Sec. 4.1. Then, we show the main results by evaluating our method on multiple video recognition datasets in Sec. 4.2. Finally, we compare our method with existing self-supervised methods in Sec. 4.3. Unless specified otherwise, we present results for our method used in conjunction with FixMatch [50], using aggregated-supervision to obtain pseudo-labels.

Ablation Studies
We first carry out ablation studies to examine the effectiveness of MvPL. Our ablations use UCF101 split-1 with only 10% of its training labels and the entire UCF101 training set as unlabeled data (evaluation is done on the validation set). For all ablation experiments, we train the network for 600 epochs from scratch with no warm-up and use aggregated-supervision (iv) from all the views to obtain pseudo-labels, unless specified otherwise.

MvPL generally improves pseudo-labeling techniques. Table 1a studies the effect of instantiating MvPL with various pseudo-labeling algorithms, as outlined in §3.3. MvPL consistently improves all three algorithms by a large margin, with an average absolute gain of 33.7%. Pseudo-Label [37] receives a larger gain (+39.6%), presumably because it relies only on weak augmentations, while UDA [59] and FixMatch [50] use strong augmentations (RandAugment [10]) that lead to higher baseline performance. The results show that the MvPL framework provides a general improvement for multiple baselines, rather than improving only one. This suggests that MvPL is not tied to any particular pseudo-labeling algorithm and can be used generally to enhance existing pseudo-labeling algorithms for video understanding. Since FixMatch provides slightly higher performance than UDA, we use it for all subsequent experiments.

Complementarity of views. We now study how the different views contribute to performance. Table 1b reports results for MvPL on the FixMatch baseline from Table 1a, with the different views added one by one. With RGB input alone, the FixMatch model fails to learn a strong representation (48.5%). Adding complementary views that encode motion information immediately boosts performance: we observe an absolute gain of +28.0% when adding Flow to RGB frames, and a +25.5% gain when adding temporal gradients (TG). The last row in Table 1b shows that both optical flow and temporal gradients are complementary to RGB frames, as adding both views significantly improves performance, by +30.6%. It is important to note that the test-time computation cost of all these variants is identical, since MvPL uses the additional views only during training.

We make the following observations: self-supervision (i) and random-supervision (ii) obtain relatively low performance. This could be because self-supervision only bootstraps from its own view and does not take full advantage of the other complementary views, while random-supervision randomly picks a view to generate pseudo-labels, which we hypothesize hinders learning because the targets change stochastically. For cross-supervision (iii), we show two variants with different bijections²: 1) RGB ⇐ Flow, Flow ⇐ TG, TG ⇐ RGB; 2) RGB ⇐ TG, Flow ⇐ RGB, TG ⇐ Flow. Both variants obtain better performance than self-supervision, because both optical flow and temporal gradients are complementary to RGB frames and boost overall accuracy. For aggregated-supervision (iv), we examine two variants: (Exclusion), a weighted average excluding the view itself, and (All), a weighted average over all the views. The variant using aggregated-supervision from all the views obtains the best result, 79.1%. Here, we hypothesize that predictions obtained by an ensemble of all the views are more reliable than the prediction from any single view, which leads to more accurate models.

Curriculum warm-up schedule. Table 2 shows an additional ablation on UCF101 with 10% labeled data, described next.

Table 2: Accuracy on UCF101 with 10% of labels used and a varying supervised warm-up duration. Supervised warm-up with 80 epochs obtains the best results.

Before semi-supervised training, we employ a supervised warm-up that trains with only the labeled data. Here, we compare performance for different warm-up durations in our 10% UCF101 setting, i.e. the same setting as in Table 1, in which we train on UCF101 split-1 using 10% of its labeled data and the entire UCF101 training set as unlabeled data. Table 2 shows the results. Warm-up with 80 epochs obtains the best result, 80.5% accuracy, 1.3% better than not using supervised warm-up. We hypothesize that the warm-up allows the semi-supervised approach to learn from more accurate pseudo-label information in the early stages of training.
If the warm-up is longer, accuracy slightly degrades, possibly because the model converges to the labeled data early and is therefore not able to fully use the unlabeled data during semi-supervised training.

Table 3: Results on K400 and UCF101 when 1% and 10% of the labels are used for training. Our MvPL substantially outperforms its direct counterpart of supervised learning. Backbone: R-50, Slow pathway [16], T × τ = 8 × 8.

Results on Kinetics-400 and UCF101
We next evaluate our approach for semi-supervised learning on Kinetics-400, in addition to UCF101. We consider two settings, where 1% or 10% of the labeled data are used; the entire training dataset is used as unlabeled data. For K400, we form two balanced labeled subsets by sampling 6 and 60 videos per class. For UCF101, we use split 1 and sample 1 and 10 videos per class as labeled data. Evaluation is again performed on the validation sets of K400 and UCF101. We compare our semi-supervised MvPL with its direct counterpart that uses supervised training on the labeled data only. Table 3 shows the results. We first look at the supervised setting. With RGB input alone, the video model fails to learn a strong representation from limited labeled data: it obtains 5.2% and 6.2% accuracy on K400 and UCF101, respectively, when using 1% of the labeled data, and 39.2% and 31.9% when using 10% of the labels. On Kinetics, compared to the fully supervised approach, our semi-supervised MvPL has an absolute gain of +11.8% and +19.0% when using 1% and 10% of labels, respectively. This substantial improvement comes without cost at test time as, again, only RGB frames are used for MvPL inference. The gain on UCF101 is even more significant. Overall, the results show that MvPL can effectively learn a strong representation from unlabeled video data.

Comparison with self-supervised learning
In a final experiment, we consider comparisons with self-supervised learning. Here, we evaluate our approach by using UCF101 and HMDB51 as the labeled datasets and Kinetics-400 as unlabeled data. For both UCF101 and HMDB51, we train and test on all three splits and report the average performance. This setting is also common for self-supervised learning methods that are pre-trained on K400 and fine-tuned on UCF101 or HMDB51. We compare with the state-of-the-art approaches [1,42,44,2,22,64].

Table 4: Comparison to prior work on UCF101 and HMDB51. All methods use K400 without labels. "Param" indicates the number of parameters and T the inference frames used in the backbone. "Modalities" shows the modalities used during training, where "V" is visual and "A" is audio input.

For this comparison we consider two backbones: (i) the R-50 Slow pathway [16], which we used in all previous experiments, and (ii) S3D-G [61], a commonly used backbone for self-supervised video representation learning with downstream evaluation on UCF101 and HMDB51. When comparing to prior work, we observe that MvPL obtains state-of-the-art performance on UCF101 and HMDB51 when using K400 as unlabeled data, outperforming the previous best approaches in self-supervised learning, including methods using both visual (V) and audio (A) information.
In comparison to the best published vision-only approach, CoCLR [22], a co-training variant of MoCo [23] that uses RGB and optical-flow input during training, MvPL provides a significant performance gain of +5.9% and +11.4% top-1 accuracy on UCF101 and HMDB51, using the identical backbone (S3D-G) and data, and even surpasses CoCLR by +3.2% and +3.5% top-1 accuracy when CoCLR uses two streams of S3D-G for inference.

Discussion. We believe this is a very encouraging result. In the image classification domain, semi-supervised and self-supervised approaches compare even-handedly; see, e.g., Table 7 in [8], where a self-supervised approach (SimCLR) outperforms all semi-supervised approaches (e.g. Pseudo-Label, UDA, FixMatch). In contrast, our state-of-the-art result suggests that for video understanding, semi-supervised learning is a promising avenue for future research. This is especially notable given the flurry of research activity in self-supervised learning from video in this setting [1,42,44,2,22,64], compared to the relative lack of past research in semi-supervised learning from video.

Conclusion
This paper has presented a multiview pseudo-labeling framework that capitalizes on multiple complementary views for semi-supervised learning from video. On multiple video recognition datasets, our method substantially outperforms its supervised counterpart and its semi-supervised counterpart that considers only the RGB view. We obtain state-of-the-art performance on UCF101 and HMDB-51 when using Kinetics-400 as unlabeled data. In future work we plan to explore ways to automatically retrieve the most relevant unlabeled videos to assist semi-supervised video learning.

A. Additional implementation details
All epoch measures in the paper are based only on the labeled data. Therefore, training for 800 and 400 epochs on a 1% and a 10% fraction of K400 corresponds to the number of iterations that 24 and 120 epochs on 100% of K400 would take, respectively. Similarly, training for 1200 and 600 epochs on a 1% and a 10% fraction of UCF101 corresponds to the number of iterations that 48 and 240 epochs on 100% of UCF101 would take, respectively (note that we use µ = 3 for K400 and µ = 4 for UCF101, where µ is the ratio balancing the number of labeled and unlabeled data). The learning rate is linearly warmed up over the first 34 epochs [19]. We follow the learning rate schedule used in [16] with a half-period cosine schedule [39]. In particular, the learning rate at the n-th iteration is $\eta \cdot 0.5 \left[\cos\left(\frac{n}{n_{\max}}\pi\right) + 1\right]$, where $n_{\max}$ is the maximum number of training iterations and the base learning rate $\eta$ is 0.8. We use the initialization in [24]. We adopt synchronized SGD optimization on 64 GPUs following the recipe in [19]. We train with Batch Normalization (BN) [28], with BN statistics computed within the clips on the same GPU. We use momentum of 0.9 and SGD weight decay of 10⁻⁴. Dropout [52] of 0.5 is used before the final classifier layer. The mini-batch size is 4 clips per GPU (4×64 = 256 overall) for labeled data and 4×µ clips per GPU (4×µ×64 = 256×µ overall) for unlabeled data. In §4.2, we use curriculum warm-up with the following schedule. For K400, we train the model for 400 epochs (with 200 warm-up epochs) and 800 epochs (with 80 warm-up epochs) for the 10% and 1% subsets, respectively. For UCF101, we train the model for 600 epochs (with 80 warm-up epochs) and 1200 epochs (with no warm-up) for the 10% and 1% subsets, respectively.
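A sketch of this schedule as we read it (a linear warm-up feeding into the half-period cosine), with the stated base rate η = 0.8:

```python
import math

def learning_rate(n: int, n_max: int, base_lr: float = 0.8,
                  warmup_iters: int = 0) -> float:
    """Half-period cosine schedule  eta * 0.5 * [cos(n / n_max * pi) + 1],
    with an optional linear warm-up over the first warmup_iters steps."""
    if n < warmup_iters:
        # Linear ramp from ~0 up to the base learning rate.
        return base_lr * (n + 1) / warmup_iters
    return base_lr * 0.5 * (math.cos(n / n_max * math.pi) + 1.0)

# Example: decay over 10,000 iterations with a 500-iteration warm-up.
print(learning_rate(0, 10_000, warmup_iters=500))      # small warm-up LR
print(learning_rate(5_000, 10_000, warmup_iters=500))  # mid-training LR
```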
Large-scale eDNA metabarcoding survey reveals marine biogeographic break and transitions over tropical north-western Australia
Environmental DNA (eDNA) metabarcoding has demonstrated its applicability as a highly sensitive biomonitoring tool across small spatial and temporal scales in marine ecosystems. However, it has rarely been tested across large spatial scales or biogeographical barriers. Here, we scale up marine eDNA metabarcoding, test its ability to detect a major marine biogeographic break and evaluate its use as a regional biomonitoring tool in Australia.

| INTRODUCTION
Broad-scale biomonitoring of marine environments is integral to the detection of biological changes, stressors and shifting baselines over large spatial and temporal scales (Dafforn et al., 2016). Typically, these approaches utilize rapid assessment methods, such as underwater visual census (UVC), marine manta tow and baited remote underwater video (BRUV) surveys (Ellis et al., 2011; Gaertner et al., 2013; Piacenza et al., 2015), that provide information to distinguish broad-scale indicators and subsequently direct further research efforts to areas of interest. However, these techniques are not suitable in marine environments with limited visibility and other safety hazards to divers, for example the presence of saltwater crocodiles (Crocodylus porosus). The advent of environmental DNA (eDNA) metabarcoding coupled with next-generation sequencing (NGS) has enabled the genetic detection and profiling of a wide range of biota present in environmental samples (e.g. water, scat and soil). Environmental DNA metabarcoding has the potential to be utilized as a sensitive, cost-effective and rapid broad-scale biomonitoring tool and is particularly well suited to marine environments (Thomsen et al., 2012; Thomsen & Willerslev, 2015; Valentini et al., 2016). Importantly, the collection of surface water (or water at depth with a water sampler) for eDNA analyses bypasses the logistical and safety hazards associated with visual surveillance work in turbid and dangerous marine environments. Furthermore, eDNA-derived compositional data can provide greater biological coverage to distinguish spatial and habitat variation, and to identify network associations, trophic structure, biological invasions and the presence of critically endangered species (Valentini et al., 2016). Whilst eDNA metabarcoding has demonstrated its applicability across small, yet highly sensitive, spatial (Jeunen et al., 2019; O'Donnell et al., 2017; Port et al., 2016; West et al., 2020) and temporal scales (Berry et al., 2019) in marine ecosystems, it is at a preliminary stage of being scaled up and tested across broader regional scales (Aglieri et al., 2020; Fraija-Fernández et al., 2020).
The extensive coastline of north-western Australia (NWA) supports a diverse array of tropical marine habitats and biota, extending from offshore coral reefs on the edge of the continental shelf to coastal intertidal sand, rock and reef habitats, constituting 12 distinct bioregions (Wilson, 2014). A profound change in the underlying geomorphology of the Canning and Kimberley basins, from Cretaceous-Cainozoic sedimentary (largely sandstone) rocks to Proterozoic metasedimentary, metamorphic and igneous rocks, has shaped the various coastal marine habitats in these bioregions (Wilson, 2014).
The Canning bioregion comprises coastlines typified by benthic soft substrates, such as intertidal sand and mudflat habitats with very few coral reefs, whilst the Kimberley bioregion is dominated by rocky intertidal platforms, fringing and offshore coral reefs, and substantial mangrove habitat (Richards et al., 2018; Wilson, 2014). Environmental conditions and connectivity patterns across these bioregions are additionally shaped by various oceanic currents, immense tidal systems (macrotides ranging up to 11 m in the Kimberley), and seasonal discharge and extreme turbidity from major rivers (Semeniuk, 1993; Thackway & Cresswell, 1998). Temperature varies between the bioregions and also across subregions: semi-arid in the Canning, sub-humid in the southern Kimberley, humid in the northern Kimberley and sub-humid in the north-east Kimberley (Cresswell & Semeniuk, 2011). This environmental variation is purported to contribute to a major biogeographic break at Cape Leveque (the tip of the Dampier Peninsula, demarcating the border between the Kimberley and Canning bioregions; see Figure 1) (Travers et al., 2010; Wilson, 2014). A significant change in fish assemblage composition across Cape Leveque (Hutchins, 2001a) likely reflects the latitudinal transition in benthic substrates, overlaid on a strong bioregional effect reflecting various habitat, tidal and riverine discharge influences (Travers et al., 2006, 2010). Population connectivity studies in bony fish (stripey snapper, Lutjanus carponotatus, and blackspotted croaker, Protonibea diacanthus) and corals (Isopora brueggemanni and Acropora aspera) further revealed a genetic transition zone across Cape Leveque, with dispersal and gene flow likely constricted by extreme tidal flushing at the head of King Sound (Taillebois et al., 2017; Underwood et al., 2017).
The aim of this study was to conduct a broad-scale, multimarker eDNA metabarcoding survey across the extensive coastline of NWA in order to: (a) detect the purported biogeographic break across Cape Leveque using eDNA-derived bony fish, shark and ray, and aquatic reptile taxonomic compositional data; (b) update distributional information for endangered elasmobranchs, such as sawfish (family Pristidae) and the northern river shark (Glyphis garricki), for marine turtles (superfamily Chelonioidea), and for data-deficient taxa such as sea snakes (subfamily Hydrophiinae); and (c) evaluate the overall strengths and weaknesses of eDNA metabarcoding as a biomonitoring tool when used across a broad geographic region. Given the remoteness of NWA, long-term monitoring programmes are sparse, particularly for species that are not of commercial value. As such, there is great potential to integrate eDNA metabarcoding as a long-term, large-scale biomonitoring tool in NWA, capable of providing distribution information on a wide variety of taxa, for the sustainable management and conservation of marine biodiversity in this unique marine region.

KEYWORDS: biogeographic, biomonitoring, elasmobranch, environmental DNA, Kimberley, large-scale, marine biodiversity, marine reptile, teleost, threatened species

| DNA extraction
DNA was extracted from half of the membrane using a DNeasy Blood and Tissue Kit (Qiagen) with the following modifications: 540 μl of ATL lysis buffer, 60 μl of Proteinase K and a 3-hr digestion at 56°C. Extracts were eluted in 100 μl of Buffer EB. This was completed within four weeks of collection.
Extraction blank controls were processed in parallel with all samples to detect any cross-contamination. Genomic DNA extracts were then stored at −20°C.

| Metabarcoding assay design, amplification and library sequencing
Three PCR metabarcoding assays were employed: 16S Fish, COI Elasmobranch and 16S Reptile (Table 1). The 16S Reptile assay was recently designed to amplify northern Australian aquatic reptiles, such as sea snakes, turtles and crocodiles (West et al., 2021). Following quantitative PCR-based (qPCR) quantification to optimize levels of input DNA (Murray et al., 2015), final qPCR was performed in a single step using a fusion-tagged primer architecture consisting of Illumina-compatible sequencing adaptors, a unique index (6-8 bp in length) and the respective primer sequence for each assay. All qPCR reactions were prepared in dedicated clean-room facilities at the TrEnD Laboratory, Curtin University, and are described in detail in Section S1. Quantitative PCR amplicons were pooled at equimolar ratios based on qPCR ΔRn values and size-selected.

Table 1: PCR assay information (primer names, target length in bp, annealing temperature in °C and primer references) for marine eDNA metabarcoding across the Canning/Kimberley bioregions. Note: three primer sets, 16S Fish, 16S Reptile and COI Elasmobranch, corresponding to the mitochondrial 16S rDNA and COI regions, were applied to all collected seawater samples. In the primer name, "F" refers to the forward primer and "R" to the reverse primer.
Observed taxonomic richness at each site was tested for significance between bioregions and subregions using ANOVA and graphed in ggplot2 (Wickham, 2016) in RStudio. Additionally, similarity percentage analyses (SIMPER) were conducted in PRIMER to identify the top inshore and coastal taxa that contribute to pairwise dissimilarity between bioregions and subregions, where significant in the DistLM analyses. This elucidated whether variation in community composition between the bioregions and subregions is driven by uneven taxonomic richness and/or variation in compositional diversity. | Sampling and sequencing statistics The three eDNA metabarcoding assays yielded a total of 57,311,878 sequencing reads. The mean number of filtered sequences (post-quality, denoizing and chimera filtering) was 75,800 ± 41,071 (Table S2). ASV accumulation curves based on the addition of each sampling replicate per site indicated that four one-litre water replicates (selected a priori to sampling) were just shy of maximizing ASV richness for each of the three assays ( Figures S1-S3). On fitting a polynomial curve to the median accumulation curve for each assay, it was extrapolated that an average of 6.9, 6.1 and 6.1 one-litre water replicates would be required to maximize ASV richness for 16S Fish, COI Elasmobranch and 16S Reptile assays, respectively. The rarefac- ASVs, 40,820 total reads). These species were targeted for fisheries and/or commercial research on the sampling vessels with the exception of pilchards, which were utilized as bait for BRUV deployments. Only compromized ASVs were removed from subsequent analyses. For example, we retained 15 barramundi ASVs that were not detected in filtration and/or extraction blanks. We also detected salmon (genus: Salmo; 1 ASV, 9 total reads), which has previously been detected as a sporadic reagent contamination in both our workflows and other laboratories , and as such was entirely removed. We also omitted all ASVs that produced detection hits for taxa outside of our targeted taxonomic groups of bony fish, elasmobranchs and reptiles. This included humans (Homo sapiens), chicken (Gallus gallus) and horse (Equus caballus). | Overall diversity A total of 310 taxa (ranging from family to species-level assignments; 4.9 ± 9.9 ASVs per taxa) were detected by the 16S Fish assay, 139 taxa (1.3 ± 1.0 ASVs per taxa) by the COI Elasmobranch assay and 181 taxa (2.9 ± 5.2 ASVs per taxa) by the 16S Reptile assay, prior to subsampling ( Figure 2). Collectively, the three metabarcoding assays yielded 453 identifiable taxa, representing 96 families within 41 orders of bony fish, elasmobranchs and aquatic reptiles (Table S3). Forty-four elasmobranch taxa (class: Chondrichthyes, subclass: Elasmobranchii) were detected from 11 families within four orders (Table S3). The two most speciose elasmobranch families were the Carcharhinidae (requiem sharks; 15 taxa) and Dasyatidae (stingrays; 14), which collectively comprised over half of the total detected shark and ray taxa (Table S3). 
We detected five elasmobranchs that are listed as either "Endangered" or "Critically Endangered" on the IUCN Red List and are under various national and state protection management (see Table 2); these taxa were the largetooth sawfish Only five reptile taxa (class: Reptilia) were detected from five families within three orders (Table S3): the saltwater crocodile (Crocodylus porosus), the black-headed python (Aspidites melanocephalus), Stokes's sea snake (Hydrophis stokesii), the white-bellied mangrove snake (Fordonia leucobalia) and the green turtle (Chelonia mydas). Given the low frequency of detection of these taxa, reptiles were excluded from all multivariate analyses. | Bony fish composition Bony fish composition was examined independently in each habitat type (inshore, coastal and nearshore estuarine), excluding the mid-shelf habitat which was only comprised of one site. In regard to inshore bony fish compositions, a distance-based linear model Figure 3a. Cumulatively, the formed models explained between 20.1% and 26.7% of total fitted variance between inshore fish assemblages (Table 3; Table S4). Taxonomic richness of inshore bony fish did not significantly differ between the two bioregions (Table S5; Figure S13). This indicates that the detected bioregional variation was not influenced by uneven taxonomic richness, but solely compositional variation. Similarity percentage analysis (SIMPER) was used to identify prominent inshore bony fish taxa contributing most to pairwise dissimilarity between the Canning and Kimberley bioregions (Table S6). This indicated a higher detection rate of sardinella (genus: Amblygaster), purple tuskfish (Choerodon cephalotes) and chub mackerels (genus: Rastrelliger) in the Canning bioregion, whilst the Kimberley region had a higher detection rate of Spanish mackerel (genus: Scomberomorus), giant trevally (Caranx ignobilis) and blue tuskfish (Choerodon cyanodus). In examining coastal fish assemblages (those restricted to the South and North Kimberley subregions), a DistLM analysis indicated that subregion was a highly significant predictor variable, explaining 17.9% of fitted variance (Table 3; Table S7; Figure 3b). For bony fish composition in nearshore estuarine sites (only surveyed in the South Kimberley), depth was the only significant predictor variable, explaining 12% of the fitted variance (Table S8; Figure 3c). | Elasmobranch composition Elasmobranch composition across all inshore sites was found to be driven by subregion and depth, explaining 24.3% of fitted variance (Table 4; Figure 4a; Note: These were constructed using a sequential step-wise selection procedure and adjusted R 2 criterion. Significant codes are as follows: 0 < 0.001 "***," 0.001 < 0.01 "**"and 0.01 < 0.05 "*." The predictor variables highlighted in bold are significant (p < .05). Full DistLM results, including marginal tests and best solutions, are provided in Tables S4, S7 and S8. the North Kimberley (Table 4; Table S9). Cumulatively, the significant spatial predictor variables that formed models explained between 23.6% and 34.3% of total fitted variance between inshore sites (Table 4). Taxonomic richness of inshore elasmobranchs did not differ significantly between the three subregions (Table S10; Figure S14); however, within the Dampier Peninsula only two sites (post-subsampling) had detectable traces of elasmobranch taxa. SIMPER analysis was used to identify prominent inshore elasmobranchs contributing most to pairwise dissimilarity between the three subregions ( Figure 4b). 
For the nearshore estuarine sites, there were no significant tested predictor variables that could explain the fitted variance (Table 4; Table S13; Figure 4c). | Bony fish compositional transitions across NWA The Canning and Kimberley bioregions have some of the least impacted marine and coastal ecosystems in the world (Halpern et al., 2008) with over 1,500 reported species of bony fish (Fox & Beckley, 2005;Moore et al., 2014Moore et al., , 2020. This synthesis is the result of numerous surveys and museum records since the 1880s (Hutchins, 2001b;Moore et al., 2014Moore et al., , 2020Paxton et al., 2006). Studies of inshore soft substrate and reef fish fauna across NWA (Travers et al., 2006(Travers et al., , 2010(Travers et al., , 2012 shelf region (Fox & Beckley, 2005;Hutchins, 1997Hutchins, , 2001a. In our study, which examined the inshore bony fish compositions that Note: These were constructed using a sequential step-wise selection procedure and adjusted R 2 criterion. Significant codes are as follows: 0 < 0.001 "***," 0.001 < 0.01 "**"and 0.01 < 0.05 "*." The predictor variables highlighted in bold are significant (p < .05). Full DistLM results, including marginal tests and best solutions, are provided in Tables S9 and S12. environments (Simpfendorfer et al., 2016). With the exception of reef-associated species (MacNeil et al., 2020), existing compositional data from coastal species are based on limited, and invariably biased, observations from commercial fisheries (Braccini & Taylor, 2016;Field et al., 2012;McAuley et al., 2005). In examining shark and ray eDNA-derived compositional data across the Canning and Kimberley regions, we detected a significant subregional influence on inshore species. This reflected variation across the purported biogeographic break between the Dampier Peninsula (Canning) and South Kimberley and also between the South and North Kimberley regions. This widespread subregional influence on inshore shark and ray composition across NWA was consistent with observations from elsewhere across northern Australia where composition and relative abundance have been shown to vary markedly at a range of spatial and temporal scales (Espinoza et al., 2014;Harry et al., 2011;Taylor & Bennett, 2013;White & Potter, 2004;Yates, Heupel, Tobin, Moore, et al., 2015;Yates et al., 2015a). Northern Australia has a comparatively high elasmobranch biodiversity that includes many large-bodied and highly mobile species (Last & Stevens, 2009). Variability in species composition is not only influenced by regional conditions, but also reflects the complex life-history strategies of many elasmobranchs that includes behaviour such as inshore nursery usage (Simpfendorfer & Milward, 1993;Yates, Heupel, Tobin, Moore, et al., 2015;Yates et al., 2015b), partitioning by size and sex (Knip et al., 2012;Yates, Heupel, Tobin, Moore, et al., 2015), and seasonal migration between tropical and temperate waters (Braccini et al., 2018;Heupel et al., 2015). Disentangling such patterns is beyond the capability of presence-absence data alone; however, these data can nonetheless assist in corroborating existing patterns in composition as well as identify new ones. | A new approach for surveying endangered, elusive and data-deficient taxa in NWA The coastline of NWA exhibits high turbidity resulting from immense tidal action and seasonal discharge from major rivers (Semeniuk, 1993;Thackway & Cresswell, 1998 Section S3 for further discussion). 
Despite a low detection rate, this is, to our knowledge, the first study to have detected crocodiles and sea snakes using an eDNA approach under field conditions. A species-specific eDNA assay has previously been developed to detect the largetooth sawfish (P. pristis) across northern Australia (Simpfendorfer et al., 2016). A significant finding in our study was the detection, using a metabarcoding approach, of three out of the four globally endangered sawfish taxa (family: Pristidae) found in Australia. The largetooth sawfish is a euryhaline elasmobranch species that was once globally distributed in tropical marine, estuarine and freshwater environments of the Eastern and Western Atlantic, Eastern Pacific and Indo-West Pacific; however, population declines and extirpation have led to significant range contractions (Kyne, Carlson, et al., 2013). It is currently listed as Critically Endangered on the IUCN Red List of Threatened Species and is a protected species in Australia; northern Australia may be the last viable stronghold for the Indo-Pacific population and likely comprises a large proportion of the remaining global population (Kyne, Carlson, et al., 2013; Last & Stevens, 2009). The northern river shark (G. garricki) is considered a rare species, with limited distribution information and population estimates available (Field et al., 2013). All identified populations are considered to be of high conservation value and, as such, recreational fishing of the species is banned under Australian federal law. The detection of the northern river shark in this study extends its current known distribution in Western Australia from scattered sightings in the King Sound (Compagno et al., 2008; Thorburn & Morgan, 2004) and the Ord River, King River and Joseph Bonaparte Gulf (Pillans et al., 2009) to the Gairdner River and the Walcott River in the South Kimberley region. These additional distribution records of endangered elasmobranchs will contribute to recovery plans and management arrangements, and are already informing locations where additional surveys will be undertaken.

| CONCLUSION

This large-scale eDNA metabarcoding study across the coastlines of North-western Australia was able to detect a purported marine biogeographic break between the Canning and Kimberley bioregions. This demonstrates that eDNA metabarcoding is a highly sensitive detection tool, capable of producing large amounts of high-resolution (e.g. species-level) presence-absence data that can discern fine-scale patterns across large geographic regions. Further broad-scale applications of this technique could potentially reveal marine biogeographic breaks in other regions. For example, significant phylogeographic structure in mantis shrimp and seahorses in southeast Asia is claimed to reflect historical oceanographic divisions; the former exhibits divergence along a sharp genetic break between the Indian and Pacific Ocean regions (previously separated during the Last Glacial Maximum), whilst the latter is separated into east and west lineages reminiscent of the terrestrial Wallace's Line (Barber et al., 2000; Lourie & Vincent, 2004). Environmental DNA metabarcoding could be used to assess whether these phylogeographic breaks reflect wider biogeographic partitioning in marine community compositions. The eDNA samples resulting from this study will be archived and available for further assay applications extending beyond the taxonomic groups targeted in this study.
Additionally, our sequencing data can be retrospectively analysed against expanding databases to resolve ambiguous taxonomic assignments. Our georeferenced sites herein will be used as a baseline for future eDNA biomonitoring and, notably, will direct targeted surveying for the critically endangered elasmobranchs across NWA. We anticipate that eDNA metabarcoding will be integrated into broad-scale monitoring tool kits, particularly in northern Australia, where it circumvents many of the logistical and safety limitations of visual surveillance. At present, this technique is limited in its ability to provide quantitative data in relation to population sizes and biomass. However, its ability to produce multi-taxon and potentially even whole-ecosystem data, without the need for taxonomic expertise, is both time- and cost-efficient. Amidst global population declines and resource limitations, innovative approaches to whole-ecosystem and biodiversity surveying are required to underpin the best-practice management of fisheries, tourism and commercial interests in this remote region of Australia. We advocate that eDNA offers a promising demand-driven solution that is fast gaining traction when planning and executing biodiversity surveys.

ACKNOWLEDGEMENTS

We acknowledge the Department of Primary Industries and Regional Development (WA), BMT Oceanica and the Australian Research Council (LP160100839) for project funding. We give special mention to the crew of the RV Naturaliste, Sam Moyle, Laura Fullwood, Dion Boddington and Gabby Mitsopoulos for fieldwork assistance. Gratitude is also extended to the Wunambal Gaambera, Dambimangari, Mayala, Bardi Jawi, Nyul Nyul, Jabirr Jabirr and Yawuru people, the traditional owners of the areas sampled, and to their indigenous rangers for assistance with sampling. We would like to thank Kate Sanders for providing the sea snake databases for reptile assay development. For access to the Zeus supercomputer, which sped up much of our bioinformatic processing, we thank the Pawsey Supercomputing Centre (Kensington, WA). Lastly, we thank everyone at the TrEnD Laboratory for invaluable eDNA assistance across the duration of the project.

PEER REVIEW

The peer review history for this article is available at https://publons.com/publon/10.1111/ddi.13228.

DATA AVAILABILITY STATEMENT

Demultiplexed (unfiltered) metabarcoding sequencing data and taxonomic (read abundance) matrices are available for download on the Dryad Digital Repository (https://doi.org/10.5061/dryad.8kprr4xmm).
2021-05-11T00:03:00.123Z
2021-01-24T00:00:00.000
{ "year": 2021, "sha1": "b3cbca96534531f311b57f51b033107c5c71155b", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ddi.13228", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "ac61be4b08ed7df821f42911d13386d8bce24ddc", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Geography" ] }
12892005
pes2o/s2orc
v3-fos-license
Automated parcellation of the brain surface generated from magnetic resonance images

We have developed a fast and reliable pipeline to automatically parcellate the cortical surface into sub-regions. The pipeline can be used to study brain changes associated with psychiatric and neurological disorders. First, a genus zero cortical surface for one hemisphere is generated from the magnetic resonance images at the parametric boundary of the white matter and the gray matter. Second, a hemisphere-specific surface atlas is registered to the cortical surface using geometry features mapped in the spherical domain. The deformation field is used to warp statistical labels from the atlas to the subject surface. The Dice index of the labeled surface area is used to evaluate the similarity between the automated labels and the manual labels on the subject. The average Dice across 24 regions on 14 testing subjects is 0.86. Alternative evaluations were also chosen to show the accuracy and flexibility of the present method; the point-wise accuracy across the 14 testing subjects is above 86% on average. The experiment shows that the present method is highly consistent with FreeSurfer (>99% of the surface area) when using the same set of labels.

INTRODUCTION

Anatomical magnetic resonance (MR) imaging provides the ability to obtain quantitative measurements of brain structures. These measurements can be used to study the neurobiology of various diseases, and the resulting quantitative measurements can reveal subtle morphological changes if the methods are sufficiently reliable. The analysis of anatomical MR images has evolved from tissue classification (labeling of voxels into the constituent components of gray matter, white matter, and CSF) to the labeling of anatomical regions of interest. The definition of anatomical regions of interest began with subcortical structures (e.g., caudate and putamen) because of their relatively well-defined borders with surrounding structures such as white matter and ventricular CSF. With the development of three-dimensional (3D) MR sequences, segmentation of additional anatomical structures such as the hippocampus, amygdala, and globus pallidus became possible. The segmentation of the cerebral cortex has been significantly more challenging. The human cerebral cortex is a highly convoluted structure with significant anatomical variability and heterogeneity across individuals (Uylings et al., 2005). While the surface of the cortex can be readily generated from MR images, automated labeling remains a challenging task. The cerebral cortex can be divided into distinct regions based upon cytoarchitecture (Brodmann, 2006), function (Roland and Zilles, 1998), or cortical features. While divisions based on cytoarchitecture are possible using post-mortem brains, it is not currently possible to define the cortical layers from in vivo data collected on 1.5 or 3T MR scanners. It may be possible to collect data at 7T that reflects the cytoarchitecture of the cortex (Zwanenburg et al., 2012). While the neuroimaging community has significant interest in such approaches, relatively few anatomical imaging studies to date have been conducted at 7T. In the absence of cytoarchitecture information, parcellation schemes applied to in vivo data have defined functionally distinct regions based on sulcal boundaries.
Such parcellation schemes allow anatomical images to be segmented without the acquisition of functional data, and allow the schemes to be applied to the large volume of retrospective imaging data that has already been acquired. A number of groups have defined functionally relevant regions of the cortex using cortical features as the boundaries between regions. Caviness et al. (1996) and Rademacher et al. (1992) defined a parcellation scheme that divided the cortex into forty-eight regions based on 16 coronal planes and 31 major sulci. Desikan et al. (2006) divided the brain into 34 cortical regions of interest per hemisphere; in that study, curvature-based information was used to guide the parcellation of the brain on an inflated representation of the cortex. More recently, a refined parcellation scheme was developed by Destrieux et al. (2010), which divides the cortical surface into 74 sulcal and gyral regions of interest. Our group has also developed a parcellation scheme of the cerebral cortex that utilized the cortical surface, MR images, and anatomical landmarks to generate 24 regions of interest per hemisphere. The reliability of this parcellation scheme was evaluated between expert anatomical raters (Crespo-Facorro et al., 2000a; Kim et al., 2003). In addition, the same method was applied to 25 patients with schizophrenia and 25 normal controls. The labeling of the cortex took a significant amount of time: we have estimated that it typically took approximately 24 h of human rater time to complete the manual labeling of the cerebral cortex using the guidelines that were developed. With such time-intensive techniques, it is clear that manual parcellation of the cerebral cortex can only be applied to relatively small samples, suffers from rater bias and drift, and requires a significant time investment to train raters. Several large imaging studies are currently being conducted to study a variety of neurological and psychiatric disorders (e.g., Goldman et al., 2008; Nopoulos et al., 2010; Trzesniak et al., 2012). These studies would benefit from the ability to generate quantitative measurements of the cerebral cortex. Many of these studies are collecting thousands of MR scans; therefore, it is not practical to apply manual methods to them. To study cortical morphology and the changes associated with disease, automated algorithms are required. A number of semi-automated and automated procedures have been proposed in the literature. Such methods typically employ registration and/or feature extraction to bring an atlas into correspondence with the new dataset. The new dataset is then labeled either by directly mapping the anatomical labels from the atlas onto the subject, or by mapping probabilistic information from the atlas into subject space and then applying classifiers (statistical or artificial-intelligence based) to generate a labeling of the subject data. Below we summarize some of the methods that have been employed to date to automate the labeling of the cerebral cortex. This is not intended to be a comprehensive overview, but to provide context for the work proposed in this application. Image-based registration was one of the first methods employed for automated labeling of the cerebral cortex. Collins et al. employed two methods to drive the registration using their Automatic Nonlinear Image Matching and Anatomical Labeling (ANIMAL) algorithm. This algorithm is initialized with a linear registration.
In the non-linear portion of the algorithm, a hierarchical registration is used to refine an estimate of a local deformation vector at each grid node. In the first approach, the ANIMAL algorithm is coupled with sulcal constraints that are extracted from the MR images. The sulcal constraints were shown to improve the correspondence by more than 50% as compared to image registration alone (Collins et al., 1998). Collins et al. also combined the ANIMAL registration with tissue classification (INSECT) to enhance the labeling of the cortical surface. The tissue classification information is coupled with the maximum probability atlas to label cortical and subcortical regions of interest; the Kappa index for the resulting segmentations was 0.657 (Collins et al., 1999). In both of these approaches the gray matter ribbon was labeled. Other groups have also integrated cortical features within various image registration algorithms (Liu et al., 2004; Joshi et al., 2007a; Postelnicu et al., 2009; Auzias et al., 2011). All of these studies show that incorporation of cortical features into a volumetric image registration algorithm substantially improves the registration of the cortical surface. Due to the large heterogeneity in the cortical surface across subjects, the choice of the atlas is an important consideration. Heckemann et al. (2010) utilized a combination of multiple atlases and tissue class information to label the cortical surface. The labels in this study were volumetric and were compared against a manual rater. The resulting overlaps were reported as Dice and Jaccard metrics; the Jaccard metric ranged from 0.33 to 0.93 with a mean of 0.69. Sabuncu et al. (2010) improved on Heckemann's method by proposing a probabilistic model to perform decision fusion of labels transferred from multiple atlases. Techniques based on surface registration have become the most widely used approaches to automated cortical labeling. These techniques utilize sulcal and gyral information on the reconstructed surface as anatomical features, which are used to drive the registration. Surface-based registration has a number of advantages over image registration for alignment of the cortical surface. First, the registration problem can be simplified from 3D to two dimensions (2D), since the cortical surface can be represented as a 2D manifold. Second, the sulci are typically used to define boundaries between cortical regions of interest; they are easier to represent on the cortical surface than in a 3D image, and topographic features such as curvatures can be readily calculated from the cortical surface (Schaer et al., 2008). Third, the average atlas generated from image registration tends to blur gyral and sulcal features as compared to surface-based registration (Van Essen et al., 1998); in addition, the blurring tends to take place across features rather than along the cortical surface. Fourth, due to the highly folded structure of the cerebral cortex, it is difficult to generate measurements along the cortical surface using a 3D volume alone (Fischl et al., 1999). Some of the surface-based methods require the surface features to be labeled prior to the surface registration step (Bookstein, 1991; Van Essen, 2005; Joshi et al., 2007b), while others utilize the whole surface and anatomical features (Fischl et al., 2004; Desikan et al., 2006; Yeo et al., 2010).
FreeSurfer is the most commonly utilized tool for automated labeling; it uses surface registration to align the subject surface and atlas before applying a non-stationary anisotropic Markov random field (MRF) to provide the anatomical labeling of the surface (Fischl et al., 2004). FreeSurfer has been shown to have good reliability. Desikan et al. (2006) utilized intraclass correlation coefficients (ICCs) to compare the volumes of manually and automatically labeled regions of interest; the ICCs for 32 regions ranged from 0.62 to 0.98 with a mean of 0.84. The one limitation of this software is the significant computational resources required to run the tool. Based on our experience, the computational time can be up to 20 h per dataset to run the complete pipeline, which includes image alignment, tissue classification, surface generation, topology correction, and automated labeling. Yeo et al. (2010) recently proposed a fast and landmark-free surface registration method. It was applied to automated cortical surface parcellation using MR scans of 39 subjects. Thirty-six regions on each cortical surface were manually labeled by a neuroanatomist, and a multi-resolution spherical diffeomorphic demons surface registration was performed to align the cortical surfaces. The method was shown to be faster and achieved significantly higher overlap (Dice metric) between manually and automatically labeled regions. Other approaches have also been proposed for labeling of the cerebral cortex. Klein et al. (2005) developed an algorithm called Mindboggle. This algorithm utilizes linearly co-registered MR scans and extracts sulcal pieces. The sulcal pieces are then matched with a combination of atlas pieces to minimize a cost function, and the resulting deformation is used to warp the atlas labels onto the subjects. The sulcal pieces do not need to be manually labeled and are only used to bring two surfaces into registration. We propose a fast, fully automated method to parcellate the cortical surface. This method integrates with the BRAINS AutoWorkup procedure such that the entire process from raw scan to labeled surface is automated. The algorithm extends the prior work developed by Yeo et al. (2010). The reliability of the method is compared to manual parcellation as well as to FreeSurfer using the same datasets.

DATA ACQUISITION

The subjects in this study were enrolled voluntarily into an MR imaging protocol after informed written consent was obtained in accordance with the institutional review board at the University of Iowa. Fifty subjects were enrolled into an MR imaging study of schizophrenia, including 25 first-episode patients (age: 19-39 years old, mean = 25.2) and 25 matched control subjects (age: 12-41 years old, mean = 25.6). Subjects were imaged using a multi-modal MR imaging protocol consisting of T1, T2, and proton density scans. The images were obtained on a GE Signa 1.5T MR scanner. The T1-weighted scans were acquired using a 3D spoiled recalled gradient echo sequence with the following scan parameters: TE = 5 ms, TR = 24 ms, flip angle = 40°, NEX = 2, FOV = 26 × 19.2 × 18.6 cm, matrix = 256 × 256 × 192. The proton density and T2-weighted scans were acquired using a dual-echo fast spin-echo sequence with the following parameters: TE = 28/96 ms, TR = 3000 ms, slice thickness/gap = 3.0 mm/0.0 mm, NEX = 1, FOV = 26 × 19 cm, matrix = 256 × 192, ETL = 8.
One subject had an incomplete manual parcellation and was excluded from further study. The remaining forty-nine subjects were divided into two groups: a training set and a testing set. The subjects in the training set were used to develop hemisphere-specific cortical surface atlases as described below. To generate a population atlas representing a wide age range, 35 subjects were selected for the training set, consisting of 16 patients with schizophrenia and 19 healthy controls. These subjects had an age range of 12-41 years old with a mean of 25.6 years. The remaining fourteen subjects were used as the testing set for the developed automated cortical parcellation algorithm. This set of subjects varied in age from 16 to 39 years with a mean of 24.9 years, and included eight subjects with schizophrenia and six control subjects.

IMAGE PRE-PROCESSING

For this study, only the T1- and T2-weighted scans were used in the analysis. The images were analyzed using an updated version of the BRAINS AutoWorkup pipeline (Pierson et al., 2011). This pipeline links together a number of stand-alone applications built upon the ITK library through the TCL scripting language. It is a fully automated procedure to analyze structural MR images that includes AC-PC alignment (BRAINSConstellationDetector), image co-registration (BRAINSFit), bias field correction / signal intensity normalization / tissue classification / brain extraction (BRAINSABC), and neural network anatomical labeling of the caudate, putamen, thalamus, globus pallidus, hippocampus, amygdala, and nucleus accumbens (BRAINSCut) (Kim and Johnson, 2010). The resulting continuous tissue classified image, the binary image representing the brain, and the neural network defined regions for the caudate, putamen and thalamus were used to generate the cortical surface using a pipeline combining the ITK version 4 libraries and VTK, as described below.

CORTICAL SURFACE GENERATION

The cortical surface generation involved several steps, including topology correction, surface generation, surface decimation, surface smoothing, and generation of cortical features. We have previously reported on the methods employed for cortical surface generation (Li et al., 2011); a brief summary is provided here. To separate the cortical hemispheres and remove the brainstem and cerebellum, a simulated T1-weighted image generated from the BrainWeb Simulated Brain Database was created (Cocosco et al., 1997). An expert rater manually defined the ventricles, the right and left hemispheres, and the cerebrum, brainstem, and cerebellum on the BrainWeb T1-weighted image. The BrainWeb anatomical T1-weighted image was co-registered with each subject's AC-PC aligned anatomical T1-weighted image using a diffeomorphic demons registration (Vercauteren et al., 2009). The resulting deformation field was applied to the BrainWeb-based representations of the following structures: right hemisphere, left hemisphere, ventricles, brainstem, and cerebellum. The union of the warped ventricle label and the neural network defined subcortical regions (caudate, putamen, and thalamus) was calculated. Voxels on the continuous tissue classified image that overlapped with the result of this union operation were assigned an image intensity value corresponding to pure white matter. This prevented the surface from entering the ventricles and eliminated the possibility that the surface could jump from the insula to the putamen.
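As a rough sketch of the masking logic just described, the following assumes the warped BrainWeb labels and BRAINSCut regions are boolean numpy arrays aligned with the continuous tissue-classified image; the array names and the pure-white-matter intensity are illustrative assumptions, not BRAINS internals.

```python
import numpy as np

PURE_WM = 255  # assumed intensity meaning "100% white matter"

def prepare_hemisphere_image(tissue, ventricles, caudate, putamen, thalamus,
                             brainstem, cerebellum, hemisphere):
    """Return the image used for surface generation in one hemisphere."""
    img = tissue.copy()
    # Fill ventricles and deep grey structures with pure WM so the surface
    # cannot enter the ventricles or jump from the insula to the putamen.
    img[ventricles | caudate | putamen | thalamus] = PURE_WM
    # Remove brainstem and cerebellum entirely.
    img[brainstem | cerebellum] = 0
    # Restrict surface generation to the hemisphere being processed.
    img[~hemisphere] = 0
    return img
```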
The brainstem and cerebellar regions were then used to remove these structures from the image by replacing the voxels that overlap them with a value of zero. The left and right hemisphere definitions were then used to limit the portion of the image considered during surface generation, producing a separate surface for each hemisphere. The cortical surface for each hemisphere was generated at the boundary between gray matter and white matter on the tissue classified image (i.e., greater than or equal to 50% white matter on the continuous tissue classified image). This avoids the problem of "buried cortex" and resulted in a surface that could readily be corrected for topological defects to produce a genus zero surface. The topological defects were removed by first binary thresholding the tissue classified image and then performing topological correction on the binary image [see Li et al. (2011) for details]. After topology correction was performed, the cortical surface was generated and decimated. Incremental edge-collapse mesh decimation was applied to remove excess vertices and triangles on the surface (Gelas et al., 2008). The decimation stopped when the specified number of triangles remained on the resulting surface; in this pipeline, the number of triangles was reduced from approximately 250,000 to 70,000. The decimated surface was then smoothed using five iterations of Laplacian smoothing with a relaxation factor of 0.1. After the smooth cortical surface was generated, the geometry features of the surface were calculated and associated with each vertex. Four geometry features were used in the automated parcellation method (Figure 1). The definitions for each geometry feature are as follows (a minimal sketch of computing the first three appears after the mapping steps below):

• Inferior-Superior Distance (IS-Distance): the inferior-superior distance is measured from each vertex point on the surface to the AC-PC line. The locations of the anterior commissure (AC) and posterior commissure (PC) were automatically estimated by the AutoWorkup pipeline. This feature helps to identify the location of the temporal pole (largest negative value) and the superior aspect of the central sulcus (largest positive value).
• Anterior-Posterior Distance (AP-Distance): the anterior-posterior distance is measured from each vertex point on the surface to the PC point. This feature helps to identify the frontal pole (largest negative value) and the occipital pole (largest positive value), as well as the central sulcus (approximately zero).
• Hull-Depth: the Euclidean distance in millimeters from each vertex point to the closest point on a convex hull enclosing the cortical surface. This feature helps to identify deep grooves such as the insula, as well as major sulci.
• Mean-Curvature: the mean curvature is calculated at each vertex point. It helps to identify secondary sulci and gyri.

SPHERICAL MAPPING OF THE CORTICAL SURFACE

After the genus zero cortical surface was generated, the vertices and triangles on the surface were mapped onto a sphere using surface parameterization (Gelas and Gouaillard, 2007; Li et al., 2011). The mapping is performed as follows: (1) split the genus zero surface into two half surfaces with a shared boundary; (2) generate a smooth boundary between the half surfaces; (3) map each of the half surfaces onto a unit disk with a fixed boundary; (4) project each disk onto a hemisphere using inverse stereo projection; (5) connect the two hemispheres to form a sphere.
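As flagged above, here is a minimal sketch of how the first three per-vertex features could be computed. It assumes vertices in AC-PC aligned coordinates with the AC-PC line running along the y axis and z pointing superior, and it approximates Hull-Depth by the distance to the nearest convex-hull vertex rather than to the hull surface; mean curvature is omitted because it depends on the mesh library in use.

```python
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

def geometry_features(verts, pc):
    """verts: (N, 3) vertex coordinates; pc: posterior commissure point."""
    is_distance = verts[:, 2] - pc[2]   # signed offset from the AC-PC line (z)
    ap_distance = verts[:, 1] - pc[1]   # signed offset from the PC along y
    hull = ConvexHull(verts)
    # Approximate hull depth: distance (mm) to the nearest hull vertex.
    depth, _ = cKDTree(verts[hull.vertices]).query(verts)
    return is_distance, ap_distance, depth
```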
This mapping from the cortical surface onto the spherical domain provides a stable projection without significant dependence on the selection of the polar points. Since the parameterization generates a one-to-one mapping, the geometry features calculated above can be readily projected onto the sphere and associated with the corresponding vertices.

CORTICAL SURFACE REGISTRATION

After mapping the cortical hemisphere onto the sphere, a multi-resolution spherical diffeomorphic demons registration was used to align the subject and atlas surfaces in the spherical domain. The multi-resolution deformable registration used different geometry features at each resolution level. The features were used in ascending order of geometric detail, with IS-Distance and AP-Distance having the coarsest detail, Mean-Curvature the finest, and Hull-Depth in the middle (Figure 1). To ensure sufficient movement of the surface vertices, the sphere was resampled onto a uniform icosahedral mesh. The mesh refinement (IC4, IC5, IC6, IC7) and the surface features used at each level of the multi-resolution registration are summarized in Table 1. The subject's geometry features were smoothed and normalized on the icosahedral meshes before registration. The smoothing was applied by calculating the weighted average of the scalar values of the center vertex and its first-order neighbor vertices. Different weights were given to the center vertex and the neighbor vertices; a parameter λ was used to control the weights, and the larger the value of λ, the greater the amount of smoothing. Since the surface features at levels IC4, IC5, and IC6 are intrinsically smooth while level IC7 is noisier, a relatively small λ = 0.5 was used for the former levels while λ = 1.0 was used for IC7. Given the variation in the geometric information and dynamic range of each scalar, different normalization procedures (piecewise rescaling, histogram matching, and clamping) were employed for the various scalar measures (Table 1). After normalization, the scalars had the following ranges: (1) IS-Distance and AP-Distance were rescaled between −1 and 1; (2) the Hull-Depth histogram was matched to the target surface; and (3) Mean-Curvature was clamped between −1 and 1. A flowchart of the overall registration algorithm is shown in Figure 2. The registration starts from the lowest resolution level, IC4. To initialize each refinement level of the registration, the deformation field from the previous level is used to warp the vertices on the original icosahedral mesh for the current level to their present location. A rotational transform based on the current scalar values is calculated, followed by the diffeomorphic demons registration. The resulting rotation calculated at this level of registration is concatenated with the previous deformation, and the process is repeated until all registration levels are completed. The rotational registration was added to each level because the diffeomorphic demons registration was found to be overcoming a global rotation that resulted from changing the geometric features used to drive the registration. Based on our initial evaluation of this rotational transform, we found that the rotation was typically approximately 1-2° and resulted in fewer iterations of the diffeomorphic demons registration for convergence. This is likely due to the fact that the rigid registration does not distort the shape of the triangles.
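Two of the steps described above lend themselves to short sketches. First, the λ-weighted scalar smoothing: the exact weighting used in the pipeline is not spelled out here, so this shows one plausible scheme in which λ = 0 leaves values untouched and larger λ smooths harder, matching the behaviour described.

```python
import numpy as np

def smooth_scalars(values, neighbors, lam):
    """values: per-vertex scalars; neighbors[i]: indices adjacent to vertex i."""
    out = np.empty(len(values))
    for i, nbrs in enumerate(neighbors):
        nbr_mean = values[list(nbrs)].mean() if len(nbrs) else values[i]
        # Weighted average of the center value and its neighborhood mean.
        out[i] = (values[i] + lam * nbr_mean) / (1.0 + lam)
    return out
```

Second, the rotational pre-alignment. The sketch below searches for rotation angles minimising the mean squared feature difference between the two spheres, with nearest-vertex lookup standing in for proper spherical interpolation and a derivative-free optimiser standing in for the gradient-based one described next; it is illustrative, not the ITK versor transform itself.

```python
from scipy.optimize import minimize
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def rotational_prealign(fixed_pts, fixed_vals, moving_pts, moving_vals):
    tree = cKDTree(moving_pts)

    def cost(angles):
        rot = Rotation.from_euler("xyz", angles)
        # Sample the rotated moving sphere at each fixed vertex by pulling
        # the fixed points back through the inverse rotation.
        _, idx = tree.query(rot.inv().apply(fixed_pts))
        return ((fixed_vals - moving_vals[idx]) ** 2).mean()

    res = minimize(cost, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    return Rotation.from_euler("xyz", res.x)
```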
The rotational registration was performed by calculating a versor transform to minimize the difference between the fixed sphere and the moving sphere. The versor transform consisted only of rotations about the sphere center. The cost function was calculated using the mean squared metric between the normalized geometry features on the fixed sphere and the warped moving sphere, and a gradient descent optimizer was used to search for the optimized rotation angles. This algorithm has several user-tunable parameters, which are described in Table 2; the values of these parameters used in the current study are also included in the table. The value of the maximum step length shown in Table 2 is defined for IC4 and is divided by 2^(i−1), where i is the current refinement level.

FIGURE 2 | The flowchart of the surface registration used to align surfaces for parcellation. The inputs are spheres with geometry features and the output is the deformation field defined as vectors at vertices on the fixed sphere.

Yeo et al. (2010) extended the diffeomorphic demons registration from 3D images to 3D spheres of fixed radius, on which points are defined by 2D spherical coordinates. The spherical registration uses the scalar values associated with vertices to bring the two surfaces into correspondence. By constraining the velocity field to the tangent planes of the sphere, the exponential map of the velocity field transforms points within their local neighborhood on the sphere. In that way, the extended diffeomorphic demons algorithm maintains the topology of the sphere during registration. The constrained optimization problem (keeping the velocity vector in the tangent plane) was solved by introducing a local coordinate chart, which maps the tangent vector on the sphere S^2 to the tangent vector at the origin of R^2. A deformation field smoothing technique was also utilized by Yeo et al. to regularize the deformation field. Table 3 lists the parameters that can be adjusted for the spherical diffeomorphic demons registration. The value of σ needs to be adjusted based upon the shortest edge length, which is determined by the level of resolution, as the edge length becomes smaller when the resolution level of the icosahedral mesh increases. The registration runs either for the specified number of iterations or until the similarity metric reaches a user-defined convergence threshold.

CORTICAL PARCELLATION SCHEMES

For this study, two parcellation schemes were used to evaluate the automated cortical parcellation method. The first parcellation scheme was based on a manual parcellation of the cerebral cortex previously developed by our group (Crespo-Facorro et al., 1999, 2000b; Kim et al., 2000). This allowed us to compare the reliability against trained expert manual raters. The second parcellation scheme was based on FreeSurfer and allowed us to directly compare two automated methods for cortical parcellation. Manual parcellation was performed on each hemisphere of the cerebral cortex by a trained and reliable anatomical rater, referencing the cortical surface, volumetric images, and anatomical landmarks. Each hemisphere of the cerebral cortex was divided into forty-one sub-regions as previously reported (Crespo-Facorro et al., 1999, 2000b; Kim et al., 2000).
While manual parcellation remains the "gold standard" for evaluation of automated methods, it is an imperfect gold standard containing mislabeled regions at the individual subject level. The resulting parcellations were reviewed, and regions split by reference planes (e.g., rostral, intermediate and caudal inferior temporal gyrus) were combined, while a few small regions (e.g., Heschl's gyrus) were merged with surrounding regions. The motivation for the consolidation of regions was twofold. First, the reference planes could readily be defined after the initial parcellation to further subdivide the regions. Second, the manual parcellation of the cortex was time consuming, and when applied to a large sample a number of inconsistent boundaries were identified upon secondary review, especially for smaller regions of interest. Given that we are dealing with an imperfect gold standard, some regions were consolidated to generate a more consistent parcellation across all subjects. Aside from the consolidation of regions, no additional attempt was made to refine the anatomical definitions, even though errors in the anatomical definitions were noted in this review. The total number of regions was reduced to 24 regions per hemisphere (see Figure 3). Table 4 summarizes the combined regions (Automated Region) used in this study as compared to the previously reported guidelines (Manual Region).

FIGURE 3 | The 24 regions per hemisphere, as defined in Table 4. The surface is shown in a lateral (left) and a medial (right) view.

To compare the proposed automated surface parcellation method with another commonly used surface analysis tool, the MRI scans from the forty-nine subjects were also processed using FreeSurfer to label the cortical surface into 34 regions (33 cortical regions of interest and an unlabeled region) for each hemisphere, as described by Desikan et al. (2006). An example of the FreeSurfer cortical parcellation is shown in Figure 4, and Table 5 lists the regions of interest defined by FreeSurfer with the corresponding indices used in Figure 4. It should be noted that the parcellation scheme developed by Desikan et al. (2006) included a frontal pole region that was excluded from their reliability study; we have combined this region with the medial orbital frontal cortex region for this study.

FIGURE 4 | An example of the FreeSurfer cortical parcellation, with regions as listed in Table 5. The surface is shown with a lateral (left) and a medial (right) view.

SURFACE ATLAS GENERATION

The atlas representation was created by selecting one of the 35 training subjects at random as the template surface and then registering the remaining 34 subjects to this surface. The registration parameters used for this process are provided in Table 2 and Table 3. A separate atlas representation was generated for the right and left hemisphere. Once all of the surfaces were mapped onto the template surface, the average features were calculated on a per-vertex basis: the average geometry features at each point in the spherical space were calculated, and the resulting average geometry features are shown in Figure 5. The deformation field generated by aligning each training subject with the template was used to map the manual labels from the individual subjects into the atlas space. After all of the training subjects' labels were mapped into the atlas space, the label with the greatest probability was used to label each vertex. This step was performed separately for the manual and FreeSurfer labels, resulting in two parcellation schemes defined on the atlas sphere.
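The per-vertex label fusion just described amounts to a majority vote in atlas space. A minimal sketch, assuming an integer label array of shape (subjects × vertices) already warped to the atlas:

```python
import numpy as np

def fuse_labels(warped):
    """warped: (n_subjects, n_vertices) integer labels in atlas space."""
    fused = np.empty(warped.shape[1], dtype=warped.dtype)
    for v in range(warped.shape[1]):
        labels, counts = np.unique(warped[:, v], return_counts=True)
        fused[v] = labels[counts.argmax()]  # label with the greatest probability
    return fused
```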
In addition, the atlas sphere contained the mean geometric features used to co-register the atlas onto each of the subjects in the testing set.

SURFACE PARCELLATION

To generate the cortical parcellation based on either of the atlases described above, the anatomical T1- and T2-weighted images from the testing set described in Section 2.1 were analyzed using the BRAINS AutoWorkup procedure. The cortical surface was generated as described in Sections 2.2-2.4. The surface registration on spheres was the same as that used to generate the cortical atlas, except that the atlas sphere was mapped onto the subject sphere. After the registration, the atlas-based labels were propagated from the atlas onto the subject surface. These fourteen subjects were used to assess the reliability of the automated parcellation by comparing the results against those defined either manually or with FreeSurfer.

VALIDITY METRICS

To quantitatively assess the reliability of the automated method, two metrics were used: Dice's coefficient and vertex accuracy. The Dice index was used to evaluate the similarity between the automated parcellation described here and the previous parcellation (either manual or FreeSurfer). The Dice index, D = 2|A ∩ B| / (|A| + |B|), was calculated based on surface area. A Dice coefficient of 0.0 corresponds to no overlap between the automated parcellation and the previous parcellation, while a Dice coefficient of 1.0 corresponds to identical regions of interest. The Dice coefficient was computed for each region in the testing set, and the mean and standard deviation were reported. Vertex accuracy provides the percentage agreement between the gold-standard label and the automated label for each vertex in atlas space. A vertex accuracy of 0 corresponds to no agreement between the automated and gold-standard labels at that vertex across the testing set, while a vertex accuracy of 100 corresponds to complete agreement across the testing set between the manual and automated labeling for the vertex. The vertex accuracy was summarized both using a histogram and visually on the atlas surface.

MANUAL VS. AUTOMATED LABELS

The Dice coefficient between the automated and manual parcellation is shown in Table 6. The Dice coefficient ranged from 0.68 (fusiform gyrus) to 0.91 (insula and superior temporal gyrus). The mean Dice coefficient across all regions of the cortical surface was 0.84, with similar reliability for the right and left hemispheres. The automated parcellation starting from the original DICOM images was completed in approximately 2 h of computer time and required no manual intervention. The quality of the resulting parcellation is shown in Figure 6, which shows the manual and the automated labels on the subject from the testing sample with the median Dice coefficient. The vertex accuracy in the atlas space was mapped onto the template surface for visualization (Figure 7). Approximately three quarters of the surface vertices were labeled correctly (with accuracy >90%). As expected, the locations with low accuracy were located along the borders between parcellated regions; in general, these regions with a large percentage of errors are thin bands between regions. Broader regions of uncertainty do exist and are often located where several regions intersect, such as the cuneus, pre-cuneus, posterior cingulate, and unlabeled regions. Figure 8 shows the histogram of the vertex labeling accuracy. Only approximately 5% of the vertices were labeled with poor accuracy (<60%).
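The two validity metrics defined above reduce to a few lines of code. The sketch below assumes per-vertex label arrays and a per-vertex surface-area weight (e.g. one third of each adjacent triangle's area accumulated onto its vertices), which approximates the area-based Dice used here.

```python
import numpy as np

def dice_by_area(auto, manual, vertex_area, region):
    """Area-weighted Dice for one region: D = 2|A ∩ B| / (|A| + |B|)."""
    a, b = auto == region, manual == region
    return 2.0 * vertex_area[a & b].sum() / (vertex_area[a].sum()
                                             + vertex_area[b].sum())

def vertex_accuracy(auto_stack, manual_stack):
    """Stacks: (n_subjects, n_vertices) labels in atlas space -> % agreement."""
    return 100.0 * (auto_stack == manual_stack).mean(axis=0)
```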
COMPARISON WITH FREESURFER

The reliability of the automated parcellation pipeline proposed in this application was also compared against FreeSurfer using the Desikan atlas composed of 34 regions. The overall average Dice coefficient across 36 regions and fourteen testing subjects is 0.80, with a median of 0.86. Thirty-three out of thirty-six regions, which account for more than 99% of the whole cortical surface, were labeled with Dice coefficients greater than 0.70. The remaining three regions (entorhinal, temporalpole, parahippocampal) were small regions that account for less than 1% of the total cortical surface area. Figure 9 shows the labeled cortical surface using FreeSurfer and the proposed parcellation method.

DISCUSSION

The Dice index evaluated the overlap between two parcellations: automated vs. manual (gold standard) or automated vs. FreeSurfer. Yeo et al. (2010) previously used the Dice coefficient to compare the spherical diffeomorphic demons registration to manual parcellation of the cerebral cortex, reporting an average Dice coefficient of 0.89 across regions and subjects. Using the automated pipeline proposed here, the Dice coefficient of 0.86 obtained from the same evaluation was just slightly lower.

FIGURE 6 | Manual and automated labels on a testing subject, with regions as defined in Table 4. The subject was chosen as having the median Dice coefficient. The surface is shown in lateral (top row), ventral (central row), and medial (bottom row) views.

FIGURE 7 | The accuracy over the testing set visualized on a surface. The top row shows accuracy on the left hemisphere and the central row shows accuracy on the right hemisphere. The bottom row shows the accuracy on the left hemisphere of the same subject as the top row but with a different scaling. The top two rows share a scale of 50~90%, while the bottom row uses the same scale (25%~75%) as utilized in FreeSurfer's paper (Fischl et al., 2004).

FIGURE 8 | The distribution of the accuracy in the atlas space. Each bin is labeled by its minimal value; for example, the first bin "0" represents accuracy 0~10%, bin "10" represents accuracy 10%~20%, etc. The frequency is calculated as the number of vertices having the accuracy represented by the bin.

In addition, we have presented the Dice coefficient for each region of interest averaged across subjects. This work was evaluated using a separate sample from that previously utilized by Yeo; therefore, inter-subject variability in the cortical folding pattern could slightly influence the resulting reliability measures. For example, in a population of one hundred, 74 people have a single post-central sulcus, while 26 have a double parallel pattern at the same location (Ono et al., 1990). In addition, secondary gyri and sulci are not always captured by the geometry features used to align cortical surfaces: shallow sulci have small curvatures and hull depths that are not distinct across the cortex. The Dice coefficient comparing the regions defined with the proposed method and FreeSurfer provides a measure of similarity between the two automated methods. Here, there is no "gold standard", but the reliability of FreeSurfer has been previously published (Fischl et al., 2004; Desikan et al., 2006). The median Dice across the 36 regions for the testing set was 0.86. The vertex accuracy was previously reported by Fischl et al. (2004) in the evaluation of FreeSurfer.
To directly compare the method outlined in this paper to FreeSurfer, the same dynamic range for point accuracy as utilized by Fischl et al. was used to scale the results shown in Figure 7. Very few of the vertices (1.4%) were labeled incorrectly (with accuracy <25%). However, in Figure 3 of the paper by Fischl et al., a fairly significant number of vertices had an accuracy lower than 25%. This would suggest that the method proposed in this paper is able to label the cortical surface with the same or better accuracy as compared to FreeSurfer; however, this was not directly tested here. As expected, and evident in Figure 7, the largest errors all occur at the borders between regions. The histogram in Figure 8 shows the distribution of vertex accuracy over the testing set. The distribution of accuracy was almost identical for the two hemispheres, and approximately 75% of the surface was labeled correctly (accuracy >90%). This is almost twice the percentage reported in FreeSurfer's paper (Fischl et al., 2004). Even with high reliabilities (Crespo-Facorro et al., 1999, 2000b; Kim et al., 2000), precisely defining the borders in manual parcellation remains somewhat arbitrary (Fischl et al., 2004): the true boundaries are based on cytoarchitecture and are not visible on conventional T1- and T2-weighted images similar to those utilized for this study. The average time for FreeSurfer to finish parcellating both hemispheres was approximately 20 h, while the proposed automated parcellation needs only 2 h on average. Moreover, the proposed pipeline was developed with open-source toolkits and is flexible enough to work with any parcellation scheme; any set of parcellation labels can be readily integrated into the pipeline by simply mapping the labels onto the surface atlas. In this paper, we have developed a fully automated cortical labeling algorithm that is integrated into the BRAINS software. The resulting reliability was similar to FreeSurfer and could be achieved in just a fraction of the CPU time. In this initial evaluation of the algorithm, no manual intervention was performed. Based on five regions having significantly different surface area measurements as compared to the manual definition (including the lingual gyrus, fusiform gyrus, inferior temporal gyrus, and posterior cingulate gyrus), the addition of further anatomical information would likely significantly improve the results in these regions. For example, the isthmus-cingulate cortex (label 9 in Figure 4) can extend into the parahippocampal gyrus (label 15) in our parcellation, as shown in Figure 9. This could be improved by locating the nearby splenium of the corpus callosum and using it as the macroscopic ventral border, as suggested by Jones et al. (2006). We have created a general framework that allows any feature defined on the surface to drive the registration. In addition, standard VTK file formats are used, allowing the user to readily define these scalar measures on the surface. For example, in future work we are planning to couple in automated landmark identification (Lu, 2010) as a preprocessing step. A manual rater would then be able to manually correct the location of the landmarks before using a spherical thin-plate spline to initialize the registration of the atlas with the subject.
2015-07-17T22:55:48.000Z
2013-10-22T00:00:00.000
{ "year": 2013, "sha1": "accef0f61a2b6d4c9df43c0f72e75f258a4b8a57", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fninf.2013.00023/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "accef0f61a2b6d4c9df43c0f72e75f258a4b8a57", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Psychology", "Computer Science" ] }
233647447
pes2o/s2orc
v3-fos-license
On the Significance of the Rapid and Healthy Development of Green Marketing in Driving Ecological Economy

With the growing health and environmental awareness of modern people, countries around the world have formulated policies to protect the environment and promote sustainable development, and people's lives and consumption patterns have changed as a result. As green products enter the market on a global scale, green consumption has become a trend, and green products account for a growing market share in various industries. Out of this, a new marketing method, green marketing, was born.

Introduction

Green marketing refers to a management process that takes sustainable development as its goal and unifies economic benefits, consumer needs and environmental benefits, enabling market entities to meet their needs. How, then, is green marketing distinguished from traditional marketing? The fundamental difference is that green marketing emphasizes the environment: it is a comprehensive marketing activity oriented toward sustainable development across society and enterprises alike. As an important part of society, enterprises need to assume the social responsibility of protecting the ecological environment while promoting economic development. How green marketing promotes the development of the ecological economy is therefore a question worth considering, and this article discusses and analyzes it in detail.

The significance, connotation and characteristics of green marketing

The reasons for the emergence and rapid development of green marketing are many, and can be divided into external and internal factors. The external factors include the deterioration of the global ecological environment, the establishment of green organizations, the formulation of green regulations, the setting of green trade barriers, and scientific and technological progress. The internal factors are, first, the rise of the wave of green consumption and the increase in green market demand and, second, considerable economic benefits. According to survey data, more than 80 per cent of Britons and 67 per cent of French people consider environmental issues when choosing goods and are willing to pay more for green products and reusable packaging. In addition, although green products require more investment than ordinary products, they are encouraged by national policies and related systems, face growing market demand, command higher starting prices, and target potential consumers in the upper and middle classes; they therefore carry higher profit margins. As for the connotation of green marketing: under the premise of sustainable development, enterprises adopt a green, environmentally protective perspective across the whole process of product production, manufacturing and sale, meeting consumer demand and achieving corporate profitability while weighing multiple considerations, thereby maintaining a balance among the three parties. Green marketing is not a gimmick to stimulate customer consumption, nor a means for enterprises to seek more profit; it is a process of sustainable development that balances multiple parties, whose main goal is to meet the needs of consumers without endangering the environment and to achieve harmony, coexistence and prosperity between man and nature.
Therefore, green marketing can strongly promote the development of the ecological economy and shape its future development trend. The following sections consider it from the perspective of economics; economic analysis can help us reach a deeper understanding of the inner links between the economy, society and green marketing.

The role of marketing in the market economy system

With the continuous development of the economy and the continuous progress of technology, the marketing model is constantly changing, and marketing plays a great role in promoting the development of the ecological economy. The ultimate goal of economic development is to maintain a long-term balance among society, enterprises and individuals in many respects, while marketing needs to meet the needs of consumers on the premise of safeguarding the interests of enterprises; economic development and marketing activities therefore share the same goals. Economic growth should not be accelerated without limit; rather, an appropriate speed should be sought, one established on the basis of national macroeconomic balance and maintained over time [1]. This balance emphasizes that economic development is the sustainable development of mankind; the process of coordination is also a process of gradual realization, and it can likewise be regarded as a social process. The traditional view is that the purpose of marketing is only to adjust the economic balance between producers and consumers. In the current macro environment, however, the ties between the social environment, economic development and marketing are constantly being strengthened, and marketing also assumes important social responsibilities, such as solving unemployment, ensuring labor conditions, retraining and educating employees, involving collective members in the enterprise management process, and ensuring, protecting and improving ecological and economic indicators.

The consumption of scarce resources combines marketing and economics

The two disciplines of economics and marketing have both been studied in greater depth in the area of resource scarcity (see Figure 2-1).

Figure 2-1. Economic-Marketing System

Some resources are limited, so while developing the economy, science and technology, we cannot exploit scarce resources without control; we must consider the optimal allocation of resources in order to achieve sustainable development, strengthening environmental protection while developing the economy. In other words, the government, enterprises and individuals must consider the scarcity of resources as they consume them. Green marketing can influence, to different degrees, the whole process from resource development to consumption. Across the many economic links from initial production to final consumption, marketing plays an indispensable role at every step, from the initial product through bringing the final product to market and promoting it to customers, and this chain is also the path resources travel from development to consumption. In addition, the economic cycle can determine the flow of resources at a macro level, but marketing must make reasonable allocations and adjustments of resources targeted at the different links.
In the production phase, marketing can determine what kinds of products and services to provide in response to the needs of consumers, and can thus allocate resources in finer detail, avoiding the unreasonable development of resources. In the final, consumption phase, promotion can not only stimulate a consumption-driven economy and guide consumption and demand trends in the market, but can also shape consumers' choices and overall awareness, giving consumers a correct understanding. Because consumption is the ultimate goal, and the other links serve that end, economics and marketing must both study consumption, each from its own perspective [2]. Where the market economy is underdeveloped, enterprises face a relatively homogeneous market environment and the market activities they can launch are relatively simple; major business decisions do not rely on complex marketing or cumbersome market research, but on information collected from business activities or simple market surveys to assist analysis and reach a final decision. With the deepening of China's economic system reform and the maturing of the market economy system, many aspects of the market will become more and more extensive, the market activities and business scope of enterprises will expand day by day, enterprise behavior will become market-oriented, and the concept of green marketing will influence economic decision-making, thereby stimulating ecological economic development. So-called marketing-oriented enterprises do not simply determine their output on the basis of sales.

Conclusion

In summary, green marketing differs from ordinary marketing: it builds on ordinary marketing by combining many additional factors, such as social and environmental benefits and market economic trends, in a process directed at sustainable development. Business opportunities can be obtained even while resolving environmental crises, driving economic development, maximizing corporate profits and increasing consumer satisfaction, and harmony, coexistence and common prosperity between man and nature can be achieved at the same time. In today's society, the demand for green products is increasing, attention to them grows year by year, consumers' green consumption awareness is getting stronger, and the government has implemented a number of environmental protection measures. In this environment, an enterprise's decision to implement green marketing is an opportunity but also carries multiple risks, so any enterprise implementing green marketing must fully consider this duality and its own risk tolerance. Before introducing green marketing, therefore, an enterprise should analyze its internal and external environment and then choose the correct green marketing strategy. From this we can see that green marketing is the general trend of current market development: it benefits society, enterprises and consumers to varying degrees and is of great significance to the development of the ecological economy. At the same time, it is a new phenomenon, and many problems remain that will require the joint efforts of the whole society to overcome.
Finally, in order to promote the further development of green marketing in our country, we propose the following suggestions. First, the government can actively intervene to solve externalities; such interventions can be embodied in many aspects. Second, enterprises should recognize the importance of "green" to their business activities and consider its impact in many aspects such as production, management, and operation. These factors can improve product competitiveness and help companies occupy the commanding heights of market competition; while obtaining considerable profits, they can also promote the development and promotion of the ecological economy.
High-Resolution Three Dimensional MRI Findings After Plugging Surgery for Superior Semicircular Canal Dehiscence: A Case Report: Superior semicircular canal dehiscence (SSCD) is known as an abnormal communication of the superior semicircular canal (SCC) with the intracranial space secondary to a bony defect in the canal. Patients who are subjected to surgical repair usually have intractable symptoms, and recently, plugging of the SCC using a transmastoid approach has been widely recommended. In this report, we describe a case of incomplete plugging for SSCD in a 37-year-old woman, along with the high-resolution three-dimensional magnetic resonance imaging (3D MRI) findings using Pöschl view reconstruction. Postoperative MRI with a 3D T2-weighted sampling perfection with application optimized contrasts using different flip angle evolution (SPACE) sequence in the Pöschl plane demonstrated incomplete plugging of the SCC, with partially visible perilymphatic fluid in the posterior limb above the common crus. A 3D fluid-attenuated inversion recovery (FLAIR) sequence showed an enhancement involving the vestibule and SCC, suggesting labyrinthitis. Although there are few reports about incomplete plugging for SSCD, this case demonstrates the postoperative status and complications after plugging of SSCD using high-resolution 3D MRI sequences with Pöschl view reconstruction. Introduction Since the first description of superior semicircular canal dehiscence (SSCD) by Minor et al. in 1998, SSCD has been recognized as a clinical syndrome with symptoms of both auditory and vestibular dysfunction (1). Patients usually present with vertigo, conductive hearing loss, autophony, tinnitus, or the Tullio phenomenon, which is vertigo with or without nystagmus induced by loud noise. These symptoms are associated with a bony dehiscence in the roof of the superior semicircular canal (SCC), which can create a third mobile window into the labyrinth, consequently causing dissipation of the acoustic energy of sound waves, abnormal mobility of the endolymph, and a reduction in the bone conduction threshold (2). Surgery is recommended in patients with persistent intractable symptoms after undergoing conservative treatment. Generally, plugging of the osseous defect, resurfacing of the SCC roof, and reinforcement of the round or oval window are commonly used surgical options (3). Unfortunately, revision surgery is required when symptoms persist after surgical repair, in order to resolve incomplete closure of the dehiscence. Recently, a few studies have reported that high-resolution magnetic resonance imaging (MRI) techniques facilitate visualization of the fluid signal in the SCC, which in turn enables evaluation of the postoperative status, including canal occlusion. Here, we report the imaging findings of a case of incomplete plugging for SSCD on high-resolution three-dimensional (3D) MRI using Pöschl view reconstruction. Case Presentation A 37-year-old female patient was referred to our hospital with a five-year history of autophony, vertigo, and the Tullio phenomenon. An unenhanced high-resolution temporal bone computed tomography (CT) scan was performed on a 128-section scanner (Somatom Definition AS; Siemens Medical Solutions, Erlangen, Germany) with a 0.6-mm slice thickness, and SSCD was confirmed in her right temporal bone (Figure 1A). Subsequently, she underwent SSCD repair by canal plugging via the middle cranial fossa approach. However, autophony and the Tullio phenomenon, with hearing disturbance, persisted for 2 years after plugging surgery.
Vestibular evoked myogenic potential (VEMP) testing was conducted to assess the presence of a residual third mobile window, and the VEMP threshold was 80 dB on the right side. Postoperative temporal bone CT revealed fluid attenuation in the SCC with a thin radiopaque covering above the previous SSCD (Figure 1B). A follow-up 3T MRI examination (Magnetom Skyra 3.0T; Siemens Medical Solutions, Erlangen, Germany), including isotropic 3D high-resolution T2-weighted sampling perfection with application optimized contrasts using different flip angle evolution (SPACE) and pre- and post-contrast 3D fluid-attenuated inversion recovery (FLAIR) images with a 0.6-mm slice thickness, was performed. Two radiologists evaluated the presence of a filling defect of the SCC above the ampulla and common crus, the presence of abnormal signal intensity within the inner ear, and contrast enhancement on the 3D high-resolution reconstructed images. Moreover, we reconstructed images in the Pöschl view, which is an oblique plane parallel to the SCC. The 3D T2-weighted SPACE Pöschl plane reconstructed image demonstrated a focal filling defect of the SCC above the superior ampulla and common crus (Figure 2A), but a short segmental perilymphatic fluid signal remained in the posterior limb of the SCC above the fenestration site, suggesting incomplete plugging (Figure 2B). The 3D-FLAIR sequence showed an enhancement involving the vestibule and SCC, suggesting labyrinthitis (Figure 2C). Consequently, she underwent revision surgery via the middle cranial fossa approach to repair the residual SSCD. The materials used in the previous surgery were removed, bone wax with soft tissue was applied to the SSCD, and the repair was finished with a bone chip fixed with glue. After the revision operation, autophony and the Tullio phenomenon improved, and the remaining mild high-frequency sensorineural hearing disturbance was made tolerable with the assistance of a hearing aid. Discussion Herein, we have reported the imaging findings of incomplete plugging for SSCD using high-resolution 3D MRI sequences with Pöschl view reconstruction for evaluating the postoperative status and the presence of complications. The patient in our study showed a residual perilymphatic fluid signal above the fenestration site of the common crus on the 3D T2-weighted SPACE Pöschl view, suggesting incomplete plugging. Additionally, a faint contrast enhancement was noted at the vestibule and SCC on the contrast-enhanced FLAIR image, suggesting labyrinthitis. These MRI findings allowed us to perform revision surgery and helped determine the extent of SCC occlusion, providing an anatomical guide in preparation for the surgery. SSCD symptoms are caused by a third window via a bony defect in the canal, which allows the sonic pressure wave to be dissipated into the intracranial space (1,4). When the patient presents intractable or debilitating symptoms after conservative treatment, operative intervention can be considered. So far, canal plugging, canal roof resurfacing and capping, and round window reinforcement are the well-known surgical methods for repairing SSCD (3). Among them, recent studies have demonstrated that plugging of the SCC using a transmastoid approach is preferable to resurfacing owing to its lower revision and complication rates and a shorter hospital stay (5). Plugging of the SCC blocks the extra window with bone dust, bone wax, or muscle, and then covers the operative site with muscular fascia, powdered bone, a bone chip, or cartilage (6).
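As an illustration of the Pöschl view reconstruction described in the case presentation, the following minimal Python sketch reformats an isotropic 3D volume into an oblique plane roughly parallel to the SCC. The array size, rotation angle, axes, and slice index are hypothetical assumptions for illustration; clinical workstations perform this reformatting with dedicated tools, and this sketch does not reproduce the exact method used in this report.

```python
# A minimal sketch of an oblique multiplanar reformat, such as a Poschl-like
# plane (roughly parallel to the superior semicircular canal). All values
# below (array size, angle, axes, slice index) are illustrative assumptions.
import numpy as np
from scipy import ndimage

def oblique_reformat(volume: np.ndarray, angle_deg: float, axes=(0, 2)) -> np.ndarray:
    """Rotate the volume so the target oblique plane becomes axis-aligned."""
    # Cubic interpolation is a reasonable choice for isotropic, high-resolution data.
    return ndimage.rotate(volume, angle_deg, axes=axes, order=3, reshape=False)

# Hypothetical isotropic SPACE volume (e.g., loaded elsewhere from DICOM).
volume = np.random.rand(256, 256, 256).astype(np.float32)

rotated = oblique_reformat(volume, angle_deg=45.0)  # assumed obliquity
poschl_like_slice = rotated[:, 128, :]              # slice through the canal
print(poschl_like_slice.shape)
```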
One report suggested that more than 90% of the patients who underwent canal plugging showed postoperative improvement in vestibular and hearing symptoms (7). Recent studies have reported the utility of postoperative imaging evaluation after plugging for SSCD. Dournes et al. reported that a postoperative CT scan could detect complications such as a fistula or pneumolabyrinth (2). Moreover, CT is helpful for visualizing the condition and location of radiopaque materials, including bone cement used for reconstruction of the tegmen tympani or for filling the bony defect of the SCC (8). However, CT has limitations in confirming complete closure of the third window, owing to the radiolucent character of surgical materials such as the plugging wax or the covering muscular fascia or cartilage (2). In such cases, high-resolution MRI with heavily T2-weighted sequences can be used to assess successful closure of SSCD by detecting the presence of a bright signal intensity (SI) fluid in the plugged SCC (8,9). Also, postoperative MRI can be used to exclude other complications such as labyrinthitis, vascular injury, or encephaloceles (10,11). Furthermore, a recent report demonstrated that co-registered CT/MRI, combining 3D reconstructions of the CT and MRI, successfully identified the exact location of a residual defect (8). In our case, we obtained 3D high-resolution T2-weighted and pre- and post-contrast FLAIR sequences, and could therefore reformat 3D images in an intended direction, such as the Pöschl plane. It is known that the Pöschl plane, reformatted parallel to the SCC, or the Stenver plane, reformatted perpendicular to the SCC, both derived from high-resolution temporal bone CT, can accurately demonstrate SSCD (3,12,13). These reformatted MR images optimized for the SCC allow the exact location of the residual perilymphatic fluid signal to be highlighted through the high spatial resolution and the high fluid-bone contrast of high-resolution T2-weighted images. Moreover, 3D FLAIR with gadolinium enhancement allowed for the detection of breakdown of the blood-labyrinth barrier, on the basis of which labyrinthine inflammation or hemorrhage could be clarified (14). Consequently, we could provide clinicians with accurate information about the most likely site of a persistent defect for revision surgery. After revision surgery, symptoms directly related to SSCD, such as autophony or the Tullio phenomenon, tend to improve more than other associated symptoms, including headaches, chronic dizziness, or disequilibrium (11). In addition, a few previous reports have revealed that the hearing outcome after revision surgery might be similar to or worse than that of the previous operation (11,15). Therefore, proper patient selection before revision surgery is mandatory for satisfactory improvement of symptoms. However, this study presented only one case imaged with high-resolution 3D MRI sequences using Pöschl view reconstruction. Well-designed prospective studies, including comparison with temporal bone CT, are necessary to confirm the clinical benefit of these MRI sequences with Pöschl view reconstruction and whether they can guide an optimized outcome for patients. In conclusion, we demonstrated a case of incomplete plugging for SSCD, which was visualized using high-resolution 3D MRI sequences with Pöschl view reconstruction. If a patient has residual symptoms after surgical intervention for SSCD, performing high-resolution 3D MRI could be helpful to assess the patency of the SCC and accompanying complications.
Ethical Approval: The institutional review board of our hospital waived informed consent for use of the data due to its retrospective nature. Informed Consent: The institutional review board of our hospital waived informed consent for use of the data due to its retrospective nature.
Finite-element analysis of the influence of tibial implant fixation design of total ankle replacement on bone–implant interfacial biomechanical performance Purpose: Implant loosening in the tibia after primary total ankle replacement (TAR) is one of the common postoperative problems in TAR. Innovations in implant structure design may ideally reduce micromotion at the bone–implant interface and enhance bone-implant fixation and initial stability, eventually preventing long-term implant loosening. This study aimed to investigate (1) the biomechanical characteristics at the bone–implant interface and (2) the influence of design features, such as radius, height, and length. Methods: A total of 101 finite-element models were created based on four commercially available implants. The models predicted micromotion at the bone–implant interface, and we investigated the impact of structural parameters, such as radius, length, and height. Results: Our results suggested that stem-type implants generally required the highest volume of bone resection before implantation, while peg-type implants required the least. Compared with central fixation features (stem and keel), peripherally distributed geometries (bar and peg) were associated with lower initial micromotions. The initial stability of all types of implant design can be optimized by decreasing fixation size, such as reducing the radius of the bars and pegs and lowering the height. Conclusion: A peg-type tibial implant design may be a promising fixation method, requiring a minimum bone resection volume and yielding minimum micromotion under an extreme axial loading scenario. The present models can serve as a useful platform to build upon to help physicians or engineers make incremental improvements related to implant design. Introduction Total ankle replacement (TAR) is a promising procedure for patients with end-stage ankle arthritis and has regained popularity among foot and ankle surgeons in recent decades. The fundamental rationale of TAR is to replace the damaged portions of the tibial and talar bone with artificial implants, thereby relieving pain and restoring ankle function. Implant loosening in the tibia after primary TAR is one of the common postoperative problems in TAR, with an incidence in the range of 10.4-34% [1-3]. A previous systematic review shows that implant loosening in the tibia is associated with implant structure [4], and it may be reduced by refining the design features of existing implants. A better implant structure design may reduce micromotion at the bone-implant interface and enhance bone-implant fixation and initial stability [5], eventually preventing long-term implant loosening [6,7]. For the tibial components of total ankle implants, a variety of fixation configurations have been designed, and new implant designs are emerging. However, no total ankle implant has shown long-term results comparable to those of total knee or hip implants, and biomechanical studies of different TAR implant designs are lacking. The finite-element (FE) method is a valuable tool for investigating the mechanical characteristics at the bone-implant interface in joint implant research [8-11]. FE models of the tibial bone-implant construct (the assembly of the resected tibial bone and the tibial component of the TAR implant) were developed and validated in early studies [12-14] but have not been applied to parametric design exploration of the fixation configuration above the tibial tray of the TAR implant.
Such studies are necessary to evaluate different tibial implant features and to seek out the structure design with the best performance. In this study, we developed bone-implant FE models to investigate the effect of the fixation method on biomechanics at the bone-implant interface. The tibial components of existing TAR implants have primarily adopted four types of fixation configurations: stem-, keel-, bar-, and peg-type designs. We reconstructed these reference geometries from four commercially available ankle prostheses: the Mobility (DePuy Synthes, Raynham, MA, US), Salto Talaris (Integra Lifesciences, Plainsboro, NJ, US), STAR (Stryker, Kalamazoo, MI, US), and Infinity (Wright, Memphis, TN, US) implants. Parametric analysis was conducted to evaluate the performance of different design factors, such as diameter, width, height, and length, and to find an optimized implant design. A detailed understanding of these parameters will eventually enhance the performance of total ankle implants. Geometry reconstruction and model development The protocol of the study was approved by the ethics committee of our institution. The tibial fixation configurations above the tibial tray of four existing total ankle systems (geometry 1: Mobility [15], geometry 2: Salto Talaris [16], geometry 3: STAR [17], and geometry 4: Infinity [18]) were reverse engineered, from which four reference geometries of the tibial fixation configuration were recreated in Rhino (McNeel, Seattle, Washington, USA); these are depicted in Figure 1. Parametric variations of the reference geometries of the fixation configuration of a tibial component were generated, aligned, and assembled with a tibial tray extruded from the resection surface of the tibia with the same cross-sectional shape as the contact surface. (A thickness of 3 mm was used for all tibial trays, the same as that of the STAR implant system [17].) The parametric design table for the dimensions, such as length, width, and diameter of the fixation features, is given in Table 1. Porous coating and cement were not modeled, for simplification. Unlike hip or knee osteoarthritis, ankle osteoarthritis is mainly secondary to trauma [19-21]; thus, a younger population predominates in ankle osteoarthritis. To model the ankle anatomy of this young population, we collected images of the right ankle of a 24-year-old male volunteer in the neutral position by computed tomography scanning (Sensation 64, Siemens Healthcare, Germany; slice thickness = 0.6 mm, pixel size = 512 × 512), and the tibial bone was then segmented from this dataset using a medical image processing software (Mimics, Materialise NV, Leuven, Belgium). The recommended tibial bone resection level ranges from 5 mm to 11 mm among different implant systems [15-18,22,23]. Thus, for consistency, the tibial bone was resected at the 10-mm level superior to the tibial plafond, with protection of the medial malleolus, under the guidance of senior foot and ankle surgeons (XM and CZ). Model development After obtaining the 3-D solid models of all bones, a further procedure, including meshing and material property assignment, was performed using 3-Matic (Materialise NV). Quadratic tetrahedral elements with a maximum element length of 1.5 mm were used for meshing. Mesh size was defined by a mesh convergence study (see Supplemental Appendix 1).
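The element-based material assignment step described next maps image-derived bone density to an elastic modulus. The study cites literature relationships [11,24] without restating them in the text, so the sketch below uses placeholder coefficients in a generic density-modulus power law purely to illustrate the workflow; the HU calibration and exponents are assumptions, not the values used in the models.

```python
# A minimal sketch of element-based material assignment: per-element CT
# Hounsfield units -> apparent density -> Young's modulus via a power law.
# The HU calibration and the coefficients (a, b) are illustrative
# placeholders, not the literature relationship [11,24] used in the study.
import numpy as np

def hu_to_density(hu: np.ndarray) -> np.ndarray:
    """Assumed linear HU-to-apparent-density calibration (g/cm^3)."""
    return np.clip(0.0008 * hu + 0.9, 0.01, None)

def density_to_modulus(rho: np.ndarray, a: float = 6950.0, b: float = 1.49) -> np.ndarray:
    """Generic power law E = a * rho^b, returning MPa."""
    return a * rho ** b

# Hypothetical mean HU values sampled within four elements.
element_hu = np.array([-50.0, 300.0, 900.0, 1400.0])
E = density_to_modulus(hu_to_density(element_hu))
print(np.round(E, 1))  # one modulus per element, typically binned before export
```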
Element-based material assignments for the implant geometries and bone, based on empirical relationships from the literature [11,24], are listed in Table 2. FE models were analyzed in the ANSYS Workbench FE software (ANSYS, Inc., Pennsylvania, USA). A frictional interaction with a friction coefficient of 0.5 was used at the bone-implant interface to characterize the initial unbonded condition [13,25]. A fixed boundary condition was applied at the proximal surface of the tibial bone. A worst-case scenario force of five times body weight (3414 N) [26] was applied uniformly at the distal surface of the tibial component. This loading condition represents the maximum ankle joint force during walking, which is commonly used for ankle implant testing in the "Summary of safety and effectiveness" documents submitted to the US Food and Drug Administration [27,28]. We chose the bone-implant interface micromotion as an indicator of initial stability. Previous studies have shown that small interface micromotions (40-100 μm) may induce osteolysis and aseptic loosening of the implant [29], while motions above 150 μm promote the formation of fibrous tissue and jeopardize osseointegration at the bone-implant interface [30]. The micromotion values at the bone-implant interface were calculated by the sliding contact algorithm in ANSYS (ANSYS, Inc.) [31,32]. The structural contact variable "SlidingDistance" in ANSYS calculates the amplitude of the total accumulated slip increments when the contact status is sliding [33], that is, the relative displacement of the contact elements as they debond from the target elements at the bone-implant interface. In addition, periprosthetic von Mises stress, principal strain, contact surface, and bone resection volume were analyzed. The modeling process is shown in Figure 2. Results A total of 101 models were generated. The peak micromotion, the average and standard deviation of the bone resection volume, and the contact surface of each model are shown in Figure 3. Figure 4 shows the contact pressure and micromotion contour plots of the reference geometries. Generally, of all geometries, stem-type implants required the largest volume of bone resection before implantation, while peg-type implants required the least. Stem-, keel-, and bar-type geometries had slightly smaller contact pressures (peak contact pressures of 26.793, 24.631, and 24.281 MPa, respectively) and exhibited a high contact area (average contact areas of 1794.8, 1930.9, and 1855.5 mm², respectively). The peg-type geometry showed the opposite (peak contact pressure of 29.151 MPa and average contact area of 1333 mm²), but the difference in peak contact pressure among them was not remarkable. Compared with central fixation features (stem and keel), peripherally distributed geometries (bar and peg) were associated with lower initial micromotions, which were well below 100 μm. For each fixation configuration, contact pressure was concentrated in the outer layer of the contact surface, and high micromotion was located in the fixation structure or at the posterior-lateral corner of the surface under vertical loading. As shown in Figure 5(a), the peak micromotion increased with height for the stem-type configuration, reaching a maximum of 153.76 μm. With an increase of the radius of the bottom surface of the cylinder (R_S) alone, the peak micromotion rose slightly.
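Since the study post-processes the accumulated sliding distance against literature micromotion thresholds, a short sketch of that classification step may help. Only the 40-100 μm and 150 μm thresholds come from the cited literature [29,30]; the nodal values below are hypothetical, and the export format of any real solver result would differ.

```python
# A minimal sketch of classifying peak interfacial micromotion exported from
# a contact solver (e.g., an ANSYS "SlidingDistance" result), against the
# thresholds cited above: 40-100 um may induce osteolysis; >150 um promotes
# fibrous tissue and jeopardizes osseointegration. Nodal values are made up.
import numpy as np

def classify_peak_micromotion(sliding_um: np.ndarray) -> str:
    peak = float(np.max(sliding_um))
    if peak > 150.0:
        return f"peak {peak:.2f} um: fibrous tissue formation risk"
    if peak >= 40.0:
        return f"peak {peak:.2f} um: possible osteolysis/aseptic loosening range"
    return f"peak {peak:.2f} um: favorable for osseointegration"

sliding = np.array([5.2, 18.7, 36.4, 22.7, 74.1])  # hypothetical slips (um)
print(classify_peak_micromotion(sliding))
```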
For R_S with a fixed ratio of R_S to r_S, the peak micromotion first increased and then decreased (Figure 5(b) and (c)). The majority of stemmed implants exhibited a high micromotion, larger than 100 μm, except those with a height lower than 12 mm. The peak micromotion for varying geometry parameters of the keel-type geometries is presented in Figure 6. No obvious trend was observed for micromotion when varying the length; peak micromotion was in the range of 73.8-123.9 μm. However, as the height of the keel increased, the peak micromotion first increased and then decreased. For the radius of the keel-type geometries, the peak micromotion increased slightly with the radius, with or without a fixed ratio of R to w, varying around 100 μm. For the bar-type geometry (Figure 7), peak micromotion was below 100 μm and showed a trend to increase with increasing radius (R_B), length (L_B), height (H_B), and distance (D_B) between the two bars. This was expected, given that as the edges of the bars extend into trabecular bone with lower bone density, the bone cannot support the implant well, resulting in an increased sliding distance. As shown in Figure 8(a) and (c), an increase in the radius of each peg (R_P) and in the length of each anterior peg (L_P), with a fixed ratio of L_P to the length of each posterior peg (l_P), was associated with increased peak micromotion. For the anterior-posterior slope of the pegs (A_P), pegs with a 45° slope had the lowest peak micromotion, 22.72 μm. Discussion In this study, extensive FE simulations were employed to explore the design variation of tibial component fixation. The micromotions at the bone-implant interface of 101 different tibial components derived from four reference geometries were investigated. The reference geometries (i.e., stem, keel, bar, and peg type) are representative of current commercially available TAR implants. The influence of different implant design features on micromotion at the bone-implant interface was then analyzed under vertical compressive load. Our results suggested that the geometry of the tibial component had a significant impact on the micromotion at the bone-implant interface. We found the highest micromotion in stem-type implants, next in keel-type implants, followed by bar-type implants; the lowest micromotion was observed in peg-type geometries. The central stem fixation design in TAR (such as the Mobility (DePuy), INBONE II (Wright), and Buechel-Pappas Ankle System (Endotec, Inc, Orlando, FL, US)) was strongly influenced by experience in total knee arthroplasty [34]. A stemmed tibial implant can help align the prosthesis and aid implant stability in the presence of a bone deficit [35]. Our results suggested that, to preserve initial stability, the height of the stem should be lower than 12 mm, as higher stems exhibited large micromotion (Figure 5(a)). This agrees with a clinical study showing that the Buechel-Pappas implant, with a longer stem, was associated with higher implant loosening and revision rates than the Mobility implant [36]. However, a deep intramedullary stem with a height greater than 32 mm (such as the INBONE II implant with extra cylindrical segments) was not investigated in this study due to the limits of the current model geometry; this may make a difference to the result and should be investigated in future studies.
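To make the parametric study concrete, the sketch below enumerates a design table in the spirit of the 101 model variants, combining ranges for a few of the named parameters. Every range here is invented for illustration and does not reproduce the study's Table 1.

```python
# A minimal sketch of enumerating a parametric design table before batch FE
# runs. Parameter names mirror the text (R_S and height for the stem; A_P
# for the peg slope), but every range below is an illustrative assumption.
from itertools import product

stem_radii = [4.0, 5.0, 6.0]       # R_S in mm (assumed)
stem_heights = [8.0, 12.0, 16.0]   # stem height in mm (assumed)
peg_slopes = [30.0, 45.0, 60.0]    # A_P in degrees (assumed)

stem_models = [{"type": "stem", "R_S": r, "height": h}
               for r, h in product(stem_radii, stem_heights)]
peg_models = [{"type": "peg", "A_P": a} for a in peg_slopes]

designs = stem_models + peg_models
print(len(designs), designs[0])    # each dict parameterizes one FE model
```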
Both central keel-type fixation (such as the Salto Talaris (Integra Lifesciences), Ankle Evolutive System (Zimmer Biomet, Warsaw, IN, US), and Agility (DePuy)) and parallel cylindrical bar-type fixation (such as the STAR (Stryker), Box (MatOrtho Ltd, Leatherhead, Surrey, UK), and trabecular metal total ankle (Zimmer Biomet)) shared similar biomechanical results, and both had lower micromotion than stem implants. Reduced peak micromotion was found in geometries with a short length, lower height, and smaller radius. It is noteworthy that extra-wide stems (Figure 5(c)), extra-long keels (Figure 6(a)), and extra-high keels (Figure 6(b)) and bars (Figure 7(d)) resulted in a decrease of peak micromotion, indicating that initial stability can also be achieved by large, deep, and wide fixation geometries anchored into cortical bone or high-density trabecular bone proximal to the implant. However, such a fixation method requires a large bone resection volume. Also, from an operative perspective, the surgical preparation for the implantation of stem-, keel-, and bar-type implants requires creating an anterior cortical tibial window for insertion [15-17], which may jeopardize the integrity of the tibial cortical bone and thus weaken support of the tibial implant [37]. An alternative option [38] is to place multiple cylindrical segments by individually reaming through the talus from the plantar surface of the foot, which may endanger the blood supply of the talus and cause talar necrosis [39]. Thus, a small, low, and thin stem, keel, or bar geometry is recommended to reduce initial interfacial micromotion. Compared with the above three types of tibial implants, our data on the peg design (such as the Infinity (Wright) and Cadence (Integra LifeSciences)) showed promising results. It required the least bone resection volume and yielded the lowest micromotion, thus promoting initial implant stability and reducing the risk of loosening. For the peg-type design, a shorter length, a 45° anterior-posterior slope, and a smaller radius can reduce the micromotion. Also, the results showed no impact of peg number and arrangement on the micromotion (as shown in Figure 8(d) and (e)). However, from the operative perspective, anteriorly positioned pegs are easier to implant than posteriorly positioned ones. Two design features of the tibial component have been recommended by foot and ankle experts [34]: decreased distal tibial bone resection and minimized disruption of the anterior tibial cortex. A higher distal tibial bone resection may waste a large volume of high-density bone, resulting in poor initial stability and complicating future revision surgery [40]. Our data provided similar recommendations: for all types of fixation features, low and small tibial implant fixation geometries are recommended in surgery to achieve better initial bone-implant fixation. The peg-type tibial implant also requires no disruption of the anterior tibial cortex and seems to be the most suitable implant design. The current study has several limitations. Firstly, the current models were not validated by experiments. However, bone-implant models have been widely used for implant design, and we verified our models by a mesh convergence test. Comparative results of different design features can provide valuable information but should not be directly applied to clinical practice. Moreover, only one type of critical loading scenario was considered in this study.
It is noted that other types of loading conditions exist besides the axial compressive load during gait [26], and multidirectional loading is likely to have larger impacts in the ankle [41]. However, a vertical load is the dominant joint force that the ankle experiences during normal walking, and a magnitude of five times body weight is nearly the maximum value it can reach. Therefore, to evaluate an ankle implant under extreme conditions, a vertical overload should be the first condition to be tested. Nevertheless, future studies need to account for real-life loading conditions over an extended period. For simplification and consistency, only the geometry above the tibial tray was evaluated in the present study. The current tibial trays were generated by extrusion from the resection surface of the tibial bone; therefore, full coverage of the tibial bone was achieved, and no underhang of the tibial component needed to be considered. Future research should be conducted to study the influence of different tibial tray designs, positions, and alignments. Bone properties vary at different resection levels of the distal tibia [42,43], but only one resection level was analyzed in this study. In addition, the bone material properties of only one volunteer were used. Therefore, the impact of the different recommended resection levels and of between-subject variation in mechanical properties should be considered in future studies. Lastly, only the interfacial micromotion was compared and discussed in this study. Although related strain or stress parameters and bone remodeling phenomena are all valuable for immediate postoperative and long-term clinical outcomes, they were beyond the scope of this work and should be considered in future studies. The findings of this study may help guide the choice of ankle prostheses and inform future implant design, thus aiding surgeons in achieving better postimplantation outcomes through an enhanced understanding of the role that geometric features of the implants play in preventing loosening. Developing these models is the initial step in building a platform to examine the impact of implant design (structure and material) by changing geometric or material parameters under varying operating conditions, instead of designing and performing complex experimental studies for implant design. The platform could be expanded to account for more design features or for long-term effects during osseointegration by adding more anatomic structures. Future attempts should be directed at developing methods to enhance the accuracy and applicability, for which the current model can serve as a template. Conclusions We have developed bone-implant FE models with a density-dependent material property to examine the impact of design parameters of the tibial component of a TAR implant on bone-implant micromotion. Our results suggested that the initial stability of all types of implant design can be optimized by decreasing fixation size, such as reducing the radius of the bars and pegs and lowering the height. A peg-type tibial implant design may be a promising fixation method, with minimum micromotion, a minimal bone resection volume, and no disruption of the anterior cortical bone under an extreme axial loading scenario. Such models can serve as a platform to build upon to help physicians or engineers make incremental improvements related to implant design.
Future integrated computational and experimental studies could guide the identification of key implant design specifications to maximize clinical performance. Author contribution The first three authors (JY, CZ, and W-MC) contributed equally to this work. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received financial support for the research, authorship, and/or publication of this article: This project was supported by the Scientific and Technological Innovation of Shanghai Science and Technology Committee (Grant No. 18441902200), Natural Science Foundation of Shanghai (Grant No. 19ZR1407400), and National Natural Science Foundation of China (Grant No. 81702109). Supplemental material Supplemental material for this article is available online.
Building Competence in Science and Engineering: Next Generation Science Standards science and engineering practices (NGSS S&E) are ways of eliciting reasoning and applying foundational ideas in science. Studies have revealed one major impediment to implementing the NGSS, namely, insufficient teacher preparation, which is a concern at all teaching levels. The present study examined a program grounded in research on how students learn science and engineering pedagogical content knowledge and strategies for incorporating NGSS S&E practices into instruction. The program provided guided teaching practice, content learning experiences in the physical sciences, engineering design tasks, and extended projects. Research questions included: To what extent did the Program increase teachers' competence and confidence in science content, with emphasis on science and engineering practices? To what extent did the program increase teachers' use of reformed teaching practices? This mixed-methods, quasi-experimental design examined teacher outcomes in the program for 24 months. The professional development (PD) findings revealed significant increases in teachers' competence and confidence in integrating science and engineering practices in the classroom. These findings and their specificity contribute to current knowledge and can be utilized by districts in selecting PD to support teachers in preparing to implement the NGSS successfully. Building Competence in Science and Engineering Content: Research to Inform Practice The Next Generation Science Standards (NGSS) document attempts to provide educators and students nationwide with an internationally benchmarked education by articulating conceptual science performance expectations. Little exists in the way of implementation strategies, and national studies have already identified impediments to NGSS implementation, such as the lack of resources for effective science education, limited instructional time devoted to science, and insufficient teacher training [1]. In a recent national study on teacher readiness, most middle- and high-school teachers indicated they have no engineering training and are ill-prepared to effectively implement NGSS; engineering emerged as the content area of greatest need and created the greatest degree of anxiety. Teacher preparedness is a concern at all levels, as the mandates of NGSS require conceptual and exploratory learning, which are not always employed in all science classrooms [2]. There is an urgency to identify the type of professional development (PD) that will prepare teachers to meet the challenges of the NGSS. It requires an investment of resources to develop the appropriate tools to support teachers [3]. We must align the resources spent on PD with the demands teachers will face with NGSS, and also conduct the research required to learn from it to inform practice. The present study examined a program that aimed to prepare middle-school teachers for NGSS by building competence and confidence in using science and engineering practices in the classroom. We investigated PD that would potentially meet the demands teachers will face during NGSS implementation. We propose to use the information learned from this study's results to provide recommendations for teacher PD as lead states begin the NGSS-adoption process.
Literature Review Some teachers embrace an educational innovation with enthusiasm and incorporate it into their classroom teaching. Yet others discard it and continue with their familiar teaching practices after only a few attempts [4], as not all teachers are amenable to innovation [5]. For instructors to persist in their efforts to implement new strategies, they need to have the expectation that they will succeed [5]. Individuals' beliefs about their competence and the outcomes expected of their actions serve to enhance interest in a specific area, and strong self-efficacy helps individuals overcome setbacks and persist in the face of challenge [6]. Teachers' low self-confidence and lack of competence in content become significant impediments to an innovation such as the NGSS, as teachers will have to contend with not having the necessary equipment, materials, or training for successful implementation [1]. The Development of NGSS NGSS is a set of science standards. Science and its fellow technology, engineering, and mathematics (STEM) disciplines are, and have been for at least ten years, the focus of concern and reform efforts from educators nationwide [7,8]. The reasons for this are diverse. The concern that the United States is lagging behind other nations in STEM areas, and that this gap could potentially emerge as an economic disaster in the future [8], has resulted in an influx of federal funding for STEM education and research, and the encouragement of institutions to pursue this research [7]. In the summer of 2011, a writing team of 41 educators worked on the first draft of the NGSS. On 9 April 2013, the finalized NGSS document was released. NGSS describe what all students should know and be able to do by the time they graduate from high school. NGSS are based on learning progressions of core ideas in the discipline, concepts that cut across disciplines, and practices that will allow students to use their disciplinary knowledge in thoughtful ways. In a difference from the earlier 1996 standards from the National Research Council, the NGSS Science and Engineering Practices are characterized as ways of identifying the reasoning behind, discourse about, and application of the core ideas in science [9]. The eight specific science and engineering practices outlined in the NGSS are: asking questions and defining problems, developing and using models, planning and carrying out investigations, analyzing and interpreting data, using mathematics and computational thinking, constructing explanations and designing solutions, engaging in argument from evidence, and obtaining, evaluating, and communicating information (2013). The process of creating the NGSS was driven by 26 lead states that contributed resources and provided support. They are expected to be trailblazers in the adoption of the NGSS. However, the choice to implement curricula, and the form these curricula will take, is ultimately at the discretion of individual states. NGSS, Models, and Modeling Instruction Recent research contributes to current understanding of how students develop and use models in middle- and high-school classrooms. Current modeling research has focused on argumentation in science education [10,11], model-based inquiry [12], software scaffolds supporting modeling practices [13], constructing and revising models [14], and integrating conscious and intuitive knowledge [15].
Educators often discuss the important role models play in science education [16]. Scientific disciplines are guided in their inquiries by models that scientists use to create explanations for data and to further investigate nature. The design, use, assessment, and revision of models and related explanations play a primary role in scientific inquiry and should be a prominent feature of students' science education [12]. Researchers investigating modeling nationally and internationally have significantly influenced the conceptualization of modeling articulated in NGSS [13,14]. They have advocated for the role of models and modeling in school science and argue that modeling is a core practice in science and a central part of scientific literacy [13]. Scientific modeling includes the elements of the practice (constructing, using, evaluating, and revising scientific models) and the knowledge that guides and motivates the practice, such as understanding the purpose of models [13]. NGSS [17] employs the core science and engineering practices (identified above), which are at the foundation of modeling instruction, an evidence-based pedagogy for science education that was developed in the 1980s. Modeling instruction integrates a student-centered teaching method with a model-centered curriculum [18,19]. It applies structured-inquiry techniques to teaching fundamental skills in mathematical modeling, proportional reasoning, and data analysis, which contribute to critical thinking, including the ability to formulate hypotheses and evaluate them with rational argument and evidence. Modeling pedagogy has three elements: the models, the modeling cycle, and classroom discourse management [18,19]. An understanding of these elements is the pedagogical content knowledge [20] needed for successful classroom adoption and implementation. A model is a representation of structure: a conceptual representation of a real thing [18,19]. The models around which learning is centered in modeling instruction are basic relationships among quantities that form the content core of a discipline, and these models are developed by students into tools for making sense of physical reality and for making predictions. Modeling has also been defined as an activity. Its foundation, the modeling cycle [18,19], is a three-phase process: model construction, which takes place in the context of a paradigm lab that discovers a link between two physical quantities at the beginning of each instructional unit; model validation, in which students refine the basic model they have constructed by testing it under disparate initial conditions; and model deployment, in which students use the model to solve problems from diverse contexts. Teachers learn modeling instruction by participating in a modeling workshop, an intensive, three-week, 90-h immersion experience. Teachers are engaged in laboratory investigations and activities, creating experiments, collecting, analyzing, and interpreting data, and engaging in classroom discourse to achieve collective sense-making. It is by active participation in the discourse that characterizes modeling learning that teachers can become effective managers of modeling discourse in their classrooms. Modeling instruction began in college and high school physics and has expanded across the science disciplines into chemistry, biology, physical science, and middle school science.
Study Overview This study examines in-service teachers' outcomes in a program that aimed to increase teachers' content knowledge (cognitive skill) and confidence (self-efficacy) in the use of science and engineering practices (SEP) in the classroom. Three high-needs districts and a state university formed a partnership and proposed to enhance the quality of science instruction for middle-school teachers. The partnership was cognizant of the fact that science instructors lacked adequate preparation in areas associated with science and engineering practices and proposed, through the use of modeling instruction, to increase teacher science-content knowledge of energy and matter. The team planned a formal needs assessment in 2012 to collect data on concrete deficiencies. Needs Assessment Identifies Professional Development Focus A survey was administered to partnership teachers to assess their knowledge related to the grant's content focus. An online survey was administered in summer 2012 to 250 science and math instructors in three districts to assess their science content knowledge with emphasis on SEP. The survey yielded a 68% response rate (n = 171). The assessment revealed that teachers in grades six through eight had the following limitations: lack of content knowledge and confidence in their ability to teach science content, with emphasis on scientific and engineering practices; minimal knowledge on integrating science and engineering content; and limited knowledge of how to design and deliver science and engineering activities. Teachers had no college coursework in the structure of matter, matter and energy flow in organisms, conservation and transfer of energy, the relationships of energy and forces, and energy in everyday life. This corresponded to no physics, chemistry, or biology courses that could be considered fundamental to NGSS standards and practices. The data were combined with research that highlights the correlation between teachers' science-content knowledge and student achievement [20] and between SEP and student achievement [21,22], particularly in high-poverty areas, which were project cornerstones. Teacher deficiencies were a concern in these districts, and administrators wanted to identify PD to best train teachers to meet NGSS challenges. Based on the needs assessment, partners identified three goals targeting middle-school teachers: increase teachers' physical science content knowledge in energy and matter; increase teachers' confidence in incorporating NGSS SEP into their instruction; and increase teachers' use of reformed teaching practices.
Professional Development Model The PD model was designed to move the partnership toward accomplishing these goals; it started 27 October 2012 and was completed by the summer of 2014. PD included 236 h, with three six-hour Saturday PD sessions during each academic year and two three-week summer institutes in 2013 and 2014. Teachers participated in 227 h of PD overall. The Partnership for Success (PAS) program engaged 27 teachers in grades six through eight. The practices of engineering design were interrelated with scientific practices to create the context of the learning environment. PAS provided teachers with a PD program grounded in research on how students learn science and engineering pedagogical content knowledge and strategies for incorporating NGSS S&E practices into instruction. To do so, PD provided guided teaching practice, content learning experiences in the physical sciences, and engineering design tasks and extended projects. Modeling instruction was also integrated into the PD model. PAS activities were selected to provide teachers with content preparation in a core scientific concept, energy, and to provide explicit practice and experience in using SEP. Teachers worked through activities and sense-making, confronting misconceptions, and learning to argue from evidence just as their students will be expected to do. They engaged in classroom discourse that reflected the type of discourse they would be expected to mediate in their own classrooms. Teachers went from "student mode," in whole-group discussions, to "teacher mode" deliberations, in which they explored the instructional implications and identified the theoretical underpinnings and disciplinary links to what they were learning. As PAS participants were middle-school teachers, faculty often helped them appreciate both the horizontal continuum of the energy concept across disciplines and the vertical trajectory of conceptual development across grade levels, which resulted in a coherent model of energy storage and transfer. Both summers focused on crosscutting models of energy and the structure of matter. The first summer institute delved into macroscopic models of energy and the structure of matter and focused on developing SEP in the context of motion, forces, and mechanical and gravitational energy. Time was given to the development of operational definitions, the use of scientific language, and the management of classroom discourse. Saturday sessions after this first institute gave teachers an opportunity to share their successes and challenges as they gained confidence in the use of new teaching strategies. PAS engaged them in additional engineering design activities to help them transition from macroscopic to microscopic models of energy and the structure of matter, to help frame the content for the second summer. The second summer institute focused on chemistry, ecology, and the earth sciences. More time was spent understanding the structure of matter and the role of energy and systems in these content areas.
The mechanism used to deliver instruction was the Modeling Method of Instruction. The design of instruction followed the modeling cycle; thus, participants engaged in whole-group pre-laboratory discussions and small-group laboratory activities, followed by analysis and synthesis of results. These results were shared via whiteboard meetings: whole-group discussion and sense-making around the relationships explored in the laboratory activities. This model construction activity was then followed by a series of model elaboration and deployment activities. Activities done in small groups were followed by whole-group discussion to allow participants to place what they learned in the context of their own teaching assignment. Iterative engineering design activities were used as a capstone activity in the summer institute. Teachers were involved in a guided curriculum design activity in which they worked together in groups. They used the modeling cycle to design or redesign a curriculum unit of their choosing and incorporated activities that utilized SEP. Units designed by participants were also made available electronically to other participants so they could integrate them in their own classrooms, with the goal of giving feedback to the unit designer. Partner Roles District personnel were responsible for basic communication, facilities, and district credit, as well as overall project management. University personnel were responsible for initial planning and delivery of professional development. The researcher was responsible for collection and analysis of aggregated data, quarterly reports (formative assessment), and dissemination of findings in a summative report. While each partner in the project had these specific roles, PD curriculum, leadership in PD sessions, data collection and analysis, and production of project deliverables were accomplished in full cooperation and participation. Method 2.1. Research Design This study employed a mixed-methods approach; thus, the investigator collected, analyzed, and drew inferences from both quantitative and qualitative data in a single study. The investigator held the assumption that the combination of quantitative and qualitative approaches provides greater understanding of the research problem than either approach alone [23]. The researcher used a quasi-experimental, matched-comparison group design, using multiple methods and statistical tests to measure progress toward meeting the established outcomes. This model provides a good alternative, as a randomized controlled trial was not feasible. The research design included both quantitative and qualitative methods and employed analysis of variance (ANOVA) to determine differences between groups and qualitative analysis to code, categorize, and analyze teacher comments. The assumptions of homogeneity of variance, normality, and independence were tested and met. Research questions that guided the study included: 1. To what extent did the program increase middle school teachers' science content knowledge, with emphasis on science and engineering practices? 2. To what extent did the program increase middle school teachers' self-confidence in teaching science content, with emphasis on science and engineering practices? 3. To what extent did the program increase teachers' use of reformed teaching practices?
Participants and Recruitment District and school administrators developed strategies for the PAS program for recruitment and retention of teachers to maintain the sample sizes of both groups (experimental and comparison). In year one, the project team recruited over 30 participating teachers from the three districts in the partnership and a comparison group equivalent on selected demographic characteristics (i.e., time teaching, grade band, and area of specialization). Matched comparison was based on number of years teaching (average of 13 years), grade level taught (i.e., middle school), and area of specialization (i.e., science). A t test analysis revealed no statistical difference between the groups based on the number of years teaching and grade level. Quantitative Data Data Sources. In order to answer the first research question (To what extent did the Program increase middle school teachers' science content knowledge, with emphasis on science and engineering practices?), the investigator employed the Diagnostic Test for Mathematics and Science (DTAMS) as a pre-post measure, and it was administered to both groups. In addition, the Basic Energy Concept Inventory (BECI) served as a pre-post measure for the PAS group, as it was closely aligned to the intervention. The BECI, a 25-selected-response-item instrument, is used to capture commonly held misconceptions regarding energy. To answer the second question (To what extent did the Program increase middle-school teachers' confidence in science content, with emphasis on science and engineering practices?), a self-efficacy instrument, the Science Teaching Efficacy Beliefs Instrument or STEBI [24], was administered to both groups as a pre-post measure. The STEBI, with 24 items, assessed teachers' confidence in science and engineering practices using a 5-point Likert scale (5 = strongly agree to 1 = strongly disagree). To answer the third research question (To what extent did the Program increase teachers' use of reformed teaching practices?), an observational tool, the Reformed Teacher Observation Protocol (RTOP, [25]), was used as a pre-post measure for both groups. The RTOP, a 25-item observational instrument, was designed to measure reformed teaching as defined by research in mathematics and science and by national standards. All pre-assessments were administered to both groups before the start of the intervention, and the post-tests were administered to both groups after the intervention ended for the experimental group. As another data source, an online survey was administered after each Saturday PD session and during the summer institutes. Teachers rated PAS in terms of its effectiveness in providing guidance and concrete examples to enable progress in the eight NGSS SEP. The survey used a five-point rating scale of effectiveness (5 = highly effective, 4 = effective, 3 = average, 2 = below average, and 1 = not effective). Data Analysis. Quantitative data were analyzed in two ways. First, analysis of variance (ANOVA) determined differences between groups. Second, a paired samples t test was used to examine differences within groups to determine program efficacy. All analyses include 27 PAS and 29 comparison teachers. The assumptions of homogeneity of variance, normality, and independence were tested and met.
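The analysis plan pairs a between-group ANOVA with within-group paired t tests. As a minimal illustration, the following Python sketch runs both tests on randomly generated score arrays shaped like the study's groups (27 PAS and 29 comparison teachers); the generated numbers are placeholders, not study data.

```python
# A minimal sketch of the two analyses named above: one-way ANOVA for
# between-group differences and a paired t test for within-group pre/post
# gains. The simulated scores are placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pas_post = rng.normal(18.2, 5.4, 27)    # hypothetical post-test, PAS group
comp_post = rng.normal(15.3, 4.8, 29)   # hypothetical post-test, comparison

# Between-group difference (a two-group one-way ANOVA).
f_stat, p_between = stats.f_oneway(pas_post, comp_post)

# Within-group difference: paired pre/post scores for the same teachers.
pas_pre = pas_post - rng.normal(1.6, 2.0, 27)
t_stat, p_within = stats.ttest_rel(pas_pre, pas_post)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_between:.3f}")
print(f"Paired t: t = {t_stat:.2f}, p = {p_within:.3f}")
```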
Qualitative Data Teachers were given the opportunity to comment (on surveys) on the PAS program after each Saturday PD session and during the summer institutes. They were asked to rate the program on specific criteria, such as providing training on NGSS SEP, but could also express views about PAS impact; as a result, themes associated with NGSS competence, confidence, and implementation in the classroom emerged. The researcher used the constant comparative method [26] as a conceptualizing method on the first level of abstraction. The initial phase involved conceptualizing all the incidents in the data. The researcher compared data and continually modified and sharpened the growing theory at the same time. Notes were compared to find differences and consistencies between codes, which helped reveal categories. Data were analyzed using a three-step process: data reduction, data display, and conclusion drawing and verification [27,28]. Data reduction helped to sort, focus, and condense excerpts, which helped organize the data to develop conclusions. Data display enabled review of the reduced data so that conclusions could be drawn. Teachers' excerpts formed the basis for identifying categories, themes, and assertions.
Basic Energy Concept Inventory (BECI) Results
To answer the first research question, the Basic Energy Concept Inventory (BECI), an instrument to capture commonly held misconceptions regarding energy, was administered to the PAS group. The PAS program focused on energy content, and the BECI instrument was considered well aligned to the PAS intervention. Initially, teachers did not understand potential energy or the structure of matter well enough to account for both warmth and coldness in terms of thermal energy, and did not account for energy that had dissipated into the environment. For the PAS administration of the pre- and post-BECI, post-scores were higher (M = 15.7, SD = 3.03) than on the pre-BECI (M = 8.7, SD = 3.79) and the difference was significant (p < 0.001). Teachers mastered 63% of the energy content (a post-BECI mean of 15.7 on the 25-item instrument; see Table 3). The overarching themes of the PAS workshops were energy and the structure of matter. Special care was taken, during the course of these workshops, to develop representational tools and practices that allowed the teachers to develop robust microscopic and macroscopic models of both of these core concepts, and teachers were encouraged to employ these tools across disciplines and grade levels. BECI increases revealed that PAS teachers left the program with a more robust microscopic model of energy transfer and storage.

Self-Efficacy Instrument Results
To answer the second question (To what extent did the Program increase middle-school teachers' confidence in science content, with emphasis on science and engineering practices?) a science teaching self-efficacy instrument was administered. Teachers had increased confidence in science and engineering practices and in the ability to design and deliver engineering activities. In addition, teachers were less anxious about their engineering skills.

Within-Group Differences. Post-survey means revealed PAS teachers were more confident in their ability to design and deliver science and engineering activities and integrate SEP into their classrooms. Data revealed teachers were less anxious about their engineering skills and more confident in the following areas: having the ability to answer students' engineering questions (survey item 18); using SEP to enable integration into classroom instruction (survey item 19); and designing and delivering engineering activities (item 23). PAS teachers were more confident in designing and delivering science content with scientific and engineering practices (SEP) (p < 0.001), as seen in Appendix A. The increase in self-confidence for PAS teachers was also evident in the everyday actions of the teachers in the second-year Saturday sessions and during the final summer institute, particularly in the depth and types of questions they asked. There was an increase in teacher-directed inquiry and analysis at the end of an activity. In addition, teachers were able to think deeply about what they were doing (during and after experiments) and about student thinking and learning in the context of engineering content and practices. In many instances, teachers were able to make suggestions on how to improve the activities.
Between-Group Differences. There were differences between groups on the post self-efficacy survey relating to items on SEP and engineering skills, favoring PAS. Post-survey means showed PAS teachers were more confident (p < 0.01) in their ability to answer students' engineering questions (M = 3.66, SD = 0.62) than the comparison group (M = 3.03, SD = 1.14). PAS teachers were more confident (p < 0.001) in using SEP to enable integration into classroom instruction (M = 4.03, SD = 0.75) than the comparison group (M = 3.24, SD = 1.02), and were more confident (p < 0.001) in designing and delivering science content with SEP (M = 4.11, SD = 0.75) than the comparison group (M = 3.00, SD = 1.30), as seen in Table 4.

PAS PD in NGSS Scientific and Engineering Practices: Survey Results. To determine the extent to which PAS provided strategies and guidance to increase competence and confidence in NGSS SEP, an online survey was administered. Overall, the majority of teachers felt the program was highly effective in providing guidance and examples in the following SEP: developing and using models (81% highly effective), engaging in argument from evidence (70%), asking questions and defining problems (67%), analyzing and interpreting data (67%), obtaining, evaluating, and communicating information (67%), and planning and carrying out investigations (58%), as seen in Table 5 below.

Reformed Teacher Observation Protocol (RTOP)
To answer the third research question (To what extent did the Program increase participating teachers' use of reformed teaching practices?) the RTOP was employed as the classroom observational tool. The post-RTOP data and the significant pre- to post-RTOP differences for the PAS group revealed integration of practices such as developing and using models, engaging in argument from evidence, asking questions and defining problems, and analyzing data. The PAS group outscored the comparison group on the post-RTOP, revealing growth over time and highlighting PAS teachers' increased integration of NGSS S&E practices.

Within-Group Differences. The PAS group post-RTOP score (M = 73.44, SD = 14.32) revealed a 22-point gain from the pre-RTOP score (M = 51.41, SD = 22.43), and the gain was significant (p < 0.001), as seen in Table 6 below. There was no significant difference for the comparison group between the pre- and post-RTOP scores (p = 0.47). No significant difference was evident between the PAS and comparison groups on the pre-RTOP scores (p = 0.30). Between-Group Differences. There was a significant difference between groups on the post-RTOP scores (p = 0.0003) favoring PAS, as seen in Table 3. The mean for the PAS group was 73.44 (SD = 14.32) and the mean for the comparison group was 56.10 (SD = 19.04), a 'large' Cohen's d effect size of 0.80, a difference approaching one standard deviation between groups.
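For reference, the standardized effect size reported above is conventionally computed as the difference between group means divided by a pooled standard deviation; the study does not state which pooling convention it used, so the following is just the standard textbook form:

\[
d = \frac{M_{1} - M_{2}}{s_{\text{pooled}}}, \qquad
s_{\text{pooled}} = \sqrt{\frac{(n_{1}-1)\,s_{1}^{2} + (n_{2}-1)\,s_{2}^{2}}{n_{1} + n_{2} - 2}} .
\]

By Cohen's conventional benchmarks, d of about 0.2 is small, 0.5 medium, and 0.8 large; substituting the reported post-RTOP means and standard deviations with n1 = 27 and n2 = 29 gives a value in the large range.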
RTOP data revealed PAS teachers' use of reformed teaching practices. Specifically, growth was observed in teachers' integration of NGSS SEP as students were seen developing and using models, engaging in argument from evidence, asking questions and defining problems, and analyzing and interpreting data. PAS classrooms, during post-RTOP observations, frequently involved students developing explanations and employing critique and evaluation (promoting argumentation from evidence). Increases also emerged as students incorporated elements of abstraction with symbolic representations and theory building (RTOP item 9). Students also used a variety of means to develop and use models (e.g., drawings, graphs, and concrete materials to represent phenomena) (RTOP item 11); asked questions and defined problems, and made more predictions, estimations, and hypotheses and devised means for testing them (RTOP item 12); communicated information and ideas to others using a variety of means (RTOP item 16); and were analytical and reflective in their learning (RTOP item 14).

NGSS Implementation in the Classroom
Comments indicated that when teachers started the program, they had "increased anxiety over NGSS requiring a new set of skills not found in their education," yet PAS "provided a new skill set" (C. Mason, survey response, June 2014). Initial teacher comments (commenters' names have been replaced by pseudonyms in this section) revealed anxiety associated with insufficient teacher training for successful NGSS implementation, impediments similar to those identified in a prior study [1]. In addition to low self-confidence, the comments focused on lack of engineering content knowledge and the inability to design and deliver integrated science and engineering activities. Qualitative comments captured at the end of the program indicated teachers were more frequently embedding NGSS SEP in the classroom, as reflected in the following excerpt: "Before the PAS program, I did not integrate any engineering practices. Now, I incorporate engineering practices almost on a weekly basis. This has been an easy transition especially with the models, resources, and strategies provided by the program" (A. Sabato, survey response, June 2014). Another echoed this sentiment: "Now I have many strategies to integrate science and engineering practices and I use models, graphs, and other elements of abstraction in my teaching" (M. Rodriguez, survey response, June 2014). Another indicated, "I feel much more confident in using science and engineering practices and asking students to use models, analyze data, and be reflective in their learning" (B. Masters, survey response, June 2014). Teachers more often used "scientific writing, integrating claim, evidence, and reasoning into classroom projects" (A. Prosser, survey response, 27 June 2014) and required students to "ask questions, analyze and interpret data, and communicate information during class time" (A. Monroe, survey response, 27 June 2014). The majority of teachers use modeling instruction and ask students to "define problems, build and use models, collect and analyze data, and communicate information to classmates."
Teachers provided details on how they were integrating NGSS SEP into the classroom. One noted, "I use models and modeling instruction in my classroom and allow students to build their own conceptual models. I have changed my expectations for my students' lab reports and we focus on claims, evidence and reasoning" (T. Walker, survey response, 28 June 2014). Others have re-engineered the way they structure classes as a strategic process: "Students have been involved in more experiments, and will be engaged in more open experimentation that will foster greater analysis and communication individually and between students" (A. Verde, survey response, June 2014). "PAS content and pedagogy have increased student critical thinking, confidence, and engagement" (C. Lorenzo, interview response, June 2015). Teachers are "promoting more student critical thinking and allow[ing] students to guide learning." They have also devoted more time to planning the inquiry line of questioning.

Discussion
National research suggests that there are several impediments to overcome for the adoption of NGSS, and one barrier is inadequate teacher preparation [1]. Similarly, the PAS partnership recognized that local teachers lacked adequate training in areas associated with science and engineering practices and proposed, through modeling-based learning experiences, to increase content knowledge in energy and matter. This study examined PAS efficacy and found that the PD potentially meets the demands middle-school teachers will face during NGSS implementation. PAS PD, which provided integrated science and engineering content, guided teaching practice, content learning experiences in the physical sciences, and engineering design tasks, increased teachers' competence and confidence in using NGSS SEP. These results are likely to help inform other educators and researchers as states and districts begin the NGSS adoption process.

Program Builds Competence and Confidence in the Use of NGSS
Although current science teaching practices often emphasize the memorization of facts, the PAS team adopted the NGSS focus, which emphasizes the active construction of conceptual knowledge by "doing science" through science and engineering practices. Regarding content, the BECI attempted to capture gains in energy and the structure of matter, which were the overarching themes in PAS. PAS required the development and use of representational tools and practices that allowed teachers to develop microscopic and macroscopic models of these core concepts. Teachers were involved in deep, rich discussions to highlight naïve beliefs and replace them with coherent conceptual models. Having used models as thinking tools in diverse problem contexts, teachers used them to frame their thinking in responding to BECI questions. Moreover, BECI increases revealed that PAS teachers left the program with a more robust microscopic model of energy transfer and storage.

PAS data revealed increased self-confidence: teachers were less anxious about their engineering skills and more confident in their ability to answer students' engineering questions, in using scientific and engineering practices to enable integration into classroom instruction, and in designing and delivering engineering activities. They also indicated they were better able to design, deliver, and integrate science content with SEP.
Program Teachers Use NGSS SEP in Classroom Instruction
During observations, PAS teachers were guided in their inquiries by models, which often created explanations for data. In addition, teachers' design, use, assessment, and refinement of models played a primary role in the program, supporting NGSS and prior research emphasizing the importance of models in science education [11,14,16]. Consistent with international and national research informing the NGSS, PAS made modeling a core practice in the PD [13,14]. PAS was also found effective in integrating argumentation [10]; using model-based inquiry [12]; planning and carrying out investigations; constructing explanations and designing solutions; and obtaining, evaluating, and communicating information, thereby supporting NGSS [17].

Limitations
PAS program teachers were recruited by the three districts and were paid for their participation. Teachers self-selected into the study, and these teachers persisted in a PD program for 24 months. Since the teachers were not randomly selected, there are limitations to the study; teachers were motivated to learn new skills and to persist in the program. These results may be generalizable to the population of teachers who would enroll in PD for academic growth and to those who are dedicated to improving their NGSS science and engineering practices.

Implications
PAS teachers were more competent and confident in using science content that emphasized science and engineering practices in the classroom. What implications are relevant to states adopting NGSS and districts serving middle school students? These findings and their specificity contribute to current knowledge and can be utilized by districts in selecting PD to support teachers in preparing to implement NGSS successfully. Teachers trained in the methods above, and those who employ modeling instruction, offer a profile of teachers who could become leaders in NGSS teacher professional development. They could also be employed as peer mentors in schools and districts to facilitate the transition to, and implementation of, the new standards. As NGSS move towards national adoption, it is crucial that educational leaders understand what these standards and the associated changes will mean for the teachers who implement them. To this end, our study examined a program that aimed to prepare middle-school teachers for NGSS by building competence and confidence in using science and engineering practices. This PD, incorporating modeling instruction, potentially meets the demands teachers will face during NGSS implementation. Our findings support prior research on elementary teachers, adding to the literature base on NGSS implementation. Consistent with results from Trygstad [1], our findings call for targeted professional development, as teachers are concerned about receiving training in engineering content. The information we have learned from this study will help educators align the resources spent on PD with the demands teachers will face in an NGSS classroom.

Table 4. Self-efficacy instrument post means: PAS and comparison group differences.
Table 5. PAS provided effective training in NGSS science and engineering practices.
Table A1. "If students are underachieving in science, it is most likely due to ..."
Evolutionary significance of the microbial assemblages of large benthic Foraminifera

ABSTRACT
Large benthic Foraminifera (LBF) are major carbonate producers on coral reefs, and are hosts to a diverse symbiotic microbial community. During warm episodes in the geological past, these reef-building organisms expanded their geographical ranges as subtropical and tropical belts moved into higher latitudes. During these range-expansion periods, LBF were the most prolific carbonate producers on reefs, dominating shallow carbonate platforms over reef-building corals. Even though the fossil and modern distributions of groups of species that harbour different types of symbionts are known, the nature, mechanisms, and factors that influence their occurrence remain elusive. Furthermore, the presence of a diverse and persistent bacterial community has only recently gained attention. We examine recent advances in the molecular identification of prokaryotic (i.e. bacteria) and eukaryotic (i.e. microalgae) associates, and in palaeoecology, and place the partnership with bacteria and algae in the context of climate change. In critically reviewing the available fossil and modern data on symbiosis, we reveal a crucial role of microalgae in the response of LBF to ocean warming, and in their capacity to colonise a variety of habitats, across both latitudes and broad depth ranges. Symbiont identity is a key factor enabling LBF to expand their geographic ranges when the sea-surface temperature increases. Our analyses showed that over the past 66 million years (My), diatom-bearing species were dominant in reef environments. The modern record shows that these species display a stable, persistent eukaryotic assemblage across their geographic distribution range, and are less dependent on symbiotic photosynthesis for survival. By contrast, dinoflagellate and chlorophytic species, which show a provincial distribution, tend to have a more flexible eukaryotic community throughout their range. This group is more dependent on their symbionts, and flexibility in their symbiosis is likely to be the driving force behind their evolutionary history, as they form a monophyletic group originating from a rhodophyte-bearing ancestor. The study of bacterial assemblages, while still in its infancy, is a promising field of study. Bacterial communities are likely to be shaped by the local environment, although a core bacterial microbiome is found in species with global distributions. Cryptic speciation is also an important factor that must be taken into consideration. As global warming intensifies, genetic divergence in hosts, in addition to the range of flexibility/specificity within host-symbiont associations, will be an important element in the continued evolutionary success of LBF species in a wide range of environments. Based on fossil and modern data, we conclude that the microbiome, which includes both algal and bacterial partners, is a key factor influencing the evolution of LBF. As a result, the microbiome assists LBF in colonising a wide range of habitats, and allowed them to become the most important calcifiers on shallow platforms worldwide during periods of ocean warming in the geologic past. Since LBF are crucial ecosystem engineers and prolific carbonate producers, the microbiome is a critical component that will play a central role in the responses of LBF to a changing ocean, and ultimately in shaping the future of coral reefs.
I. INTRODUCTION
Across the globe, coral reefs are experiencing rapid declines due to deteriorating environmental conditions mainly driven by ocean warming (Pandolfi et al., 2011; Hughes et al., 2017). In these environments, symbiotic associations between organisms can provide the partners involved with the capacity to respond to environmental stresses, as well as providing robustness under the challenges caused by climate change (Ainsworth & Gates, 2016). Associations with prokaryotic and eukaryotic microorganisms can facilitate the success of species across a variety of habitats, playing a fundamental role in the evolution and adaptive capacity of host organisms (Saffo, 1992; Cavanaugh, 1994), and have been associated with vulnerability when obligatory symbionts are expelled from their host (i.e. bleaching) (Hallock, 2000; Hughes et al., 2017). Many heterotrophic organisms living in oligotrophic waters have evolved obligatory symbioses with photosynthetic microalgae, thus establishing biotrophic mixotrophy (Selosse, Charpin, & Not, 2017). This process is called photo-symbiosis, as it makes photosynthesis indirectly available to the host (Selosse et al., 2017). However, mixotrophy comes at a cost, as it requires five times more energy and nutrient allocation to maintain the photosynthetic apparatus compared to maintaining a strictly heterotrophic feeding mode (Raven, 1997). Nonetheless, the development of mixotrophy allows organisms to occupy previously inaccessible niches, such as nutrient-poor environments. Photo-symbiosis is critical to maintaining functioning coral reefs, not only in corals, the best-known reef-associated organisms, but also in the (often) overlooked unicellular eukaryotic large benthic Foraminifera (LBF). Symbiosis with eukaryotic taxa (i.e. microalgae) is essential for the health of reef ecosystems, and LBF are responsible for a significant proportion of the carbonate sediment across reef environments worldwide (Langer, 2008). From a global carbon perspective, LBF play a fundamental role in carbon sequestration and carbon cycling (Langer, Silk, & Lipps, 1997; Langer, 2008), in addition to sediment production and reef maintenance (Baccaert, 1986; Fujita & Fujimura, 2008; Dawson, Smithers, & Hua, 2014; Doo et al., 2017). LBF species, especially those that produce high-magnesium tests, serve an important role in maintaining the chemical equilibrium on coral reefs, serving to buffer against daily pH flux from reef metabolism through skeletal dissolution post mortem (Yamamoto et al., 2012). It is becoming increasingly apparent that other microorganisms such as bacteria and Archaea (henceforth referred to as prokaryotic associates), fungi, and viruses play a significant and complex role in maintaining the host's health (Peixoto et al., 2017). Prokaryotic microbial associations can benefit the host by enhancing nutrient cycling (S, C, and N), providing photosynthesis-dependent nitrogen fixation, enhancing calcification, and producing antimicrobials and removing pathogens (Knowlton & Rohwer, 2003; Lesser et al., 2004). By contrast, the role of fungi and viruses remains elusive (Le Campion-Alsumard, Golubic, & Priess, 1995; Sweet & Bythell, 2017).
Identifying specific microbes that provide critical functional contributions to a host organism requires an understanding not only of the endobiotic microbial population, but also of the persistence and stability in time and space of both the microbial functional niches and the microbes that utilise them (Ainsworth et al., 2015; Hernandez-Agreda et al., 2016). These associations with microbial partners likely underpin the capacity of reef organisms to respond to climate change. Ocean warming will influence the biogeographic range of reef species, which could result in poleward expansion as subtropical and temperate marine ecosystems become 'tropicalised' (Verges et al., 2014). The flexibility in these associations will determine the host's capacity to accommodate to local environmental change, as well as allowing adaptation to new environmental conditions following distribution range expansions. The composition of both the prokaryotic microbiome and the eukaryotic symbionts in relation to environmental change has been explored largely in reef-building corals (LaJeunesse, 2002; Ainsworth, Thurber, & Gates, 2010; Bourne, Morrow, & Webster, 2016). However, many other organisms, such as LBF, also benefit from the intricate interplay between prokaryotic endobionts and eukaryotic endosymbionts, mainly microalgae. Although only ca. 10 families of benthic Foraminifera are currently known to have associations with algal symbionts, these families consist of many described species, which are abundant on shallow carbonate platforms worldwide. LBF are a polyphyletic group in which endosymbiosis with microalgae evolved independently in multiple benthic foraminiferal families (Hallock & Glenn, 1986; Hallock, 1999; Lee, 2011). The shell of LBF facilitates the housing of photosynthetic symbionts through morphological adaptations, including canaliculation, flosculisation, and the development of endo- and exoskeleton structures or secondary or lateral chamberlets (Renema, 2007). Evolutionary radiations indicate that the acquisition of, and change in, algal types were crucial steps in the evolution of large miliolid Foraminifera (Holzmann et al., 2001). Symbiosis with algae has been suggested to be the driving force in the evolution of these groups of benthic Foraminifera (Lee, 2006, 2011; Lee et al., 2010). LBF include members of two orders of foraminifera: Rotaliida and Miliolida (Hallock, 1999). The order Rotaliida, characterised by an optically radial, bilamellar perforate test (Pawlowski, Holzmann, & Tyszka, 2013), includes three modern families: Amphisteginidae, Calcarinidae, and Nummulitidae. The order Miliolida, with an imperforate wall and a high-magnesium calcite test with randomly oriented crystals refracting light in all directions and resulting in a porcelaneous appearance of the test (Pawlowski et al., 2013), includes the Alveolinidae, Peneroplidae, Soritidae, and Archaiasidae (Loeblich & Tappan, 1984). In general, Rotaliida species are known predominantly to host diatoms, whereas Miliolida also play host to other algal groups, such as chlorophytes, rhodophytes, and dinoflagellates (Lee, 2006). Additionally, modern species of both groups have associations with a diverse prokaryotic community, including heterotrophic bacteria, photosynthetic cyanobacteria, and algal plastids, suggesting that Foraminifera are particularly favourable partners for the establishment of symbioses (Lee, 2006; Bourne et al., 2013).
At least 47 modern species across 15 genera have been identified as possessing algal symbionts (Lee, 2006; Förderer, Rödder, & Langer, 2018). Whereas eukaryotic symbiosis has received considerable attention, prokaryotic symbiosis and the role of bacteria in LBF remain largely unexplored (e.g. Webster et al., 2016; Prazeres et al., 2017a; Prazeres, 2018). Not only is the diversity of bacterial communities poorly known, but so are the relationship with and role that these bacteria may have in LBF ecology, adaptive potential, and evolution. In this review, we aim to determine how the eukaryotic and prokaryotic microbiome influences the capacity of LBF to occupy new habitats, expand their distribution range, and adapt or acclimatise to shifts in environmental conditions. For the purposes of this review, eukaryotic symbionts and prokaryotic partners are considered algal and bacterial species, respectively. Firstly, we explore the known algal partners and how they influence modern LBF species' biology and ecology. We will also discuss the geographical distribution of fossil LBF species within their environmental context, and link it to their microbiome, particularly to algal symbionts. Finally, we argue that the microbiome (i.e. algal and bacterial species) is likely to be crucial in the resilience, acclimation, and adaptation of LBF in the face of climate change. We discuss how the microbiome could benefit and drive LBF evolution across species distribution ranges through: (i) the presence of persistent eukaryotic and prokaryotic microbial associations across the distribution of LBF species, which have been reported to be highly species-specific and to determine ecological niches in LBF; (ii) the presence of a variable microbiome responsive to environmental gradients; (iii) the presence of a stable, persistent algal symbiont community coupled with flexible bacterial associations; and (iv) adaptable algal symbiosis with a persistent bacterial community, which could assist species in crossing biogeographical barriers and adapting to changing environmental conditions. The composition of the microbiome benefits the host in different ways, and different species are likely to utilise different strategies to maintain the holobiont system. Therefore, it is crucial to understand and disentangle how the host and symbiont compartments are likely to interact with, and respond to, environmental change.

II. EVOLUTION OF ALGAL SYMBIOSIS IN LARGE BENTHIC FORAMINIFERA
Symbioses are often cited as a pathway for the abrupt appearance of evolutionary novelty, and as facilitating the occupation of habitats and niches previously unavailable to asymbiotic counterparts (Norris, 1996; Melo Clavijo et al., 2018). Throughout Foraminifera evolution, the recurrent rise of algal symbiosis coincides with periods of global warming, relative drought, high sea levels, and the expansion of tropical and subtropical belts into higher latitudes (Lee, 1995; Boudagher-Fadel, 2008). In benthic Foraminifera, the presence of algal symbiosis in fossil forms is deduced mainly through the study of morphological adaptations, such as extrapolation of test size and shape, chamber arrangement, and ultrastructural modifications to regulate light intensity to the photosymbionts (Renema, 2018). Measurements of growth rates and patterns of stable isotopes have also been used to recognise algal symbiosis in the fossil record (Erez, 1978; Purton & Brasier, 1999; Briguglio, Metscher, & Hohenegger, 2011; Briguglio, Hohenegger, & Less, 2013).
Taxonomic radiation and acquisition of photosymbiosis allowed shifts in ecological strategy, enabling benthic Foraminifera to expand into a variety of habitats and to become abundant in oligotrophic environments (Hallock, 1985). The association with algal symbionts resulted in a mixotrophic lifestyle, allowing the utilisation of both inorganic and organic sources of nutrients for photosynthesis and the accumulation of organic carbon necessary for metabolism and reproduction (Hallock, 1981). Morphological modifications in benthic Foraminifera resulted in adaptations of internal structures, allowing the test to evolve to accommodate their algal symbionts (Lee & Hallock, 1987). Algal symbiosis is particularly advantageous in environments where light is readily available, dissolved inorganic nutrients are scarce, and significant amounts of energy must be expended to capture organic matter (Hallock, 1981; Lee, 1995). Therefore, the direct benefits of photosymbionts are twofold: (i) the acquisition of energy through photosynthesis, which is particularly favourable in oligotrophic environments (Hallock, 1981); and (ii) enhanced calcification rates, because the energy acquired through photosynthesis is orders of magnitude higher than in heterotrophy alone (Hallock, 1981; ter Kuile, 1991). Additionally, hosts are protected against ultraviolet (UV) irradiation by housing algae as symbionts, which can minimise the effects of hazardous irradiation through the production of photo-protective amino acids by the symbionts (Hohenegger, 2009; Fig. 1). These morphological and physiological modifications were fundamental to the successful acquisition of a wide range of microbial associates by LBF. Members of various modern LBF families are hosts to a variety of algal symbionts (Stanley & Lipps, 2011). Even though most LBF hosts are mixotrophic, they usually cannot survive for long periods without their endosymbiotic algae (Lee, 2006). Cyanobacteria and bacteria are also suggested to be important in LBF biology and ecology (Lee & Anderson, 1991; Bernhard et al., 2012). Bacteria can function as antibiotic producers, perform nutrient cycling, and be ingested as food when other types of organic matter are not available, while cyanobacteria can provide additional photosynthetic products to the host when light is available (Fig. 1). This symbiont diversity is in sharp contrast to many reef-building corals, which are only capable of hosting dinoflagellates (Muscatine & Porter, 1977; Stanley, 2006; Stanley & Lipps, 2011). LBF are the only known taxa that both feed on particulate food and are able to sustain symbiosis with diatoms, which are one of the most common microalgae (Lee & Anderson, 1991).

Fig. 1. Dinoflagellate-bearing species are more dependent on their symbiont than diatom-bearing and other algal-bearing species for acquiring energy. Therefore, it is likely that species that rely less on algal symbiosis for growth and calcification utilise bacteria as a food source and require additional translocation of organic compounds from cyanobacteria. Bold arrows correspond to a high dependence on the exchange represented. Light grey arrows represent compounds being exchanged from the host to the microbial associate, whereas dark grey arrows represent the exchange from the microbial associate to the host.
The benefits from symbiosis and the capacity to accommodate different types of endosymbionts have facilitated adaptation of LBF to a variety of environments, with varied regimes of light, temperature, salinity, and nutrients, which will be discussed in detail below.

III. DIVERSITY AND ECOLOGICAL IMPORTANCE OF ALGAL SYMBIONTS
LBF host a diverse array of algal symbionts (Fig. 2; Table 1). Most higher taxonomic groups predominantly host a single type of symbiont. Within the algal groups known to form symbiosis with LBF, most modern and fossil species host diatoms. Only the clade Soritacea hosts a variety of different algal types. The Soritacea form a monophyletic group and have evolved from an asymbiotic lifestyle into symbiosis with rhodophytes and chlorophytes, and later dinoflagellates (Leutenegger, 1984; Holzmann et al., 2001; Fay, 2010). The basal clade, Peneroplidae, hosts unicellular rhodophytes (Lee, 2006), Archaiasidae and Parasorites, which is the most basal genus in the Soritidae, host chlorophytes (Pawlowski et al., 2001a), and all other Soritidae host dinoflagellates (Pawlowski et al., 2001b). The depth distribution of LBF taxa is partly determined by the light intensity and wavelengths required by their symbionts (Renema, 2018). Light intensity and spectrum are important factors limiting the distribution of the host species at the lower end of their depth range, while the upper end of the range is determined by additional factors, such as wave energy (Hottinger, 1983; Hohenegger et al., 1999; Renema, 2018). There is a progressive increase in the maximum depth of occurrence from chlorophyte-bearing species, through species hosting rhodophytes or dinoflagellates, to species harbouring diatoms, which are distributed over the largest depth range (Leutenegger, 1984). The geographic range limits are often determined by temperature gradients, limiting their occurrence to the (sub)tropical to warm temperate regions (Langer & Hottinger, 2000; Renema, 2018). The occurrence of LBF at their highest latitudes is associated with warm boundary currents (e.g. the Kuroshio in the northwest Pacific, the Leeuwin current in the east Indian Ocean, and the Gulf Stream in the northwest Atlantic; Fig. 3). Algal symbionts can be acquired from the environment or through reproduction, and LBF utilise different strategies to pass on the algal symbiont to their offspring. LBF have a paratrimorphic life cycle, with both asexual and sexual reproductive phases (Lee et al., 1997; Dettmering et al., 1998; Harney, Hallock, & Talge, 1998). A paratrimorphic life cycle distinguishes three generations (Fig. 4): (i) agamonts, which are diploid, multinucleate, and microspheric (i.e. with a small initial chamber of the test); (ii) schizonts, which are diploid, multinucleate, and megalospheric (i.e. with a large initial chamber of the test); and (iii) gamonts, which are haploid, mononucleate, and megalospheric (Fig. 4) (Dettmering et al., 1998). The sequence of the three generations in a paratrimorphic life cycle is not obligatory, thus offering foraminifera potential benefits in terms of flexibility (Harney et al., 1998). LBF can transmit their symbionts vertically through rounds of asexual reproduction (megalospheric forms), and horizontally through rounds of sexual reproduction (microspheric and megalospheric forms) (Harney et al., 1998). By optimising these different symbiont acquisition strategies, LBF can potentially increase their capacity to adapt to new environmental conditions.

Fig. 4. During sexual reproduction gametes do not carry the algal symbionts, and symbionts are acquired horizontally. By contrast, algal symbionts are vertically acquired during asexual fission. It remains unclear if adults acquire algal symbionts and bacteria from the environment, and how bacteria are transferred from parents to offspring. Dashed black arrows denote uncertain routes of acquisition and the solid red arrow denotes a known transfer route.
However, the mechanisms that affect the horizontal transfer of eukaryotic symbionts following sexual reproduction remain unclear. Free-living representatives of eukaryotic foraminiferal algal endosymbionts are rare (Lee, 2006). Therefore, algal symbionts are possibly acquired immediately after reproduction by taking in symbionts expelled from the parent. Alternatively, there may be insufficient data available on the diversity and abundance of free-living species that could be potential LBF symbionts. We now review the available data on the four main types of algal endosymbiont taxa found in modern LBF, and their role in LBF biology and ecology.

(1) Class Bacillariophyceae (Diatoms)
Four independently evolved families of LBF (Alveolinidae, Calcarinidae, Amphisteginidae, and Nummulitidae) host endosymbiotic diatoms (Hallock, 1999). It is worth noting that an additional non-LBF family, the Elphidiidae, includes species that can sequester photosynthetically active plastids from diatoms (Jauffrais et al., 2018). All endosymbiotic diatoms within LBF share a 104 kDa glycoprotein on their surface, which is not found on the surface of free-living diatoms (Lee & Reyes, 2006). On substrates where species of LBF are found, free-living endosymbiotic diatoms represent less than 0.5% of the microflora (Lee et al., 1989), further indicating that both host and algae are involved in an obligatory, mutualistic relationship. As a result, loss of symbionts (i.e. bleaching) can lead to reduced growth, oxidative stress, fecundity impairment, and increased mortality of host populations (Prazeres, Martins, & Bianchini, 2012; Prazeres, Roberts, & Pandolfi, 2017b). Diatom-bearing species show the broadest depth and latitudinal range distribution among the LBF. They are common and abundant in tropical and subtropical areas (Langer & Hottinger, 2000; Langer et al., 2013b; Weinmann et al., 2013b). Diatom-bearing species require mainly blue-green spectrum light to photosynthesise (Leutenegger, 1984) and, as a result, can colonise deeper areas of open ocean (>100 m) where only blue light is able to penetrate (Fig. 5). For example, members of the Amphisteginidae and Nummulitidae families can live at over 130 m water depth, or <1% surface photosynthetically active radiation (PAR) (Hallock & Peebles, 1993; Hohenegger, 2000; Renema, 2006b; Boudagher-Fadel, 2008). However, other requirements besides the light spectrum, such as light intensity, substrate, and wave energy, also affect their distribution (Renema, 2018). A field colonisation experiment showed that diatom-bearing species, mainly amphisteginids and nummulitids, avoid high light levels, and are often found in shaded microhabitats (Fujita, 2004). By contrast, the shallow occurrence of calcarinids, for example, is presumably not only linked with the symbionts' requirements for high light levels, but also with their capacity to withstand elevated hydrodynamic stress, as these species possess the ability to attach firmly to substrates (Leutenegger, 1984; Fujita, 2004, 2008).
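The depth limits quoted above are consistent with simple exponential attenuation of downwelling light (the Beer-Lambert relation). As an illustrative check, assuming a diffuse attenuation coefficient typical of clear oceanic water for blue light of roughly k ≈ 0.035 m⁻¹ (an assumed value, not given in the text):

\[
I(z) = I_{0}\,e^{-kz}, \qquad
\frac{I(130\,\mathrm{m})}{I_{0}} = e^{-0.035 \times 130} \approx 0.01,
\]

i.e. about 1% of surface irradiance at 130 m, matching the <1% surface PAR reported for the deepest diatom-bearing Amphisteginidae and Nummulitidae.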
Moreover, some species of calcarinids can live in mesotrophic conditions and are abundant on inshore, turbid reefs of the West Pacific Ocean (Renema & Troelstra, 2001; Renema, 2010). Tolerance limits vary among species, and the mechanisms through which some species such as Calcarina hispida are able to replace more-sensitive taxa such as Amphistegina lobifera (Renema, 2010) remain unclear, as both species harbour diatoms as symbionts, have similar ecological requirements, and occupy the same reef habitat. It is possible that the underlying difference between species' tolerance to mesotrophy and eutrophy hinges on their prokaryotic microbiome, and the extent to which they depend on heterotrophy for energy intake. In summary, diatom-bearing species can be broadly divided into two groups: high- and low-light adapted. High-light-adapted species inhabit mainly well-lit reef flat areas, whereas the low-light-adapted group shows cryptic behaviour and is commonly found along the reef slope. The ability of diatom-bearing LBF to inhabit this broad range of environments can be explained by (i) their ability to host multiple species and strains of diatom symbionts, and potentially benefit from associations with other algal groups; and (ii) their remarkable morphological plasticity (Hallock, Forward, & Hansen, 1986; Prazeres & Pandolfi, 2016; Prazeres, Uthicke, & Pandolfi, 2016a), which allows LBF to regulate light intensity to their symbionts, avoiding photo-inhibition and damage to the photosystem (Hottinger, 1983).

(2) Class Dinophyceae (Dinoflagellates)
The Soritidae are the only LBF family that house dinoflagellate symbionts (Fay, 2010), with the exception of the basal genus Parasorites, which hosts chlorophytes (Holzmann et al., 2001; Pawlowski et al., 2001a). Symbiodinium is the most common genus of dinoflagellates in soritids, but other less-abundant dinoflagellate species have also been isolated (Lee & Anderson, 1991). It is worth noting that planktonic Foraminifera species are hosts to this same genus of dinoflagellates (Lee & Anderson, 1991). Dinoflagellates of the genus Symbiodinium are crucial components of coral reef ecosystems in their roles as endosymbionts of reef-building corals (Muscatine & Porter, 1977) and other marine invertebrates, such as molluscs, sponges, and other cnidarians (LaJeunesse, 2002; Stat, Carter, & Hoegh-Guldberg, 2006). Molecular phylogenetic studies have revealed an extraordinary diversity of Symbiodinium lineages, most of which are specifically associated with this group of foraminifera (Pochon et al., 2001; Pochon & Pawlowski, 2006). The genus Symbiodinium encompasses nine lineages, delineated phylogenetically using nuclear and chloroplast ribosomal DNA and referred to as clades A to I (Pochon, LaJeunesse, & Pawlowski, 2004; Pochon & Gates, 2010). Geographic variation in the distribution of Symbiodinium clades in soritids, such as the absence of Symbiodinium clade C in the Caribbean population of Sorites and its presence in the population of the same phylotype of Sorites on the Pacific side of the Isthmus of Panama (Pochon et al., 2004), suggests that these symbiotic associations have evolved in response to the different environments of each region over the last 3-4 million years (My) (Garcia-Cuetos, Pochon, & Pawlowski, 2005).
It has been suggested that soritids have strong host-symbiont specificity (Pochon et al., 2001), resulting from a combined effect of a selective recognition mechanism, vertical transmission of symbionts, and biogeographical isolation (Garcia-Cuetos et al., 2005). Nonetheless, mixed infections have been observed (Pochon et al., 2007; Momigliano & Uthicke, 2013), and hosts can compartmentalise symbionts (Fay, 2010). In some soritids, such as Marginopora vertebralis, Symbiodinium diversity can be high (up to four different clades) in marginal habitats characterised by high seasonal fluctuations in environmental parameters (Momigliano & Uthicke, 2013). While symbiont polymorphism seems to be a common phenomenon, a higher diversity of mixed genotypes is observed more frequently in juvenile specimens, which may be more able to switch or shuffle heterogeneous symbiont communities than adults (Pochon et al., 2007). During ontogeny, symbiont diversity decreases, as the Symbiodinium community moves towards species suited to the prevailing environmental conditions (Pochon et al., 2007). Dinoflagellate-bearing species can colonise a wide range of reef habitats, and occur at depths of up to 80 m when conditions allow optimal light penetration, but are most abundant and commonly found on reef flats and upper reef slopes above 5 m, where light levels are highest (Hohenegger et al., 1999; Renema & Troelstra, 2001; Doo et al., 2017). These species have a narrower geographical distribution than diatom-bearing LBF and are more diverse in the Indo-Pacific Ocean. The genera Sorites and Amphisorus are circumtropical, and are the only genera to occur in the West Atlantic Ocean, whereas Marginopora is limited to the West Pacific Ocean (Fay, 2010). Light preference varies among species. For example, Marginopora vertebralis is adapted to high light, and is abundant in shallow reef areas (Pochon et al., 2007). However, species such as Sorites orbiculus and Amphisorus hemprichii are common in low-light environments, and can be found in deeper regions and shaded micro-habitats on the reef flat (Hohenegger, 2000). It appears that dinoflagellate-bearing hosts can maintain mixed infections, and preferentially select symbionts from the available species pool in order to respond to changes in environmental conditions. As opposed to diatom-bearing species, dinoflagellate-bearers have the advantage of selecting from a higher diversity of symbionts, given that other reef organisms, such as sponges, reef-building corals, and molluscs, are also hosts of Symbiodinium (Coffroth & Santos, 2005). Even though this group of LBF is relatively dependent on its symbionts, other reef organisms can act as reservoirs of symbiont diversity.

(3) Division Chlorophyta
The chlorophyte-bearing foraminifera comprise at least 13 species classified into five genera (Lee, 2006). They all belong to the subfamily Archaiasinae, with the exception of the soritid genus Parasorites. Most of these species are endemic to the Western Atlantic. A few species, including Parasorites orbitolitoides and Laevipeneroplis malayensis, have been reported from the Indo-Pacific (Renema, 2007; Muruganantham et al., 2017). Additionally, Parasorites sp. has been observed to host symbionts other than chlorophytes (Renema, 2003). Chlorophyte-bearing species have a narrower depth distribution than those bearing diatoms and dinoflagellates. Chlorophyte endosymbionts require primarily red light to photosynthesise (Fig. 5)
and lack accessory pigments to allow them to absorb light in the blue region of the spectrum (Lee & Hallock, 1987). Chlorophyte-bearing LBF represent a highly diverse group and can be found across a wide range of shallow habitats and environmental conditions. These species are particularly diverse in the Western Atlantic and Caribbean (Hallock, 1999), and are abundant in shallow environments (<30 m). For example, in the Florida Keys, Androsina lucasi can be found in exceptional abundance in open, dwarf-mangrove flats at less than 0.2 m depth. Archaias angulatus lives at depths of less than 2 m, where temperatures range from 14°C in winter to 33°C in summer. Cyclorbiculina compressa, Parasorites orbitolitoides, Laevipeneroplis proteus, and L. bradyi inhabit a broader depth range of 5-30 m (Hallock & Peebles, 1993). Molecular and morphological data show that the chlorophyte symbionts of LBF belong to the genus Chlamydomonas (Pawlowski et al., 2001a; Lee, 2006). Like diatom- and dinoflagellate-bearing species, chlorophyte-bearing species seem to have exceptionally flexible relationships with their symbionts (Lee et al., 1997). All foraminiferal symbionts form a monophyletic group closely related to Chlamydomonas noctigama (Pawlowski et al., 2001a). The group is composed of seven types, including C. hedleyi and C. provasoli. Each of these types can be considered a separate species, based on comparisons of genetic differences between other established Chlamydomonas species (Pawlowski et al., 2001a). Several LBF species share the same symbiont type, but only Archaias angulatus has been observed to host more than one chlorophyte species (Pawlowski et al., 2001a). Symbionts of all Caribbean and Indo-Pacific genera studied to date are closely related, suggesting a single origin of symbiosis between chlorophytes and Soritacea (Holzmann et al., 2001; Pawlowski et al., 2001a). In the Red Sea, chlorophytes have also been isolated as minor endosymbionts or intracytoplasmic associates from Amphistegina spp. and Amphisorus hemprichii, which host mostly diatoms and dinoflagellates, respectively (Lee & Anderson, 1991). Chlorophyte-bearing species are less dependent on their symbionts for energy acquisition, and can be found in relatively productive coastal waters (Walker et al., 2011). Some species, such as Archaias angulatus, are known to tolerate relatively high levels of nutrients, and feeding provides 10× more carbon than carbon fixation through symbiotic photosynthesis (Lee & Bock, 1976). Nonetheless, symbiosis enhances calcification in chlorophyte-bearing species (Duguay & Taylor, 1978). Hosts are unable to grow in very low light environments, but will quickly reach maximum photosynthesis rates under high light intensity (Walker et al., 2011).

(4) Division Rhodophyta
Members of the family Peneroplidae, including the genera Peneroplis, Dendritina, Spirolina, and Monalysidium (Loeblich & Tappan, 1984), are known to host red algae. As opposed to other symbiont algal groups, the diversity of rhodophyte species that form symbioses with LBF is relatively poorly known. To date, morphological studies have identified only one symbiotic species, Porphyridium purpurum, which has been isolated from both Peneroplis planatus and P. pertusus (Lee, 1990). The isolated strains are conspecific (Lee, 1990), although molecular studies are yet to be carried out to confirm the identity of this species and perhaps reveal a higher diversity of endosymbiotic rhodophytes.
Similarly to chlorophyte-bearing species, rhodophyte-bearers generally have a narrow depth distribution. They are most common and abundant between 0 and 20 m and require primarily yellow/orange light for photosynthesis (Fig. 5) (Hohenegger et al., 1999; Boudagher-Fadel, 2008), but can occasionally be found in deeper areas of the reef system, as deep as 40 m (Renema, 2018). The relationship between rhodophytes and Foraminifera is unique. Unlike other eukaryotic symbionts in LBF, the rhodophyte cell is not membrane bound but found within the host cytoplasm (Lee & Anderson, 1991), and functions as an organelle, potentially facilitating energy transfer between symbiont and host (Hallock, 1999). This relationship could explain the adaptation of rhodophyte-bearing species to a wide range of environmental conditions, from shallow areas (<5 m) with high water energy (Hohenegger et al., 1999) to deeper areas in clear, oceanic conditions (up to 40 m) (Fujita, 2004). They can also be found in oligophotic conditions in coastal waters (Renema, 2018), and are common and dominant in foraminiferal assemblages in hypersaline environments (Hallock, 1999) and in seagrass meadows (Reich et al., 2015). However, details of the interaction between rhodophyte-bearing hosts and their symbionts in relation to light intensity and other environmental conditions remain elusive (Ziegler & Uthicke, 2011). There is a clear lack of information on both chlorophyte- and rhodophyte-bearing hosts' relationships with their respective symbionts. Given that there is evidence showing that dinoflagellate-bearing hosts evolved from chlorophyte- and rhodophyte-bearing hosts (Holzmann et al., 2001), studies on the nutritional and physiological host-symbiont interactions might provide clues to the development of specific symbiont functions that contributed to the diversification of symbiosis within the Order Miliolida.

IV. IDENTIFICATION AND POSSIBLE ROLES OF A PROKARYOTIC COMMUNITY
Organisms with photo-symbiotic relationships with micro-algae, such as reef-building corals, giant clams, and sponges, also have a diverse array of bacterial associates with a role in maintaining the health of the holobiont (Ainsworth et al., 2010). In these groups it has been demonstrated that the presence of photosynthetic symbionts influences the bacterial species composition, but not the species richness, evenness, or phylogenetic diversity of invertebrate-associated microbiomes (Bourne et al., 2013). The presence of both ecto- and endobiont bacteria that function as symbionts has been identified in sponges (Taylor et al., 2007), reef-building corals (Lesser et al., 2004), and sea urchins (Guerinot & Patriquin, 1981). Analogous to these better-studied organisms, it is conceivable that prokaryotic endobionts perform a number of roles in LBF, such as providing resilience to environmental variability and disease, and contributing to nitrogen fixation. Symbiosis with prokaryotes, especially nitrogen-fixing bacteria, could significantly influence the ecology of their host (Fig. 1) and could have significant impacts on local nutrient biogeochemistry (Fiore et al., 2010). Prokaryotes are ubiquitous in marine environments (Azam et al., 1983; DeLong, 1992), and many species of benthic Foraminifera consume bacteria (Eubacteria, Archaea) as part of their diet (Bernhard & Bowser, 1992). Prokaryotic-foraminiferal associations are not uncommon.
While some benthic foraminifera have associations with ectobionts, most foraminiferal-prokaryotic associations identified to date involve endobiont microbes (Bernhard, Tsuchiya, & Nomaki, 2018). In Foraminifera, the presence of prokaryotic endobionts has been identified in very few non-photosymbiotic species, but they could provide potential benefits such as supplying photosynthetically fixed carbon (Bird et al., 2017) and aiding intracellular denitrification processes (Bernhard et al., 2012). Only one study has identified intracellular red cyanobacteria through scanning electron microscopy, suggesting that it may be a potential endosymbiont of Marginopora vertebralis (Lee et al., 1997). Similarly, Prazeres et al. (2017a) used next-generation sequencing (NGS) to detect a high relative abundance of cyanobacteria associated with Amphistegina lobifera collected from oligotrophic environments, noting that the higher light availability in these environments could give the cyanobacteria a competitive advantage. As in corals, cyanobacteria could play a role in N-fixation within the host when conditions are optimal (Lesser et al., 2004). Other groups such as α-Proteobacteria have been consistently identified in several LBF species through NGS (Bourne et al., 2013; Webster et al., 2016; Prazeres et al., 2017a), and are commonly found as endobionts in reefs with high coral cover (Kelly et al., 2014). The relative abundance of this bacterial class tends to decline significantly when populations are exposed to increases in sea-surface temperature, when they are substituted by other bacterial taxa. Natural populations of LBF exposed to environmental fluctuations show a similar pattern: the diversity of bacteria is higher than in physically and chemically stable habitats (Prazeres et al., 2017a), and the relative abundance of α-Proteobacteria is generally low. Environmental variables such as water quality, temperature fluctuations, and light exposure may help drive the observed compositional differences in the bacterial communities. Nonetheless, it is unknown whether the bacterial microbiome responds to, or is filtered by, environmental gradients (Prazeres et al., 2017a). Both environment and foraminiferal physiological state are likely to determine the intracellular prokaryotic community present in LBF. The degree of dependence and the specific host-prokaryote relationships remain to be investigated. In LBF, very little is known about the host-prokaryote-eukaryote relationship. Similar to transmission routes of algal symbionts, bacterial endobionts could be acquired either by vertical or horizontal transmission (Fig. 4). It is also likely that gametes could carry bacterial endobionts during sexual reproduction, as observed for sponges (Enticknap et al., 2006).

V. FOSSIL RECORD AND EVOLUTION OF MODERN LBF SPECIES
Algal endosymbiosis within Foraminifera has evolved multiple times. Even though we cannot determine past changes in the LBF microbiome, including both eukaryotic and prokaryotic associations, because of the lack of preservation of the associates, we can use the morphology and spatial distribution of modern assemblages as an analogue (Renema, 2008b). Given the conservative presence of eukaryotic symbiont taxa in modern families, we can assume that similar symbionts were present in modern and extinct representatives of LBF families and their relatives.
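Since several of the studies cited above compare bacterial 'diversity' across habitats from NGS relative-abundance data, it may help to make the usual metric explicit; such comparisons are typically based on the Shannon index computed from the relative abundance p_i of each bacterial taxon i:

\[
H' = -\sum_{i=1}^{S} p_{i}\,\ln p_{i},
\]

which increases both with the number of taxa S detected and with the evenness of their relative abundances, so a community dominated by a single class such as α-Proteobacteria scores lower than one in which many taxa are evenly represented.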
(1) LBF are effective trackers of climate change LBF were important carbonate producers during warm periods over the past 400 My (Hallock & Glenn, 1986; Wilson & Rosen, 1998; Renema et al., 2008; Morsilli et al., 2012). Here we focus on the past 66 My, since this time interval includes the evolution of modern faunas. Following the Cretaceous-Paleogene (K-P) event, LBF began to recover in the Early Paleocene [∼66 million years ago (Ma)], resulting in increased size and the evolution of most of the Cenozoic (modern) LBF families. In this time interval, atmospheric CO2 concentrations were at least twice present levels, and sea-surface carbonate saturation was significantly lower (Sloan & Rea, 1996; Zhang et al., 2013). By the Early Eocene, six of the seven modern LBF families were already present (Serra-Kiel et al., 1998). The Calcarinidae are the only family that evolved during the (Late) Neogene (∼5 Ma; Renema, 2010). The Cenozoic is characterised by a global cooling trend, interrupted by three warm intervals: the Paleocene-Eocene thermal maximum (PETM-EECO), the Middle Eocene climatic optimum (MECO), and the Middle Miocene climatic optimum (MMCO) (Zachos, Dickens & Zeebe, 2008). During each of these warm intervals, rapid expansions of the geographic ranges of LBF to higher latitudes are found in the fossil record. During the PETM-EECO, the diatom-bearing species Nummulites occurred as far north as the Rockall bank at 57°N. Range expansions from the Paris Basin into the Belgium Basin are also associated with warm periods (King, Gale & Barry, 2016; Baccaert, 2017). During the MECO, orthophragminids and Nummulites occurred as far north as southern Alaska (55°N) and Belgium (51°N) (Adams, Lee & Rosen, 1990). Following the Eocene-Oligocene cooling (Lear et al., 2008), the latitudinal distribution of LBF contracted (Renema, 2008b). This trend was reversed during the MMCO, when geographic ranges expanded again, especially in the southern hemisphere. LBF became abundant as far south as southern Australia, which was positioned further south than at present. These excursions into higher latitudes were evolutionarily important for LBF. For example, a new species in the genus Cycloclypeus emerged during a range expansion, and replaced its ancestor following the subsequent range contraction during the Late Miocene (Renema, 2015). (2) The presence of algal symbionts matters Temporal longitudinal trends in LBF diversity can be detected in tandem with global climatic patterns (Fig. 6). The most distinct is the difference in faunal composition between the West Atlantic and Tethyan realms. In the Tethyan realm, LBF diversity tracks the closure of the Tethys Ocean from west to east (Renema et al., 2008). Distinct hotspots can be recognised: (i) in south-west Europe during the Paleocene-Middle Eocene, (ii) in the Middle East from the Late Eocene to Early Miocene, and (iii) the present-day Indo West-Pacific biodiversity hotspot (Renema et al., 2008). The taxonomic groups and symbiont types in which diversity is highest differ among these three hotspots (Fig. 6). In the south-west Europe biodiversity hotspot, diatom-bearing Nummulitidae and Orthophragminidae were especially diverse, including at least two families with numerous species. This is comparable to the present-day Indo-Pacific fauna, where Nummulitidae, Calcarinidae, and Amphisteginidae drive biodiversity patterns, and chlorophyte-bearing species are rare (Förderer, Rödder & Langer, 2018).
Diversification in these provinces is primarily driven by adaptation to the depth (and light) gradient, and secondarily by onshore-offshore gradients (Hohenegger et al., 1999; Renema, 2018). The Late Eocene-Early Miocene Middle Eastern biodiversity hotspot is dominated by genera housing chlorophytic symbionts and is thus more similar to the present-day West Atlantic fauna, where LBF diversity is concentrated in the shallow photic zone. Based on modern analogues housing chlorophytic and dinoflagellate symbionts, which are found in relatively shallow environments (Waters & Hallock, 2017), it is likely that horizontal rather than vertical differentiation occurred in the Atlantic Ocean. Dinoflagellate-bearing taxa followed these trends to a much lesser extent. The Alveolinidae and Soritidae in the Eocene are comparable in their diversity and size distribution to the diatom-bearing Nummulitidae in the Tethyan realm. During the Neogene the Soritidae diversified in the Caribbean (Hottinger, 2001) and Indo-Pacific. However, unresolved taxonomy impedes further inferences about the drivers of diversity in this group. In conclusion, the fossil record provides ample evidence that photosymbiosis has been a critical factor driving morphological diversity in LBF assemblages. Additionally, clear differences in spatio-temporal distribution occur among taxa with different symbiont types, indicating that these are important for their adaptive potential with regard to environmental changes. VI. IMPORTANCE OF THE MICROBIOME TO LBF ECOLOGY AND DISTRIBUTION The presence of eukaryotic and prokaryotic associates has fundamental implications for the adaptation and evolution of their host organisms and their responses to environmental change (Cavanaugh, 1994; Bourne et al., 2016). Large-scale geographic distribution patterns reveal that algal symbionts determine the distribution limits of LBF, such as rhodophytes in Australia, diatoms in southern Japan, and rhodophytes and dinoflagellates in the Mediterranean, indicating that there is no simple correlation with symbiont types (Fig. 3). At regional scales, the distribution of LBF is often restricted by large riverine outflows, which form physical and chemical barriers to dispersion (Langer & Hottinger, 2000). Furthermore, in turbid regions and in the eastern part of the Atlantic and Pacific Oceans, dinoflagellate- and chlorophyte-bearing species are rare and diversity is low, indicating that diatom- and rhodophyte-bearing taxa are more tolerant to higher nutrient levels (Hallock & Peebles, 1993; Renema, 2018). Additionally, light tolerance of algal symbionts strongly influences host depth distribution (Fig. 3). Taken together, this highlights the plasticity and capacity for adaptation of the host-algal symbiont system, as well as the need for a better understanding of how the holobiont functions, and the underlying mechanisms regulating bacterial and algal associations. (1) Stability and variability of microbial associates and LBF species occurrence (a) Persistent microbiome throughout the host's distribution range The biogeography and stability of the eukaryotic symbiont community are linked to the distribution of the host species (Fay, 2010), and species specificity is high (Garcia-Cuetos et al., 2005). For example, the diatom symbiont community of Amphistegina gibbosa (an Atlantic species) is continuous across its geographic range, but significantly different from the symbionts of the two Pacific species in the same genus, A. lessonii and A.
lobifera, from Oahu, Hawaii, which also differ from each other (Barnes, 2016). Molecular studies showed that Pacific A. lobifera specimens that occur on the Great Barrier Reef (GBR) host eukaryotic symbiont communities similar to the Hawaiian population (Barnes, 2016; Prazeres et al., 2017a). A recent molecular study also found no systematic differences in symbiont composition of A. lobifera populations between the Mediterranean and Red Seas. It is plausible that in some LBF species, especially those with a circumtropical distribution such as Amphistegina spp., a stable, dominant eukaryotic symbiont over their entire geographic range would guarantee host functionality, regardless of their environment. In the case of prokaryotic associations, organisms such as reef-building corals and sponges with a wide geographic distribution can form persistent associations with rare bacterial taxa, which can be species specific and are ubiquitous throughout the host's distribution range (Reveillaud et al., 2014; Ainsworth et al., 2015). This finding suggests the existence of strong ecological and/or evolutionary factors driving these associations (Reveillaud et al., 2014), and a key role for bacteria in facilitating the success of host-algal symbioses across diverse environmental regimes (Ainsworth et al., 2015). This hypothesis has never been tested in LBF, and it is unknown how conserved bacterial associations are. Nonetheless, it is plausible that a preserved core microbiome throughout the distribution of foraminiferal hosts is present. This pattern possibly explains why some species with conserved algal symbionts show a remarkable capacity to colonise a broad range of environments and can be found across a wide depth range. (b) Variable and diverse microbiome throughout their distribution In contrast to globally distributed LBF hosts, species with a more restricted distribution tend to show a more diverse and variable eukaryotic community across their distribution range (Pochon et al., 2007). In this case, abiotic factors, rather than host identity, are predicted to have a higher influence on the symbiotic community, especially at local scales. Biogeographical barriers would define the availability of microbial associates (eukaryotic and prokaryotic), resulting in differences in the microbiome between the core and edges of their distribution. This pattern could be present along latitudes and with depth. Patterns of microbiome variability are best observed in species that host dinoflagellates. Dinoflagellate-bearing hosts exhibit high diversity in the Indian Ocean and West Pacific region, where symbionts are also more diverse than in the central Pacific, Red Sea (Gulf of Eilat), and Caribbean/Atlantic regions (Garcia-Cuetos et al., 2005). Symbiont community diversity can also be high in LBF species that live in marginal habitats characterised by high seasonal fluctuations in environmental parameters. In these locations mixed infections are common, and a high degree of flexibility can be found (Momigliano & Uthicke, 2013). A heterogeneous mix of symbionts would allow a host to select symbiont species from an existing pool that are better suited to environmental conditions at the extremes of the host's physiological limits (Fay, Weber & Lipps, 2009).
Similarly, species with a narrow depth distribution, mainly restricted to shallow areas, exhibit higher flexibility and diversity of algal symbionts compared to species with distributions extending to deeper areas in both the Caribbean and Pacific (Baker, 2003). This is proposed to be a consequence of environmental heterogeneity in shallow environments, which are thought to be more variable both in space and time than deeper environments (Baker & Rowan, 1997; Baker, 2003). Additionally, in the case of LBF, shallow-dwelling species select for symbionts that are more efficient at harnessing light, and as depth increases, specimens are likely to become less dependent on light for energy production, with a corresponding increase in reliance on heterotrophic feeding (e.g. Walker et al., 2011). Along with the diverse eukaryotic community, the bacterial microbiome may also vary across the host's distribution range. The presence of a variable bacterial microbiome has been suggested to be advantageous when conditions change (Ziegler et al., 2017). Bacteria can be utilised by the host to stabilise local host-algal symbioses, and the correlation between algal and bacterial associations can be strong. For example, dinoflagellate- and diatom-bearing species show significantly different bacterial community compositions, even when specimens are collected from the same reef site and habitat (Bourne et al., 2013). In this case, the identity of algal symbionts would drive the composition of the bacterial community. In general, flexible, diverse recombination among hosts and associates is likely to be evolutionarily favoured over permanent associations. Host flexibility protects against extinction in a single host species (Langer & Lipps, 1995), particularly in the core of their vertical distribution (i.e. across depth) but also horizontally (i.e. with latitude and longitude) for LBF hosts with limited regional distributions. (c) Conserved algal community but variable bacterial associations It is predicted that conserved algal symbiont communities are particularly advantageous to LBF species living at the edge of species' geographical distribution, whereas variability in environmental conditions is accommodated by variable bacterial associations. For example, LBF and reef-building corals living in tropical-subtropical transition zones exhibit low diversity, and a dominant symbiont type (Momigliano & Uthicke, 2013; Ng & Ang, 2016). Similarly, it has been shown that LBF hosts can form flexible and site-specific associations with bacteria, while maintaining a conserved algal community, among populations exposed to different environmental conditions (Prazeres et al., 2017a). The association of LBF with a range of different bacterial taxa could contribute to host distribution and survival across different habitats, as observed for reef-building corals (Hernandez-Agreda et al., 2016). In this case, a variable bacterial community would assist LBF hosts to acclimate to specific environmental conditions, particularly when subject to high physical and chemical fluctuations (Prazeres et al., 2017a). These findings suggest that environmental filtering will differentially affect algal and bacterial symbiont communities: (i) by maintaining consistent, beneficial algal symbionts, and (ii) by acquiring local bacterial taxa, stabilising host-symbiont associations, and providing further capacity for local acclimation/adaptation (Shade & Handelsman, 2012). For example, the population of A.
lobifera from the Red Sea has a higher thermal tolerance than an invasive population that has recently colonised the Mediterranean Sea. However, both are capable of maintaining photosynthesis at 32°C, which is well above the thermal optimum for most LBF species (Doo et al., 2014; Prazeres et al., 2017b), and support similar algal symbiont communities. The presence of a local, diverse bacterial community that is responsive to biotic and abiotic processes could be responsible for the difference in thermal tolerance observed in the two A. lobifera populations. In this case, the bacterial microbiome acquired by the host assists with local acclimation of the migrant population during its colonisation of the Mediterranean Sea. (d) Flexible algal symbiont communities but conserved bacterial associations Mixed infection of symbionts has been reported in a phylogenetically broad diversity of hosts, especially those with constrained distributions (Fay et al., 2009; Fay & Weber, 2012), as mentioned above. However, little is known about the relationship between highly variable algal symbiont communities and the bacterial microbiome. A variable bacterial community is suggested to be more advantageous in unstable environments (Prazeres et al., 2017a). Therefore, it is plausible that in LBF hosts such as dinoflagellate-bearing species, which rely to a great extent on their algal symbionts to meet their metabolic requirements (Fig. 1), flexibility in algal consortia is beneficial. In reef-building corals, host identity appears to play a significant role in shaping bacterial communities (Brener-Raffalli et al., 2018). Similarly, LBF would also rely on a flexible algal community for resistance to thermal stress, and other environmental changes. As a result, an advantageous conserved bacterial community could be transferred across generations, while algal symbionts are acquired from the environment or actively selected by the host from its internal pool (Fay, 2010). This pattern could be common in dinoflagellate-bearing species, as the pool of available symbionts in reef environments is high, given that other common reef organisms such as reef-building corals and giant clams are also hosts of a diverse Symbiodinium community (Coffroth & Santos, 2005), which could potentially be acquired by LBF hosts. (2) The microbiome and the presence of cryptic speciation In addition to selective pressures acting through bacterial and algal associates, the genetic diversity of the host can also provide mechanisms for responding to changes in environmental conditions. Genetic diversity within populations of LBF and their symbionts is poorly known, making it impossible to assess how genetic lineages within species (i.e. cryptic diversity) are distributed in space, and to test whether they are associated with different microbial associates. Speciation is not always accompanied by morphological change (Bickford et al., 2007), and the presence of cryptic speciation in organisms that have been traditionally described based on morphological traits could hide genetic diversity within and among populations. Therefore, intraspecific genetic diversity is potentially a factor that could explain some host-symbiont specificity/variability within their distributional range. Schmidt et al. (2016) identified genetic divergence of host A. lobifera populations between the GBR and Mediterranean/Red Seas, which was accompanied by different algal symbiont communities.
It is plausible that, at the edge of their distribution, where biogeographic breaks occur, sexual reproduction occurs more frequently (Triantaphyllou et al., 2012). This would allow individual hosts not only to acquire new symbionts through horizontal transmission, but would also lead to increased genetic diversity within LBF populations. Based on these observations, horizontal transmission is predicted to be advantageous during successful spatial expansion and to accommodate new environmental conditions, whereas at the core of their distribution vertical transmission of eukaryotic symbionts via asexual reproduction would be more prevalent (Röttger, 1974). Intraspecific genetic diversity may not only hide cryptic speciation but also host-symbiont specialisation (both eukaryotic and prokaryotic) across distributional ranges. In the latter case, eukaryotic and prokaryotic associations could be adapted to specific climatic regimes, which could facilitate adaptation and geographic range shifts in response to climate change (Berkelmans & van Oppen, 2006). Potentially, each morphologically defined species could disguise a mosaic of biological (cryptic) species with divergent adaptive potential and microbial associations appropriate to the environment in which they live. Fitness trade-offs in different environments could result in diversifying selection among populations invading different habitats, leading to divergence in temperature tolerance or life-history adaptations, which could be driven by their microbiome (Shropshire & Bordenstein, 2016). Therefore, characterising the microbial communities associated with LBF will be a crucial step towards understanding how an invasive population can establish successfully as a dominant carbonate producer either: (i) by depending on the presence of a pre-adaptive microbiome; or (ii) by re-shaping the symbiont community. Given the relationship between LBF and their microbial associates, and the role that eukaryotic symbionts are known to play in LBF evolution (Lee & Hallock, 1987), it is likely that the microbiome, including prokaryotes and eukaryotes, could drive cryptic speciation in LBF. VII. FUTURE DIRECTIONS We argue that the current predicted poleward expansion of some LBF species as global warming progresses partly hinges on their microbiome. Range expansions of LBF have triggered substantial changes in ecosystem function, including shifts in species diversity, carbonate production, and ecological impacts on native biota (Langer & Hottinger, 2000; Langer, 2008; Weinmann et al., 2013a,b; Langer et al., 2013b). The potential for the recombination of different eukaryotic and prokaryotic partners (Fig. 4), and natural selection for host populations associated with more tolerant symbionts, may serve to create communities of holobionts suited to altered environmental conditions. Consequently, symbiont communities might assist LBF species to respond to ongoing climate change. The geological record demonstrates that LBF can be used to trace expansions of subtropical and tropical belts during climate warming, with regional differences in the dominant eukaryotic symbiont types. Northern and southernmost records during warm intervals relate to fossil and modern diatom-bearing taxa, especially nummulitids, orthophragminids, and lepidocyclinids (Figs 3 and 6).
The diversity and abundance of LBF has varied with space and time, while modern assemblages show similar patterns of biogeography and geographical range expansion driven by current trends of ocean warming, as seen throughout the Cenozoic (Fig. 3). For example, in the Caribbean region, Amphistegina or Archaias are typically the dominant LBF taxa (e.g. Baker et al., 2009), and the diversity of chlorophyte-bearing species is high. By contrast, on the GBR and in the Indo-Pacific, LBF diversity is higher among the diatom-bearing hyaline taxa, which tend to be the dominant species in oligotrophic reef-associated environments. Diatom-bearing species have dominated shallow platforms throughout the Cenozoic (Wilson et al., 1998; Morsilli et al., 2012; Novak & Renema, 2018), and are good candidates to become the dominant calcifiers in carbonate environments in the future. This is largely due to their high tolerance to a broad range of temperature, nutrient, and light levels (Langer et al., 2013b; Prazeres, Uthicke & Pandolfi, 2016b; Prazeres et al., 2017b), as well as their proven capacity to colonise new habitats and areas efficiently. We argue that the combination of a comparatively stable relationship with eukaryotic symbionts, and a highly flexible relationship with prokaryotic endobionts, underpins this capacity. Nonetheless, despite the importance of prokaryotes for survival and adaptation in other organisms (Apprill, 2017), there remains very little information about bacterial communities associated with LBF. Future studies on the biology, ecology, and evolution of LBF should take into consideration the role of prokaryotic associates in facilitating and/or mediating species responses to changes in environmental conditions and colonising new environments. In light of this, some specific lines of research should be considered: (i) Re-assessment of eukaryotic symbiont diversity using molecular techniques in addition to morphology, specifically in the context of environmental gradients. Even though the identities of major algal groups hosted by LBF families are relatively well known, there are few studies on the intra- and interspecific distribution of eukaryotic symbionts, biogeographical breaks, and biotic and abiotic factors that influence the flexibility/specificity and diversity of these associations. (ii) Assessment of the diversity of bacterial communities associated with LBF, utilising next-generation sequencing, across gradients of depth, latitude, and longitude. Quantifying functional shifts in bacterial communities, and how bacteria contribute to the energy budget and other physiological pathways, will also represent important advances in LBF biology. The scarcity of data on the role of prokaryotes in LBF is a significant knowledge gap, and should be a high priority for future research. (iii) Identification of bacteria using imaging techniques such as fluorescent in situ hybridisation and scanning electron microscopy, which should help decipher the potential roles of bacteria in LBF intracellular space. (iv) Further research on neglected eukaryotic symbiont groups such as rhodophytes, chlorophytes, and chrysophytes. Rhodophyte-bearing LBF are a particularly tantalising model, as not only do they show a peculiar relationship with their symbionts, but most species that host red algae are cosmopolitan and occupy a wide range of environments.
(v) Study of the presence of cryptic diversity and phylogeography of species that have been previously described based on morphological characteristics. Genetic differences are not necessarily accompanied by the development of morphologically divergent traits (Darling & Wade, 2008), and the presence of cryptic diversity may shed light on biogeographic boundaries of eukaryotic and prokaryotic symbiont communities, as well as the abiotic factors that drive host-symbiont specialisation. VIII. CONCLUSIONS (1) Geological and modern records of LBF distributions show that diatom-bearing taxa are the most common, abundant, and dominant taxa across wide environmental gradients, whereas dinoflagellate- and chlorophyte-bearing species tend to be more restricted in their distribution, and less tolerant to nutrients and terrestrial influences. (2) Modern, cosmopolitan diatom-bearing species depend less on their eukaryotic microbes for meeting their energetic requirements and show stable diatom symbiont communities in the core of their distribution. By contrast, dinoflagellate-bearing taxa are more reliant on their symbionts for survival, and tend to have a flexible association responsive to environmental conditions across their range of occurrence. (3) The occurrence of cryptic speciation in many LBF species can hide host-symbiont specialisation and the adaptation capacity of the host to different environments. The capacity of species to adapt to their new environment is a critical component for understanding the role of evolutionary processes in the assembly and dynamics of natural communities. (4) Abiotic factors (e.g. temperature, water clarity, and nutrient availability) and eukaryotic symbionts had an important synergistic contribution to the expansion and contraction of LBF distributions during warming-cooling cycles during the Cenozoic. (5) Interactions between the host, the eukaryotic symbionts, and the prokaryotic endobionts are key to understanding the plasticity, adaptive potential, and resilience of LBF to environmental change. Additionally, the physiological state of both the host and the associates is likely to influence the identity and diversity of the eukaryotic and prokaryotic community. (6) In recent years, we have seen major advances in describing and understanding the role of microbial assemblages in reef fauna, including reef-building corals, sponges, and benthic Foraminifera, and the role that bacteria and other microorganisms play in maintaining the health of the reefs. LBF are essential ecosystem engineers and prolific carbonate producers, and the study of their microbiome should provide important information on their ability to respond to climate change. (7) Identifying host-prokaryote-eukaryote associations and genetic structure within LBF host populations is crucial to a better understanding of the capacities of LBF species to adapt to their new environment or to shift their distribution range. These are critical components for understanding the role of evolutionary processes in shaping the assembly and dynamics of natural communities. IX. ACKNOWLEDGEMENTS The average sea-surface temperature pattern in Fig. 3 is courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio (available at https://svs.gsfc.nasa.gov/3652). M. P. would like to thank the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) for providing funding through the Veni fellowship. Special thanks to T. Edward Roberts for contributions to Figs 1 and 4.
2018-12-02T14:21:44.324Z
2018-11-18T00:00:00.000
{ "year": 2018, "sha1": "64c5ffe502c133fe8f6333541ae3ddc0a6e0d6e1", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/brv.12482", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2be55c1e1d10c27333373293c32e1681343c3bb2", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
16233298
pes2o/s2orc
v3-fos-license
I=2 $\pi\pi$ scattering using G-parity boundary condition To make the $\pi\pi$ state with non-zero relative momentum the leading exponential, we impose an anti-periodic boundary condition on the pion, which is implemented by imposing G-parity or H-parity on the quark fields at the boundary. With this, we calculate the I=2 $\pi\pi$ phase shift from a lattice simulation by using L\"uscher's formula. Introduction Lattice gauge theory provides a way to investigate the low-energy physics of QCD, which cannot be done using any perturbative method. One of the interesting physical quantities is related to the ππ system. The K → ππ weak matrix element (WME), which violates CP symmetry, is of particular interest. Since lattice calculations most easily extract the ground state, the need to generate the final ππ state with nontrivial relative momentum is a serious difficulty. We proposed to use G-parity boundary conditions to overcome this difficulty [2]. In the present paper, we have implemented this G-parity idea and a new H-parity boundary condition in numerical simulations and calculated the I = 2 ππ phase shift. (This work was supported in part by the U.S. Department of Energy and the RIKEN BNL Research Center.) G-parity boundary condition Since the G-parity operation on a pion gives $G\,\pi = -\pi$, by applying this operation at the boundary we can impose an anti-periodic boundary condition on the pion. To implement this condition on the lattice, we have to apply the G-parity operation, a combination of charge conjugation and a rotation in isospin space, to the quark fields at the boundary. In the actual calculation, we impose this boundary condition only in the z-direction so that we have a pion with non-zero z-momentum. Because at the boundary there are terms such as $\psi\psi$ and $\bar\psi\bar\psi$, it requires some special care to implement [3]. First, we have to impose a charge-conjugate boundary condition on the gauge field to keep gauge invariance, and we have to virtually double the box size for the Dirac operator inversion. Since isospin plays an important role in the two-pion system, it is worth noting that G-parity commutes with isospin, which means that under this unusual boundary condition isospin is still a good quantum number. H-parity boundary condition An easier way to impose anti-periodic boundary conditions on a pion is to apply the following operation on the quark fields: $u \to u$, $d \to -d$. We will call this operation H-parity. Then the operation on the pions will be $\pi^\pm \to -\pi^\pm$ and $\pi^0 \to \pi^0$. Under this boundary condition, isospin is not a good quantum number anymore, but $I_z$ is still good. Since we know that the $I_z = 2$ ππ state is composed of two $\pi^+$, this state must have nonzero relative momentum. This is not true for the I = 0 state. So the utility of this boundary condition is more limited than that of the G-parity boundary condition. However, it has the advantage that it does not require any modification of the gauge field boundary condition. This allows us to use existing lattices, including dynamical ones. Single pion We first investigate the properties of the one-pion system. We expect to find a one-pion state with momentum π/L, unlike the conventional 2π/L. Figure 1 shows a graph of energy versus pion mass. As expected, the single-pion state with the smallest energy has $E(m_\pi) = \sqrt{m_\pi^2 + \sin^2(\pi/L)}$. Two pions Figure 2 shows the effective mass plot for the two-pion state. We have a very nice plateau in the time range from 3 to 11, but after that we have a suspicious fall-off. This fall-off can be explained by considering Fig. 3. A pion created at t = 0 can propagate in either time direction. Because we have two particles, there is a state in which the two particles propagate in opposite directions.
The correlation function for this state is $e^{-E_\pi t}\, e^{-E_\pi (T-t)} = e^{-E_\pi T}$, where $E_\pi$ means the energy of one pion with momentum. This is just a constant, and it will be dominant near t = T/2 because the energy of the I = 2 two-pion state is bigger than $2E_\pi$; this can cause the abrupt fall-off in this region. We confirmed this idea by fitting the effective mass plot with the function "cosh + const.", the solid line in Fig. 2. Figure 4 shows the same two-pion state effective mass plot for the G-parity boundary condition. This G-parity effective mass plot is quite different from the one for H-parity. Instead of an abrupt fall-off, it has a gradual decrease. It even looks like it has two plateaus. A simulation with more time slices ($N_t = 48$), also shown in Fig. 4, demonstrates this two-plateau structure. Since the spatial lattice volume was small (≈ (1.7 fm)³) for this calculation, we guessed that it might be a finite-volume effect and performed the same simulation with a bigger volume. Figure 5 shows the result for a spatial volume ≈ 1.7 fm × 1.7 fm × 3.4 fm. We notice that the gradual decrease of the G-parity plot has disappeared and it is almost identical to that seen with H-parity. Therefore, we can conclude that it is a finite-volume effect that caused the two-plateau behaviour. This might mean that we have discovered a new finite-volume $\bar{q}q\bar{q}q$ state. After becoming convinced from these tests that we have a two-pion state with non-zero relative momentum, we extended this simulation and calculated the I = 2 ππ phase shift. Since we are using Lüscher's formalism [1], this is nothing but spectroscopy. Figure 6 shows our phase shift calculation including CP-PACS [4] and experimental results. The following are our $\delta_{\pi\pi}$ simulation parameters (1/a is in GeV and $N_t = 32$): Figure 6. Phase shift results versus momentum. Conclusion We have tested the idea of imposing anti-periodic boundary conditions on the pion by applying a G-parity or H-parity operation on the quark fields at the boundary. For the one-pion case, we found that the energy of the particle is given by $\sqrt{m_\pi^2 + \sin^2(\pi/L)}$ as expected. We can achieve a two-pion state with non-zero momentum, and from this relative momentum we can calculate the I = 2 ππ phase shift. Since the H-parity boundary condition can be applied to existing lattices, it will be more convenient than G-parity. In particular, we plan to use the H-parity boundary condition for a ΔI = 3/2, K → ππ calculation on existing lattices. However, for the I = 0 ππ state only the G-parity boundary condition with dynamical lattices will work. Since the G-parity boundary condition is more vulnerable to finite-volume effects, e.g. the $\bar{q}q\bar{q}q$ state which we need to study further, it may require more resources.
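To illustrate the analysis described above, here is a minimal Python sketch (our addition, not part of the original paper; the temporal extent, pion mass, and data are synthetic placeholders). It fits a two-pion correlator to the "cosh + const." form motivated in the text, then converts the fitted energy into the relative momentum that serves as input to Lüscher's formula, here using the simple dispersion relation E_pipi = 2*sqrt(m_pi^2 + k^2) for a non-interacting estimate.

import numpy as np
from scipy.optimize import curve_fit

T = 32  # temporal extent in lattice units (placeholder)

def correlator(t, A, E, B):
    # "cosh + const." form: forward and backward propagation of the
    # two-pion state, plus the constant from two pions travelling in
    # opposite time directions
    return A * (np.exp(-E * t) + np.exp(-E * (T - t))) + B

# synthetic data standing in for a measured lattice correlator
t = np.arange(1, T)
rng = np.random.default_rng(0)
C = correlator(t, 1.0, 0.58, 0.02) * (1 + 0.01 * rng.standard_normal(t.size))

popt, _ = curve_fit(correlator, t, C, p0=(1.0, 0.5, 0.0))
A_fit, E_pipi, B_fit = popt

m_pi = 0.25  # single-pion mass in lattice units (placeholder)
# relative momentum from E_pipi = 2*sqrt(m_pi^2 + k^2)
k = np.sqrt((E_pipi / 2.0) ** 2 - m_pi ** 2)
print(f"E_pipi = {E_pipi:.4f}, k = {k:.4f} (input to Luescher's formula)")

This is only the spectroscopy step; the actual phase shift extraction requires evaluating Lüscher's zeta function, which is not attempted here.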
2014-10-01T00:00:00.000Z
2003-11-03T00:00:00.000
{ "year": 2003, "sha1": "f4713c34e30ab6c35ca3ebb7f43e575e01245e4d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-lat/0311003", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f4713c34e30ab6c35ca3ebb7f43e575e01245e4d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119179496
pes2o/s2orc
v3-fos-license
Polyhedral products for simplicial complexes with minimal Taylor resolutions We prove that for a simplicial complex $K$ whose Taylor resolution for the Stanley-Reisner ring is minimal, the following four conditions are equivalent: (1) $K$ satisfies the strong gcd-condition; (2) $K$ is Golod; (3) the moment-angle complex $\mathcal{Z}_K$ is homotopy equivalent to a wedge of spheres; (4) the decomposition of the suspension of the polyhedral product $\mathcal{Z}_K(C\underline{X},\underline{X})$ due to Bahri, Bendersky, Cohen, and Gitler desuspends. Introduction Golodness is a property of a graded commutative ring R which is originally defined by a certain equality involving a Poincaré series of the cohomology of R, and Golod [G] gave an equivalent condition in terms of the derived torsion algebra of R. Golodness has been intensively studied for Stanley-Reisner rings since those of important simplicial complexes such as dual sequentially Cohen-Macaulay complexes are known to have the Golod property. Recall that the Stanley-Reisner ring of a simplicial complex K on the vertex set $\{1, \dots, m\}$ is $k[K] = k[v_1, \dots, v_m]/(v_I \mid I \not\in K)$, where $|v_i| = 2$ and $v_I = v_{i_1} \cdots v_{i_k}$ for $I = \{i_1, \dots, i_k\}$. We consider the derived algebra $\mathrm{Tor}_{k[v_1, \dots, v_m]}(k[K], k)$ and fix its products and (higher) Massey products to those induced from the Koszul resolution of k over $k[v_1, \dots, v_m]$ tensored with $k[K]$. Let $R_+$ denote the positive degree part of a graded ring R. We say that $k[K]$ is Golod if all products and (higher) Massey products of elements of $\mathrm{Tor}_{k[v_1, \dots, v_m]}(k[K], k)_+$ vanish. One of the biggest problems in Golodness of Stanley-Reisner rings is to get a combinatorial characterization of Golodness, where we have many examples of interesting simplicial complexes. This is still open at this moment, while there have been many attempts. We therefore consider the following weaker problem. Problem 1.2. Find a class of simplicial complexes for which Golodness of Stanley-Reisner rings can be combinatorially characterized. In a seminal paper [DJ], Davis and Januszkiewicz showed that the cohomology with coefficients in k of a certain space constructed from a simplicial complex K, called the Davis-Januszkiewicz space for K, is isomorphic to the Stanley-Reisner ring k[K]. This opens a way to a topological study of Stanley-Reisner rings. Moreover, Baskakov, Buchstaber and Panov [BBP] found an isomorphism between the cohomology with coefficients in k of the space $\mathcal{Z}_K$, called the moment-angle complex for K, and the derived torsion algebra $\mathrm{Tor}^*_{k[v_1, \dots, v_m]}(k[K], k)$ which respects products and (higher) Massey products. Then we can study Golodness of Stanley-Reisner rings by investigating the homotopy types of moment-angle complexes. Thus there is a trinity in studying Golodness of Stanley-Reisner rings consisting of algebra, combinatorics and homotopy theory. In this paper, we consider Problem 1.2 under the above trinity, and we will prove the following, where the notation in condition (4) will be defined later. Recall that a non-empty subset N of the vertex set of a simplicial complex K is a minimal non-face if $N \not\in K$ and $N - i \in K$ whenever $i \in N$. Put $[m] := \{1, \dots, m\}$. Theorem 1.3. Let K be a simplicial complex on [m] with a minimal Taylor resolution. Then the following conditions are equivalent: (1) k[K] is Golod; (2) any two minimal non-faces of K are not disjoint; (3) the moment-angle complex for K is homotopy equivalent to a wedge of spheres; (4) the BBCG decomposition of $\mathcal{Z}_K(C\underline{X}, \underline{X})$ desuspends for any $\underline{X}$. Remark 1.4. (1) In Theorem 1.3, Golodness does not depend on the ground ring, but in general this is not true, as in [K1, IK2]. We will see in the next section that minimality of the Taylor resolution of k[K] does not depend on k, so in fact Theorem 1.3 does not depend on k. (2) Recently, Frankhuize [F] proved the equivalence between (1) and (2) in a more general setting in a purely algebraic manner.
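For concreteness, here is a small worked example we add for illustration (it is not taken from the paper, though it is consistent with Theorem 1.3 and with the Hochster-type description of $H^*(\mathcal{Z}_K)$). Let K be the complex on [4] with minimal non-faces $N_1 = \{1,2\}$ and $N_2 = \{3,4\}$; the Taylor resolution is minimal, but the non-faces are disjoint, so condition (2) fails. Indeed $K = \partial\Delta^{N_1} * \partial\Delta^{N_2}$ (a 4-cycle), so $\mathcal{Z}_K = S^3 \times S^3$, whose cohomology has a non-trivial product: k[K] is not Golod and $\mathcal{Z}_K$ is not a wedge of spheres. By contrast, let K be the complex on [3] with minimal non-faces $\{1,2\}$ and $\{2,3\}$, which share the vertex 2; then K is the edge $\{1,3\}$ together with the isolated vertex 2, and $\mathcal{Z}_K \simeq S^3 \vee S^3 \vee S^4$, in line with conditions (1)-(4).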
Throughout this paper, let K denote a simplicial complex on the vertex set [m], where K might have ghost vertices. Minimality of the Taylor resolutions In this section, we recall the definition of the Taylor resolution for a Stanley-Reisner ring and a combinatorial characterization of its minimality due to Ayzenberg [A]. We then prove the implication (1) ⇒ (2) of Theorem 1.3. Let $N_1, \dots, N_r$ be the minimal non-faces of K. Then we have the Taylor resolution $0 \to R_{-r} \to \cdots \to R_{-1} \to k[v_1, \dots, v_m] \to k[K] \to 0$ such that $R_{-\ell}$ is the free $k[v_1, \dots, v_m]$-module generated by symbols $w_{i_1, \dots, i_\ell}$ for $1 \le i_1 < \cdots < i_\ell \le r$, with the differential $d(w_{i_1, \dots, i_\ell}) = \sum_{j=1}^{\ell} (-1)^{j-1} \frac{v_{N_{i_1} \cup \cdots \cup N_{i_\ell}}}{v_{N_{i_1} \cup \cdots \cup \widehat{N_{i_j}} \cup \cdots \cup N_{i_\ell}}} \, w_{i_1, \dots, \widehat{i_j}, \dots, i_\ell},$ where we set $v_\emptyset = 1$. As usual, we say that the Taylor resolution is minimal if the differential satisfies $d(R_{-\ell}) \subset (v_1, \dots, v_m) R_{-\ell+1}$ for every $\ell$, that is, if no coefficient of the differential is a unit. By definition, minimality of the Taylor resolution for k[K] does not depend on the ground ring k, so we say that K has a minimal Taylor resolution if the Taylor resolution for k[K] is minimal for some k. Minimality of the Taylor resolution for k[K] can be readily translated combinatorially as: Proposition 2.1 (Ayzenberg [A]). Let $N_1, \dots, N_r$ be the minimal non-faces of K. Then K has a minimal Taylor resolution if and only if $N_i \not\subset \bigcup_{k \ne i} N_k$ for every i. Ayzenberg [A] constructed a new simplicial complex with a minimal Taylor resolution from any given simplicial complex, and we here generalize his construction. Let $N = \{N_1, \dots, N_r\}$ be a sequence of subsets of a finite set W, where we allow $N_i = N_j$ for some $i \ne j$, and call W the ground set of N. By introducing new distinct points $a_1, \dots, a_r$, we put $\widetilde{N}_i = N_i \sqcup \{a_i\}$ and $V = W \sqcup \{a_1, \dots, a_r\}$. Define K(N) to be the simplicial complex on the vertex set V whose minimal non-faces are $\widetilde{N}_1, \dots, \widetilde{N}_r$. Then since $\widetilde{N}_i \not\subset \bigcup_{k \ne i} \widetilde{N}_k$ for all i, we have the following by Proposition 2.1. Corollary 2.2. K(N) has a minimal Taylor resolution. Notice that any simplicial complex is determined by its minimal non-faces. Proposition 2.3. If K has a minimal Taylor resolution, then there is a sequence N of subsets of a finite set W such that $K \cong K(N)$. Proof. Let $N_1, \dots, N_r$ be all minimal non-faces of K. By Proposition 2.1, for each i there exists a vertex $a_i \in N_i - \bigcup_{k \ne i} N_k$, and for $W = [m] - \{a_1, \dots, a_r\}$ and $N = \{N_1 - a_1, \dots, N_r - a_r\}$ we get $K \cong K(N)$. We prove the implication (1) ⇒ (2) of Theorem 1.3. For this, we use the following lemma, where the proof will be given in the next section. For a subset $I \subset [m]$, we put $K_I = \{\sigma \in K \mid \sigma \subset I\}$, the full subcomplex of K on I. Lemma 2.4. If $K_{I \cup J} = \partial\Delta^I * \partial\Delta^J$ for non-empty disjoint subsets $I, J \subset [m]$, then k[K] is not Golod. We here record an obvious fact on minimal non-faces, where we omit the proof. For a simplex $\sigma \in K$, let $\mathrm{lk}_K(\sigma)$ denote the link of σ in K. Lemma 2.5. (1) For a simplex $\sigma \in K$, any minimal non-face of $\mathrm{lk}_K(\sigma)$ has the form $N_i - \sigma$ for some i. (2) For a subset $I \subset [m]$, the minimal non-faces of $K_I$ are precisely the $N_i$ with $N_i \subset I$. Proof of the implication (1) ⇒ (2) of Theorem 1.3. Let $N_1, \dots, N_r$ be the minimal non-faces of K. Assume $N_i \cap N_j = \emptyset$ for some $i \ne j$. By Proposition 2.1, we have $N_k \not\subset N_i \cup N_j$ for any $k \ne i, j$. Then by Lemma 2.5, $N_i, N_j$ are the only minimal non-faces of $K_{N_i \cup N_j}$. It follows that $K_{N_i \cup N_j} = \partial\Delta^{N_i} * \partial\Delta^{N_j}$. Then we have $|N_i| \ge 1$ and $|N_j| \ge 1$. Thus by Lemma 2.4, K is not Golod, completing the proof.
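The combinatorial criterion of Proposition 2.1 is easy to check by machine. The following short Python sketch (our illustration, not from the paper) tests whether a list of minimal non-faces satisfies the reconstructed condition that no N_i is contained in the union of the others, and performs the K(N) construction of adjoining a fresh point a_i to each N_i:

from itertools import chain

def has_minimal_taylor(nonfaces):
    # Ayzenberg's criterion (Proposition 2.1): the Taylor resolution is
    # minimal iff no minimal non-face is contained in the union of the others
    sets = [frozenset(N) for N in nonfaces]
    for i, N in enumerate(sets):
        others = set(chain.from_iterable(s for j, s in enumerate(sets) if j != i))
        if N <= others:
            return False
    return True

def K_of_N(nonfaces):
    # the K(N) construction: enlarge each N_i by a fresh point a_i, so the
    # enlarged non-faces automatically satisfy the minimality criterion
    return [set(N) | {("a", i)} for i, N in enumerate(nonfaces)]

# {1,2} is contained in {1,3} ∪ {2,3}, so the resolution is not minimal...
print(has_minimal_taylor([{1, 2}, {1, 3}, {2, 3}]))           # False
# ...but after the K(N) construction it is (Corollary 2.2).
print(has_minimal_taylor(K_of_N([{1, 2}, {1, 3}, {2, 3}])))   # True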
Recall from [IK1] that the fat wedge filtration * In [IK1], the fat wedge filtration is shown to be quite useful in studying the homotopy type of a polyhedral product Z K (CX, X). For example, it is shown that the fat wedge filtration splits after a suspension so that we can recover the homotopy decomposition of Bahri, Bendersky, Cohen and Gitler [BBCG] as follows. Let |L| denote the geometric realization of a simplicial complex L, and put X I := i∈I X i for a sequence of pointed spaces X = {X i } i∈ [m] . Theorem 3.1 (Iriye and Kishimoto [IK1] (cf. Bahri, Bendersky, Cohen and Gitler [BBCG])). There is a homotopy decomposition We call this homotopy decomposition the BBCG decomposition. Let us consider a desuspension of the BBCG decomposition. As for the moment-angle complexes, desuspension is completely characterized as: Theorem 3.2 (Iriye and Kishimoto [IK1]). The moment-angle complex Z K is a suspension if and only if its BBCG decomposition desuspends. Then as we will see in Corollary 3.6 below that a desuspension of the BBCG decomposition of Z K (CX, X) is closely related with Golodness of k[K]. So we recall from [IK1] a criterion for desuspending the BBCG decomposition. It is shown in [IK1] that to investigate the fat wedge filtration of Z K (CX, X), the fat wedge filtration of the real moment-angle complex RZ K plays an important role. The fat wedge filtration of RZ K has the following property. We say that the fat wedge filtration of RZ K is trivial if ϕ K I is null homotopic for any ∅ = I ⊂ [m]. Then if the fat wedge filtration of RZ K is trivial, the BBCG decomposition for RZ K desuspends. Moreover, we have: Theorem 3.4 (Iriye and Kishimoto [IK1]). If the fat wedge filtration of RZ K is trivial, then the BBCG decomposition of Z K (CX, X) desuspends for any X. We pass to the connection between Golodness and moment-angle complexes. In [BBP], Baskakov, Buchstaber and Panov observed that the cellular cochain complex with coefficient k of the natural cell structure of the moment-angle complex Z K is isomorphic to the Koszul resolution of k over k[K] tensored with k[K]. As a result, we have: Theorem 3.5 (Baskakov, Buchstaber and Panov [BBP]). There is an isomorphism , k) which respects all products and (higher) Massey products. Corollary 3.6. If Z K is a suspension, k[K] is Golod for any commutative ring k. Then by Theorem 3.2, we obtain: Corollary 3.7. If the fat wedge filtration of RZ K is trivial, then k[K] is Golod over any commutative ring k. We close this section by proving Lemma 2.4. Proof of Lemma 2.4. By definition, we have Z ∂∆ W = S 2|W |−1 for a finite set W , and Z K * L = Z K × Z L . Then we have Z ∂∆ I * ∂∆ J = S 2|I|−1 × S 2|J|−1 . On the other hand, Z K I is a retract of Z K . So if K I∪J = ∂∆ I * ∂∆ J , the cohomology of Z K in any coefficient has a non-trivial product, implying that K is not Golod by Theorem 3.5. Thus the proof is completed. Proof of Theorem 1.3 We first investigate properties of simplicial complexes whose Stanley-Reisner rings have minimal Taylor resolutions. Then by Corollary 2.2 and Proposition 2.3, we consider a simplicial complex K(N) in Section 2. We recall notation for K(N). N is a sequence {N 1 , . . . , N r } of subsets of a finite set W , and N 1 , . . . , N r are minimal non-faces of K(N) such that N i = N i ⊔ {a i } and W ⊔ {a 1 , . . . , a r } is the vertex set of K(N). Put m := |W | + r which is the number of vertices of K(N). For w ∈ W we set where the ground sets of both N w and N w are W − w. 
Let $\mathrm{dl}_K(v)$ denote the deletion of a vertex v in K. The following properties of the link and the deletion of K(N) are immediate from Lemma 2.5. Lemma 4.1. For $w \in W$, we have $\mathrm{lk}_{K(N)}(w) = K(N_w)$ and $\mathrm{dl}_{K(N)}(w) = K(\widetilde{N}_w) * \Delta^{A_w}$. We next describe the homotopy type of |K(N)|. Proposition 4.2. We have $|K(N)| \simeq S^{|W|-1}$ if $N_1 \cup \cdots \cup N_r = W$, and |K(N)| is contractible otherwise, where we put $S^{-1} = \emptyset$. Moreover, for a sequence $M = \{M_1, \dots, M_r\}$ of subsets of W satisfying $M_i \subset N_i$ for all i and $M_1 \cup \cdots \cup M_r = W$, the inclusion $|K(M)| \to |K(N)|$ is a homotopy equivalence. Proof. We induct on |W| to get the homotopy type of K(N). When |W| = 0, there is nothing to do. When |W| = 1, say $W = \{w\}$, we may assume $N_1 = \cdots = N_s = W$ and $N_{s+1} = \cdots = N_r = \emptyset$ for some $0 \le s \le r$, so $K(N) = \{w\} \sqcup \Delta^{\{a_1, \dots, a_s\}}$ (4.1), where the second summand is empty for s = 0. Hence if $s \ge 1$, or equivalently $N_1 \cup \cdots \cup N_r = W$, then $|K(N)| \simeq S^0$, and if s = 0, or equivalently $N_1 \cup \cdots \cup N_r \ne W$, then |K(N)| is contractible. We assume the statement for $|W| - 1$ and prove it for |W|. Notice that for any $w \in W$, there is a pushout of spaces $|K(N)| = |\mathrm{dl}_{K(N)}(w)| \cup_{|\mathrm{lk}_{K(N)}(w)|} |\mathrm{lk}_{K(N)}(w) * w|$ (4.2). For $W \ne N_1 \cup \cdots \cup N_r$, we take $w \in W - (N_1 \cup \cdots \cup N_r)$. Then we have $N_w = \widetilde{N}_w$ and $A_w = \emptyset$, implying $\mathrm{lk}_{K(N)}(w) = \mathrm{dl}_{K(N)}(w)$ by Lemma 4.1. Then we get $|K(N)| = |\mathrm{lk}_{K(N)}(w) * w| \simeq *$. For $W = N_1 \cup \cdots \cup N_r$, we take any $w \in W$, and we have $A_w \ne \emptyset$, so by Lemma 4.1 $|\mathrm{dl}_{K(N)}(w)|$ is contractible. Since $|\mathrm{lk}_{K(N)}(w) * w|$ is also contractible, we obtain $|K(N)| \simeq \Sigma |\mathrm{lk}_{K(N)}(w)|$. By Lemma 4.1, we have $\mathrm{lk}_{K(N)}(w) = K(N_w)$, to which we can apply the induction hypothesis since the ground set of $N_w$ is $W - w$. Thus, since $N_1 \cup \cdots \cup N_r = W$ if and only if $(N_1 - w) \cup \cdots \cup (N_r - w) = W - w$, we obtain the desired result. We next prove the second assertion, also by induction on |W|. The case |W| = 1 follows from the identity (4.1). Note that the diagram (4.2) is natural with respect to the canonical inclusions between M, N. Then the second assertion holds by the induction hypothesis as above. Proposition 4.3. Let $M = \{M_1, \dots, M_r\}$ be a sequence of pairwise disjoint subsets of W with $M_1 \cup \cdots \cup M_r = W$. Then (1) $\mathbb{R}\mathcal{Z}_{K(M)} = S^{|M_1|} \times \cdots \times S^{|M_r|}$, and (2) the inclusion of the fat wedge of $S^{|M_1|}, \dots, S^{|M_r|}$ into $\mathbb{R}\mathcal{Z}^{m-1}_{K(M)}$ is a homotopy equivalence. Proof. (1) In general, we have $\mathbb{R}\mathcal{Z}_{K*L} = \mathbb{R}\mathcal{Z}_K \times \mathbb{R}\mathcal{Z}_L$ for simplicial complexes K, L, and $\mathbb{R}\mathcal{Z}_{\partial\Delta^{[m]}} = S^{m-1}$ as in the proof of Lemma 2.4. Since $K(M) = \partial\Delta^{\widetilde{M}_1} * \cdots * \partial\Delta^{\widetilde{M}_r}$ in this case, we get the desired result. (2) This follows by inspecting the definition of $\mathbb{R}\mathcal{Z}^{m-1}_{K(M)}$, and the proof is completed. We now prove triviality of the map $\varphi_{K(N)} \colon |K(N)| \to \mathbb{R}\mathcal{Z}^{m-1}_{K(N)}$ of Theorem 3.3 when $N_i \cap N_j \ne \emptyset$ for any i, j, that is, under condition (2) of Theorem 1.3. When $N_1 \cup \cdots \cup N_r \ne W$, $\varphi_{K(N)}$ is trivial since |K(N)| is contractible by Proposition 4.2. Then we assume $N_1 \cup \cdots \cup N_r = W$. We put $M_i := N_i - (N_1 \cup \cdots \cup N_{i-1})$ for each i. Then we have $M_1 \cup \cdots \cup M_r = W$ and $M_i \subset N_i$ for all i. So by Proposition 4.2 the inclusion $|K(M)| \to |K(N)|$ is a homotopy equivalence. Since the map $\varphi_K$ is natural with respect to inclusions of simplicial complexes by definition [IK1], there is a commutative diagram relating $\varphi_{K(M)}$ and $\varphi_{K(N)}$ through the inclusions $|K(M)| \to |K(N)|$ and $\mathbb{R}\mathcal{Z}^{m-1}_{K(M)} \to \mathbb{R}\mathcal{Z}^{m-1}_{K(N)}$. Then it is sufficient to prove that the composite around the right perimeter is null homotopic. Proposition 4.4. We have $\mathbb{R}\mathcal{Z}_{K(M)_U} = S^{|M_2|} \times \cdots \times S^{|M_r|}$. We now suppose $N_i \cap N_j \ne \emptyset$ for any i, j, and fix $2 \le i \le r$. By our supposition, there exists $w_i \in N_1 \cap N_i$, and we define a new sequence $M^i = \{M^i_2, \dots, M^i_r\}$ from M using $w_i$. Then $M^i_j \cap M^i_k = \emptyset$ for $j \ne k$ with $j, k \ge 2$, so, quite similarly to Proposition 4.4, we have a product decomposition of $\mathbb{R}\mathcal{Z}_{K(M^i)_{U \cup w_i}}$ in which the i-th coordinate sphere contracts up to homotopy. It follows that the inclusion $\mathbb{R}\mathcal{Z}_{K(M)_U} \to \mathbb{R}\mathcal{Z}_{K(M^2)_{U \cup w_2}} \cup \cdots \cup \mathbb{R}\mathcal{Z}_{K(M^r)_{U \cup w_r}}$ is null homotopic by contracting each coordinate sphere. Thus, since $\mathbb{R}\mathcal{Z}_{K(M^2)_{U \cup w_2}} \cup \cdots \cup \mathbb{R}\mathcal{Z}_{K(M^r)_{U \cup w_r}} \subset \mathbb{R}\mathcal{Z}^{m-1}_{K(N)}$, we obtain: Proposition 4.5. If $N_i \cap N_j \ne \emptyset$ for any i, j, then the inclusion $\mathbb{R}\mathcal{Z}_{K(M)_U} \to \mathbb{R}\mathcal{Z}^{m-1}_{K(N)}$ is null homotopic. By Proposition 4.4, we have $\mathbb{R}\mathcal{Z}_{K(M)_U} = S^{|M_2|} \times \cdots \times S^{|M_r|}$, which we abbreviate by P, and let T be the fat wedge of $S^{|M_1|}, \dots, S^{|M_r|}$.
Then by Proposition 4.3, the inclusion $T \to \mathbb{R}\mathcal{Z}^{m-1}_{K(M)}$ is a homotopy equivalence, and by Proposition 4.5, the inclusion $P \to \mathbb{R}\mathcal{Z}^{m-1}_{K(N)}$ is null homotopic; consequently the composite around the right perimeter, and hence $\varphi_{K(N)}$, is null homotopic. Theorem. If $N_i \cap N_j \ne \emptyset$ for any i, j, then the fat wedge filtration of $\mathbb{R}\mathcal{Z}_{K(N)}$ is trivial. Proof. Suppose $N_i \cap N_j \ne \emptyset$ for any i, j. By Proposition 4.6, it is sufficient to prove: Claim: For any vertex v of K(N), $\mathrm{dl}_{K(N)}(v) = K(M) * \Delta^S$ for some S, M such that any two elements of M are not disjoint, where S may be empty. We prove this claim by induction on |W|. When |W| = 0, the claim is obviously true. The case |W| = 1 follows from Proposition 4.2. Suppose the claim holds for ground sets of smaller cardinality, and take a vertex v of K(N). Case $v \in W$: By Lemma 4.1, $\mathrm{dl}_{K(N)}(v) = K(\widetilde{N}_v) * \Delta^{A_v}$. By our supposition, any two elements of $\widetilde{N}_v$ are not disjoint. Then the claim is true for $K(\widetilde{N}_v) * \Delta^{A_v}$. Case $v \not\in W$: Since $v = a_i$ for some i, we have $\mathrm{dl}_{K(N)}(v) = K(M)$, where $M = \{N_j \mid j \ne i\}$. Then the claim is obviously true for K(M).
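As a quick sanity check on Proposition 4.2 as reconstructed above (a worked example we add; it is not in the paper): take $W = \{1, 2, 3\}$ and $N = \{\{1,2\}, \{2,3\}\}$. Then $N_1 \cup N_2 = W$, so $|K(N)| \simeq S^2$, where K(N) is the complex on $\{1, 2, 3, a_1, a_2\}$ with minimal non-faces $\{1, 2, a_1\}$ and $\{2, 3, a_2\}$. If instead $N = \{\{1,2\}, \{1,2\}\}$, the union is $\{1, 2\} \ne W$ and |K(N)| is contractible. Note also that $N_1 \cap N_2 = \{2\} \ne \emptyset$ in the first case, so that K(N) falls under condition (2) of Theorem 1.3.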
2017-03-17T05:06:46.000Z
2015-06-05T00:00:00.000
{ "year": 2015, "sha1": "d3350a70da7f10906b6682ee4914ee04a4757682", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d3350a70da7f10906b6682ee4914ee04a4757682", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
250379523
pes2o/s2orc
v3-fos-license
Protein Fibrillation under Crowded Conditions Protein amyloid fibrils have widespread implications for human health. Over the last twenty years, fibrillation has been studied using a variety of crowding agents to mimic the packed interior of cells or to probe the mechanisms and pathways of the process. We tabulate and review these results by considering three classes of crowding agent: synthetic polymers, osmolytes and other small molecules, and globular proteins. While some patterns are observable for certain crowding agents, the results are highly variable and often depend on the specific pairing of crowder and fibrillating protein. Introduction The processes that lead to protein aggregates are under intense scrutiny, particularly those which result in the formation of amyloid fibrils (long, insoluble inclusions that are rich in β-strand structures). Amyloid fibrils have a functional role in many organisms [1], and have been implicated in the pathology of many human diseases, including increasingly widespread neurodegenerative diseases such as Alzheimer's and Parkinson's [2]. Additionally, many proteins and peptides unrelated to human disease fibrillate under laboratory conditions; controlled fibrillation may have important bioengineering applications. Thus, fibrillation is an inherent property of polypeptides and is worthy of study, even in the absence of a role in disease or biological function. Understanding diseases where fibrillation is prominent requires an appreciation of the aggregation of proteins under physiological conditions, conditions that are poorly represented by dilute aqueous solutions. In cells and extracellular matrices, proteins fold and misfold in a crowded environment, surrounded by a complex (and nonrandom) mixture of other solutes [3,4]. Ideally, fibrillation would primarily be studied in living organisms [5-10]; however, measuring the kinetics of fibrillation in cells poses obvious technical challenges. Instead, researchers have attempted to mimic the crowded environment of cells in vitro via crowding agents [11]. Crowded solutions may also be used to tease out the mechanistic details of the fibrillation process, since the critical steps along the pathway toward the fibril involve an association of protein chains, and molecular crowding generally favors such an association. Synthetic polymers, including the carbohydrate polymers Ficoll and dextran, polyethylene glycol, and polyvinylpyrrolidone, among others, have been used. These crowders have multiple effects on the stability, folding, structure and misfolding of proteins, arising from excluded volume, viscosity, weak interactions between the protein of interest and the crowding agent, and changes in solvation [12]. Excluded volume alone cannot explain all of the observed effects [13], and these varied influences complicate the interpretation of the data. Generally, crowding favors the fibrillation of disordered proteins, such as α-synuclein, while disfavoring the fibrillation of oligomeric proteins, such as insulin. However, the results vary depending on the protein and type of crowder. We summarize the results of the fibrillation of proteins under crowded conditions, and attempt to make sense of those data from two perspectives. First, are there common mechanisms of fibrillation? Second, are certain crowding agents preferable for mimicking intracellular conditions? We dedicate this review to the memory of Christopher M.
Dobson, who made seminal contributions to the field of protein science and whose work continues to influence many. Dobson's exploration of the effects of crowding on aggregation began in the late 1990s and continued through the 2010s [14]. Several of his works will be discussed in the text; of particular interest is one study that includes a novel method to measure the elongation rate of fibrillation using quartz crystals [15]. Dobson's 2003 review remains an excellent introduction to protein misfolding [16], and did much to bring wider attention to what was then an underappreciated, emerging field of great practical importance. An Overview of Macromolecular Crowding Many studies of proteins are carried out in dilute buffered solutions. However, the biological milieu can be very crowded. For example, the cytoplasm of a typical cell can contain upwards of 300 g/L of protein alone [17]. The macromolecular crowding effect exerted by the cellular interior has the potential to alter not only the properties of individual proteins [18], but also the interrelationships between proteins (for example, the liquid-liquid phase separation of proteins in cells [19]). Historically, theories of macromolecular crowding treated proteins as hard spheres that did not interact except through steric repulsions. The proteins take up volume that is then excluded from the neighboring macromolecules, resulting in an entropic compaction of proteins and the adoption of the most compact (usually the native) state [18,20-23]. Volume exclusion was shown to affect protein stability, folding kinetics [24], enzyme activity [25,26], and aggregation [27,28]. Macromolecular crowding is of utmost importance for understanding the proteopathies and protein aggregation. Many of the protein misfolding disorders, such as Alzheimer's and Parkinson's Disease, occur primarily with age. One idea is that the cells become dehydrated, and the effective concentration of proteins in the cell increases, leading to increased protein fibrillation [29]. Understanding how fibrillation is affected by macromolecular crowding can help us understand these diseases and ultimately design better therapeutics. Since the nascence of the macromolecular crowding field over forty years ago [18,30-33], an additional layer has been added to our understanding: enthalpically driven chemical interactions between proteins and crowders, including electrostatic interactions, hydrogen bonding, and hydrophobic interactions [34]. If the chemical interactions are repulsive, they are additive to excluded volume effects, but if they are attractive, they counteract the volume exclusion. These weak chemical interactions have been demonstrated to modulate protein stability [35-38], folding kinetics [39], and activity [40,41]. Synthetic Polymers Historically, synthetic polymers, including the sugar-based polymers Ficoll and dextran [24,38,39,41], polyethylene glycol (PEG) [42], and polyvinylpyrrolidone (PVP) [43,44], were used to represent the cellular environment. A summary of the commonly used polymers, their abbreviations, and average molecular weights is included in Table S1, Supplementary Materials. These polymers affect proteins via both steric repulsion and weak chemical interactions. The effects of their monomers, some of which are osmolytes [45], can be used to contextualize and decode the effects of the polymers [37-39].
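To put the 300 g/L figure in perspective, here is a back-of-the-envelope calculation we add for illustration (the partial specific volume of ~0.73 mL/g, typical of globular proteins, is our assumption and is not stated in the text); it converts the cytoplasmic protein concentration into an occupied volume fraction:

c = 300.0          # protein concentration in g/L, from the text
v_bar = 0.73       # assumed partial specific volume of globular proteins, mL/g
phi = c * v_bar / 1000.0  # mL of protein per mL of solution
print(f"occupied volume fraction ~ {phi:.2f}")  # ~0.22, roughly a fifth of the cell volume

A volume fraction of this magnitude is what motivates treating the crowded cytoplasm so differently from dilute buffer.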
The bondline structures of commonly used synthetic polymers and osmolytes are presented in Figures 1 and 2, respectively. Ultimately, the synthetic polymers are not the best representation of cells [36,37]. Another option is to use reconstituted cytosol, lysates [36,46,47] or model proteins, such as hen egg white lysozyme (HEWL or lysozyme), and bovine serum albumin (BSA) [38,48] as the crowders. However, these biopolymers still fall short of accurately replicating the cellular interior. [38] Both synthetic and physiologically relevant crowders pose challenges not seen in dilute solution experiments, including increased solution viscosity, high background, and decreased signal quality due to interactions between crowders and test proteins. [39,41] The effects of crowding on protein structure and function have been probed in living cells, but in-cell experiments pose many of the same challenges, with the additional concern of cell leakage [49][50][51][52][53][54]. References and results from fibrillation experiments under crowded solutions are listed in Tables 1-3. Table 1 details the effects of synthetic polymers, Table 2 of small molecule osmolytes, and Table 3 of protein crowders. We include a version of Tables 1 and 2 organized by protein in the Supplementary Materials (Tables S2-S5 Supplementary Materials). Synthetic Polymers and Protein Fibrillation In their 2010 Journal of the American Chemical Society publication [15], Dobson and coworkers studied the effects of the synthetic PEG 200,000, dextran 200, and the dextran monomer and osmolyte, glucose (Refer to Table S1, Supplementary Materials, for the average molecular weight of these polymers). This study used pre-nucleated fibrils, enabling Ultimately, the synthetic polymers are not the best representation of cells [36,37]. Another option is to use reconstituted cytosol, lysates [36,46,47] or model proteins, such as hen egg white lysozyme (HEWL or lysozyme), and bovine serum albumin (BSA) [38,48] as the crowders. However, these biopolymers still fall short of accurately replicating the cellular interior. [38] Both synthetic and physiologically relevant crowders pose challenges not seen in dilute solution experiments, including increased solution viscosity, high background, and decreased signal quality due to interactions between crowders and test proteins. [39,41] The effects of crowding on protein structure and function have been probed in living cells, but in-cell experiments pose many of the same challenges, with the additional concern of cell leakage [49][50][51][52][53][54]. References and results from fibrillation experiments under crowded solutions are listed in Tables 1-3. Table 1 details the effects of synthetic polymers, Table 2 of small molecule osmolytes, and Table 3 of protein crowders. We include a version of Tables 1 and 2 organized by protein in the Supplementary Materials (Tables S2-S5 Supplementary Materials). Synthetic Polymers and Protein Fibrillation In their 2010 Journal of the American Chemical Society publication [15], Dobson and coworkers studied the effects of the synthetic PEG 200,000, dextran 200, and the dextran monomer and osmolyte, glucose (Refer to Table S1, Supplementary Materials, for the average molecular weight of these polymers). This study used pre-nucleated fibrils, enabling Ultimately, the synthetic polymers are not the best representation of cells [36,37]. 
Another option is to use reconstituted cytosol, lysates [36,46,47] or model proteins, such as hen egg white lysozyme (HEWL or lysozyme) and bovine serum albumin (BSA) [38,48], as the crowders. However, these biopolymers still fall short of accurately replicating the cellular interior [38]. Both synthetic and physiologically relevant crowders pose challenges not seen in dilute solution experiments, including increased solution viscosity, high background, and decreased signal quality due to interactions between crowders and test proteins [39,41]. The effects of crowding on protein structure and function have been probed in living cells, but in-cell experiments pose many of the same challenges, with the additional concern of cell leakage [49][50][51][52][53][54]. References and results from fibrillation experiments under crowded solutions are listed in Tables 1-3. Table 1 details the effects of synthetic polymers, Table 2 of small molecule osmolytes, and Table 3 of protein crowders. We include a version of Tables 1 and 2 organized by protein in the Supplementary Materials (Tables S2-S5, Supplementary Materials).

Synthetic Polymers and Protein Fibrillation
In their 2010 Journal of the American Chemical Society publication [15], Dobson and coworkers studied the effects of the synthetic polymers PEG 200,000 and dextran 200, and the dextran monomer and osmolyte, glucose (refer to Table S1, Supplementary Materials, for the average molecular weights of these polymers). This study used pre-nucleated fibrils, enabling measurements that exclusively probe the elongation rates. They interpreted their results using the framework of scaled-particle theory [55][56][57], which posits that the excluded volume effects decrease with increasing particle size. The limits of scaled particle theory in analyzing crowding effects on fibrillation were acknowledged, as the parameters of the study can only account for fibril elongation. As expected, the analysis of fibrillation in the complex cellular matrix has seen limited success [58]. The effects of PEG 200,000 and dextran 200 are considered here. A variety of amyloid-prone proteins of different sizes were used, because scaled particle theory predicts that the fibrillation rates increase with the increasing hydrodynamic radius of the precursor protein. The proteins include the globular proteins, lysozyme and insulin, and proteins lacking a well-defined tertiary structure, including the SH3 domain of phosphatidyl-inositol-3-kinase (SH3), α-synuclein, and the β-domain of insulin at pH 2. The amyloid elongation was measured as a function of increasing dextran 200 concentration, ranging from 0 to 60 g/L. Dextran 200 accelerates the relative fibrillation elongation rates of all of the proteins, and this enhancement increases with the increasing hydrodynamic radius of the test protein. As this trend of increasing acceleration with protein hydrodynamic radius is consistent with scaled particle theory, these effects are attributed to volume exclusion by dextran 200. PEG 200,000 was also found to accelerate the relative rate of elongation of insulin, and to a greater extent than dextran 200. The promotion of fibrillation by synthetic polymers is also observed in several other studies [42,48,59,60]. These findings agree with the pioneering work of Uversky and coworkers, which began with synthetic polymers and α-synuclein, the protein implicated in Parkinson's Disease [48]. The crowders' identity, size, and concentration were considered. PEG, Ficoll, and dextran promote fibrillation by increasing the rate and decreasing the lag time.
Of the three types of polymers, PEG is the most drastic accelerant. The PEG with the largest molecular weight (3350 Da) exerts stronger effects than the smaller PEGs (200 Da, 400 Da, 600 Da). The fibrillation is increasingly accelerated as the PEG 3350 concentration increases from 25 to 150 mg/mL. While PEG most effectively promotes α-synuclein fibrillation, the effects of dextran 138, Ficoll 70, and Ficoll 400 were also considered (refer to Table S1, Supplementary Materials, for average molecular weights). Ficoll 400 is slightly more effective than Ficoll 70 at increasing the fibrillation rate and decreasing the lag time, but both are more effective than dextran 138. Ultimately, the authors observe that the modulation of fibrillation depends on the identity of the polymer, and within a single type of polymer, the fibrillation increases with increasing size and concentration. The authors attribute these effects to excluded volume, and eliminated increased solution viscosity as an explanation, as the polymers decreased the lag time of the reaction. A subsequent publication expanded the exploration to a variety of proteins, including S-carboxymethyl lactalbumin, human insulin, bovine core histones, and human α-synuclein [59]. Whereas the proteins selected by Dobson and coworkers vary in degree of disorder, these proteins additionally vary in oligomeric state. Consistent with other studies of α-synuclein [15,48], polymers such as Ficoll 70 and PEG 3500 accelerate the fibrillation of disordered proteins, namely α-synuclein and S-carboxymethyl lactalbumin, by increasing the fibrillation rate and decreasing the lag time. The proteins that occupy an oligomeric state before fibrillation, such as bovine core histones, see hindered fibrillation in the presence of PEG 3500. Another example, human insulin, illustrates the complexity of crowding effects, as it can adopt both a monomeric and a hexameric state under experimental conditions. The observations for monomeric insulin in the presence of PEG 3500 and Ficoll 70 are consistent with observations for α-synuclein and S-carboxymethyl lactalbumin at a neutral pH; where insulin is a hexamer, the polymers slow fibrillation, increasing the lag time, because the oligomer must first dissociate and undergo a structural change. For the oligomeric proteins, therefore, the polymer crowders hinder fibrillation, probably because the crowded conditions favor the formation of the native oligomer. In subsequent publications, the Uversky group probed the role of polymer morphology and flexibility [12]. The authors asserted that the effect of crowding depends on a test protein's shape, size, and degree of order. The commonly-used synthetic polymers, dextran and Ficoll, are compact and flexible polysaccharides. However, most biopolymers in the cell (nucleic acids, proteins, etc.) are more rigid. The cellulose-derived polymers, hydroxypropyl cellulose (HPC) 100, 370, and 1000, were chosen to represent the effects of the more rigid polymers, while the dextrans 100, 250, and 500 were used to represent the more commonly used flexible polymers (see Figure 1 for a structural comparison and Table S1, Supplementary Materials, for average molecular weights). Unsurprisingly, the two types of polymers exhibit opposite effects on proteins with different characteristics. The dextrans inhibit the proteins that form stable oligomers before or during fibrillation, including insulin at pH 7.5 and α-lactalbumin.
By contrast, the dextrans accelerate the fibrillation of the disordered proteins, α-synuclein and histones. Modest effects in either direction are seen with the monomeric globular proteins, lysozyme and insulin, at pH 2.5. This trend indicates that the dextrans operate by excluded volume, favoring the most compact form of the test protein. The HPCs of all sizes, on the other hand, hindered fibrillation for all of the proteins, with the exception of histones, which may be due to the inability of histones to fold under the assay conditions. Of particular interest were the contributions from excluded volume, viscosity, and weak interactions (such as electrostatic interactions, dipole-dipole interactions, and hydrogen bonds). To parse the effects of excluded volume and viscosity, dextran 500 and Ficoll 400 (which has a similar size but a higher density and lower viscosity) were used as the crowders. These two polymers should exert roughly the same excluded volume, based on their close average molecular weights. Both dextran 500 and Ficoll 400 also hindered α-synuclein and monomeric insulin fibrillation. However, Ficoll 400 did so more effectively, indicating that the excluded volume effects of dextran are likely counteracted by viscosity. Nevertheless, an inhibition of fibrillation was still seen in solutions of relatively low viscosity, suggesting a contribution of weak chemical interactions between the proteins and polymers [12]. Next, the role of polymer hydrophobicity was probed [61]. Most of the commonly-used synthetic polymers are hydrophilic. UCON 5400 (a 1:1 copolymer of ethylene- and propylene-glycol, Figure 1) is structurally similar to PEG 4400 but has an extra methyl group on every other unit. This additional methyl group, in contrast to PEG, provides an excellent comparison of hydrophobicity. The effects on the secondary structure and intrinsic fluorescence quenching for 10 proteins of varying size, degree of structure, and oligomeric state were probed, and the fibrillation kinetics and morphology were explored. Circular Dichroism (CD), 8-anilinonaphthalene-1-sulfonate (ANS) fluorescence, and acrylamide quenching demonstrate that, while PEG and UCON do not affect the protein secondary structure, they change the solvent accessibility. Ultimately, UCON is more effective at unfolding the test protein than PEG. As with previous studies, PEG enhances the fibrillation of α-synuclein and monomeric insulin, decreasing the lag time and increasing the elongation rate. UCON, however, inhibits the fibrillation of insulin, and further analysis of the samples with scanning electron microscopy (SEM) revealed that UCON instead promotes the oligomerization of α-synuclein. Uversky and colleagues attributed the PEG effects to excluded volume, while the UCON effects were suggested to arise from changes in the solvent properties. The enhanced fibrillation of α-synuclein and other disordered proteins in synthetic polymers, as seen in Uversky's early studies [42,48,62] and Dobson's publication [15], is observed in other instances. Shtilerman and colleagues observed size-dependent reductions of lag times; PEG 3350 exerts the most dramatic effect, followed by dextran 70 and Ficoll 70, while a similar trend was observed with PEGs of varying sizes [60]. In another study, β-lactoglobulin fibrillation is accelerated (specifically, the lag time decreases and the fibrillation rate increases) in Ficoll 70 and PEG 400 (400 Da), 8000 (8000 Da), and 20,000 (20,000 Da).
The effect is more pronounced with increasing size and concentration, and is therefore attributed to excluded volume [63]. Wu and colleagues observed that Ficoll 70 and dextran 70 enhanced the fibrillation of human Tau protein, with dextran 70 exerting stronger effects [64]. A fibrillation-prone fragment of Tau protein was the subject of a study by Ma et al. [62], who also examined a cohort of other pathogenic, fibrillation-prone proteins, including human prion protein (PrP) and its variants E196K and D178N, and A4V SOD1, which is implicated in ALS [65]. In addition, rabbit PrP and hen egg white lysozyme were considered, both of which are not pathogenic. The authors observed that the phosphorylated Tau protein fragment, which is associated with the onset of Alzheimer's Disease, does not fibrillate in dilute solution. However, it fibrillates in the presence of Ficoll 70 and dextran 70, which the authors attribute to one of two explanations. The first possibility is that the phosphorylated Tau is more likely to fibrillate in a crowded environment, while the second is that crowding works to counteract the retardation initiated by phosphorylation. Conversely, the authors found that macromolecular crowding promoted the fibrillation of the non-fibrillation-prone proteins, rabbit PrP and hen egg white lysozyme, at 100 g/L but hindered their fibrillation at 200 and 300 g/L. Dobson and coworkers, by contrast, saw an acceleration in the presence of crowders regardless of the protein's structure; these authors thus concluded that the macromolecular crowding effects vary depending on the protein and crowder selected. Proteins that are prone to fibrillate will do so under crowded, cell-like conditions, as crowding stabilizes the aggregates or multimers along the path to aggregation. For proteins that are not aggregation-prone, such as lysozyme and rabbit PrP, the authors hypothesized that competition between the stabilization of aggregates and of the folded, native state comes into play, which led to the disparate results at 100, 200, and 300 g/L crowder. A recent study by Biswas and coworkers [66] demonstrated that polymers of differing sizes can have opposing effects on α-synuclein fibrillation. Specifically, in in vitro experiments, the lowest molecular weight PEG, PEG 600, hindered fibrillation by increasing the lag time. PEG 1000 increased the lag time and the fibrillation rate, but the increase in lag time was more dramatic, hindering fibrillation. The higher-mass PEGs, PEG 4000 and PEG 12,000, both led to decreases in the lag time and in the fibrillation rate, with a more drastic reduction seen with PEG 20000. For the larger PEGs, the decrease in lag time was more dramatic, therefore promoting fibrillation. Overall, the promotion of fibrillation by the higher molecular weight PEGs was consistent with the findings of Dobson and coworkers [15], as well as other earlier studies [42,48]. Next, the authors explored how the presence of PEG 6400 and 8000 affected the fibrillation of α-synuclein A53T in yeast cells; they found that the effect was opposite to that found in vitro. In living cells, the addition of these two PEGs hindered fibrillation. However, the authors found that the concentration of soluble α-synuclein in the cells was higher with the PEGs added than without, suggesting that in the cells, PEG may be acting to solubilize the α-synuclein monomers, and therefore working to counter aggregation.
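Nearly all of the studies discussed here quantify crowding effects through the lag time and apparent rate of sigmoidal aggregation traces (typically thioflavin T fluorescence). As a point of reference, the following is a minimal Python sketch of the standard sigmoid parameterization and one common convention for the lag time; the synthetic data and parameter values are illustrative only, not drawn from any cited study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard sigmoidal model for amyloid growth kinetics:
#   F(t) = F0 + A / (1 + exp(-k * (t - t_half)))
# k is the apparent elongation rate; by a common convention, the lag time is
# t_lag = t_half - 2/k (intercept of the tangent at the maximal slope).
def sigmoid(t, F0, A, k, t_half):
    return F0 + A / (1.0 + np.exp(-k * (t - t_half)))

# Illustrative synthetic ThT trace (arbitrary units, time in hours).
t = np.linspace(0, 48, 200)
trace = sigmoid(t, 0.1, 1.0, 0.4, 20.0) + np.random.default_rng(1).normal(0, 0.02, t.size)

(F0, A, k, t_half), _ = curve_fit(sigmoid, t, trace, p0=[0, 1, 0.1, 24])
t_lag = t_half - 2.0 / k
print(f"apparent rate k = {k:.3f} 1/h, lag time = {t_lag:.1f} h")
```

A crowder that "decreases the lag time and increases the elongation rate" shifts t_lag down and k up in a fit of this kind, which is the sense in which the studies above report promotion or hindrance of fibrillation.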
The globular protein hemoglobin was the subject of a thorough study by Siddiqui and Naeem [67]. Specifically, the authors investigated the effects of PEG 4000, PEG 6000, and dextran 70 on hemoglobin fibrillation, and then used isothermal titration microcalorimetry (ITC) to quantify any weak interactions between the protein and the polymers. Hemoglobin, on its own under the conditions selected in the paper, did not undergo fibrillation. PEG 4000, PEG 6000, and dextran 70, at a concentration of 200 g/L, were all found to promote hemoglobin fibrillation, with dextran 70 exhibiting the strongest effects. From the ITC experiments, it was determined that specific binding of hemoglobin by the polymers was not responsible for this change. The characteristics of the hemoglobin fibrils, however, were shown to differ depending on the crowder. Fibrillation experiments with Congo Red demonstrated that PEG 4000 promotes the formation of protofibrils, while fibrils were formed in the presence of PEG 6000 and dextran 70; these results were confirmed by morphological studies of the hemoglobin aggregates with SEM. The authors then expanded upon the study by determining the effects of such aggregation on living cells, specifically human peripheral blood mononuclear cells (PBMCs). The formation of hemoglobin aggregates in the presence of crowders reduced cell viability, manifested by an increase in lipid peroxidation, a decrease in mitochondrial membrane potential, and an increase in the number of necrotic cells relative to apoptotic cells. Additionally, the cells with aggregating hemoglobin in the presence of crowders showed increased DNA damage. All of these effects were attributed to an increase in reactive oxygen species from oxidative stress. Ultimately, the increase in protein fibrillation due to macromolecular crowding was demonstrated to have potentially devastating physiological impacts. This is of particular concern to the elderly, in whom proteopathies occur more frequently, as the cells shrink and dehydrate with age. Dobson's work, and many other efforts, show that crowding by synthetic polymers accelerates fibrillation. However, several studies report mixed results: polymers either hinder the fibrillation or have no effect. For example, Ficoll 70 and dextran 70 stabilize the native state of the β-sheet-rich protein, bovine carbonic anhydrase, leading to decreased fibrillation rates and fewer aggregates [28]. Kong and Zeng found that the effects on lysozyme fibrillation depend on whether the fibrils are pre-seeded [68]. When the lysozyme fibrils are not pre-seeded, adding PEG slows the fibrillation. This trend is reversed when the lysozyme is pre-seeded, with PEG accelerating the fibrillation, suggesting that crowding stabilizes the intermediate oligomers instead of the fibrils, while the pre-formation of anchors for fibrils encourages fibrillation. The authors saw the effects on lysozyme fibrillation even at low PEG concentrations of 10-20 g/L. The presence of the effects at such small concentrations of polymer crowders led the authors to conclude that chemical interactions are a key factor. The Winter group assessed the effects of a variety of crowding agents on human islet amyloid polypeptide (hIAPP), which is implicated in Type 2 Diabetes. Contrary to the other crowding agents in the study, Ficoll 70 does not affect the fibrillation of hIAPP.
Low concentrations of dextran 70 (10-20%) also do not affect the fibrillation, while high concentrations of dextran (30-40%) caused a loss in the sigmoidal shape of the data and a lengthened elongation time, which the authors suggest is the result of a more complex fibrillation mechanism. The slight differences in these effects were attributed to viscosity, as dextran exhibits a higher viscosity than Ficoll [69].

Table 1 (excerpt). Effects of synthetic polymers on protein fibrillation (crowder | test protein | effect):
Dextran 500 | Insulin, pH = 2.5 | Decreases elongation rate [12]
Dextran 500 | Insulin, pH = 7.5 | Increases lag phase, decreases elongation rate [12]
Dextran 500 | HEWL | Decreases elongation rate [12]
Dextran 500 | α-synuclein | Decreases lag phase and elongation rate [12]
Dextran 500 | Histone | Decreases lag phase, increases elongation rate [12]
Dextran 500 | α-lactalbumin | Increases lag phase, decreases elongation rate [12]
PEG | Insulin, pH = 7.5 | Increases lag time and decreases elongation rate [12]
PEG | α-synuclein | Increases lag time and decreases elongation rate [12]
PEG | α-lactalbumin | Increases lag time and decreases elongation rate [12]
HPC 370 | Insulin, pH = 2.5 | Increases lag time and decreases elongation rate [12]
HPC 370 | Insulin, pH = 7.5 | Increases lag time and decreases elongation rate [12]
HPC 370 | α-synuclein | Increases lag time and decreases elongation rate [12]
HPC 370 | α-lactalbumin | Increases lag time and decreases elongation rate [12]
HPC 1000 | Insulin, pH = 2.5 | Increases lag time and decreases elongation rate [12]
HPC 1000 | Insulin, pH = 7.5 | Increases lag time and decreases elongation rate [12]
HPC 1000 | α-synuclein | Increases lag time and decreases elongation rate [12]
HPC 1000 | α-lactalbumin | Increases lag time and decreases elongation rate [12]
UCON 5400 | α-synuclein | ...

Osmolytes
Osmolytes are small molecules that organisms accumulate in response to the stress induced by water loss [70]. These small organic molecules, which include amino acids and amino acid derivatives, sugars, urea, and methylamines [71], have been demonstrated to stabilize proteins in vitro [38] and in living cells [72]. Work by Serge Timasheff and coworkers demonstrates that stabilizing osmolytes operate by a preferential hydration mechanism, where the osmolytes are excluded from the protein, resulting in hydration of the protein surface [73]. Native state stabilization, resulting from a combination of steric and chemical effects [74,75], was proposed to arise because the native state is favored over the denatured state of the protein, as it exposes less surface area from which the osmolytes must be excluded [76,77]. Bolen and coworkers referred to this as the "osmophobic effect." This theory was further refined to include the repulsive interactions between stabilizing osmolytes and the protein backbone, which raise the energy of the denatured state relative to the native state [78]. Various studies have confirmed the role of unfavorable enthalpic interactions between osmolytes and proteins in protein stability [38] and the kinetics of protein folding [39].

Osmolytes and Protein Fibrillation
In their 2010 Journal of the American Chemical Society publication [15], Dobson and coworkers studied the effects of an osmolyte, glucose, on the fibrillation of the monomeric globular proteins, lysozyme and insulin. Proteins lacking a well-defined tertiary structure were also considered, including the SH3 domain of PI3-Kinase (PI3K-SH3), α-synuclein, and the β-domain of insulin at pH = 2. The results were striking, indicating that glucose has variable effects on the fibrillation of proteins.
Specifically, the globular proteins with a compact native structure (lysozyme and insulin) saw their fibrillation hindered by the presence of 200 g/L glucose, while the fibrillation of the three natively unfolded proteins (SH3, α-synuclein, and the β-chain of insulin) was accelerated. Dobson and coworkers connected this duality with the nature of the native state of the proteins in question. The globular proteins with a compact native state must unfold to aggregate. Conversely, the disordered proteins, PI3-SH3, α-synuclein, and the β-chain of insulin, adopt native state conformations that are already extended. For these proteins, the aggregation-prone transition state is likely more compact, and therefore favored, in an environment crowded with osmolytes. However, the osmolytes hinder the fibrillation of globular proteins, which must adopt a more extended transition state before forming aggregates. A recent study by Islam and colleagues explored the ability of sugars to protect α-lactalbumin (α-LA) from aggregation, using sucrose, its monomers, glucose and fructose, and a mixture of glucose and fructose as crowding agents [87]. Although kinetic studies were not employed, Thioflavin T and tryptophan fluorescence experiments, along with Rayleigh scattering and Dynamic Light Scattering (DLS), demonstrated that the sugar osmolytes reduced the amount of aggregates formed, suggesting that sugars have an inhibitory effect on α-LA fibrillation. Of all the solutions, glucose on its own is the least effective inhibitor, while the mixture of glucose and fructose is the most effective. The authors endeavored to explain this phenomenon through molecular docking simulations between sugars and several residues within α-LA. The docking experiments showed that hydrogen bonds may form between α-LA and the sugars, indicating that weak interactions between the sugars and the protein, rather than solely excluded volume or viscosity effects, may influence the observed inhibition of fibril formation. The work of the Bhat group has recently sought to understand the effect of polyol osmolytes on protein fibrillation, specifically, how the addition of -OH groups alters fibrillation [88,89]. The molecules utilized are ethylene glycol (2 -OH groups), glycerol (3), erythritol (4), xylitol (5), and sorbitol (6). The work of Roy and Bhat dug deeper into the effects of osmolytes on human γ-synuclein, which, like α-synuclein, is intrinsically disordered [88]. Ethylene glycol promotes fibrillation at concentrations lower than 4.5 M and suppresses fibrillation at concentrations greater than 4.5 M. Glycerol, the smallest polyol osmolyte, also suppresses fibrillation by increasing the lag time, decreasing the rate of fibrillation, and decreasing the overall number of fibrils. Erythritol and xylitol both increase the lag time with increasing concentration, but xylitol also decreases the rate of fibrillation. Finally, the largest polyol, sorbitol, increases the lag time at low concentrations, but decreases the lag time and the fibrillation rates at high concentrations. The differing effects on γ-synuclein suggest that the influence of the osmolytes depends on the structure and number of -OH groups contained in the polyols. Specifically, the lag time decreases with an increased number of -OH groups. This trend arises from the degree of preferential exclusion, and whether the osmolyte preferentially stabilizes the monomer, fibril, or an intermediate along the fibrillation pathway.
These results suggest that the relationship between the osmolytes and disordered proteins is more complex than initially proposed by Dobson and coworkers. A subsequent study from the Bhat group expanded their exploration of the polyol osmolyte size effects to α- and β-synuclein [89]. While α-synuclein is fibrillation-prone, β-synuclein resists fibrillation, due to the lack of a non-amyloid β component (NAC) domain. Generally, polyol osmolytes promote α-synuclein fibrillation. Ethylene glycol induces a decrease in the lag time and an increase in the rate of fibrillation, which the authors attributed to favoring the nucleation of early-stage oligomers. The light scattering data indicated that smaller aggregates are formed in ethylene glycol, which was attributed to the high viscosity of the ethylene glycol solutions. The addition of another -OH group with glycerol alters the way the osmolyte modulates fibrillation. From concentrations of 0.25 to 2 M, glycerol decreases the lag time and increases the fibrillation rate of α-synuclein. However, at high concentrations (>2 M), the mechanism changes: the lag time increases and the fibrillation rate decreases relative to the buffer, a trend observed previously with glycerol and α-synuclein [42]. The authors rationalized that at low concentrations of glycerol, preferential exclusion from the protein surface dominates, favoring the fibril over the disordered monomer. At higher concentrations, this effect is overtaken by the increasing viscosity of the solution. The fibrillation of α-synuclein is promoted in the presence of erythritol at all concentrations; the lag time decreases, and the apparent rate of fibrillation increases. Xylitol and sorbitol, however, show non-monotonic effects, in the same way as glycerol, with an inflection point at 1.5 M. Xylitol decreases the lag time of the fibrillation with increasing concentration, while the apparent rate of fibrillation increases until 2 M, where it decreases relative to the buffer. After an initial increase at 0.25 M, sorbitol also decreases the lag time with increasing concentration and increases the fibrillation rate, with a slight decrease relative to the buffer at 1.5 M. Ultimately, the authors attributed these effects at high concentrations to an increase in the viscosity of the solution, as the viscosity increases with the concentration and number of -OH groups. These findings were further supported when the authors tracked the ANS-binding fluorescence intensity and light scattering as a function of concentration; both the scattering and the ANS binding decreased, an effect that grew with the concentration and the number of -OH groups of the polyols. This indicated that the concentration of the fibrils formed decreased with the increasing concentration and size of the osmolyte, while SEM indicated that the fibrils formed exhibited a different morphology. Ultimately, the authors proposed a two-fold model for fibrillation in the presence of osmolytes, depending on concentration. At low concentrations, the monomers diffuse readily, and the formation of fibrils is facilitated due to preferential exclusion. At high concentrations, where the viscosity is high, the monomers diffuse less readily, and shorter fibrils are favored over longer ones, due to preferential exclusion. Two papers compared the effects of stabilizing osmolytes and destabilizing osmolytes on the fibrillation of proteins.
The N-terminal fragment of the E. coli hydrogenase maturation factor, HypF (HypF-N), a model amyloidogenic protein, was the subject of a study by Roy and colleagues [90], while insulin was explored by the Belfort group [91]. Consistent with the observations from Dobson's study, insulin fibrillation is hindered by mono-, di-, and trisaccharide osmolytes via preferential exclusion and stabilization of the native protein; the lag times are increased and the nucleation rates are decreased. As the size, and therefore the potential for preferential exclusion, increases, so do the effects. The opposite trend, however, was seen with the destabilizing osmolytes urea and guanidinium HCl. The lag times are decreased, and the nucleation and fibrillation rates are increased. Similar results were observed for the model protein, HypF-N; the stabilizing osmolytes hinder fibrillation, likely through a stabilization of the native state. The one exception is proline, which promotes fibrillation. Where this study diverges from the other, however, is that for HypF-N, unlike insulin, guanidinium HCl and urea hindered fibrillation, which was attributed to interactions between the osmolytes and water. Many studies have explored the effects of osmolytes on small peptide hormones and model peptides and proteins (see, e.g., [90,[92][93][94][95][96] and references cited therein). Since these hormones and peptides do not fall into the categories delineated by Dobson and colleagues, they will be considered separately. Harries and colleagues used a model peptide, termed MET16, which on its own forms a stable, monomeric β-hairpin but can also unfold and then aggregate into fibrils [92]. The fibrillation was measured in the presence of glycerol, sorbitol, and triethylene glycol, in addition to PEG 400 and 4000. The small molecule osmolytes slow fibrillation, while the PEGs exert little to no effect. In addition, unlike in the other studies, the presence of cosolutes was not found to affect the morphology or yield of fibrils. Of particular interest to the authors was that, while sorbitol and triethylene glycol have a similar size, the effects of sorbitol on MET16 fibrillation were greater. Although all of the cosolutes (polyols and PEGs) were found to operate via preferential hydration, they had varying effects on MET16, implicating the role of peptide sequence and soft interactions between peptide and crowder. Since MET16 must unfold to fibrillate, these results are compatible with Dobson's findings and predictions for globular proteins.

Table 2 (excerpt). Effects of small molecule osmolytes on protein fibrillation (osmolyte | test protein | effect):
Taurine | Glucagon | Promotes fibrillation [96]
Ascorbic acid | HEWL | Hinders fibrillation [82]
Abbreviations: HEWL, hen egg white lysozyme; PI3-SH3, Src-homology 3 domain of phosphatidyl-inositol-3-kinase; hIAPP, human islet amyloid polypeptide; SOD1, superoxide dismutase 1.

Ueda and coworkers evaluated the effects of the sugar osmolytes glucose, sucrose, and trehalose on the fibrillation of 3HmutWil, the peptide portion of a mutated, amyloid-prone light chain Wil of the Vλ6 protein. The unfolded version of this protein has been implicated in a monoclonal plasma cell disorder [93]. To mimic the fibrillation of the unfolded peptide, the fibrillation experiments were carried out at pH = 2, where 3HmutWil is unfolded. Under these conditions, sucrose, glucose and trehalose increase the lag phase, hindering 3HmutWil fibrillation.
Sucrose had the greatest effect, trehalose was similar, and glucose was the least effective at hindering the fibrillation. However, when pre-seeded fibrils were added to the sucrose, glucose and trehalose solutions, fibrillation was unaffected by the addition of the osmolytes. CD and 2D nuclear magnetic resonance (NMR) experiments indicated that 3HmutWil refolds in the presence of the sugars. Combined, these data indicate that the sugar osmolytes affect 3HmutWil fibrillation by stabilizing the compact native state, and do not influence the fibril elongation process. Ultimately, a lack of difference between the 2D NMR spectra in the presence or absence of sugar indicates that the structural changes are likely not due to direct interactions between the folded state of 3HmutWil and the sugar osmolytes, but rather arise through preferential hydration of the native state. Winter and colleagues investigated the effects of non-sugar osmolytes on the fibrillation of the islet amyloid polypeptide (IAPP), which is implicated in Type 2 diabetes [94]. IAPP is unfolded in its native monomeric state but can form transient structures. The effects of the stabilizing osmolytes, TMAO and betaine, the destabilizing osmolyte urea, and the combinations of urea/betaine and urea/TMAO were considered. The lag phase of fibrillation is unaffected by the addition of 1 and 2 M TMAO and betaine, but the fibrillation rate decreases, while the sigmoidal shape of the curve is lost. The authors suggest that TMAO and betaine may stabilize smaller oligomers or protofibrils, rather than the monomer or the longer fibril. The TMAO effects were observed to be stronger and more concentration-dependent than those of betaine. Interestingly, the addition of urea increased the lag phase of IAPP fibrillation in a concentration-dependent manner, suggesting that urea stabilizes the unfolded native state, delaying fibrillation. The addition of TMAO to a solution of urea fully counteracts the delay induced by urea, while betaine counteracts the effect of urea by only a small amount. This suggests that the observed effects are due to interactions between the stabilizing and destabilizing cosolutes added, rather than interactions between the individual cosolutes and the peptide. AFM imaging found that the morphology of the fibrils was unchanged by the addition of cosolvents, pointing towards interactions with the native protein as the cause of any changes to the fibrillation kinetics. Ultimately, the observed effects are attributed to preferential exclusion from the unfolded native state by TMAO and betaine, leading to small oligomers, while urea preferentially hydrates the unfolded native state, prolonging the lag phase. Although different mechanisms are adopted, both delay the fibrillation of an unfolded peptide, and ultimately contradict the effects observed by Dobson and coworkers for unfolded proteins in the presence of sugar osmolytes. Park and colleagues investigated the effects of four osmolytes on the fibrillation of residues 106-126 of the human prion peptide, PrPc [95]. A conformational change in this 20-residue peptide is implicated in the conversion of the non-fibrillation-prone prion protein, PrPc, into the fibrillation-prone version, PrPsc. This peptide contains a polar headgroup and a hydrophobic tail. The ability of stress molecules to alter the fibrillation of this segment of PrP was explored.
Ectoine, hydroxyectoine, mannosylglycerate, and mannosylglyceramide, which are produced by cells under stress conditions, were selected. All inhibit fibrillation, with ectoine and mannosylglyceramide showing the strongest concentration-dependent effects. Additionally, cells treated with these stress molecules showed increased viability compared to untreated cells, pointing to the therapeutic potential of stress molecules against aggregation-based diseases. These two osmolytes were proposed to hinder fibrillation by preferential exclusion from the native protein. The other stress molecules likely operated via a different mechanism: hydroxyectoine, which is more polar than ectoine due to an extra -OH group, and mannosylglycerate, which has a negative charge and may interact with the peptide's head group, freed the hydrophobic tail to aggregate. Ultimately, the effects of the stress molecules are mechanism-dependent, and may vary based on the structure of the chosen molecule and test protein. Otzen and colleagues considered the effects of a host of osmolytes, including polyols, amino acids, and methylamines, on the fibrillation of the hormone glucagon [96]. The polyols had a minimal effect on glucagon fibrillation. The amino acids decreased the lag time of fibrillation, promoting fibrillation. The one exception was taurine, which increased the lag time, delaying fibrillation. The methylamines, specifically sarcosine and betaine, also decreased the lag time. Concentration dependence was explored, but only sarcosine exhibited a noticeable concentration-dependent decrease in the lag time. Given the limited effects on the kinetics, CD was used to check whether any differences arose in the fibril structure. The polyols did not affect the structure of the fibrils, but the amino acids and methylamines led to the production of a different class of fibril than was observed in the dilute solution. These observations led the authors to conclude that a blanket theory cannot be applied to osmolyte-protein systems; the effects vary depending on the protein and osmolyte chosen for the study.

Protein Crowders
Many studies have explored the effects of synthetic polymers and osmolytes on the fibrillation of proteins [48,64,68,81,86,87]. Some have even expanded the study to physiologically relevant conditions, by exploring the effects of these crowders on cell viability and toxicity [67,95]. However, while the osmolytes do populate cells under stress conditions, the synthetic polymers are not naturally found in cells. As the cell contains a high concentration of proteins in the cytoplasm, proteins can serve as physiologically relevant crowding agents, and can inform our understanding of how fibrillation occurs in the cell. Hen egg white lysozyme and BSA are commonly used as the crowding agents, due to their durability and ease of purchase [38,39,48]. However, working with proteins as crowding agents poses complications: some of the proteins used as crowding agents can themselves fibrillate [82][83][84][85][98], and they cannot be used with techniques commonly used to assess fibril formation and morphology, such as CD, as the protein crowder's signal would interfere with that of the test protein [69]. Only a few studies, therefore, have endeavored to test the effects of protein crowders on protein fibrillation. As seen with both the polymer and osmolyte studies, the effects are not uniform.
Protein Crowders and Fibrillation
Uversky and colleagues explored the effects of the protein crowders, BSA and lysozyme, on the fibrillation of α-synuclein [42,48]. Both were found to accelerate α-synuclein fibrillation by decreasing the lag time and increasing the fibrillation rate, even at a low concentration (60 g/L BSA, 50 g/L lysozyme). In fact, both were more effective at promoting α-synuclein fibrillation, even at lower g/L concentrations, than the chosen synthetic polymers. Subsequent studies on the effect of BSA at 30 g/L on the fibrillation of another disordered protein, bovine S-carboxymethyl-α-lactalbumin, confirmed that BSA promotes the fibrillation of disordered proteins [59]. Interestingly, the authors treated BSA and lysozyme as inert protein crowders. At the pH chosen, the α-synuclein is negatively charged, while the lysozyme is positively charged (+8), and the BSA is negatively charged (−17). Under these conditions, the BSA exhibited stronger effects than the lysozyme. The authors eliminated any charge-charge contribution and, instead, only considered the effects of excluded volume. However, later studies have demonstrated that charge-charge attractions between test proteins and protein crowders can destabilize the native state of a protein, while repulsions can also destabilize a protein, but to a lesser extent [38]. These revelations indicate that the effects of chemical interactions between proteins, especially in the context of living cells, need to be further explored and analyzed. The opposite effects were seen when BSA and lysozyme were used as crowding agents in the study of the fibrillation of another natively unfolded peptide, IAPP, indicating that crowding by proteins must be more complicated than excluded volume alone [69]. In this analysis, Winter and colleagues did consider the chemical interactions that might occur between the protein crowders and IAPP, but not between the synthetic polymers and IAPP. Both BSA and lysozyme hindered IAPP fibrillation in a concentration-dependent manner and decreased the number of fibrils; more drastic effects were seen with the lysozyme than with the BSA. This hindrance of fibrillation was also seen with the synthetic polymer crowders. Despite the similar effects in terms of kinetics, the authors focused on the difference between the mechanisms adopted by the protein and synthetic crowders by imaging the aggregates formed in the presence of both classes using AFM. The fibrils that formed in the presence of Ficoll and dextran maintained the same morphology as the fibrils formed in a dilute solution. In the presence of BSA and lysozyme, globular IAPP monomers and oligomers are observed, suggesting that the protein crowders stabilize off-pathway species, hindering fibrillation. The authors concluded that the two types of crowders both delay fibrillation by stabilizing off-pathway monomers and oligomers. However, the protein crowders showed more pronounced effects at lower concentrations, suggesting that weak chemical interactions dominated under these conditions, while excluded volume, viscosity, and diffusion effects were probably the main contributors to the effects seen with the synthetic polymers.

Summary and Conclusions
The effect of macromolecular crowding on protein fibrillation depends on the class of crowder studied. Stabilizing osmolytes act mainly on the native state of the globular and oligomeric proteins, increasing the equilibrium thermodynamic stability.
This decreases the population of non-native states, which may explain the commonly seen reduction in fibrillation rates. The effects on the native state outweigh the effects further down the pathway, except in two cases: the pairing of glycine betaine with BSA, and of proline with HypF-N. In these instances, the osmolytes favor fibrillation, probably by stabilizing the intermediates or lowering barriers along the fibrillation pathway. Destabilizing osmolytes, such as urea, may have the opposite effect at low to moderate concentrations. However, urea is also a denaturant, such that high concentrations favor complete unfolding and disfavor fibrillation. In contrast to the globular proteins, the osmolytes exhibit variable effects on the fibrillation of disordered proteins and small peptides. Here, native state stabilization is not important; instead, the alteration of the free energy of the intermediates and barriers along the fibrillation pathway takes precedence. Large, synthetic polymers, such as dextran or Ficoll, increase the fibrillation rates in disordered proteins and small peptides. The size and concentration of these crowders reduce the available solution volume and accelerate the association steps that are key to fibrillation. For globular and oligomeric proteins, the results are more varied. The protein crowders promote fibrillation in some cases and hinder it in others. This variability is likely due to weak, nonspecific protein-protein interactions that become noticeable at the high concentrations used in these experiments. Overall, the large number of effects at the molecular level makes it difficult to conclude much about common fibrillation mechanisms using in vitro crowding agents. All crowders come with caveats, and it is difficult to recommend a single ideal type. Ultimately, in vitro measurements alone are of limited value. For more than 40 years, scientists have characterized the effects of the cellular interior on protein function. Advances in technology, such as in-cell NMR [9,99], single-cell mass spectrometry [100], fluorescence [8,10,101,102], FRET [103] and flow cytometry [104], electron microscopy [5][6][7], and cryo-electron tomography (Cryo-ET), allow scientists to characterize protein and protein aggregate structure and function in cells. Cryo-ET is particularly effective at structural characterization of neurotoxic aggregates [105][106][107]. These advances, and others to come, empower scientists to expand upon the efforts discussed here and characterize all aspects of protein aggregation, including kinetics, in living cells.
Content-Dependent Image Search System for Aggregation of Color, Shape and Texture Features

Existing image search systems often face difficulty in finding an appropriate retrieved image corresponding to an image query. The difficulty commonly arises because the user's intention in searching for an image differs from the dominant information of the image collected by feature extraction. In this paper, we present a new approach for a content-dependent image search system. The system utilizes information on the color distribution inside an image and detects a cloud of clustered colors as something supposed to be an object. We apply segmentation of an image as a content-dependent process before feature extraction in order to identify whether or not there is an object inside an image. The system extracts 3 features, which are the color, shape, and texture features, and aggregates these features for similarity measurement between an image query and the image database. An HSV color histogram is used to extract the color feature of the image, while the shape feature extraction uses Connected Component Labeling (CCL), from which the area, equivalent diameter, extent, convex hull, solidity, eccentricity, and perimeter of each object are calculated. The texture feature extraction uses Leung-Malik (LM)'s approach with 15 kernels. To show the applicability of our proposed system, we applied it to the benchmark 1000-image SIMPLIcity dataset consisting of 10 categories, namely Africans, beaches, historic buildings, buses, dinosaurs, elephants, roses, horses, mountains, and food. The experimental results achieved a 62% accuracy rate in detecting objects by the color feature, 71% by the texture feature, 60% by the shape feature, 72% by combined color-texture features, 67% by combined color-shape features, 72% by combined texture-shape features, and 73% by all features combined.

INTRODUCTION
All technology is born for various purposes. For example, search engines are created to sort through massive amounts of data online. In each new technology, improvement is obtained by combining existing technologies to create something new that is better than the technology used before. Digital information is increasing because of globalization, which eliminates distance, space and time. This gives the accuracy of information an important role in the search process. The image search process over big data poses a problem that is not easy to solve [1]. One solution that can be applied is Content-Based Image Retrieval (CBIR). CBIR increases the accuracy and efficiency of the image search system and manages large amounts of image data [2]. The term Content-Based Image Retrieval (CBIR) originated with experiments on the automatic retrieval of images from a database by color and shape features. Although CBIR takes features that can be either primitive or semantic, the feature extraction process must be able to identify the dominant content of images [3]. IBM QBIC (Query by Image Content) proposed methods to query large online image databases using the content of the image as the basis of the queries [4]. The Photobook system is an interactive tool for browsing and searching images and image sequences that makes direct use of the image content rather than relying on text annotations [5]. SIMPLIcity is an image retrieval system that uses a semantic classification method with a wavelet-based approach for feature extraction and integrated region matching based on image segmentation [6].
CBIR aims to measure performance in retrieving images similar to the query. However, existing systems cannot capture image content properly, because they focus only on the selection of extraction features and methods [7].

This paper develops a new approach to a content-dependent image search system for the aggregation of color, shape and texture features. The system introduces color feature extraction using RGB and HSL and then normalizes the data. The main idea of this approach is to use clustering techniques, namely Hierarchical K-means with Optimal K detection, and to use the clustering results to detect objects. Features are then extracted to find images similar to the desired image.

Ali Ridho Barakbah [8] has made a previous study of image search with feature extraction of colors, shapes, and textures with automatic weighting. The color features are extracted using 3-Dimensional Color Vector Quantization, while the shape feature extraction uses eccentricity, area, equivalent diameter, and convex area. Texture features are extracted using the Curvelet method. The key to this research [9] is the automatic weighting mechanism when selecting features based on a combination of color, shape and texture features. The researcher applies an automatic weighting mechanism to select features by analyzing the distribution of color information to determine representative features. The researcher extracts the color moments in the image and calculates the color distance for the color weight and the texture density for the texture weight. The shape feature is measured by extracting the shape area, which is then adjusted against the texture density to determine the shape weight.

Praheep Anantharatsamy [10] has proposed content-based image retrieval based on three major types of visual information: color, shape, and texture, comparing distance calculations over the three feature spaces for retrieval. In the experimental results, they investigated several feature extraction methods and search algorithms for content-based image retrieval. The feature selection results obtained the highest accuracy based on the 5 nearest neighbors.

Naveena A K [11] has proposed a CBIR system based on color moments, wavelets and edge descriptions to retrieve desired images from the database. The extracted features are color, texture, and shape: the color features use color moments, the texture feature uses the wavelet transform, and the shape feature uses edge histograms with Canny edge detection. Comparisons of the retrieval results of each of color, texture, and shape against the combined features show that the combination of features provides better retrieval results than each individual feature.
K. Mala [12] has proposed a technique to produce image content descriptions with three features: Color Auto-Correlogram, Gabor Wavelet, and Wavelet Transform. The feature extraction process is based on the input query image and the features stored in the feature dataset of the image database. The Manhattan distance is applied between the feature vector of the user-given query image and the feature vectors calculated from the database images to measure similarity. Other efficient features, such as the Wavelet transform for edge and shape extraction, are also used in conjunction with the color and texture features to provide better results and high-precision retrieval. From the analysis conducted on the experimental results, the proposed method achieves a better level of precision compared to other available methods. The features taken from the proposed method achieved an average accuracy rate of 83% for the Corel database, 88% for the Li database, and 70% for the Caltech-101 database in the image search system.

Andrea Kutics [13] has described a new method for detecting objects in natural images with an unlimited domain, which aims to capture information important to the user's semantic assessment at the visual level for efficient retrieval. The main obstacle to developing this method is the difficulty of accurately segmenting images into prominent areas. To overcome this difficulty, they developed a vector inhomogeneous diffusion model that uses several features.

Aamer Mohamed [14] has proposed an efficient content-based image retrieval scheme based on semantic object detection (SOD). The feature extraction process uses discrete cosine transform (DCT) blocks, where the retrieval results for query images use the calculation of a histogram-based approach. SOD is used with the aim of reducing the size of the database for the retrieval approach taken, and using SOD improves the retrieval results.

ORIGINALITY
Existing image search systems often have difficulty finding images that match the image query. Difficulties are generally caused by the user's intention when looking for an image differing from the dominant information of the image collected from feature extraction. In this paper we present a new approach to content-dependent image search systems. This system uses color distribution information in images and detects clouds of clustered colors as something that is considered an object. We apply image segmentation as a content-dependent process before feature extraction to identify whether or not there are objects in the image. The system extracts 3 features, namely color, shape and texture, and combines these features for measuring similarities between the image query and the image database. The HSV color histogram is used to extract the image color features, while the shape feature extraction uses Connected Component Labeling (CCL), which calculates the area, equivalent diameter, extent, convex hull, solidity, eccentricity, and perimeter of each object. The texture feature extraction uses Leung-Malik (LM)'s approach with 15 kernels.

SYSTEM DESIGN
Figure 1 shows the design of our proposed system. There are 5 main functions in our system: (1) preprocessing, (2) clustering, (3) object detection, (4) feature extraction, and (5) similarity measurement. Each process is explained in sections 4.1 to 4.5.
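As a structural sketch, the five stages can be laid out as the following Python stubs. These are illustrative placeholders for the processes detailed in sections 4.1 to 4.5, not the system's actual code; each stub only names the technique its section describes.

```python
import numpy as np

# Illustrative skeleton of the five main functions of the proposed system.
# Function names and signatures are hypothetical placeholders.

def preprocess(image: np.ndarray) -> np.ndarray:
    """4.1: extract RGB and HSL color metrics per pixel and normalize them."""
    raise NotImplementedError

def cluster_colors(features: np.ndarray) -> np.ndarray:
    """4.2: find the optimal K, then run Hierarchical K-means clustering."""
    raise NotImplementedError

def detect_object(cluster_map: np.ndarray) -> np.ndarray:
    """4.3: remove the background and decide whether an object is present."""
    raise NotImplementedError

def extract_metadata(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """4.4: aggregate color (HSV histogram), shape (CCL), texture (LM) features."""
    raise NotImplementedError

def similarity(query_meta: np.ndarray, db_meta: np.ndarray) -> float:
    """4.5: Normalized Canberra Distance between metadata vectors."""
    raise NotImplementedError
```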
Pre-processing
Image preprocessing is the first stage of detection, intended to improve the quality of images through color metric extraction and normalization. Color metric extraction reduces the computational burden, and color quantization can be used to represent images without significantly reducing image quality [15].

The color spaces used are RGB and HSL. RGB is a color space which comprises the red, green, and blue spectral wavelengths, and it is the most frequent representation of colors in image processing. Since the RGB color space has some limitations in high-level processing, other color space representations have been developed [16]. HSL is known to improve on the HSV color system because it can present brightness better than saturation. Besides, the hue component in HSL color space integrates all of the chromatic information, making it stronger than the main color for image color segmentation [17].

The values of RGB and HSL lie on different scales, so this paper applies normalization using the softmax algorithm. The softmax algorithm can reach maximum and minimum values, but only within the specified limits. Transformations using softmax are more or less linear in the middle range and smoothly nonlinear at both ends; the output range is between 0 and 1 [18].

Clustering
The object detection process involves finding the best number of clusters from the resized image and clustering the original-size image using Optimal K detection and Hierarchical K-means. Optimal K detection can find the globally optimal solution: the moving variance is defined as the within-cluster variance computed while the number of clusters is varied, and the globally optimal solution is assessed based on its trend [19][20]. Experiments with widely available test data, comparing the cluster results of this technique against existing non-hierarchical clustering techniques, show the predominance of the technique. The ideal clustering has a minimum variance within the clusters (vw), representing internal homogeneity, and a maximum variance between the clusters (vb), representing external homogeneity:

vw = (1 / (N - k)) Σi (ni - 1) vi²  (3)
vb = (1 / (k - 1)) Σi ni d²(ci, c)  (4)
V = vw / vb  (5)

Where:
N = amount of all data
k = number of clusters
ni = number of data in cluster i
vi = variance of cluster i
ci = centroid of cluster i, and c = the grand mean of all data
The best number of clusters is the one that minimizes V.

The best number of clusters thus obtained is processed using Hierarchical K-means. In [21], the initial centroids for K-means are optimized. This utilizes all the results of running K-means a number of times, even though some of them reach local optima. Then, the results are combined with the hierarchical algorithm to determine the initial centroids for K-means. The algorithm is as follows:

Algorithm Hierarchical K-Means
1. Determine K as the initial cluster number.
2. Determine p as the amount of computation.
3. Set i = 1 as the increment value.
4. Apply the K-means algorithm.
5. Record the centroids from the cluster results as Ci.
6. Set i = i + 1.
7. Repeat from step 4 if i <= p.
8. Assume the recorded centroids C1, ..., Cp as a data set, with the value K as the number of clusters to be formed.
9. Apply the hierarchical clustering algorithm.
10. Save the centroid values from the results of clustering as Ch.
Then, Ch is used as the initial centroids for K-means clustering. (A minimal sketch of this initialization is given just below, after the object detection overview.)

Object Detection
The object detection applied in this CBIR system is one of the unique points of this research. The object detection stage involves background removal and object determination.
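Before turning to background removal, here is a minimal, hypothetical Python sketch of the Hierarchical K-Means initialization described above, assuming NumPy and SciPy are available. The choice of p, the Ward linkage for the hierarchical step, and all names are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.cluster.vq import kmeans2

def hierarchical_kmeans(X, k, p=10, seed=0):
    """Run K-means p times, pool the centroids, cluster the pooled centroids
    hierarchically into k groups, and use the group means to initialize a
    final K-means run (steps 1-10 above)."""
    rng = np.random.default_rng(seed)
    pooled = []
    for _ in range(p):
        # Steps 4-5: apply K-means and record the resulting centroids C_i.
        centroids, _ = kmeans2(X, k, minit="random", seed=int(rng.integers(1 << 31)))
        pooled.append(centroids)
    pooled = np.vstack(pooled)  # p * k candidate centroids
    # Steps 8-10: hierarchical clustering of the pooled centroids into k groups.
    labels = fcluster(linkage(pooled, method="ward"), t=k, criterion="maxclust")
    init = np.vstack([pooled[labels == j].mean(axis=0) for j in range(1, k + 1)])
    # Final K-means initialized with the hierarchically derived centroids C_h.
    return kmeans2(X, init, minit="matrix")
```

Averaging the pooled centroids by hierarchical group smooths over the local optima reached in individual K-means runs, which is the motivation stated in [21] for this initialization.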
Object Detection
The object detection built into our CBIR system is one of the unique aspects of this research. The object detection stage involves a background removal process and the determination of the object.

Background Remover
The background remover is a technique for separating the background of an image from the image objects. It works under the assumption that the object lies in the middle of the background image. The process is based on the cluster values resulting from the normalization of position, RGB, and HSL. The removal technique has six steps:

1. Initialize and set the tmpcluster value to 1. It is intended as a place to store the results of removing clusters.
2. In Figure 2, the search is carried out by taking 25% of the image width, counted from column 0, and 25% of the image height, counted from row 0. The cluster value with the highest count in this border region is then changed to 0. In the last stage, the remaining cluster values that are not equal to 0 are searched, and the cluster value with the highest count is selected.

Determination of Objects
The last part of the object detection stage is the decision whether the result is an object or not. The determination is done by calculating the ratio of the object to the background: a cloud of clustered colors is treated as an object if the inner ratio is greater than the outer ratio. The inner ratio is obtained from the central 25%-75% region of the image, and the outer ratio is obtained from the overall image minus the inner region. The simulation can be seen in Figure 6 below.

Feature Extraction
Image retrieval requires selecting an appropriate feature extraction method and a measurement approach [2]. The features extracted in this research are color, shape, and texture.

Color Feature Extraction
Color feature extraction uses 3D color vector quantization and the HSV histogram. The main idea of 3D color vector quantization is that the system uniformly represents the colors of the image at certain positions in the RGB vector color space. This reduces the complexity of the RGB colors in the image and unites colors that are close together in the vector space [8][22]. The quantization size is 5×5×5, so that 125 positions are formed in RGB space. In the resulting color metadata, each entry holds the value of color component i in pixel j, where N is the number of pixels in the image.

Shape Feature Extraction
The shape feature extraction in this paper uses Connected Component Labeling (CCL) [23], applied with thresholds to identify all groups: spatially connected groups in which all pixels of a connected component have the same pixel intensity values or are connected to each other. After all groups have been determined, each pixel is labeled according to its component. For the extraction, the first step applies edge detection using the Canny detector [24], which smooths the image and suppresses noise. CCL is then applied to extract the contours of the objects. Each contour is processed to obtain the area, equivalent diameter, extent, convex hull, solidity, eccentricity and perimeter of the object. To represent the values of each descriptor, the calculations use the mean (µ), median, standard deviation (σ), variance (σ²), skewness (s), and kurtosis (k). The shape feature extraction metadata that is formed is as follows:

$$f_{shape} = \{A_x, D_x, E_x, H_x, S_x, e_x, P_x\} \qquad (12)$$

where:
$A_x$ = the result of feature extraction from the area in block x
$D_x$ = the result of feature extraction from the equivalent diameter in block x
$E_x$ = the result of feature extraction from the extent in block x
$H_x$ = the result of feature extraction from the convex hull in block x
$S_x$ = the result of feature extraction from the solidity in block x
$e_x$ = the result of feature extraction from the eccentricity in block x
$P_x$ = the result of feature extraction from the perimeter in block x
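As a hedged illustration of the shape pipeline above (the paper does not name its implementation), the sketch below uses scikit-image: Canny edge detection, connected component labeling, and per-object computation of the descriptors listed in Equation (12).

```python
# A minimal sketch of CCL-based shape feature extraction with scikit-image.
import numpy as np
from skimage import color, feature, measure

def shape_features(rgb_image: np.ndarray) -> list[dict]:
    """Canny edges -> connected component labeling -> per-object descriptors."""
    gray = color.rgb2gray(rgb_image)
    edges = feature.canny(gray)                      # smoothed, binarized edges
    labeled = measure.label(edges, connectivity=2)   # CCL on the edge map
    features = []
    for region in measure.regionprops(labeled):
        features.append({
            "area": region.area,
            "equivalent_diameter": region.equivalent_diameter,
            "extent": region.extent,
            "convex_area": region.convex_area,       # size of the convex hull
            "solidity": region.solidity,
            "eccentricity": region.eccentricity,
            "perimeter": region.perimeter,
        })
    return features

# Example with a random placeholder image.
demo = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
print(len(shape_features(demo)), "objects found")
```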
Texture Feature Extraction
Texture feature extraction uses the Leung-Malik (LM) filter bank. The LM filter banks are rotationally variant; therefore, the derivatives of the LM filter bank change when subjected to different orientations. A set of images obtained under a known set of imaging conditions is considered as input data to the LM filters [25]. The full LM filter set consists of 48 filters, but this paper uses 15 of them. The LM implementation likewise involves Canny edge detection as a segmentation step [24], and for texture extraction the calculations use the mean (µ), median, standard deviation (σ), variance (σ²), skewness (s), and kurtosis (k).

Figure: The Leung-Malik filter bank. Source: www.robots.ox.ac.uk/~vgg/research/texclass/filters.html

The texture feature extraction metadata that is formed is as follows:

$$f_{texture} = \{t_1, t_2, \ldots, t_{15}\} \qquad (13)$$

where $t_i$ = the texture features extracted from kernel i.

Similarity Measurement
After feature extraction, the metadata for color, shape and texture are created. The metadata of the query image is used to measure the similarity, by proximity, to the metadata of the image database. As the measurement approach, we use the Normalized Canberra Distance, which normalizes the distance data on each attribute [8]. The Normalized Canberra Distance compares the data of the query image with the image database:

$$d(p,q) = \frac{1}{N}\sum_{i=1}^{N}\frac{\lvert p_i - q_i\rvert}{\lvert p_i\rvert + \lvert q_i\rvert} \qquad (14)$$

where N = the number of attributes, $p_i$ = the metadata of the query image, and $q_i$ = the metadata of the database image.

EXPERIMENT AND ANALYSIS
For the testing phase, this paper uses the benchmark SIMPLIcity dataset of 1000 images from Wang et al. [26], which consists of 10 categories: Africans, beaches, historical buildings, buses, dinosaurs, elephants, roses, horses, mountains, and food.

Background Remover and Object Detection
Based on the results of the background remover, there are images for which no image object can be determined. For example, for an image of a mountain with a sky view, the system mostly takes the sky to be the image object, because the color extraction value of the sky is more dominant than the color of the mountains. For dinosaur and flower images, in contrast, the color extraction of the object itself is more dominant, so it is correctly taken to be the object. Figure 9 illustrates some results of the background remover.

Figure 9. Background Remover Results

In this paper, the background remover was also tested with background replacement, i.e., replacing the removed background colors with other colors. The aim is to examine whether the color information of a replaced background influences the similarity measurement. We evaluated the similarity results of the object detection in each category using the error ratio, precision, and a score, determined over the top 10 images retrieved for each query:

$$\text{error} = \frac{\lvert\{r : c_r \neq c_q\}\rvert}{10} \qquad (15)$$

$$\text{precision} = \frac{\lvert\{r : c_r = c_q\}\rvert}{10} \qquad (16)$$

where $c_r$ = the category of a retrieved image and $c_q$ = the category of the query image; a score summarizes these values per category.

Tables 1, 2 and 3 display the error, precision and score values comparing objects with the background removed against objects with the background replaced by another color. Based on these results, it can be seen that replacing the background with other colors produces a higher error and a lower precision and score than discarding the background: the colors that should indicate the object become unsuitable because the replaced background provides extraneous information, whereas discarding this information yields better results. The tables also show that the object detection technique still performs poorly when applied to images with highly varied backgrounds, such as people, buildings, mountains, and buses. For images with less varied backgrounds, such as dinosaurs, flowers, elephants, horses, food, and beaches, the objects are detected well.

Table 1. Calculation Results of Average Error Ratio
Table 2. Calculation Results of Average Accuracy
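Before the distance functions are compared in the next subsection, the Normalized Canberra Distance defined above can be written in a few lines; NumPy is an assumption, and the zero-denominator handling is our choice, not specified in the paper.

```python
# A minimal sketch of the Normalized Canberra Distance, Equation (14).
import numpy as np

def normalized_canberra(query: np.ndarray, db_entry: np.ndarray) -> float:
    """Canberra distance averaged over the N metadata attributes; attributes
    where both values are zero contribute nothing to the distance."""
    num = np.abs(query - db_entry)
    den = np.abs(query) + np.abs(db_entry)
    terms = np.divide(num, den, out=np.zeros_like(num, dtype=float), where=den != 0)
    return float(terms.mean())

# Example: rank database images by ascending distance to the query metadata.
query = np.array([0.12, 0.80, 3.0, 0.0])
database = np.array([[0.10, 0.75, 2.5, 0.0], [0.90, 0.10, 0.1, 1.0]])
ranking = sorted(range(len(database)),
                 key=lambda i: normalized_canberra(query, database[i]))
print(ranking)   # index of the most similar image first
```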
Selection of Image Similarity Techniques
The process of selecting the similarity measurement technique compared the Normalized Canberra Distance approach with the cosine distance on the color, shape and texture features. The following cosine distance equation is used as the comparison:

$$d(p,q) = 1 - \frac{\sum_{i=1}^{N} p_i\,q_i}{\sqrt{\sum_{i=1}^{N} p_i^2}\,\sqrt{\sum_{i=1}^{N} q_i^2}} \qquad (17)$$

where N = the number of attributes, $p_i$ = the metadata of the query image, and $q_i$ = the metadata of the database image.

Tables 4, 5 and 6 describe the error, accuracy and score values of the compared approaches. It can be seen that the Normalized Canberra Distance is better suited to the color and texture feature extraction approaches than the cosine distance, whereas for the shape features the cosine distance performs better than the Normalized Canberra Distance.

Table 4. Calculation Results of Accuracy Values from Comparison of Similarity Measurement
Table 5. Calculation Results of Error Values from Comparison of Similarity Measurement
Table 6. Calculation Results of Scoring Values from Comparison of Similarity Measurement

Result Retrieval
The system retrieves and re-displays the collection of database images that are similar to the query image, returning as the best solutions the images with the closest distance values to the query image. Testing is done by comparing the color, shape and texture features using the k-nearest neighbors (k-NN) algorithm with k = 3.

CONCLUSION
This paper presents a new approach to a content-based image retrieval system that aggregates color, shape, and texture features. The system introduces color feature extraction using RGB and HSL color extraction, followed by normalization of the data. The main idea of this approach is to use the clustering techniques Hierarchical K-means and optimal-K selection. Our image retrieval extracts three kinds of features: color, shape, and texture. We used 3D color vector quantization for the color feature. Shape feature extraction uses Connected Component Labeling (CCL), which calculates the area, equivalent diameter, extent, convex hull, solidity, eccentricity, and perimeter of each object. Texture feature extraction uses the Leung-Malik (LM) approach with 15 kernels.

The techniques presented are less able to find the desired object correctly when the retrieved images provide information from highly varied backgrounds rather than from the object itself. For the distance calculation, our analysis shows that the Normalized Canberra Distance yields higher accuracy values than the cosine distance. The resulting retrieval accuracy rates are 62% for the color feature, 71% for the texture feature, 60% for the shape feature, 72% for the combined color-texture features, 67% for the combined color-shape features, 72% for the combined texture-shape features, and 73% for all features combined. The analysis of the experimental results shows that the system is more accurate when all feature extractions are combined than when each feature is used on its own. In future work, we will apply an automatic weighting mechanism to select the features automatically.
Total numbers and in-hospital mortality of patients with myocardial infarction in Germany during the FIFA soccer world cup 2014

Environmental stress, such as important soccer events, can induce excitement, stress and anger. We aimed to investigate (i) whether the FIFA soccer world cup (WC) 2014 and (ii) whether the soccer games of the German national team had an impact on the total numbers and in-hospital mortality of patients with myocardial infarction (MI) in Germany. We analyzed data of MI inpatients of the German nationwide inpatient sample (2013-2015). Patients admitted due to MI during the FIFA WC 2014 (12th June-13th July 2014) were compared to those admitted during the same period in 2013 and 2015 (12th June-13th July). The total number of MI patients was higher during the WC 2014 than in the comparison period 2013 (18,479 vs. 18,089, P < 0.001) and 2015 (18,479 vs. 17,794, P < 0.001). The WC was independently associated with higher MI numbers (2014 vs. 2013: OR 1.04 [95% CI 1.01-1.07]; 2014 vs. 2015: OR 1.07 [95% CI 1.04-1.10], P < 0.001). Patient characteristics and the in-hospital mortality rate (8.3% vs. 8.3% vs. 8.4%) were similar across the periods. The in-hospital mortality rate was not affected by the games of the German national team (8.9% vs. 8.1%, P = 0.110). However, we observed an increase in in-hospital mortality from 7.9-9.3% on the days before the final to 12.0% on the final match day. The number of hospital admissions due to MI in Germany was 3.7% higher during the WC 2014 than during the same 31-day period in 2015. While in-hospital mortality was not affected by the WC, it was highest at the WC final.

Results
MI patients were analyzed during different periods: the FIFA WC 2014 from 12th June 2014 to 13th July 2014, the comparison periods from 12th June to 13th July 2013 as well as from 12th June to 13th July 2015 without a soccer WC, and the additional comparison period between 14th July and 14th August 2014. The total number of MI patients was significantly higher during the FIFA WC 2014 than in the comparison period 2013 (18,479 vs. 18,089; P < 0.001; representing an increase of 2.1%) and the comparison period 2015 (18,479 vs. 17,794; P < 0.001; representing an increase of 3.7%) (Table 1). Additionally, the total number of MI patients was also higher during the FIFA WC 2014 in comparison to the period between 14th July and 14th August 2014 (18,479 vs. 17,482, P < 0.001; representing an increase of 5.4%). When analysing the total numbers of MI patients in the months June and July over the whole timeframe of the years 2011-2015, we identified no statistical differences between the years (P = 0.297) and no trend over time (β = 96.9) (Table 1). In contrast, we identified some differences between the groups regarding interventional treatments. While the use of cardiac catheterization (68.7% vs. 71.5%, P < 0.001) and percutaneous coronary intervention (54.4% vs. 57.1%, P < 0.001) increased from 2013 to 2014, the numbers were stable between 2014 and 2015. The total numbers of coronary artery bypass surgeries were comparable between the three periods, whereas the use of drug-eluting stent implantations increased and the use of bare-metal stents decreased from 2013 to 2015 (Table 1). Transfusions of blood constituents were administered more often in 2014 than in 2015 (9.2% vs. 8.6%, P = 0.047). There were also no statistical differences regarding in-hospital mortality (P = 0.297) between the years 2011-2015 when comparing the months June and July, and no differences regarding mean temperature (P = 0.328) (Figure S1 in the supplementary material). The daily numbers of admissions due to MI fluctuated over the observational period of the FIFA WC 2014 (Fig. 2A).
Similarly, the comparison periods of the years 2013 and 2015 also showed day-to-day fluctuations of the daily admission numbers due to MI over the observational periods (Fig. 3). Interestingly, the in-hospital mortality was highest at the final match between Germany and Argentina, at 12.0%. The lowest rate of in-hospital death was found on 1st July 2014 with 6.3%, the day of the matches Argentina vs. Switzerland and Belgium vs. USA (Fig. 2A). The highest total number of in-hospital deaths was 144 deaths on the day of the final match (Figure S2 in the supplementary material). The mortality rates on days with a lower admission rate (< 600 admissions with MI per day) and on days with a higher admission rate (≥ 600 patients) were not statistically different (P = 0.406). We detected no trend regarding the proportion of patients aged ≥ 70 years or regarding the interventional treatments for MI over the observational FIFA WC 2014 period (Fig. 2B,C).

Impact of the games of the German national team on admissions due to MI, in-hospital mortality and treatments. During the FIFA WC 2014, the total number of admitted MI patients (634 (434-666) vs. 591 (485-642), P = 0.624) as well as the in-hospital mortality rate (8.9% vs. 8.1%, P = 0.110) were not significantly affected by the games of the German national team (Table 2). Additionally, we analysed whether the timepoint of the first German goal in the matches of the German national team had an influence on the admissions of patients with MI. In three of the seven games, the first goal of the German national team fell in the first half, namely within the first 12 minutes of the match. The timepoint of the first goal of the German national team had no influence on admissions of patients with acute MI (P = 0.321). The interventional treatment rates for MI did not differ between the match days of the German national team and the other days of the WC period. These results were confirmed in the logistic regression models, which showed no association between games of the German national team and recurrent MI events or the mortality rate (Fig. 1B). However, we observed an increase of the in-hospital mortality rate from 7.9-9.3% on the match days before the final to 12.0% on the final match day (Fig. 4A). While the last three games of the German national soccer team (quarter-final, semi-final and final) were accompanied by the highest rates of MI patients aged ≥ 70 years, the interventional and surgical treatments for patients hospitalized for MI did not change during these games (Fig. 4B). The German nationwide inpatient sample showed no significant increase of the total number of MI events over the match days of the German national team during the FIFA WC 2014 (β = −0.006 (95% CI −0.026 to 0.015), P = 0.502) (Fig. 4A). In addition, no statistical uptrend of in-hospital death during the WC, from game one against Portugal to the WC final against Argentina, could be detected in the linear regression analysis (β = −0.0003).

Discussion
Large sporting events potentially increase the occurrence of cardiovascular events [9-11,13] as well as the related mortality [12-14], but the results are not consistent across studies [15-17]. Soccer is the most popular sport in Germany and more than 34 [...]. The identification of potent triggers in the pathogenesis of MI is of outstanding importance [18]. Worldwide, ischemic heart disease, with its acute manifestation MI, is the single most common cause of death, with a still increasing frequency [19].
It accounts for approximately 20% of all deaths in Europe [19] and the USA [20]. MI events are usually based on atherosclerotic stenosis of the coronary vessels: atherosclerotic plaques with plaque rupture lead to coronary thrombus formation, resulting in myocardial hypoperfusion, inadequate myocardial oxygen supply, myocardial cell damage and myocardial cell death [21]. Besides these internal factors [21,22], MI is often preceded by specific triggers, which can include common activities such as physical exertion, alcohol consumption and heavy meals, air pollution, but also stressful events [23-25]. In this context, it is well known that the onset of MI follows a circadian and seasonal periodicity [26-33]. Beyond the seasonal and circadian variation, an increase of cardiovascular events has been observed on Christmas and New Year's Day as well as on Mondays, when returning to work, both associated with increased mental stress [18]. Thus, in the present study we compared the temperature during the investigated months (June and July) of the years 2011-2015 and found no significant difference between the investigated years for Germany. In addition, it has to be mentioned that the government did not change and that the socioeconomic and political status of Germany was stable during the observational period: the German federal elections (Bundestagswahlen) were held in the years 2013 and 2017, and the chancellor did not change over the whole observational period. Furthermore, we detected no influence of the weekdays on the admission rate during the FIFA WC 2014.

While 48% of the MI patients reported clinical triggers, moderate to heavy physical exertion was seen in 23% and emotional stress and upset in 18% of the MI patients [18]. Mental stress in the form of tension, sadness, frustration and anxiety increases the risk of myocardial ischemia [18,24]. Although large sporting events are substantially less dramatic than environmental catastrophes [1-5,7-12], studies have shown that these events can also affect the occurrence of cardiovascular diseases [9-11,13,17] as well as the related mortality [12-14].

Total admissions of MI patients. Our study results show a strong and substantial increase in the total numbers of MI during the soccer WC 2014 compared with the same 31-day comparison periods in 2013 and 2015. This finding is in accordance with previous studies on European soccer matches influencing cardiovascular events [9,10,13-15,18,34,35]. In contrast to our study, Wilbert-Lampen et al. identified a significant increase of MI incidence in the German population during the FIFA WC 2006 exclusively on match days of the German national team compared with later and earlier years, but not an increased occurrence during the whole soccer WC (games without German participation were not accompanied by a significant increase in the occurrence of MI) [9]. The increase during the matches with German participation was more pronounced in men than in women [9]. In contrast, we could not confirm a higher admission rate of MI patients on match days with games of the German national team compared with WC days without, although our sample was 8.5 times larger than that of Wilbert-Lampen et al., who investigated the number of cardiovascular emergencies in different emergency departments in Bavaria [9].
Nevertheless, differences in the studied periods might explain the controversies: while we looked for MI events on the day of the soccer game, other studies examined the effects of the game on MI events of both the game day and the two days after the match [17]. In addition, it has been reported that, in particular, shoot-outs and defeats were associated with increased admissions [9,10,14]. Carroll et al. reported that the risk of admission due to MI in England was increased by 25% on the day England lost to Argentina in a penalty shoot-out at the FIFA WC 1998 and on the following 2 days [10]. In contrast, Gebhard et al. reported higher MI rates after victories of the Montreal ice hockey team [11]. Since the German team was not defeated at the FIFA WC 2014 and won the championship, we were not able to distinguish between match days with defeats and wins. However, our study demonstrated, in accordance with most studies, that WC soccer events are potent triggers of MI that should not be underestimated.

Pathophysiologically, it has been hypothesized that emitted stress hormones might directly affect endothelial, monocytic and platelet functions [17,36]. Stress resulting in increased sympathetic nervous activity and coagulability contributes to an increased risk of transformation of a non-vulnerable atherosclerotic plaque into a plaque that is susceptible to disruption, accompanied by a thrombogenic stimulus, and can induce further vasoconstriction as well as increased coagulability, resulting in acute coronary syndrome including MI [14,17]. This is supported by a study showing that, while watching the FIFA WC 2010 final between Spain and the Netherlands, Spanish fans had significantly higher levels of testosterone and cortisol compared with control days [17]. In contrast, Katz et al. showed a 63% increase of sudden cardiac death in Switzerland during the FIFA WC 2002; although the increase was more pronounced in men, it was also significant in women [13]. Once again, since the German team was not defeated and won the FIFA WC 2014, we were not able to identify a defeat as a trigger of increased mortality. Nevertheless, the highest mortality rate as well as the maximum total number of in-hospital deaths were observed at the final match, which might indicate a high stress level of the German soccer fans during the very exciting final game of Germany against Argentina, which was not won until extra time (1:0). Although most peaks of the in-hospital mortality rate during the FIFA WC 2014 before the final match occurred on days with lower numbers of admissions for MI, and the total numbers of deceased patients were comparable to the other days of the WC, at the final match both the in-hospital mortality rate and the total number of deceased patients reached their maximum during the FIFA WC 2014. Thus, the matches before the final did not affect the in-hospital mortality significantly, while the final match was accompanied by a substantial increase of in-hospital mortality.

Limitations
Some limitations of our study require consideration. Firstly, one major limitation is that the study results are based on ICD discharge codes, which might result in an incomplete dataset due to under-reporting or under-coding. Secondly, clinical data such as information about troponins, echocardiograms or concomitant medications are not available and are therefore missing for additional analyses.
Thirdly, reliable door-to-device or door-to-balloon times could not be calculated with the data of the German nationwide inpatient sample. Thus, the focus of the present study was on clear and strong endpoints such as in-hospital death and recurrent MI, which are very unlikely to be miscoded or not coded.

Conclusions
Watching the FIFA WC 2014 was a trigger of the occurrence of MI. While the number of admissions due to MI in Germany was 3.7% higher during the FIFA WC 2014 than during the comparison period 2015, the in-hospital mortality of MI was not affected by the WC. Nevertheless, the final match of Germany vs. Argentina, with a scant victory of Germany, was accompanied by the highest in-hospital mortality throughout the WC period. Our data may help to find better ways of planning hospital capacities, which is essential for delivering sufficient capacity at the right time point to meet future enormous health-care challenges.

Methods
Diagnoses, procedural codes, and definitions. Based on the diagnosis- and procedure-related remuneration system in Germany (German Diagnosis Related Groups [G-DRG] system), it is mandatory for all hospitals to transfer the coded patient data of each patient on diagnoses, (coexisting) conditions, and procedures to the Institute for the Hospital Remuneration System in order to receive their remuneration [42,43]. Diagnoses are coded according to the International Classification of Diseases and Related Health Problems, 10th Revision, with German Modification (ICD-10-GM), and surgical, diagnostic or interventional procedures according to the German Procedure Classification (OPS, surgery and procedure codes [Operationen- und Prozedurenschlüssel]). Thus, we were able to identify all patients admitted for MI (ICD codes I21 and I22) during the WC period and the comparison periods [33].

Study outcomes. The outcomes of this study were the number of admitted MI patients, death from all causes during the hospital stay (in-hospital death), and recurrent MI (ICD code I22), which was defined as a recurrent MI during the first 28 days after a previous MI.

Ethical aspects. Since this study did not involve direct access by the investigators to data of individual patients, approval by an ethics committee and informed consent were not required, in accordance with German law.
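As an illustration of the case identification described under "Diagnoses, procedural codes, and definitions", the following hedged sketch selects MI admissions via their ICD-10-GM codes and counts them per observation window; the file and column names are assumptions, since the G-DRG data are not publicly distributed in this form.

```python
# A hedged pandas sketch: identify MI admissions (ICD-10-GM I21/I22) and
# count admissions per observation window.
import pandas as pd

df = pd.read_csv("drg_inpatients.csv", parse_dates=["admission_date"])
mi = df[df["icd_main"].str.match(r"I2[12]")]      # ICD codes I21 and I22

windows = {
    "FIFA WC 2014":    ("2014-06-12", "2014-07-13"),
    "Comparison 2013": ("2013-06-12", "2013-07-13"),
    "Comparison 2015": ("2015-06-12", "2015-07-13"),
    "Post-WC 2014":    ("2014-07-14", "2014-08-14"),
}
for label, (start, end) in windows.items():
    sel = mi[mi["admission_date"].between(start, end)]
    daily = sel.groupby(sel["admission_date"].dt.date).size()
    print(f"{label}: {len(sel)} admissions, {daily.mean():.1f} per day")
```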
Statistical methods. Descriptive statistics for relevant comparisons of MI patients admitted during the FIFA WC 2014 and those admitted during the comparison periods 2013, 2014 and 2015, as well as of MI patients admitted during the FIFA WC 2014 on match days of the German national soccer team and those on match days without German team participation, are provided as medians with interquartile ranges (IQR), or as absolute numbers with corresponding percentages. Continuous variables were tested using the Mann-Whitney U test, and categorical variables were compared with Fisher's exact or chi² test, as appropriate. Temporal trends regarding the total numbers of MI, interventional treatments and the in-hospital mortality rate were analysed over the period of the FIFA WC 2014, and linear regressions were used to test for significant increases or decreases; the results are presented as beta (β) coefficients with corresponding 95% confidence intervals (CI). Univariate and multivariate logistic regression models were analysed to investigate the impact of the FIFA WC 2014, as well as the impact of the match days with participation of the German national team, on the study outcomes (total numbers of MI patients, recurrent MI and in-hospital mortality). The results are presented as odds ratios (OR) with corresponding 95% CIs. The multivariate logistic regression model, testing the independence of predictors of in-hospital outcomes, included the following parameters for adjustment: age, sex, active cancer, coronary artery disease, heart failure, chronic obstructive pulmonary disease, arterial hypertension, hyperlipidaemia, smoking, diabetes mellitus, atrial fibrillation/flutter (ICD code I48), renal insufficiency (comprising diagnoses of chronic renal insufficiency stages 3 to 5 with a glomerular filtration rate < 60 ml/min/1.73 m²), cardiac catheterization, percutaneous coronary intervention, and coronary artery bypass surgery. The software SPSS (version 20.0; SPSS Inc., Chicago, Illinois) was used for the computerised analysis. P values < 0.05 (two-sided) were considered statistically significant.
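The regression analyses described above were computed in SPSS; purely as an illustration, an equivalent setup can be sketched with statsmodels, where the data file, the column names and the selected covariates are assumptions.

```python
# A hedged sketch of the adjusted logistic regression and the linear trend
# test, in Python/statsmodels instead of SPSS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mi_patients.csv")   # assumed: one row per MI patient

# Multivariate logistic regression: WC-2014 exposure vs. in-hospital death,
# adjusted for a subset of the covariates listed above.
logit = smf.logit(
    "death ~ wc2014 + age + sex + diabetes + heart_failure + atrial_fib"
    " + renal_insufficiency + pci + cabg",
    data=df,
).fit(disp=False)
print(np.exp(logit.params))            # odds ratios
print(np.exp(logit.conf_int()))        # 95% CIs of the odds ratios

# Linear regression: trend of the daily in-hospital mortality rate over the
# FIFA WC 2014 match days (beta with 95% CI, as reported in the Results).
daily = df[df.wc2014 == 1].groupby("wc_day", as_index=False)["death"].mean()
trend = smf.ols("death ~ wc_day", data=daily).fit()
print(trend.params["wc_day"], trend.conf_int().loc["wc_day"].tolist())
```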
Readability and topics of the German Health Web: Exploratory study and text analysis

Background: The internet has become an increasingly important resource for health information, especially for lay people. However, the information found does not necessarily comply with the user's health literacy level. Therefore, it is vital to (1) identify prominent information providers, (2) quantify the readability of written health information, and (3) analyze how different types of information sources are suited to people with differing health literacy levels.

Objective: In previous work, we showed the use of a focused crawler to "capture" and describe a large sample of the "German Health Web", which we call the "Sampled German Health Web" (sGHW). It includes health-related web content of the three predominantly German-speaking countries Germany, Austria, and Switzerland, i.e. the country-code top-level domains (ccTLDs) ".de", ".at" and ".ch". Based on the crawled data, we now provide a fully automated readability and vocabulary analysis of a subsample of the sGHW, as well as an analysis of the sGHW's graph structure covering its size, its content providers and the ratio of public to private stakeholders. In addition, we apply Latent Dirichlet Allocation (LDA) to identify topics and themes within the sGHW.

Methods: Important web sites were identified by applying PageRank to the sGHW's graph representation. LDA was used to discover topics within the top-ranked web sites. Next, a computer-based readability and vocabulary analysis was performed on each health-related web page. Flesch Reading Ease (FRE) and the 4th Vienna formula (WSTF) were used to assess readability. Vocabulary was assessed by a specifically trained Support Vector Machine (SVM) classifier.

Results: In total, n = 14,193,743 health-related web pages were collected during the study period of 370 days. The resulting host-aggregated web graph comprises 231,733 nodes connected via 429,530 edges (network diameter = 25; average path length = 6.804; average degree = 1.854; modularity = 0.723). Among the 3000 top-ranked pages (1000 per ccTLD according to PageRank), 18.50% (555/3000) belong to web sites from governmental or public institutions, 18.03% (541/3000) from nonprofit organizations, 54.03% (1621/3000) from private organizations, 4.07% (122/3000) from news agencies, 3.87% (116/3000) from pharmaceutical companies, 0.90% (27/3000) from private bloggers, and 0.60% (18/3000) from others. LDA identified 50 topics, which we grouped into 11 themes: "Research & Science", "Illness & Injury", "The State", "Healthcare structures", "Diet & Food", "Medical Specialities", "Economy", "Food production", "Health communication", "Family" and "Other". The most prevalent themes were "Research & Science" and "Illness & Injury", accounting for 21.04% and 17.92% of all topics across all ccTLDs and provider types, respectively. Our readability analysis reveals that the majority of the collected web sites are structurally difficult or very difficult to read: 84.63% (2539/3000) scored a WSTF ≥ 12, and 89.70% (2691/3000) scored an FRE ≤ 49. Moreover, our vocabulary analysis shows that 44.00% (1320/3000) of the web sites use vocabulary that is well suited to a lay audience.

Conclusions: We were able to identify major information hubs as well as topics and themes within the sGHW. The results indicate that readability within the sGHW is low. As a consequence, patients may face barriers, even though the vocabulary used seems appropriate from a medical perspective.
In future work, the authors intend to extend their analyses to identify trustworthy health information web sites.

Overview
The Internet has become an increasingly important resource for health information, especially for lay people [1-7]. Web users perform online searches to obtain health information regarding diseases, diagnoses, and different treatments [1]. However, the information found does not necessarily comply with the user's health literacy level and, consequently, might not be well understood by the respective reader. This can result in an overall poorer general health status as well as greater barriers to accessing adequate medical care [8]. In addition, another major problem of written information is the gap between the language of medical experts and that of lay people. Even with a higher level of education, medical vocabulary poses problems for people reading relevant health information [9]. Moreover, the medical terms associated with the etiology of a disease tend to differ between health professionals and patients [10-12]. Health information on the web is provided by different stakeholders, each with its own set of interests [4]. Thus, the provided health information material does not necessarily reflect the needs of a (lay) health information seeker. Therefore, it is important to (1) identify information providers, (2) quantify the readability as well as the type of vocabulary used, and (3) analyze how different types of information sources are suited to people with differing health literacy levels. Given the great variety and vast amount of health information available on the internet, a manual or semiautomatic approach to analysis seems futile.

To the best of the authors' knowledge, no comparable fully automated, large-scale analysis of the German Health Web exists so far. In prior work [28], an SVM classifier was trained to distinguish between documents written for laymen and documents written for (medical) experts on the basis of 10,000 texts from various German health content providers. The resulting SVM classifier was tested against two datasets (n1 = 1202, n2 = 1200) and achieved an accuracy of 0.8458 and 0.8741, respectively. Subsequently, it was applied to online health web sites in the context of a Firefox browser extension in 2015 [29].
The SVM outputs a class probability using Platt scaling [30]. This class probability is then transformed into an "expert level" expressing vocabulary-based text difficulty, which was named L. In 2018, Zowalla and Wiesner [23] analyzed 2931 articles of the "Public Health Portal of Austria" (www.gesundheit.gov.at) using FRE, the 4th Vienna formula (WSTF) and the measure L. Their analysis revealed low readability levels paired with a "moderate level of vocabulary difficulty". In 2018, L, WSTF and FRE were also applied by Keinki et al. [24] to 51 German cancer information booklets. They report "that the majority of the 51 booklets (92.16%) is hard to read". In 2020, the study design was replicated by Wiesner et al. [25] for psoriasis/psoriatic arthritis material written in German. They found that "patient education materials in German require, on average, a college or university education level [..] even though the vocabulary used seems appropriate".

McInnes and Haglund [26] entered 22 health condition terms into five different search engines and computed the readability scores of the first 10 web sites retrieved by each individual search, using the Gunning Fog Index (FOG), the Simple Measure of Gobbledygook (SMOG) score, the Flesch-Kincaid Grade (FKG) and FRE. They found that "Websites with .gov and .nhs TLDs [top level domains] were the most readable while .edu sites were the least". A recent study by Worrall et al. [27] used Google search to collect the first 20 web pages for searches related to coronavirus disease and assessed their readability using FOG, FRE, FKG and SMOG. They conclude that "only 17.2% [(n = 165)] of web pages [were] at a universally readable level". In addition, Worrall et al. reported that "Public Health organisations and Government organisations provided the most readable COVID-19 material, while digital media sources were significantly less readable" [27].

In addition to classic readability metrics such as FRE or WSTF, other approaches for computing the readability of (German) text material exist. In [31], vor der Brück et al. describe the readability checker DeLite, which uses 48 morphological, lexical, syntactic, and semantic indicators to assess the readability of a text written in German. A similar approach is presented by Berends and Vajjala in [32], which uses 165 custom features to assess the readability of German geography textbooks for secondary school. However, neither approach can easily be applied, as the related source code is not publicly available. In addition, these tools are not commonly used for the readability assessment of (health-related) text material. Other studies [33-35] leveraged crowdsourcing to measure the readability of text material. In this context, crowd workers are asked to judge the readability of a given text. However, such approaches require substantial financial resources, as the crowd workers need to be paid. The costs depend strongly on the amount of text material to be reviewed, which might not be feasible for large-scale analyses of text material from the web.

Topic modeling on health information material. Topic modeling is a well-accepted technique for discovering abstract topics in unstructured text. It is often applied to clinical and/or health-related content posted on social media, in online newspapers or on web sites in general [36-42]. In 2014, Paul and Dredze [36] showed that topic models can be leveraged to infer health topics in Twitter messages.
To do so, they analyzed 144 million health-related Twitter posts and discovered 13 topics in the dataset, e.g. "cancer & serious illness", "dental health", "exercises" or "injuries & pain". Another study by Liu and Yin [39] used topic modeling to analyze the abstract topics of 477,904 posts in r/loseit of the reddit community. They identified 25 topics concerning the overall theme "weight loss", such as "food and drinks", "exercises", or "communication". Another study by Muralidhara and Paul [37] leveraged topic modeling to discover the abstract health-related topics contained in 96,426 Instagram posts with hashtags related to health. Overall, they identified 47 health-related topics covering ten broad themes such as "acute illness", "alternative medicine", "chronic illness and pain", or "substance use". The most prevalent topics were related to "diet" and "exercise". In 2017, Melkers et al. [38] assessed the content of 89 dental blogs using topic modeling techniques. In total, the authors found 176 abstract topics in the data and grouped them into four leading themes: "Status/Social", "Dental care", "Dental practice related", and "Other". Liu et al. [40] collected 642 newspaper articles related to third-hand smoke and analyzed the text material using LDA. They discovered ten topics, e.g. "cancer", "risks of smoking", or "air quality", and grouped them into three major themes. In 2020, Min et al. [42] analyzed the content of 145 web sites related to "occupational accidents" using topic modeling. They discovered 14 topics with three themes: "workers' compensation benefits", "illicit agreements with the employer", and "fatal and non-fatal injuries and vulnerable workers". Bahng and Lee [41] analyzed posts on the social question-and-answer platform "Naver Knowledge-iN" using LDA "to identify patients' perceptions, concerns, and needs on hearing loss". They found 21 topics, which "mostly correspond to sub-fields established in hearing science research", and grouped them into five main themes such as "noise-induced hearing loss" or "sudden hearing loss".

Crawling the German Health Web. In 2020, we demonstrated the suitability of a distributed focused web crawler for the acquisition of a large sample of the GHW [13]. The presented system ran for 277 days, had an average harvest rate of 19.76%, and achieved a recall of 0.821 as estimated via a seed-target approach, which indicates that our approach is a suitable method for acquiring most health-related content found under the country-code top-level domains (ccTLDs) ".de", ".at", and ".ch". The crawler uses an SVM text classifier to estimate the health relevance of a given web page. It was trained on a large dataset (n = 70,048) acquired from various German content providers to distinguish between health-related and non-health-related web pages. The classifier was evaluated on two different datasets: the first dataset (TD1) consisted of 17,514 documents and was based on a-priori class labeling; the second one (TD2) consisted of 384 real-world web pages and was annotated using a crowdsourcing approach. Both TD1 and TD2 had an equal class distribution. The system achieved an accuracy of 0.937 on TD1 (TD2: 0.966), a precision of 0.934 on TD1 (TD2: 0.954), and a recall of 0.944 (TD2: 0.989). The results indicated that the presented crawler is a suitable method for acquiring a large sample of the GHW in a fully automated manner. Subsequently, we call the acquired sample of the GHW the "Sampled German Health Web" (sGHW).
This paper presents a follow-up study of the research conducted in 2020 [13]. It analyzes the acquired data, namely the sGHW graph and the content of the health-related web pages, after running the distributed focused web crawler presented in [13] for 370 days.

Aims of the study. In line with the methodology presented in [13], the authors decided to concentrate on health-related web pages available free of charge on the internet in the D-A-CH region that can be found under the respective ccTLDs ".de", ".at", and ".ch". In this context, the aim of this study was four-fold:

1. Analyze the current situation, that is, the volume of and the information providers behind health-related web pages in the D-A-CH region.
2. Demonstrate the suitability of a fully automated approach to compute the following three aspects of the sGHW: its readability by using established readability formulas, its type of vocabulary, and its prevalent topics.
3. Quantify the level of readability of and the type of vocabulary used within the sGHW. In addition, identify the topics presented within health-related web pages in the sGHW.
4. Evaluate whether web pages offered by certain types of information providers are better suited for citizens with lower health literacy levels than others.

Definition of health information
In the context of this study, we define "health information" or the "health relevance" of a given web page very broadly. We therefore include, among others, the following topics:

• Diseases and their diagnoses,
• Diagnostic procedures, therapies or treatments,
• Pharmaceutical information (e.g., about medications),
• Homeopathy,
• Nutrition, sports and lifestyle information that is intended to lead to a "healthier" life (prevention),
• Information on health care structures (hospitals, doctors' offices, etc.),
• Information from and about self-help groups,
• Content generated by patients or users on the topic of health, e.g. in social media or internet forums.

Thus, web sites considered "health-related" do not necessarily comply with the criteria of evidence-based medicine and may have both laypersons and professionals as their target audience. Information on the health condition of animals or their treatment (veterinary medicine) is not considered health information in the context of this study.

Study setting
This study of health-related web pages consisted of four stages:

1. Regarding study aim 1, we used the focused web crawling system presented in [13] to collect health-related web pages and to create a health-related host-aggregated web graph. As in [13], we applied the PageRank algorithm [43] to this graph representation to identify important web sites in the sGHW.
2. Then, one author screened the 1000 top-ranked web sites for each ccTLD by visiting the related web site in the incognito mode of a Chromium browser. In addition, the same author looked for the legal information (imprint) of the web site's owner. If a legal entity could be identified, a background check was conducted using popular search engines.
3. Based on these findings, one of nine predefined categories, ranging from Government or Public Institution to Other (O), was assigned to each web site's information provider. The categories were defined on the basis of [13]; a detailed explanation of each category is given in S1 Appendix. To mitigate rater bias, the assignment was done twice, with a gap of two months between the two runs. If there was a tie, the rater reviewed the case again and resolved it by performing an additional background check.
In addition, the interrater reliability metrics percent agreement (PA) [44] and Cohen's κ [45] were computed.
4. In the last stage, a fully automated readability and vocabulary analysis was conducted on the 1000 top-ranked web sites of each ccTLD. In addition, topic modeling was applied to the same data. The resulting topics were then paraphrased in a group discussion. These analyses were intended to answer study aims 2 to 4.

Graph analysis
Several studies have extensively analyzed the graph structure of the web [46-48]. In this context, a graph node represents a web page and an edge represents a link between two web pages. In our study, we generated a host-aggregated graph in order to reduce the computational complexity and to explore its properties [49]. To do so, individual web pages are combined and represented by their parent web site (including outgoing and incoming links). On the resulting host-aggregated sGHW graph, we applied the following metrics and algorithms:

• Average degree is the average number of edges connected to a node [50]. For a directed web graph, this is defined as the total number of edges divided by the total number of nodes.
• Modularity measures the strength of the division of a graph into clusters or groups [50,51]. Graphs with a high modularity have dense connections between the web sites within certain clusters but sparse connections to web sites in other clusters.
• PageRank is a centrality-based metric that allows the identification of important web sites (nodes) in a graph [43]. The underlying assumption is that an important graph node (web site) will receive more links from other important nodes (i.e., have a higher in-degree).

Other metrics such as the network diameter and the average path length (i.e., the average number of clicks that lead from one web site to another) are frequently used for graph analysis [50,52].

Coverage of relevant web sites
The coverage (or completeness) of our focused web crawl was evaluated by computing the overlap with another web crawl. For this purpose, search results of the commercial search engine provider Google were used. The underlying assumption is that a (commercial) search engine provider such as Google has already indexed a large part of the web. To compute the overlap, search queries with relevant (medical) terms were sent to the application programming interface (API) of the search engine over a period of time. Based on the results, it is then possible to determine the percentage of URLs returned by Google that are included in our focused web crawl. The related proportion is an indicator of the completeness of our sampled dataset.

Web site ranking strategies
Web sites can be ranked using different, potentially combined approaches, ranging from estimating the traffic of a given web site or the number of unique visitors in a given timeframe, to manual or search-engine-based approaches, to graph-based ranking algorithms [53]. Many ranking strategies originate from the field of search engine optimization (SEO) and aim to reproduce the confidential black-box ranking algorithms of (commercial) search engine providers such as Google. In most cases, the related metrics and rankings are offered by commercial third-party providers such as ALEXA [54], Sistrix [55], Searchmetrics [56] or SimilarWeb [57] as part of their business. However, their methods of ranking a given web site as well as the influencing factors remain confidential. Obviously, this leaves an enormous gap with respect to transparency and reproducibility [53,58]. In this study, we relied solely on PageRank [43], a clearly defined and transparent algorithm that is well established in computer science for assessing the relevance of graph nodes. In particular, we apply PageRank to the host-aggregated graph representation of the sGHW. Our ranking is therefore not based on any traffic estimations or on popularity or visibility indices measured by third-party providers. Moreover, it is not influenced by commercial interests and can easily be reproduced by other researchers. It provides a ranking of the sGHW based on its link structure as collected by our focused web crawler.
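Because the ranking is based solely on PageRank over the host-aggregated graph, it can indeed be reproduced with a few lines. The sketch below assumes networkx and an edge-list export of the sGHW graph (the file name is an assumption), with the conventional damping factor of 0.85, which the paper does not state explicitly.

```python
# A reproducible sketch of the ranking step: PageRank on the host graph,
# then the 1000 top-ranked hosts per ccTLD.
import networkx as nx

# Each line of the assumed export: "source_host target_host".
G = nx.read_edgelist("sghw_host_graph.txt", create_using=nx.DiGraph)

pr = nx.pagerank(G, alpha=0.85)            # conventional damping factor
top = sorted(pr.items(), key=lambda kv: kv[1], reverse=True)

for tld in (".de", ".at", ".ch"):
    hosts = [h for h, _ in top if h.endswith(tld)][:1000]
    print(tld, hosts[:3])                  # preview the per-ccTLD ranking
```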
Readability analysis
Definition. Readability describes the properties of written text with respect to the reader's understanding of a document [59,60]. It depends on the complexity of the text structure, the sentence structure and the vocabulary used.

Flesch reading ease scale. The FRE is a well-established readability metric for the English language [61]. FRE relies on the average sentence length (ASL) and the average number of syllables per word (ASW). FRE assumes that short words or sentences are usually easier to understand than longer ones. We applied the modified FRE scale by Toni Amstad [62] for the German language. It is defined as follows:

FRE = 180 − ASL − (58.5 × ASW)

Vienna formula. In contrast to the FRE, the Vienna formula (WSTF) was originally developed for the German language by Bamberger and Vanacek [63]. They derived different versions of the Vienna formula for prose and non-fictional texts. Typically, the 4th WSTF is used for text analysis. It relies on the average sentence length (ASL) and on the proportion of (complex) words with three or more syllables (MS):

WSTF = 0.2744 × MS + 0.2656 × ASL − 1.693

Vocabulary-based text difficulty. The German language makes use of many compound words (e.g. "Halsschmerzen", "Magen-Darm-Erkrankung", "Zuckerkrankheit"). These terms are quite layman-friendly (for an average patient) but very lengthy. Consequently, average word length or syllable counts are not a good indicator of whether a given word is easily comprehensible (that is, whether it can be understood by people with a grade level of 6-7). Machine learning techniques can be used to compensate for these limitations of established sentence-based readability measures such as the FRE scale or WSTF [28,64]. To quantify the vocabulary-based text difficulty (i.e., the "expert-centricity" of a given text), we defined the measure L ∈ [1, …, 10], similar to [23-25,29], which leverages the SVM classifier of [28] as described in "Related Work". Before using this pretrained classifier to assess the vocabulary-based difficulty of medical text material, several preprocessing steps are necessary [65]. As a first step, the text material is cleaned of syntactic markup (i.e. boilerplate code, HTML tags). Next, each text is tokenized (i.e. split into single word fragments) and each character is converted to lower case (case folding). Stop words (e.g. "the", "and", "it") are removed, as they do not influence the difficulty of a text. Next, stemming techniques are applied in order to map tokens to their stems and reduce morphological variations of words (e.g. "goes" becomes "go"). Finally, the text content of a document is transformed into a document vector based on previously selected features from [28]. For each text, the SVM classifier outputs a class probability using Platt scaling [30]. The class probability is then transformed to the value L, which expresses vocabulary-based text difficulty. Low values of L indicate a very easy text written for the elementary level or elementary school; a value of 3-4 corresponds to an easy text (intermediate level / junior high school), a value of 4-5 to a moderate text (laymen with a medical educational background), a value of 5-6 to a difficult text, a value of 7-8 to a very expert-centric text, and a value of > 8 indicates that academic (medical) background knowledge or working experience in the medical domain is required. The procedure and the related processing steps are described in detail in [29].
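A minimal sketch of the two readability formulas, assuming that sentence splitting and syllable counting (handled by OpenNLP and Liang's hyphenation algorithm in the authors' pipeline, see below) have already produced the raw counts:

```python
# The two readability formulas used in this study, written out directly.
def fre_amstad(n_words: int, n_sentences: int, n_syllables: int) -> float:
    """Flesch Reading Ease, Amstad's adaptation for German."""
    asl = n_words / n_sentences          # average sentence length
    asw = n_syllables / n_words          # average syllables per word
    return 180 - asl - 58.5 * asw

def wstf4(n_words: int, n_sentences: int, n_polysyllables: int) -> float:
    """4th Vienna formula (Bamberger & Vanacek)."""
    asl = n_words / n_sentences
    ms = 100 * n_polysyllables / n_words  # % of words with >= 3 syllables
    return 0.2744 * ms + 0.2656 * asl - 1.693

# Example: 250 words in 10 sentences, 520 syllables, 60 three-syllable words.
print(fre_amstad(250, 10, 520), wstf4(250, 10, 60))
```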
Topic modeling
In this study, we applied topic modeling to identify themes and topics within the GHW. Specifically, we used LDA to identify the main topics of the three times 1000 top-ranked web sites [66]. Since LDA is an unsupervised algorithm, we relied on perplexity to determine the optimal number of topics [66]. To do so, we trained LDA models using Gibbs sampling [67] with 3000 iterations for 1 to 90 topics (with a step size of 10) on the full dataset of the three times 1000 top-ranked web sites, consisting of 3,746,055 web pages. To mitigate word sparsity, we conducted stemming and removed words with little to no analytical value (e.g., "der" (article), "und" (conjunction), "jetzt" (particle)). In addition, only words with a minimum frequency of 200 were kept in the text corpus. To estimate LDA's hyperparameters (named α and β), we applied a method from Asuncion et al. [68], which is based on Minka [69] and an EM procedure nesting the actual Gibbs sampling algorithm. Thus, the approach determines optimized hyperparameters as part of the topic inference. Moreover, we relied on Wallach et al. (Equation 7) [70] to assess the prevalence of topics in web pages, as described in [71] (Section 3.4). To describe the statistical dispersion of the topic distribution, we used the Gini coefficient [72]. The preprocessing steps and software libraries used to conduct this analysis are described in more detail in Section "Computational processing & system environment". Each topic consists of a set of keywords and was visualized using word clouds. The word clouds were subsequently labeled by eight volunteers with different backgrounds, including "Medical Informatics", "Health Economics", "Physics", "Social Economics", "Marketing", and "Electrical Engineering": a spreadsheet document containing the word clouds to be labeled was provided to each volunteer, along with instructions (see S2 Appendix). The results were then aggregated by one of the authors and given to two other volunteers ("Medical Informatics" and "Civil Engineering"), who conducted the final paraphrasing of each topic in a group discussion. The summarization into themes was conducted via a group discussion between two of the authors.

Graph analysis
The graph database Neo4j, version 4.1.1, was used to store the host-aggregated web graph generated by the focused crawler. The Neo4j graph algorithm plugins were used to compute PageRank and related metrics on an Ubuntu 20.04 LTS 64-bit server.

Statistical analysis
The statistics software R (The R Foundation for Statistical Computing), version 3.6.3 (February 29, 2020), on an Ubuntu 20.04 LTS 64-bit computer was used to compute PA, Cohen's κ and the Pearson correlation coefficient (PCC).
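Purely as an illustration of the topic-number search described above, the following sketch uses gensim's variational LDA as a stand-in for the Java-based Topic Grouper framework with Gibbs sampling that the authors actually used; the toy documents only make the snippet runnable.

```python
# A hedged sketch: pick the topic count by comparing per-word perplexity.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["krebs", "therapie", "studie", "patient"],
        ["therapie", "patient", "klinik", "studie"],
        ["krebs", "patient", "studie", "vorsorge"]]
dictionary = Dictionary(docs)
dictionary.filter_extremes(no_below=2)      # stands in for the min-frequency cut (200)
corpus = [dictionary.doc2bow(d) for d in docs]

# Train models over a grid of topic counts; gensim's log_perplexity returns
# a per-word bound b, with perplexity = 2 ** (-b).
for k in range(10, 91, 10):                 # mirrors the 1..90 grid, step 10
    lda = LdaModel(corpus, num_topics=k, id2word=dictionary, passes=10,
                   alpha="auto", eta="auto")    # hyperparameter optimization
    print(k, 2 ** (-lda.log_perplexity(corpus)))
```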
Computational processing & system environment

Readability analysis. Given the results of our previous study [13], it became obvious that sequential processing of the huge amount of crawled data would take too much time and resources. For this reason, a parallel and distributed system architecture is necessary to process the crawled data efficiently. There are several frameworks that allow for such distributed processing; in this study, we relied on the Apache Storm framework [73], a software development kit for building scalable computation systems in Java. Fig 1 depicts the architecture of our distributed text analysis framework. A set of spouts emit yet unprocessed URLs along with their underlying text material (as tuples) from the crawl database. The tuples are assigned to cluster nodes (based on their hostname) and directed to text analysis components. First, the raw text material is tokenized (i.e., split into single word fragments) and transformed into a bag of words, which is added to the given tuple. Next, several statistical measures such as syllable counts, (complex) word counts, or character counts are computed. Each tuple is then processed to compute the readability measures FRE and WSTF. To do so (see lower part of Fig 1, "gear icon" marked with the label "R"), the tuple's full text is fed to a natural language processing (NLP) pipeline. Regular expression filters sanitize the input and remove disturbance artifacts (e.g., different hyphen encoding schemes). Finally, the aforementioned readability metrics are computed. For sentence detection, we rely on the Apache OpenNLP library [74] and its sentence model for the German language [75]. Liang's hyphenation algorithm is used to estimate syllable counts [76]. Next, the tuple is processed to gauge the vocabulary-based text difficulty (see lower part of Fig 1, "gear icon" marked with the label "SVM"). Several pre-processing steps are necessary to apply the pre-trained classifier to our text material [28,65]: As a first step, regular expression (regex) filters are applied in a similar manner as for FRE and WSTF. Second, a text is tokenized, converted to lower-case and stop words are removed. The latter is important as stop words do not influence the difficulty of a text. Third, the remaining tokens are reduced to their stems (e.g., "goes" becomes "go") in order to limit linguistic variations by means of Porter's Snowball Stemmer [77]. Each text is transformed into a bag of words representation (document vector) based on a broad list of previously selected terms from the medical domain, as such terms greatly influence the vocabulary-based difficulty of a text. Each document vector is then fed into the classifier and the related output is mapped to the vocabulary measure L. Finally, each enriched tuple is stored in a PostgreSQL (v10.15) database for subsequent analysis. The computing cluster consists of 22 virtual machines running Ubuntu 18.04 LTS 64-bit. Two physical servers (each equipped with two Intel Xeon E5-2689 and 256 GB of memory) of a Cisco unified computing system provide the computational resources and run as a virtualization platform to allow shared resource allocation. The analysis was conducted between August 6 and August 30, 2020.
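The core readability computation inside this pipeline reduces to simple token statistics. A minimal sketch follows, in which the vowel-group heuristic is a crude stand-in for Liang's hyphenation algorithm used by the actual system, and the WSTF coefficients are those of the published 4th Vienna formula:

```python
import re

def syllables(word: str) -> int:
    # Crude stand-in for Liang's hyphenation: count vowel groups (min. 1 per word).
    return max(1, len(re.findall(r"[aeiouyäöü]+", word.lower())))

def fre_wstf(sentences: list[list[str]]) -> tuple[float, float]:
    """Return (FRE_de, WSTF4) for a list of pre-tokenized sentences."""
    words = [w for s in sentences for w in s]
    asl = len(words) / len(sentences)                 # average sentence length
    syl = [syllables(w) for w in words]
    asw = sum(syl) / len(words)                       # average syllables per word
    ms = 100 * sum(n >= 3 for n in syl) / len(words)  # % words with >= 3 syllables
    fre = 180 - asl - 58.5 * asw                      # Amstad's German FRE
    wstf4 = 0.2656 * asl + 0.2744 * ms - 1.693        # 4th Vienna formula
    return fre, wstf4

print(fre_wstf([["Der", "Arzt", "hilft"],
                ["Die", "Zuckerkrankheit", "ist", "chronisch"]]))
```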
Topic modeling. Fig 2 depicts the architecture of our analysis framework to conduct topic modeling using LDA. As a first step, the bag of words representation of each web page is fetched by multiple threads from the PostgreSQL database containing the pre-processed web pages. If a corresponding web page had not yet been handled via the readability analysis, pre-processing steps are conducted in the same way as for the Classification pipeline from Section "Readability Analysis". As an additional step, terms are filtered based on their minimum frequency within the document collection. Next, LDA is applied to the given document collection. We relied on the LDA implementation contained in the Topic Grouper framework by Pfeifer and Leidner [78]. The LDA procedure and the analysis to determine a reasonable number "n" of topics using the perplexity score (see "Methods" section) were conducted on a bare-metal server (equipped with two Intel Xeon E5-2630 v4 and 384 GB of memory) running Ubuntu 18.04 LTS with Java 11.0.9 between November 5 and December 30, 2020.

[Fig 2. Workflow of the processing steps and software components for topic modeling: (1) text material is retrieved from a central relational database; (2) several processing threads perform a collection of pre-processing tasks; (3) LDA is applied to the resulting document vectors. The software takes raw text material as an input and outputs n topics, where n is a user-defined input parameter to LDA.]

Graph analysis

The focused web crawling system [13] ran from May 27, 2019 to May 31, 2020 and collected 14,193,743 health-related web pages. The resulting host-aggregated web graph of the sGHW comprises 231,733 nodes (web sites) connected via 429,530 edges (links between web sites). A total of 82.63% (191,479/231,733) of the web sites belong to the ccTLD ".de"; 7.89% (18,272/231,733) to ".at", and 9.48% (21,976/231,733) to ".ch". The graph has a network diameter of 25. The average path length is 6.804. The average degree is 1.854. Modularity was computed to be 0.717. Fig 3 depicts the size-rank plot of the degree distribution of the host-aggregated sGHW graph. In- and out-degree represent the number of hyperlinks to or from all web pages that belong to an individual host. Visually, there is a concavity, indicating that the distribution does not follow a power law. This is in line with the results by Meusel et al. in [48], who conducted a similar analysis for a host-aggregated graph of an unfocused web crawl. As the ccTLD ".de" has the highest share within the graph, a global ranking according to PageRank would be dominated by ".de" web sites. For this reason, we used the 1000 top-ranked web sites according to PageRank in the subsequent analyses for each ccTLD separately.
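The graph statistics and PageRank ranking just described can be sketched with NetworkX on a toy host-aggregated edge list; the study itself used Neo4j's graph algorithm plugins, and the host names below are invented:

```python
import networkx as nx
from networkx.algorithms import community

# Toy host-aggregated graph: nodes are web sites, directed edges are inter-host links.
G = nx.DiGraph([("klinik-a.de", "portal-b.de"), ("blog-c.de", "portal-b.de"),
                ("portal-b.de", "amt-d.de"), ("klinik-a.de", "amt-d.de")])

ranking = nx.pagerank(G)                               # relevance score per host
print(sorted(ranking, key=ranking.get, reverse=True))  # most important hosts first

print("average degree:", G.number_of_edges() / G.number_of_nodes())
U = G.to_undirected()
print("diameter:", nx.diameter(U))                     # on the undirected graph
parts = community.greedy_modularity_communities(U)
print("modularity:", community.modularity(U, parts))
```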
Coverage of relevant web sites

To measure the coverage of our focused web crawl, we computed the overlap of our data against the commercial search engine provider Google. For this purpose, term-based search queries were sent to a Google Search Engine configured for the ccTLDs ".de", ".at", and ".ch" over a period of 306 days (September 16, 2020 to July 19, 2021). A total of 4,093 web sites for the most common diseases and 2,736 for the random selection of rare diseases were returned by Google. Our focused web crawl covered a total of 3,519/4,093 (85.98%) of the web sites for the most common diseases and 2,425/2,736 (88.63%) of the web sites for rare diseases. In summary, the web crawl contained 5,944/6,829 (87.04%) of the web sites returned by Google. This suggests that we obtained a high coverage of health-related German web sites, as our results parallel the coverage of a very comprehensive commercial web crawler.

Ranking of web sites

The most important host-aggregated URLs (according to PageRank) were categorized according to the categories introduced in Section "Study Setting". The raters achieved a PA of 0.879 and a Cohen's κ of 0.797. According to Landis and Koch [81], these κ values correspond to a "substantial agreement". In 10.82% (364/3000) of the cases, no majority vote was achieved. Such cases were subsequently resolved following the procedure described in "Study setting". The category "Social Network" was not selected, as no social network was contained in the 1000 top-ranked web sites for each ccTLD. Table 1 lists the 25 top-ranked web sites according to PageRank with their respective information provider for ".de". In total, 214 out of 1000 (21.40%) are published by governmental or public (health) institutions (GPH), 23.70% (237/1000) are published by non-profit organizations (NPO) and 43.50% (435/1000) by private organizations or individual persons (PO), i.e. web sites of medical professionals or related businesses. 62 out of 1000 (6.20%) are published by mainstream or local news agencies (M), 39 out of 1000 (3.90%) by pharmaceutical companies (PC) and 0.80% (8/1000) originated from private or personal blogs (PB). The category "Other" was given to 5 out of 1000 web sites (0.50%). Table 2 lists the 25 top-ranked web sites according to PageRank with their respective information provider for ".at". In total, 145 out of 1000 (14.50%) are published by GPH, 14.70% (147/1000) are published by NPO and 60.30% (603/1000) by PO. 40 out of 1000 (4.00%) are published by M, 46 out of 1000 (4.60%) by PC and 1.20% (12/1000) originated from PB. The category "Other" was given to 7 out of 1000 web sites (0.70%). Table 3 lists the 25 top-ranked web sites according to PageRank with their respective information provider for ".ch". In total, 196 out of 1000 (19.60%) are published by GPH, 15.70% (157/1000) are published by NPO and 58.30% (583/1000) by PO. 20 out of 1000 (2.00%) are published by M, 31 out of 1000 (3.10%) by PC and 0.70% (7/1000) originated from PB. The category "Other" was assigned to 6 out of 1000 web sites (0.60%). S3 Appendix provides a full listing of the 1000 top-ranked web sites for each ccTLD.

Dataset characteristics

Overall, the web pages from 2720 of the top-ranked web sites were included for readability and vocabulary assessment. These web pages account for 26 […]. A complete listing for each web site with data on the number of sentences, words, complex words, and syllables is given in S3 Appendix. 280 out of the 3000 top-ranked web sites could not be analyzed as (a) the related web pages were either not visited or not stored by our focused crawler, (b) text material could not be extracted, or (c) was too short for further analyses.

Readability analysis

All web sites were analyzed according to the readability metrics FRE, WSTF and L, as outlined in the Methods section. The applied metrics FRE, WSTF and L are based on different scales. For a more accessible presentation, we mapped the values of each scale to five classes in order to express text difficulty across the metrics in a uniform way. We applied the same mapping as presented by Wiesner et al. [25]. The mapping for each metric is given in Table 4. The class distribution for FRE, WSTF and L, for each information provider type, is given in S4 Appendix. For the ccTLD ".de", the web site with the lowest readability was "www.uksh.de" (n = 168,185) with an FRE value of 0.147 (SD = 2.105) and a WSTF of 14.936 (SD = 0.923), which corresponds to VD (very difficult to read).
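Such a five-class mapping is a simple thresholding step. A minimal sketch follows; the boundaries below are placeholders for illustration only, since the actual per-metric boundaries are those of Table 4 (following Wiesner et al. [25]) and are not reproduced here:

```python
CLASSES = ["VE", "E", "M", "D", "VD"]  # very easy ... very difficult

# Placeholder boundaries in ascending difficulty; FRE decreases with
# difficulty, which is handled by negation below.
THRESHOLDS = {
    "FRE":  [90, 70, 50, 30],
    "WSTF": [4, 7, 10, 13],
    "L":    [3, 4, 6, 8],
}

def difficulty_class(metric: str, value: float) -> str:
    bounds = THRESHOLDS[metric]
    if metric == "FRE":                   # higher FRE means easier text
        value, bounds = -value, [-b for b in bounds]
    idx = sum(value > b for b in bounds)  # count boundaries already crossed
    return CLASSES[idx]

print(difficulty_class("WSTF", 14.9))  # -> "VD"
print(difficulty_class("FRE", 0.15))   # -> "VD"
```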
For the ccTLD ".at", the lowest readability was computed for "www.mycare.at" (n = 1398) with an FRE value of 0.025 (SD = 0.330) and a WSTF of 15 (SD = 0) (VD). "www.implantat-berater.ch" (n = 251) had the lowest readability in ".ch" with FRE = 0.091 (SD = 0.827) and WSTF = 14.998 (SD = 0.0152) (VD). The best readable web sites in all three countries were offered by web sites for which the focused crawler only collected a low number of web pages (n < 10) (see S3 Appendix). According to FRE, most web sites (90.533%; 2,716/3000) are difficult (D) or very difficult (VD) to read. This corresponds to the WSTF scores, for which 2,539/3000 (84.633%) web sites are difficult or very difficult to read. The distributions for each ccTLD are depicted in the corresponding figures. For most web sites (44.07%; 1322/3000), a vocabulary score between >4 and <9 was computed, which corresponds to a level suitable for persons with medical knowledge or a strong medical background. The web sites of the ccTLD ".at" scored the lowest vocabulary measure with L = 5.796 (SD = 2.543), followed by L = 5.885 (SD = 2.499) for web sites under the ccTLD ".ch". Web sites under the ccTLD ".de" scored the highest vocabulary measure with L = 6.340 (SD = 2.572). The distribution of the classification results over all web sites is depicted in Fig 6. In this context, 281 out of the 3000 top-ranked web sites could not be analyzed for reasons explained in the "Readability Analysis" section. The sentence complexity measures FRE and WSTF are strongly correlated and therefore function as almost interchangeable measures to characterize sentence complexity. Also, high vocabulary difficulty moderately correlates with sentence complexity.

Topic modeling

In order to determine a suitable number of topics, we performed LDA topic modeling with a varying topic number and observed perplexity (see "Methods"). Fig 8 depicts the corresponding perplexity graph: With LDA hyperparameter optimization in place, an increasing number of topics allows the model to better predict the document collection. However, the gain lessens considerably beyond 50 topics. Therefore, we decided to work with n = 50 topics for further analysis. Table 5 shows the inferred 50 topics, their marginal distribution, and the most relevant terms of the web pages (N = 3,746,055) of the top 3000 web sites (1000 for each ccTLD). The marginal distribution of a topic was measured by the probability that the topic was sampled from web pages, while the relevance of a term was measured by the probability that it was sampled from its topic. Word cloud representations of these topics can be found in S5 Appendix. The topics were summarized into 11 themes (see "Methods"). The most prevalent theme was related to "Research & Science", followed by "Illness & Injury", "The State", "Healthcare structures", "Diet & Food", "Medical Specialities", "Economy", "Food production", "Health communication", "Family", and "Other". The theme "Health communication" includes two topics: "Health (discussion) forum" (T22) and "Doctor rating portal" (T27). "Other" was assigned to T40 and T45, which could not be named by the volunteers. Figs 9-11 depict the theme distribution per information provider type for each ccTLD. The theme distribution for each ccTLD and for each information provider type seems to be similar between the countries. Mainstream or local news agencies (M) report primarily on the topics "Illness and Injury" and "Economy".
Governmental or public (health) organizations (GPH), on the other hand, mainly focus on "Research & Science", "Healthcare Structures", and "Illness and Injury". In contrast, NPOs report predominantly on "Illness and Injury", followed by "Research & Science" and "Healthcare Structures". This is similar to the topic distribution for private organizations (POs) and pharmaceutical companies (PCs). Overall, it seems that the primary content of the sGHW across all ccTLDs is focused on "Research & Science", "Illness & Injury", and "Healthcare Structures". Fig 12 depicts the theme distribution per ccTLD. On average, the theme "Research & Science" accounts for 21.04% ("Illness & Injury": 17.92%; "Healthcare Structures": 15.27%; "The State": 10.52%; "Economy": 10.50%; "Medical Specialities": 7.30%; "Diet & Food": 6.36%; "Other": 3.35%; "Food production": 2.94%; "Health Communication": 2.90%; "Family": 2.00%) of all topics across all ccTLDs and provider types. This suggests that the content of the sGHW is similar between the countries of the D-A-CH region (at least for the ccTLDs studied) and that the information need of users may not vary greatly between the individual countries. With respect to study aims 2 and 3, our readability analysis reveals that the majority of the collected web sites is difficult or very difficult (D+VD) to read (see S4 Appendix), as shown by the WSTF (84.63%; 2539/3000). This ratio is similar for each ccTLD: 86.20% (862/1000) for ".de", 84.40% for ".at", and 83.30% (833/1000) for ".ch". This finding coincides with the outcome of the German adaptation of the FRE scale: 2691/3000 (89.70%) web sites are D or VD. Again, the ratio is similar for each ccTLD: 88.30% (883/1000) for ".de", 90.70% (907/1000) for ".at", and 90.10% (901/1000) for ".ch". Thus, health-related web sites are often written at a high reading level and might not suit the intended group of readers. This is in line with the results of other studies, which also reported high readability levels for such resources [18-20, 22, 23, 26, 27]. Our vocabulary analysis revealed that 44.00% (1320/3000) of the web sites use vocabulary that is well suited for a lay audience. Again, the ratio is similar for each ccTLD: 48.50% (485/1000) for ".de", 41.90% (419/1000) for ".at", and 41.60% (416/1000) for ".ch". This suggests that relatively few medical expert terms have been used on related web pages, or expert terminology has been actively avoided. The distribution of in- and out-degrees, i.e. links per host by rank, is in line with the results from Meusel [48]. Although the latter publication analysed a large but unfocused crawl, the nature of its respective distribution is similar to ours. This suggests that the distribution of incoming and outgoing links in the sGHW is not different from the rest of the web. We found that the sentence complexity measures FRE and WSTF are strongly correlated on health-related web pages such that they can be used interchangeably. Also, high vocabulary difficulty moderately correlates with sentence complexity. On average, the theme "Research & Science" accounts for 21.04% of all topics across all ccTLDs and provider types ("Illness & Injury": 17.92%; "Healthcare Structures": 15.27%; "The State": 10.52%; "Economy": 10.50%; "Medical Specialities": 7.30%; "Diet & Food": 6.36%; "Other": 3.35%; "Food production": 2.94%; "Health Communication": 2.90%; "Family": 2.00%). This suggests that the content of the sGHW is similar between the countries of the D-A-CH region (at least for the ccTLDs studied).
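The statistical dispersion of such topic and theme distributions was summarized with the Gini coefficient (see "Topic modeling" in Methods). A minimal sketch of that computation on a made-up topic vector:

```python
def gini(p: list[float]) -> float:
    """Gini coefficient of a discrete distribution (0 = uniform, -> 1 = concentrated)."""
    p = sorted(p)                                    # ascending order
    n = len(p)
    cum = sum((i + 1) * x for i, x in enumerate(p))  # rank-weighted sum
    return (2 * cum) / (n * sum(p)) - (n + 1) / n

print(gini([0.02] * 50))              # uniform over 50 topics -> 0.0
print(gini([0.9] + [0.1 / 49] * 49))  # one dominant topic -> ~0.88 (concentrated)
```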
Overall, we demonstrated that a focused crawling approach and subsequent graph analysis can be leveraged to conduct a full-scale readability and vocabulary assessment on a large sample of a language-specific part of the health-related web (study aim 4).

Limitations

Several limitations apply for this study. First, we only considered the ccTLDs ".de", ".at", and ".ch" to avoid the need for a language classification system, as most web sites on these ccTLDs are written in German. Therefore, our dataset covers only a certain fraction of the GHW. For example, (German) web sites published under ".com", e.g. the web site of the electronic health record provider "www.vivy.com", are not contained. In addition, our web crawl represents only a snapshot of the time when it was taken, i.e. web sites which were created after the end of our crawl are also not included in our dataset, as we abstained from performing update operations to reduce computational complexity. A famous example of such a web site is the national health portal of Germany "gesund.bund.de", operated by the German Ministry of Health and released on 1 September 2020. Second, with a mean accuracy of 0.951, our classifier might have produced false positive results during the crawling process (see [13]). Third, we used a focused web crawling system to collect health-related web pages and to extract the raw text material from HTML content. For this reason, disturbance artifacts, such as different kinds of hyphens, XML fragments or misencoded characters, may still have been included in the extracted text material and thus have influenced our readability analysis. In addition, some analyzed web sites may only contain a small number of (content) web pages, which might lead to an either underestimated or overestimated average readability and/or vocabulary score (see S3 Appendix). This is due to the automatic nature of our web crawling process: (1) we omit (content) web pages which were classified as non-relevant, (2) we respect crawler ethics (i.e., robots.txt), and (3) we are using an estimated priority value to determine crawling priority. Consequently, we might have missed additional relevant (content) web pages for a given web site. Next, we relied on the PageRank algorithm to determine a ranking of the most important web sites contained within the generated host-aggregated sGHW graph. This does not necessarily comply with the perspective of an individual user who is using a (commercial) search engine to find relevant health content, nor does it correlate with visibility indices or "organic ranks" provided by (commercial) third-party services. However, we think that ranking web sites based on PageRank, computed on the host-aggregated sGHW graph, is justified as it is not biased by commercial interest and can be reproduced easily. Even more importantly, it is a well-accepted approach to assess the importance of a graph node in graph theory [50,82]. Moreover, detecting syllables is not a trivial task for the German language and is not always reliable [83]. As the adapted FRE and the WSTF are computed on the basis of the mean number of syllables per word, they can be influenced by the aforementioned inaccuracies. However, this applies to all NLP analysis tools for German text material. In addition, there is a lack of proper validation studies on the application of readability measures for German health-related text material.
However, due to the frequent use of these instruments in the scientific community and their use by the German Agency for Quality in Medicine to assess the readability of their patient education guidelines and S3 guidelines [84], we consider them a reference that allows comparisons of analyses of the readability of health-related text material written in German. Furthermore, solely computing the readability of text material disregards the individual knowledge and motivation of readers [63]. Aspects related to illustration and design were not included in the analysis. Consequently, the suitability of health-related web sites cannot exclusively be judged based on their readability or the vocabulary used [63]. Other methods, such as the Suitability Assessment of Materials (SAM) instrument [85] or DISCERN [86], go beyond measures of word and sentence length and cover other aspects of a web page that influence the understandability (or quality) of health information and text comprehension. However, these instruments require manual work and a sufficient number of judges to ensure an objective assessment. Moreover, with regard to our study, assessing 3,746,055 texts (i.e. web pages) would impose very high financial and human resource costs, which is not feasible.

Comparison with prior work

Readability of health information material. Previous studies investigated the readability of health-related web pages [18,26,27] or the vocabulary difficulty of health education material provided as PDF brochures [24,25]. In contrast to McInnes and Haglund [26] or Worrall et al. [27], we obtained our data collection by using a specifically trained focused web crawler [13] instead of retrieving it via a (commercial) search engine provider such as Google. Thus, our data collection is not influenced by commercial interests. McInnes and Haglund [26] analyzed 352 web sites and computed a mean FRE of 46.08, which is difficult to read. In 2020, Worrall et al. [27] reported that "only 17.2% [(n = 165)] of web pages [related to COVID-19 were written] at a universally readable level." These findings are supported by Brütting et al. [18], who found low readability scores for 45 prominent web sites on melanoma immunotherapy written in German. These results are in line with our findings, which reveal that the majority of the collected web sites is difficult or very difficult (D+VD) to read (see S4 Appendix). In a previous study [61], Keinki et al. analyzed information booklets for German cancer patients. The authors found a mean vocabulary score of L = 5.09, signaling a higher difficulty for lay people. Wiesner et al. [25] found a mean vocabulary score of L = 3.66 for health education materials on Psoriasis/Psoriatic Arthritis written in German, indicating the use of less complex medical terminology. In contrast to the aforementioned studies, our study revealed higher mean vocabulary scores: L = 6.340 (SD = 2.572) for ".de", L = 5.796 (SD = 2.543) for ".at", and L = 5.885 (SD = 2.499) for ".ch". This difference might result from the fact that we focused on health-related material contained in the GHW rather than limiting our study to patient information material only. Consequently, our data collection might contain web pages targeting (medical) experts, who make use of (medical) expert vocabulary.

Topic modeling on health information material.
Previous studies applied topic modeling techniques to a variety of health information material such as content posted on social media, online newspaper articles or web sites in general [36–42]. Most of these studies [38–42] focused on a specific health-related topic such as "hearing loss", "weight loss", "dental health" or "occupational accidents". Only two studies [36,37] analyzed health topics covered by posts in social media (Twitter and Instagram). Compared to the study by Paul and Dredze [36] on health topics on Twitter, we identified similar themes and/or topics within the sGHW such as "cancer & serious illness", "injuries & pain", "diet & exercise" and "family". Muralidhara and Paul [37] explored health topics on Instagram and discovered ten broad categories. Compared to their work, we were able to identify similar topics such as "acute illness", "alternative medicine", "chronic illness and pain", "mental health", "diet" as well as "substance use". In contrast to the studies by Paul and Dredze [36] and Muralidhara and Paul [37], we focused on the German language and the sGHW rather than on social media. In addition, contrary to [38–42], we explored general health topics within the sGHW rather than focusing on one certain (health-related) discipline.

Conclusions and further research

In this study, a system was presented which computes the readability and vocabulary difficulty of health-related web pages gathered by a focused web crawler in a fully automated way. We showed that a graph representation of the sGHW can be extracted during the data collection phase, which can then be used to compute a ranking of the top 1000 web sites for the ccTLDs ".de", ".at", and ".ch". In addition, we demonstrated that LDA can be used to explore the collected dataset. In total, we were able to identify 50 topics, which were summarized into 11 themes. Our results indicate that the readability within the sGHW is low. For this reason, publishing organizations and authors should reevaluate existing text material and reduce sentence complexity. However, our findings suggest that the use of vocabulary often suits the target audience but could be improved. Therefore, we recommend the use of both the sentence dimension and the vocabulary dimension as supportive measures to ensure and provide understandable online health information. To this end, content providers should be supported by proper tooling during text production: for example, one could envision a cloud service where health content providers could check their health-related web content automatically for readability and vocabulary difficulty. In addition, users should be supported by proper browser-based tooling (i.e., browser extensions such as [29]) to identify easy-to-read content but also to get an indication of the quality of the related content. In future work, the authors intend to extend their analyses to identify trustworthy health information web sites. To do so, we plan to combine the DISCERN instrument [86] with crowd-sourcing approaches. Using these insights and with the acquired data available, an implementation and evaluation of a trustworthy health-specific search engine for information-seeking citizens will be possible.
2023-02-11T06:17:30.780Z
2023-02-10T00:00:00.000
{ "year": 2023, "sha1": "da8786b9292670a44b791507d4d3ec210e9c3869", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "bbdcaaffc4019bb0f6fec7af0a75b9d4be9d2403", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
173990210
pes2o/s2orc
v3-fos-license
Socio-cultural dimensions of congenital adrenal hyperplasia: An ethnographic study from Chennai, South India

Aim: This study aims to provide a medical anthropological perspective on how congenital adrenal hyperplasia (CAH) is perceived and constructed by parents and doctors in India. It aims to put forth the complexities that are associated with CAH and the various experiences that parents and doctors share as a result, while also exploring the influences that culture and medicine have on each other. Methods: An ethnographic approach was taken to understand CAH in this study, in which families and doctors of children with CAH were interviewed. Fieldwork was done for 2 months in Chennai, Tamil Nadu. Results: A major finding of this study was the faith that parents had in biomedicine in general and doctors in particular. While parents continued to follow the instructions provided by the doctors, they also exercised their agency by questioning the decisions taken by the doctors. The research also revealed that there is constant worry and fear in parents about the future of their children due to the stigma attached to CAH. Conclusion: A constant discourse between medicine and culture can be noticed while analyzing the complexities associated with CAH. The study tries to show that medical decisions that doctors take in matters concerning CAH are culturally driven. Surgical corrections done in order to categorize the child into one of the two sexes are an example of the same. Similarly, various structures of family, marriage, and kinship have been medicalized owing to the strong influence medicine and culture have on each other.

female children with CAH [Table 1]. In addition, out of the five families with female children diagnosed with CAH, two families had more than one daughter diagnosed with CAH and one other family was expecting another child with a definite CAH condition [Tables 2 and 3]. Additionally, an adult female with CAH was also interviewed. A clinic in Chennai was the field site where most of the interviews with the respondent families were conducted. Two families, however, were interviewed in their respective houses. Each of the doctors who participated in this research gave their interviews in their respective clinics. An ethnographic study is largely based on qualitative methodologies and places importance on the narratives of individuals involved in the concerned area of research. [2] Qualitative methodologies such as interviews were used in the process of data collection. Semi-structured interviews that were aided by a question guide were used throughout the research to gather details on intersexuality. Interviews were conducted by the first author and were held in privacy. Two separate question guides with a list of questions that could possibly be asked to doctors and families of intersex children were prepared. While the question guides for doctors focused on queries relating to how they would explain this condition to the parents, the treatment protocol, and the best available solution for this medical condition, the question guide for the parents focused on queries that paved the way for understanding their experiences and emotions in the process of identifying, accepting, and treating their child with CAH. Questions were not, however, posed verbatim as they were in the question guide but were asked in accordance with the response of the respondent. Apart from conducting interviews, observation as a tool for data collection was used wherever possible.
Since this is a medical anthropological study, socio-cultural domains were given prominence.

Results

Results of the analysis of the interviews show that parents have immense faith in biomedicine and the doctors. However, parents also questioned decisions taken by doctors at regular intervals, thereby becoming active participants in the treatment protocol. The research also revealed that there is constant worry and fear in parents about the future of their children due to the stigma attached to CAH. Perceptions of doctors and parents on CAH differed and are explained further in this paper. One of the most prominent results that emerged out of this research was that of the belief these families had in doctors in particular and biomedicine in general. Nichter and Nordstrom (quoted in Haripriya [3]) address the factors necessary to build practitioner/client communication. According to their research, apart from the practitioner/doctor's knowledge of the body and medicine, his/her ability to empathize and build a rapport with the patient also becomes important in seeking healthcare (Haripriya [3]). In a similar research done in Chennai, Tamil Nadu by the anthropologist Haripriya [3] on the problems involved in practitioner/patient interactions, the belief in the kairaci (good fortune) of the doctor as an essential factor in establishing practitioner/patient communication was highlighted. She threw light on how the kairaci or good fortune of the doctor became an important factor in building the nambikai (faith) of the patient in the doctor. Though parents interviewed for this research did not talk about kairaci, their faith in the doctors was largely built on the base of the latter's medical knowledge and his/her empathizing and listening skills. This faith also influenced parents' decision to consider and continue with the allopathic treatment process for CAH. When parents learnt that their child had CAH, a range of questions pertaining to the why, what, when, and how of CAH arose, and when these questions were answered by their doctor in words other than just medical jargon, a sense of trust and faith in the doctor developed in the minds of these parents. The father of a boy with CAH said: "He (doctor) explained to us that this is the problem the very first time he saw my son. That is, instead of just checking the problem and giving treatment he drew a diagram and showed us. We do not know any medical terms. So, he explained to us why this occurs, what are the problems because of which this occurs and what are the problems that occur because of this and why this happens…." While the faith in their doctor and biomedicine endured, some of the parents, however, seemed to contemplate and question the decisions taken by their doctor. With the help of the internet some parents had built a knowledge base of their own, separate from that provided by their doctor, because of which they were able to play an active role in the decision-making process of their child's treatment, thereby putting the "authoritative knowledge" of the doctor in question. [4] One of the fathers, while talking about the side effects of the tablets given for CAH, said: "Actually, more than belief I have studied about it. I searched a lot in the Internet and looked for the solutions for this problem. We will take his (doctor) opinion but we should also look at other aspects because we have to maintain our son's health. So, for that we need to gain our knowledge and hence I read a lot."
Once the parents acquired this knowledge either with the help of the internet or through close family and friends, they became aware of the other possible treatment procedures or methods for treating CAH. A couple of parents had tried alternative treatment methods only to eventually come back to biomedicine due to poor or no results. One of the parents who talked about acupuncture as an alternative treatment said: "They said they can find out the problem by feeling the pulse and the treatment will be provided by touching and instigating the nerves. It was a change for us. Every time there are medicines or injection, so when they said that was not required we thought of trying it. If not we thought we will come back to medicine. We took a 6-month break keeping that in mind. But there was no satisfaction so we came back (to allopathic treatment) ourselves."

Disease or a condition?

While parents took their children for treatment to control CAH, they did not necessarily consider it a disease. They rather considered it a condition or a shortcoming (kurai), even though the Tamil words noi or vyadi that parents frequently used to refer to CAH directly translate to disease in English. Parents were of the opinion that a couple of other mental or physical conditions or disorders surpassed intersexuality in terms of severity. Mental imbalance or physical disability were, for instance, considered far more painful, long-lasting, and easily visible to the naked eye as opposed to intersexuality. A parent said: "We cannot call this a disease. This is a shortcoming. What was supposed to come when he was 14-15 years has come early. So we are suppressing it. We are only suppressing it and not doing anything else. How to say it[…]if it was something lifelong, like if it was handicap it would have been for a lifelong time. This is not like that." Since parents considered CAH a shortcoming, they engaged in actions that ensured that their child's condition was hidden from the outer world. Parents feared that their children might be stigmatized if their condition was revealed. These families viewed society as a strong propagator of stigma, and to protect themselves and their children from this stigma they refused to share details of their child's condition with their family and friends and sometimes even with their offspring. One parent said: "And we cannot share these issues with anybody[…]. Society is a big (pause) problem for us. Social barriers are a very very big thing. When he goes to school, I have spent a lot of time thinking, how he will be or feel in that school. We (wife and him) still have that feeling." Parents also expressed their concern for their child's future, which they feared might be affected by the stigma attached due to CAH. Living in a heteronormative, patriarchal society, parents of a girl child worried more about her future in terms of marriage and reproduction. While the parents of girls with CAH worried about their daughters' marriage and ability to reproduce, parents of boys with CAH feared the consequences that their child might have to face at the age of puberty, when he might have to articulate and understand the difference in his physical appearance from other boys of his age, if there is any. Additionally, the very idea of considering CAH a shortcoming reveals that there is a set notion about what is normal and what is not. Such notions most often result in pathologizing what we do not regard as the "normal."
Though, on the one hand, parents considered CAH a shortcoming, they were on the other hand relieved that it could not be readily identified from the child's physical attributes, barring the ambiguity in the child's external genitalia (that may or may not occur). They unanimously believed that CAH was rather "internal" than "external", and this belief made it easy for them to tackle the notions of stigma. As the sociologist Erving Goffman [5] writes in his seminal work "Stigma", the visibility of the factors that may cause stigma and the capacity of the viewer to decode it are to be kept in mind when addressing issues of stigma. That is why one can often hear parents state something similar to what this father of a CAH boy said: "[…]physically he looks very active, he has more energy. Everything is good, but the only thing is some internal[…]as I said earlier, the growth and all is the problem."

Cultural impact

Parents and doctors try everything at hand to ensure that the child with CAH grows up with a "normal" childhood, and the decision to not disclose the details of this condition to anyone is a measure taken to ensure the same. Although nondisclosure may help their child's life and future, it will not prevent this genetic condition from being transmitted to the next generation. Hence, when consanguineous marriages were identified as a possible reason for the occurrence of CAH, doctors advised parents to forgo consanguineous marriage practices in the future. With the advent of new genetics, family and kinship have become medicalized. And this medicalization, which brings about an awareness of the medical history of one's family, helps individuals deal better with hereditary medical conditions. [6] Once individuals are aware of their hereditary medical condition, they scout for options that can stop or at the least minimize its effect on the body and mind of the next generation. Avoiding consanguineous marriages, which have been the cornerstone of the Dravidian kinship system, is one such attempt to prevent CAH from affecting the next generation of the family/lineage. Isabelle Clark-Decès [7] in her book "The Right Spouse" talks about something similar, where Tamils, among whom the consanguineous marriage practice is believed to be common, have slowly forgone marrying within close kin, stating scientific reasons for the same. She writes about how Tamils now believe that 50% of the genetic makeup is derived from the father and the other 50% from the mother and that brothers and sisters share 50% of their genetics, thereby stating that if close kin marry, their offspring will have to face medical defects. [6] One of the grandparents interviewed said, "in the future we should not get our children married within relation/relatives. Apart from that there is nothing."

Discussion

Anthropological studies on CAH or intersexuality have not been conducted in India as far as we could find, though similar research has been done in the West. Katrina Karkazis [8], an American anthropologist, writes in her book on the history of intersex, the current treatment protocol that is followed by doctors, and the experiences they share in the process of the treatment. Although in certain cases surgical corrections are done to avoid medical complications, in most cases the desire and compulsion to categorize children with CAH into the heteronormative structures of the society has been prominent.
As Karkazis states, the term intersex is in itself heavily laden with the heteronormative ideas of gender and sex, for it is derived from the "natural" binary model of gender, with the term "intersex" falling perfectly between the two "true" sexes. [8] Both parents and doctors feel it is essential for the child to be "normalized" and fit into one of the two sexes in order to avoid a life with much stigma and pain. However, is gender necessarily a binary? If so, then why do some scholars, including the biologist Anne Fausto-Sterling [1], state that sex is a continuum? The philosopher Judith Butler [9] states that "a body's sex is simply too complex. There is no either/or. Rather, there are shades of difference." If there are truly shades of difference, then why does the gender binary cloud the presence and existence of other genders and sexes? The culture and the norms of the society can be identified as possible reasons for the same. The idea of the average that eventually becomes the norm could be recognized as one major reason for this insistence on categorization into the gender binaries. Georges Canguilhem [10], a philosopher and physician, in his book "The Normal and the Pathological" talks about how the average is used to form norms, which in turn influence the categorization of the normal and the pathological. Focusing largely on the influence of social norms on medicine, Canguilhem writes that the "normal" for physiologists is determined by an average, and that establishing a norm in purely biological terms is hard, for each one's biological condition is different and forming an average in biological terms will only lead to variations going unnoticed. This eventually paved the way for social determinants to be used in forming the average. A number of scholars who have worked on intersexuality have addressed the socio-cultural influence on the medical constructions of many biological conditions. Such an influence can be distinctly seen in the treatment protocol followed for treating CAH. Using surgical corrections to "normalize" the child is a result of this socio-cultural influence. When physicians take to surgical corrections, when there is no medical complication otherwise involved, to ensure the child grows "normally" with little or no stigma attached, they are allowing the socio-cultural factors to influence their medical decisions. The one interview with an adult with CAH exemplifies the main arguments of this article, about the role of social and cultural factors in accessing treatment for CAH. The young woman discussed her inability to talk about the condition with anyone openly due to the stigma attached. In Tamil culture, menarche is celebrated with a public ritual announcing a girl's coming of age. Not having attained puberty is therefore a huge problem for the woman. However, she expressed confidence in being able to help her sisters with similar conditions, through biomedicine. Currently undergoing treatment for late menarche, she had even contemplated suicide at one point, prevented by support from a male friend who is also planning to marry her, after knowing her condition. At the same time, various social structures of the society have also been medicalized, an example of which is the possible change in the Dravidian marriage systems. This constant discourse between society, culture, and medicine has resulted in each influencing the other largely. Isolating one from the other will therefore only result in establishing partial reality.
Decisions on treating intersexuality have to be taken keeping in mind this interaction of medicine and culture. As one of the doctors said, in a nation like India, where the power of gender binaries is intense, categorizing the intersex child into one of the two sexes is probably the easiest/best choice available for the parents, the doctors and maybe the child as well. She said: "There (in UK) they do not have to put gender in everything. Here, right from your LKG admission form you are asked to mention your sex including father's name, mother's name. Everything is optional there. So they do not bother about what has been documented and whatever the child prefers they assess…." This paper takes a medical anthropological perspective to understand how CAH is perceived by parents and doctors and how that might help unravel the complexities that are experienced by people with CAH and their caregivers, in this context the parents. Using ethnographic methods is a helpful way to explore issues that otherwise get hidden from the public domain, whether it is the extended family or the larger society. There is hardly any study done on CAH from a social science perspective as far as our research could find, especially in India. Through the findings of this research a close connection between medicine and culture can be drawn that shows the influence each has on the other. The constant discourse between medicine, society, and culture has to be acknowledged and understood in order to establish multiple ways of treatment for DSD or intersexuality, as well as for the process of diagnosis of the same. Further in-depth research, exploring the socio-cultural aspects associated with CAH, can in turn provide a better understanding of CAH and initiate a possible awareness among people on the socio-cultural and medical complexities surrounding this condition.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
2019-06-04T13:35:53.481Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "afbca3d36101d072bae6dbe8f33cc2da47642094", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ijem.ijem_177_18", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "afbca3d36101d072bae6dbe8f33cc2da47642094", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
234253734
pes2o/s2orc
v3-fos-license
Late Pleistocene and Holocene Afromontane vegetation and headwater wetland dynamics within the Eastern Mau Forest, Kenya

The Mau Forest Complex is Kenya's largest fragment of Afromontane forest, providing critical ecosystem services, and has been subject to intense land use changes since colonial times. It forms the upper catchment of rivers that drain into major drainage networks, thus supporting the livelihoods of millions of Kenyans and providing important wildlife areas. We present the results of a sedimentological and palynological analysis of a Late Pleistocene–Holocene sediment record of Afromontane forest change from Nyabuiyabui wetland in the Eastern Mau Forest, a highland region that has received limited geological characterization and palaeoecological study. Sedimentology, pollen, charcoal, X-ray fluorescence and radiocarbon data record environmental and ecosystem change over the last ~16 000 cal a BP. The pollen record suggests Afromontane forests characterized the end of the Late Pleistocene to the Holocene, with dominant taxa changing from Apodytes, Celtis, Dracaena, Hagenia and Podocarpus to Cordia, Croton, Ficus, Juniperus and Olea. The Late Holocene is characterized by a more open Afromontane forest with increased grass and herbaceous cover. Continuous Poaceae, Cyperaceae and Juncaceae vegetation currently covers the wetland and the water level has been decreasing over the recent past. Intensive agroforestry since the 1920s has reduced Afromontane forest cover as introduced taxa have increased (Pinus, Cupressus and Eucalyptus).

Introduction

African wetlands are dynamic ecosystems experiencing substantial land use and increasing hydroclimatic variability and stresses to biodiversity (Chapman et al., 2001; MEMR, 2012). Pollen-based analyses that reconstruct changes in past vegetation assemblages and distributions across the highlands of eastern Africa are beginning to characterize the spatiotemporal complexity of montane forests (Livingstone, 1967; Olago et al., 1999; Rucina et al., 2009; Finch and Marchant, 2011; Schüler et al., 2012; Opiyo et al., 2019). Several mountains in Kenya support lake systems that preserve lacustrine sediment archives of palaeovegetation dynamics since the Late Pleistocene (Hamilton, 1982; Marchant et al., 2018; Gil-Romera et al., 2019). Palustrine ecosystems in montane environments have similarly been used to establish environmental histories in equatorial eastern Africa (Hamilton, 1982; Heckmann, 2014; Finch et al., 2017). Among the montane ecosystems of Central Kenya with elevations >3500 m above sea level (asl), such as the Aberdare Range and Mau Escarpment, permanent lake ecosystems are not frequently supported due to steep topography, hydroclimate and sediment infilling. Thus, palustrine, soil, cave and other terrestrial geoarchives are important sources for analyses of vegetation change in response to climate, anthropogenic and local-scale mechanisms of environmental change. Longer term insights on ecosystem change from these crucial landscapes, which are rich in biodiversity and provide a wide range of ecosystem services, can be useful to inform their contemporary management (Gillson and Marchant, 2014). Kenyan highlands are the headwaters of several large catchments; their forests generate and capture orographic and occult precipitation, forming crucial headwater sources for major river systems (Nkako et al., 2005; Cuní-Sanchez et al., 2016; Los et al., 2019).
The Mau Forest Complex is one of the five key water towers in Kenya (Nkako et al., 2005; MEMR, 2012). As most of the rural sub-Saharan population relies on rain-fed agriculture (Wolff, 2011), and ~70% of Kenyans live in rural areas (Pieterse et al., 2018), understanding the functioning of these headwaters can inform management on the historical evolution and variability of these ecosystems and the ecosystem linkages to the hydrology. Notwithstanding the important roles that mountain ecosystems play, with impacts across their wider catchments, high-elevation wetlands receive less ecosystem protection than large lowland wetlands, but are important contributors to biodiversity, landscape diversity, habitat connectivity and social-ecological resilience. This contribution is not just within the highland areas but across the catchment; for example, the Mara River flows some 500 km to the south-west through agricultural areas and the Maasai Mara-Serengeti ecosystems and into Lake Victoria and the Nile. The Mau Forest Complex is one of the remaining forest blocks of the western Rift Valley in Kenya, supporting indigenous forests and wildlife and several large communities. The forests have undergone significant change since the late 1800s, conspicuously with the development of industrial forestry during colonial government administration (Klopp, 2012) and ongoing land-use change towards agriculture and continued industrial forestry of exotic tree species (Kenya Gazette Supplement, 2012). The Mau forests are divided into seven forest blocks: Eastern Mau Forest is the smallest of these, making the small and isolated remnant indigenous forests the most susceptible to further anthropogenic land-use modifications, climate change effects, ecological disturbances and introduced species (Okeyo-Owuor, 2007; Kinyanjui, 2011; Were et al., 2015). The remaining patches of indigenous forest are protected by legislation for their environmental and ecosystem services and cultural use and heritage (Republic of Kenya, 2016). Much of the lower elevation forests have been converted to agriculture over the past decades (Olang and Musula, 2011; Swart, 2016), with 25% of the forest converted from AD 1994 to 2009 due to excision and settlement encroachment into the forest (Mwangi et al., 2017). The Government of Kenya recently excised 353 km² of the forest to resettle victims of ethnic clashes as well as members of the Ogiek community previously evicted from the forest. Recent politicking has led to increased illegal settlement, logging and charcoal burning in the forest as a source of income (Nkako et al., 2005; Were et al., 2013). These increased human population pressures on the forests have further fragmented wildlife populations, with subsequent erosion and water distribution issues having consequences for downstream ecosystems and populations (Gichana et al., 2015; Mwangi et al., 2017; Dutton et al., 2018; Mwanake et al., 2019). Neighbouring Mau forest blocks have undergone varying degrees of anthropogenic modification impacting vegetation biodiversity, soil geochemistry and the topsoil seed bank; however, ecological restoration potential remains (Kinjanjui et al., 2013). The environmental history of the Eastern Mau Forest complex is relatively understudied (Marchant et al., 2018). Early colonial maps described the Mau escarpment as forested but explorers did not penetrate the region before the 20th century.
The colonial government established the Mau Forest Reserve (among others) as demands for timber increased with road and railway construction, and exports (Cranworth, 1912). By the 1920s, early forest delineations (Troup, 1932) already noted heavy modifications to forests, although there was no investigation of the natural history until geological mapping in the 1980s and 1990s (Williams, 1991). Here we present the first investigation of long-term terrestrial ecosystem change in Eastern Mau, documenting how the forest has changed since the Late Pleistocene. After the Last Glacial Maximum, glaciers retreated on the highest mountains of eastern Africa, followed by changing elevation vegetation patterns on the mountains (Hamilton, 1982; Van Zinderen Bakker and Coetzee, 1988). As conditions warmed (Loomis et al., 2017), the African Humid Period generally brought higher moisture regimes to the region from 14 000 to 6000-4000 years BP, with the timing of the transition to relatively drier conditions being time-transgressive and having high spatiotemporal complexity across Africa (Shanahan et al., 2015; Phelps et al., 2020), including highland regions (Street-Perrott et al., 2007). East Africa experienced high precipitation variability characterized by high rainfall in the Early Holocene and increasing, yet highly variable, aridity towards the present, as evidenced by major drought events (Stager et al., 2003; Verschuren et al., 2009). The effects of increasing CO2 through the Holocene and of varying C3:C4 vegetation differed between high- and low-elevation environments (Urban et al., 2015); the last 10 000 BP in the Sacred Lake record on Mount Kenya (Olago et al., 1999) are summarized as dominated by C4 vegetation that reflects the increasing atmospheric CO2, temperature and precipitation. By the end of the African Humid Period during the Late Holocene, moisture regimes became relatively drier but with high spatiotemporal variability. This was characterized by wetland and lake level variability in lowlands (Verschuren, 2001; Öberg et al., 2012; De Cort et al., 2018), changing montane moisture regimes (Barker et al., 2001; Street-Perrott et al., 2007) and forest pollen assemblages (Rucina et al., 2009; Githumbi et al., 2018a,b; van der Plas et al., 2019; Courtney Mustaphi et al., 2020). Pollen evidence of recent human land use and forest resource use varies between mountains and sites (Ryner et al., 2008; Heckmann et al., 2014; Iles, 2019). For example, in Uganda (Hamilton et al., 1986; Jolly et al., 1998; Lejju, 2009), forest resource use and conversion of land cover have increasingly occurred in recent centuries (Troup, 1932; Petursson et al., 2013; Gil-Romera et al., 2019; Courtney Mustaphi et al., 2020).

Study region: Kiptunga Forest Block

The Nyabuiyabui wetland (2865 m asl) is located in the Kiptunga Forest Block of the Eastern Mau Forest Block (Fig. 1), which covers an area of 29 000 ha. The surficial geology consists of an extensive thick catena of Early Pleistocene Mau ashes with ferrous basal tuffs (Jennings, 1971; Williams, 1991). Soils are relatively young and productive Udands (Andisols) that contain volcanic tephra. The mantling and subsequent aeolian and hydrological erosion of these deposits have shaped much of the current topography of the mountain ridge and the basin and fluvial channels of the forest and wetland. Kiptunga Forest currently supports populations of birds, reptiles, antelopes, primates and hyenas.
The Ogiek community, who historically practised predominantly hunter-gatherer livelihoods, inhabit the region, yet recently the local populations have increasingly practised pastoralism with cattle, goats and sheep; much of the forested lower elevations have been converted to agriculture (Sang, 2001; Spruyt, 2011). The Mau forests contain several headwater catchments that flow into the Rift Valley or towards Lake Victoria, most notably through the Mara River, which flows through the Maasai Mara and Serengeti. As an orographic precipitation water tower, Mau provides water for rural and urban settlements, pastoral communities and wildlife. The high biodiversity of the Mau forest, the Maasai Mara National Reserve and Serengeti National Park has led to the designation of the three regions as Important Bird Areas. They also support high numbers of large game and host indigenous and threatened animals such as the bongo, the yellow-backed duiker, the golden cat, the leopard and the African elephant (Nkako et al., 2005). Scattered minor pockets of remnant indigenous broadleaf forest include Croton, Dombeya, Hagenia, Juniperus, Olea spp., Podocarpus and Prunus. The Maasai Mau block is entirely indigenous, comprising a Juniperus-Podocarpus mosaic interspersed with indigenous vegetation glades (Nkako et al., 2005). The highly valued indigenous timber species are Albizia gummifera, Olea capensis, Juniperus procera, Polyscias kikuyuensis, Podocarpus spp., Pouteria spp., Prunus africana and Strombosia spp. The major land cover classes (forests-shrublands, grasslands, croplands, urban areas, barren land cover and open water) have undergone tremendous changes over the last few decades, with a decrease in Afromontane vegetation cover accompanied by an increase in agricultural and fragmented land and modifications to wetlands (MEMR, 2012; Were et al., 2013). In addition to broad-scale land-use transitions, selective harvesting of tree species and the collection of poles, non-timber forest products and firewood are common. Few patches of indigenous forest remain due to intensive tree replanting-harvesting cycles of introduced tree species partitioned into plantation plots (Sanya, 2008).

Study site: Nyabuiyabui wetland

The Nyabuiyabui hydronym is an Ogiek word meaning 'spongy' (Spruyt, 2011) or 'marshy', and the wetland may support a floating vegetation mat during wetter intervals. Nyabuiyabui wetland (Fig. 1) covers an area of 122 ha within the Kiptunga Forest Block. The current waterlogged/open-water area extends to about 6 ha, although water levels vary in response to local hydroclimatic conditions. Nyabuiyabui has minor ephemeral inflows and a single outflow to the south-west that forms a tributary of the Mara River network. The wetland is shallow, with observed water depths at the centre of the basin well below 50 cm during March-April 2014 and lower still (<20 cm) during April 2015. Anecdotal discussions with local people suggested that the water level has been decreasing during the remembered past and that there was open water within the wetland and a flowing outflow channel under the road bridge before 1972. There is evidence of recent anthropogenic modification of the wetland, namely infrastructure, grazing and introduced species. Logging-access roads have been constructed around the wetland, as well as a concrete bridge over the outlet.
A cut line runs along the west margin to stop grass fires from threatening the forests, and a now-unused pumping station, which supplied water to the Kiptunga Forest Station buildings and a former tree nursery uphill, remains near the wetland shore immediately east of the bridge. The presence of the pump house also supports the plausibility of higher water levels during the early to mid-20th century. Cattle graze along the wetland margin and impact the hummocky ground and the morphology of the Poaceae-Cyperaceae-Juncaceae tussocks. The wetland is surrounded by small patches of indigenous forest and monoculture tree plots of varying ages, with planting dates from AD 1935 to 2006, predominantly during the 1960s (Sanya, 2008). As of 2015, plantation plots of Cupressus lusitanica were the most predominant in the watershed surrounding the wetland. The closest archaeological records are human and animal remains from an excavated burial cave north-east of the wetland, but past forest land use could not be determined from the finds, which were ascribed to the Late Stone Age due to their similarity to finds from that period (Faugust and Sutton, 1966; Merrick and Monaghan, 1984).

Field methods

In 2014, a suitable coring site was determined by probing the sediments with fibreglass rods along transects to locate the maximum accumulation. A 537-cm core was recovered from 0°26′11.28″S, 35°47′58.74″E, 2920 m asl, near the wetland centre, using a hand-pushed hemicylindrical Russian corer 5 cm in diameter (Fig. 1). Sediment cores were collected in 50-cm drives with 10-cm overlapped sections from parallel coring holes. Cores were transferred to longitudinally split PVC tubes, wrapped in plastic wrap and aluminium foil, shipped to the University of York, UK, and refrigerated at 4 °C.

Laboratory analysis

Six bulk sediment subsamples and three sieved and picked organic matter (plant material) samples were accelerator mass spectrometry (AMS) radiocarbon dated at the Queen's University Belfast 14CHRONO laboratory, UK; the Scottish Universities Environmental Research Centre (SUERC), Glasgow, UK; or DirectAMS, Bothell, USA. The IntCal13 curve (Reimer et al., 2013) was used to calibrate the dates, and an age-depth model was developed using a BACON R script with default settings (Blaauw, 2010; Blaauw and Christen, 2011; R Development Core Team, 2017). Several iterations of potential age-depth models were run to explore the potential ranges for bounds to stratigraphic zonations and to extrapolate a basal date. More confidence was given to sieved plant material (mostly grass charcoal fragments), which frequently provides narrower dating uncertainties than bulk sediment AMS radiocarbon dates (Rey et al., 2019) because of the relatively short growth time of aboveground grassy fuel. Loss-on-ignition (LOI) and particle-size distribution analyses were carried out every 5 cm down-core to characterize the sediments: organic matter content, carbonate content and mean clastic particle size. LOI analysis involved weighing the wet samples and then again after drying at 105, 550 and 950 °C (for 24, 5 and 3 h, respectively) to calculate the dry weight, organic matter and carbonate contents, respectively (Heiri et al., 2001).
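The LOI arithmetic is compact enough to sketch. Below is a minimal Python illustration of the standard Heiri et al. (2001) calculations; the sample weights are hypothetical (not measurements from this core), and the 1.36 factor, the CO3:CO2 mass ratio (60/44), converts CO2 loss at 950 °C into carbonate following that paper's convention.

```python
def loi(wet_g, dw105_g, dw550_g, dw950_g):
    """Loss-on-ignition arithmetic following Heiri et al. (2001).

    wet_g   : wet sample weight (g)
    dw105_g : weight after drying at 105 C (dry weight)
    dw550_g : weight after combustion at 550 C (organic matter burnt off)
    dw950_g : weight after combustion at 950 C (carbonate CO2 driven off)
    """
    water_pct = 100.0 * (wet_g - dw105_g) / wet_g       # water content
    om_pct    = 100.0 * (dw105_g - dw550_g) / dw105_g   # LOI550 = organic matter
    loi950    = 100.0 * (dw550_g - dw950_g) / dw105_g   # CO2 lost from carbonates
    carb_pct  = 1.36 * loi950                           # x 60/44 converts CO2 to CO3
    return water_pct, om_pct, carb_pct

# hypothetical sample weights (g)
print(loi(10.00, 4.20, 3.30, 3.05))
```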
Particle-size distribution analysis was carried out using a Malvern laser granulometer (MEH/MJG180914). The procedure involved pretreating 1-cm³ wet sediment subsamples with 30% hydrogen peroxide in a hot water bath to digest organic matter and reduce particle aggregation (Syvitski, 1991). If the subsample contained <3.5% organic matter, the hydrogen peroxide treatment was skipped. At a pump speed of 1500, 1-2 g of sample was added until laser obscuration in the measurement column was 4%. The granulometer repeated three measurements and calculated an average result (Malvern Instruments Ltd, 2007). The cores were scanned using a Cox Analytical Systems ITRAX core scanner at the Department of Geography and Earth Sciences, Aberystwyth University, UK. The ITRAX core scanner collected optical imagery of the core face using an RGB digital camera and measured magnetic susceptibility using a Bartington MS2E sensor at 1-cm intervals, air-corrected between measurements. Magnetic susceptibility is the degree of magnetization in response to a magnetic field, measured in intensity values that are dimensionless units (χ). The X-ray fluorescence (XRF) results represent a semiquantitative measurement of the elemental composition of the sediment matrix in kcps (thousands of counts per second). In total, 22 elements were examined at 0.05-cm intervals through XRF with a 3-kW water-cooled molybdenum anode X-ray tube (60 kV, 35 mA, 200-ms exposure). The results are influenced by potential X-ray absorption and/or scattering across the core due to variability in water content, particle-size distributions, mineralogy and the surface roughness of the cleaned core face (Croudace et al., 2006). Subsamples of 1 cm³ of sediment were extracted at 1-cm intervals from the wet core face for macroscopic charcoal analysis. These were soaked in sodium hexametaphosphate solution to disaggregate the organic material and clay particles (Bamber, 1982). A drop of hydrogen peroxide whitened non-charcoal organic matter (Schlachter and Horn, 2010; Whitlock et al., 2010). Samples were gently wet-sieved through a 125-µm mesh, and the retained charcoal pieces were identified by visual inspection, probed with a metal needle (Hawthorne and Mitchell, 2016; Vachula, 2019) and tallied under a Zeiss Axio Zoom V16 microscope at magnifications of 10-40×. Counts were converted to charcoal concentration values (number of particles per unit volume, pieces cm⁻³). The data were analysed using the Rbacon package, version 2.3.9.1, to develop the age-depth model using Bayesian approaches. The dates identified as outliers in the initial BACON run (default settings) were the bulk sediment samples; a study of the impact of model choice, dating density and quality on chronologies found that such outliers have little to no impact on model precision when using BACON (Blaauw et al., 2018). Rioja version 0.9-21 (Juggins, 2020) was used to run the hierarchical constrained clustering (CONISS) on the ITRAX, charcoal and pollen data and to calculate the statistically significant number of assemblage zones (Bennett, 1996), and C2 was used for stratigraphic plots (Juggins, 2003). CONISS analysis was carried out on the complete pollen dataset, but for ease of viewing we present the dominant taxa in each grouping in the text and the complete pollen diagram in the Supporting Information.
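For readers unfamiliar with CONISS (Grimm, 1987), the sketch below illustrates the idea behind stratigraphically constrained clustering: only stratigraphically adjacent groups may merge, and each merge minimizes the increase in within-group sum of squares. This is a toy Python illustration on synthetic data, not the rioja implementation used here.

```python
import numpy as np

def coniss(X):
    """Stratigraphically constrained agglomerative clustering:
    only adjacent groups may merge; each merge minimizes the
    increase in within-group sum of squares (incremental SS)."""
    groups = [[i] for i in range(len(X))]

    def within_ss(idx):
        sub = X[idx]
        return float(((sub - sub.mean(axis=0)) ** 2).sum())

    merges = []
    while len(groups) > 1:
        # dispersion increase for merging each pair of stratigraphic neighbours
        costs = [within_ss(groups[j] + groups[j + 1])
                 - within_ss(groups[j]) - within_ss(groups[j + 1])
                 for j in range(len(groups) - 1)]
        j = int(np.argmin(costs))
        merges.append((groups[j][0], groups[j + 1][0], costs[j]))
        groups[j:j + 2] = [groups[j] + groups[j + 1]]
    return merges

# toy "down-core" data: three noisy zones stacked stratigraphically
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(10, 4)) for m in (0.0, 2.0, 5.0)])
for a, b, cost in coniss(X)[-3:]:
    print(a, b, round(cost, 2))   # the costliest final merges mark zone boundaries
```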
Data accessibility

All data generated from this study will be openly available via the African Pollen Database (Vincens et al., 2007), a constituent of the Neotoma Paleoecology Database and data repository (Williams et al., 2018).

General lithology description and geochronology

Six lithological units were identifiable from the 537-cm sediment core, which was mainly dark organic-rich silty sediment. The top 8 cm comprised organic-rich detritus, mainly plant roots. This changed into a layer of dark grey clayey silt that extended to 124 cm (between 84 and 124 cm there is an increase in the coarse sand fraction). From 124 to 385 cm the sediment changes to a dark brown silt layer (>80% silt) with grey/black laminations between 224 and 274 cm. Starting with a thick black layer at 386 cm, the sediment becomes a darker brown sandy silt until 474 cm. From 475 cm to the bottom, the sediment changes back to a dark grey sandy silt with light-coloured laminations and concretions at depths with increasing sand. The transitions in sediment colour and type along the core are rapid and distinct, except in the bottom 1 m, where laminations are visible. Nine radiocarbon dates were used in BACON (Blaauw, 2010) to develop a plausible age-depth model (Fig. 2; Table 1). The Bayesian model recognizes four ages as outliers: 2449 ± 35 cal a BP at 50-51 cm, 10 721 ± 47 cal a BP at 100-101 cm and 14 424 ± 45 cal a BP at 384-385 cm are older than the dates below them, and 13 963 ± 60 cal a BP at 315-316 cm is younger than the dates above it. Due to the relatively low number of available dates for constructing a robust age-depth model, we use the most parsimonious suite of dates, based on macrofossils and the most likely date sequence (Fig. 2). In addition, the palaeoenvironmental changes are primarily discussed within depth boundaries, and the ages are discussed within broad time intervals/stratigraphic stages (Walker et al., 2019), i.e. the Late Pleistocene to Early Holocene (538 to ~240 cm), the Early and Middle Holocene (240-100 cm) and the Late Holocene (100 cm to the top; present).
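Since the Bayesian model itself is run in R with Rbacon, a rough intuition for how depths map to ages can be had from simple monotonic interpolation between accepted dates. The tie points below are hypothetical, not the Table 1 determinations; this is only an illustrative sketch, not a substitute for the Bacon model.

```python
import numpy as np

# Naive linear age-depth interpolation between accepted calibrated dates.
# Depth/age pairs are hypothetical illustrations only.
depth_cm = np.array([0, 60, 150, 300, 537])
age_bp   = np.array([-60, 1200, 6100, 12500, 17000])   # cal a BP, must increase

def age_at(d_cm):
    """Interpolated age (cal a BP) for a given core depth (cm)."""
    return np.interp(d_cm, depth_cm, age_bp)

print(age_at(100), age_at(240))   # ages for the key zone boundary depths
```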
Sedimentology

The sedimentology is highly variable along the core (Fig. 3). LOI results show that the sediment bulk density ranged between 0.1 and 3.2 g cm⁻³, with an average of ~1 g cm⁻³ and standard deviation (SD) of ~0.5; organic matter ranged from 0 to 100%, with an average of ~21% and SD of ~18; and the carbonate content ranged between 0 and 40%, with an average of ~6% and SD of ~7. The sediment is composed of organic material mixed with varying amounts of silt, sand and clay (Fig. 3). The silt content is highest, with an average of 72.44 ± 1.13% (ranging from ~34 to 86%), followed by sand, with an average of 17.84 ± 1.13% (ranging between 2 and ~63%), and finally clay, with an average of 9.63 ± 0.39% (ranging between ~1.6 and 19%). The sand component is the most variable throughout the sediment core. A stratigraphically constrained cluster analysis highlights three distinct zones. In the first, from the bottom of the core to 370 cm and covering the end of the Pleistocene, the clay content ranges between ~3 and 15%, silt between 52 and 81%, and the highest variance is in the sand content, between 10 and 55%. Clay content is the lowest, averaging ~7%, followed by ~67% silt and ~26% sand; this zone can be described as a sandy silt. The second zone, from 365 to 120 cm and covering the end of the Pleistocene and the Early Holocene, shows an increase in clay and silt percentages to ~11 and ~80%, respectively, and a decrease in the sand content to ~9%. The clay content ranges between 5 and ~18%, silt from ~57 to ~86%, and sand from 2 to ~33%; this zone can be described as a clayey silt. The top of the record, from 115 cm to the top of the core, consists of a sandy silt in which the average clay is ~11%, silt is ~65% and sand is ~26%; the clay content remains at ~11% while the silt decreases from 80 to 65% and the sand increases from 9 to ~26%. Bulk density is higher in the bottom half of the core (530 to ~260 cm), during what would be the Late Pleistocene. It gradually decreases, and between 155 and 75 cm (Early and Middle Holocene) it averages 0.5 g cm⁻³. From 70 cm to the top of the core (Late Holocene), average bulk density increases back up to ~1 g cm⁻³. The organic matter and carbonate contents are lowest in the lower half of the record (Late Pleistocene), at ~12 and 4%, respectively. These increase to 16 and 10% between 260 and 160 cm, around the Late Pleistocene/Early Holocene interval. The section from 155 to 75 cm (Early to Late Holocene) has the highest increase in organic matter, at an average of 40%, while the carbonate content decreases to 6%. The top of the record (from 100 to 0 cm), representing the Late Holocene, has organic matter and carbonate contents of ~35 and ~6%, respectively.

Figure 2. Age-depth model developed from nine radiocarbon dates using the Rbacon package (Blaauw and Christen, 2011), showing the weighted mean (red dashed line) and the 95% confidence interval of ~8 million random walks through the calibrated radiocarbon age probability densities (Reimer et al., 2013) (lower and upper limits, grey lines). The optical image and magnetic susceptibility were measured during ITRAX core scanning (see Methods). Down-core zonations were defined using the magnetic susceptibility, ITRAX and pollen assemblage zones, showing broad agreement in major changes in the sediment stratigraphy. Geological stages of the Quaternary (Walker et al., 2019) are shown below. AHP, African Humid Period (Demenocal et al., 2000); H, Holocene; Ps, Pleistocene. Note that there were no optical image or magnetic susceptibility data for the deepest section of core, from 450 to 537 cm. [Color figure can be viewed at wileyonlinelibrary.com]

Elemental profile and magnetic susceptibility (χ)

The ITRAX core scanner was set to detect counts for 22 elements (Fig. 4; Table 2); Ni and I were not included in further data analyses because both occurred at very low concentrations and had a limited stratigraphic pattern (Githumbi, 2017). Magnetic susceptibility readings, expressed as χ, or volume susceptibility, represent the ratio of magnetization in samples (per unit volume) to the magnetic field created by the sensor, and are dimensionless on a scale of 10⁻⁵ SI units (Burrows et al., 2016). Elemental composition was dominated by Fe (average 85.25 cps), Zr (6.5 cps) and Y (2.6 cps); Rb (1.3 cps) and Ti (1.2 cps) are the only other elements with average cps values >1 through the sediment core. Magnetic susceptibility (χ) varied through the sediment record between −4.14 and 70.89 × 10⁻⁵ SI, with an average of 2.97. A stratigraphically constrained cluster analysis divided the elemental composition record into three significant zones, labelled ITRAX1, ITRAX2 and ITRAX3 (Fig. 4). ITRAX1 extends from 484 to 260 cm, covering the Late Pleistocene to the Early Holocene. Fe (ranging from 82.46 to 92.82 cps), Zr (2.67 to 7.14 cps), Y (1.03 to 3.24 cps), Rb (0.91 to 2.33 cps), Mn (0.23 to 7.69, with an average of 0.9 cps) and Ti (0.79 to 1.59, with an average of 1.2 cps) had the highest average counts in this zone. Magnetic susceptibility averaged 4.47. In the next zone, ITRAX2 (from 260 to 127 cm), covering part of the Early to Middle Holocene, the same elements still dominated, with slight increases in Zr, Y and Rb, while Ti and Mn decreased; Mn decreases from ~1.6 to 0.6 cps. The top zone, ITRAX3, extends from 127 cm to the core top and includes the rest of the Holocene to the present. Between 113 and 100 cm, all the elements experience a spike except Ti, K and Fe, which decrease.
In this zone, all the elements exhibit an increasing trend in counts towards the top except K, Rb and Sr (Fig. 4). Pb shows the greatest increase, from 0.022 in ITRAX1 and 0.031 in ITRAX2 to 0.21 in ITRAX3, an almost 10-fold increase. Hg shows a similarly large increase in the uppermost sediment samples.

Figure 4. A stratigraphic plot of the sediment elemental characterization using ITRAX (expressed as relative abundance of cps, counts per second) alongside the CONISS dendrogram (Bennett, 1996), which divided the record into three significant zones. The element iodine (I) was also measured but did not have values above the analytical detection limit of the ITRAX scanner. Radiocarbon dates are shown as '+' symbols on the left. [Color figure can be viewed at wileyonlinelibrary.com]

Table 2. List of elements scanned via the ITRAX core scanner (including iodine, I; mercury, Hg; and nickel, Ni).

Charcoal record (>125 µm)

Charcoal concentration fluctuated from the Late Pleistocene to the present at 0-1198 pieces cm⁻³, with a mean of 217 pieces cm⁻³. Charcoal varies throughout the record, and the CONISS analysis identifies three significant zones. The bottom of the record to 307 cm is the first zone, CHAR3, and covers the Late Pleistocene; the second zone is from 306 to 105 cm and is divided into two subzones, CHAR2B and CHAR2A; and the top zone starts at 104 cm and covers the Late Holocene. CHAR3, covering the Late Pleistocene and the Early Holocene transition, experiences fluctuations in charcoal concentration; minimum, maximum and mean charcoal concentration values are 25, 863 and 230 pieces cm⁻³, respectively, the mean being higher than that for the record as a whole. These values are lower in the next zone, CHAR2B, at 14, 727 and 205 pieces cm⁻³, respectively. In CHAR2A there is a significant increase in the charcoal concentration, with minimum, maximum and mean values of 16, 1198 and 238 pieces cm⁻³, respectively. The topmost zone is CHAR1, where charcoal concentration decreases significantly; minimum, maximum and mean values here are 0, 513 and 121 pieces cm⁻³, respectively.

Pollen analysis

Pollen taxon diversity varied down the Nyabuiyabui sediment core, with >70 pollen types observed and enumerated (Supporting Information, Fig. S1; Table 3). The sample with the highest diversity had 68 pollen taxa identified, while the sample with the lowest had 18. To aid interpretation and discussion, only the most common pollen types, or those that consistently contributed >2% to the pollen sum for any level, are presented, although the full spectra were used for cluster analysis; taxa were grouped into Afromontane, trees, shrubs, herbs and aquatics (Fig. 5; Table 3). Aquatic taxa included Ludwigia, Nymphaeae, Cyperaceae and Typha. Cupressus and Pinus are neophytes that appear in the record during the last ~200 years. Table 3 contains the list of taxa that comprise each grouping. The second pollen zone, NBPOLL2, extended from 330 to 100 cm (end of the Pleistocene to the Middle Holocene) and was divided into sub-zone NBPOLL2A, from 330 to 220 cm (end of the Pleistocene and Younger Dryas), and sub-zone NBPOLL2B, from 210 to 100 cm (Early and Middle Holocene). There was a sharp decrease in pollen counts in NBPOLL2A across all the vegetation types (the average sample count of ~970 in zone NBPOLL1 fell to ~390 in zone NBPOLL2A). The Afromontane taxa remained at ~29%, with Olea, Podocarpus and Juniperus each dominating at ~6%. However, the tree and shrub taxa decreased significantly, dropping from 20 to 7% and from 11 to 5%, respectively. This was accompanied by an increase in herbs and aquatic taxa from 32 to 34%. In zone NBPOLL2B, the average for Afromontane taxa fell significantly, to ~18% from 29% in the previous zone (Fig. 5). The Afromontane genera dominating this zone were Cordia (~4%), Podocarpus (~4%), Croton (~2%) and Olea (~2%). Most of the tree, shrub and herbaceous taxa that had disappeared in NBPOLL2A reappear (Alangium, Commiphora, Lannea, Maytenus, Polyscias, Syzygium, Abutilon, Fagonia and Rumex). Tree taxa increased from 7 to 13%, while shrub taxa increased from ~5 to ~7%.
Herbaceous and aquatic taxa increased to 40 and 11%, respectively. The third pollen zone (NBPOLL3) extends from 90 cm to the top (Late Holocene to present), where the average Afromontane pollen fell to 16%, tree pollen remained at 12% and shrubs remained at ~7%. Herbs fell significantly, to ~24% from 40%, while the aquatic taxa and Poaceae increased to ~24 and ~13%, respectively; the aquatics and Poaceae in this zone were at their highest recorded levels. Podocarpus (~5%), Cordia (~3%), Ficus (~3%), Commiphora (~2%) and Rhus (~2%) dominate this zone. Introduced tree taxa (neophytes), namely Cupressus and Pinus, appear towards the top of the sediment core (20 cm) at high abundance, as they were introduced at commercial timbering scales and frequently used on residential and industrial properties.

Figure 5. Relative pollen abundances of the terrestrial pollen sum from the Nyabuiyabui sediment core, with radiocarbon dates ('+' symbols on the left; black font for dates used in the suggested age-depth model in Fig. 2) and charcoal (>125 μm) for comparison. Aquatic taxa are shown as relative abundances of the total aquatic pollen sum (Table 2).

Pollen taxa, charcoal concentration, organic matter content, clay, sand and some elements/elemental ratios are plotted alongside the lithology and radiocarbon dates. Summary aspects from the different proxies are plotted in a single stratigraphic plot (Fig. 6) to ease comparison across datasets. Significant changes in each of the proxies occur at around the same time, implying that the changes noted have the same or similar drivers.

Discussion

The Nyabuiyabui wetland sediment record covers the interval from the end of the Late Pleistocene (~17 000 cal a BP) to the present and provides insight into the dynamics of the wetland as well as the wider Mau Forest ecosystem. Given its geographical location, insights from the Nyabuiyabui sediment record have relevance for areas downstream within the Sondu and Mara River catchments. The results also provide another comparison point for long-term environmental change in the East African highlands. Although lacustrine sedimentary archives are often preferred due to their high time resolution and reliable dating (Ojala et al., 2012), the lack of undisturbed lakes in the Mau Forest necessitates investigations of wetland palustrine sediments to generate knowledge of past ecosystem and environmental changes. The temporal resolution of wetland sediment records in eastern Africa is variable, and these shallow-water ecosystems frequently experience higher levels of physical disturbance and bioturbation than lakes, due to their small size and volume (Rucina et al., 2010; Githumbi, 2017).
Sedimentary hiatuses have frequently been observed during the Late Pleistocene to Early Holocene in lacustrine and palustrine sediment records across eastern Africa, such as the Rukiga Highlands (Taylor, 1990), the Laikipia Plateau (Taylor et al., 2005), Munsa in Uganda (Lejju et al., 2005), Mount Kenya (Street-Perrott et al., 2007; Rucina et al., 2009), the Eastern Arc Mountains in Tanzania (Mumbi et al., 2008; Finch et al., 2009, 2014), the Pare Mountains in Tanzania (Heckmann, 2014), the Rufiji Delta (Punwong et al., 2013) and Virunga (McGlynn et al., 2013). The distribution of radiocarbon dates collected from Nyabuiyabui is complex and reveals some age-depth reversals, as has been observed in several palustrine sediment studies in the region (Hamilton, 1982; Courtney Mustaphi and Marchant, 2016), in lowland wetlands (Awuor, 2008; Öberg et al., 2012; Githumbi et al., 2018a,b; Goman et al., 2020) and in montane wetlands (Bonnefille and Riollet, 1988; Mumbi et al., 2008; Finch et al., 2009). The radiocarbon dates from Nyabuiyabui group the sediment stratigraphy into basal pre-Holocene (Late Pleistocene), Early to Middle Holocene and Late Holocene sections. By focusing our age-depth model on the dates derived from macrofossils, we are able to construct a coherent age-depth relationship that is very useful for assessing broad patterns of sedimentological and vegetation change at Nyabuiyabui (Fig. 6). Similar to several other palaeoenvironmental records established from palustrine sediments, the radiocarbon date uncertainties limit the precision with which the specific timing of events or rates of change can be explored; vegetation change is therefore explored and discussed with reference to depth to acknowledge this uncertainty. We describe the long-term variability in forest composition and fire during the broad periods of the Late Pleistocene, Early to Middle Holocene and Late Holocene. Changes in pollen composition and abundance throughout the core indicate the persistence of an upland forest dominated by Podocarpus, Cordia, Juniperus and Olea, with varying degrees of openness possibly caused by ecological turnover and a variable fire regime. Podocarpus, Cordia, Juniperus and Olea appear in all the samples, and Afromontane taxa in general appear in all pollen zones. Pollen zone NBPOLL2A (the Late Pleistocene to Early Holocene transition) experienced a loss of montane forest taxa (Abutilon, Alangium, Commiphora, Fagonia, Lannea, Maytenus, Polyscias, Psidium, Rumex and Syzygium). The disappearance of these taxa, as well as the general decrease in tree and shrub taxa, is accompanied by an increase in Poaceae and aquatic taxa. This suggests a drying environment, with a contraction in the open-water area providing more extensive shallow-water littoral areas for taxa such as Typha and Cyperaceae to become established. The general trend towards the top of the core is an increase in herbaceous taxa relative to woody taxa. The end of the Late Pleistocene (540-220 cm) exhibits the highest arboreal pollen diversity (number of taxa identified) and abundance (total counts in each sample), while herbs and grasses are at minimal abundances. This diversity is in part due to turnover at 330 cm from an ecosystem dominated by Apodytes, Celtis, Dracaena, Hagenia and Podocarpus to one dominated by Cordia, Croton, Ficus, Juniperus and Olea. The turnover signifies a change from Afromontane taxa that prefer a cooler, dry environment to one characterized by more mesic conditions.
This ecosystem transition is similar to that documented by pollen records from the Rukiga Highlands (Taylor, 1990), the Ruwenzori (Livingstone, 1967), the Burundi highlands (Bonnefille and Riollet, 1988), Lake Albert (Beuning et al., 1997), Mount Elgon (Hamilton, 1987) and Mount Kenya (Street-Perrott et al., 2007; Rucina et al., 2009). This interval is also characterized by the highest biomass burning, with high macroscopic charcoal concentrations implying a continuous, connected source of fuel. Juniperus, a pioneer species colonizing gaps after fire, is a dominant taxon in this forest transition, and more broadleaved species such as Olea also established. The successional role of Juniperus in forests with recurring fire events is observed on Mount Kenya (Rucina et al., 2009) and the southern Aberdare Range (Bussmann, 2001). In the Nyabuiyabui pollen record, peaks in Juniperus lag behind those in Hagenia and may signal a sequence of ecological responses to fire. The increases in Afromontane and tree taxa correspond to increases in magnetic susceptibility, silt and clay, as well as to peaks in detrital elements. The detrital elements, silicon (Si), titanium (Ti), iron (Fe), rubidium (Rb) and strontium (Sr), show increased terrigenous input, indicating higher erosional inputs to the basin through surface runoff after episodes of heavy or continuous rainfall. Several East African Pleistocene palaeoenvironmental records are dominated by signals inferring cooler, drier conditions, such as forest compositional changes to semi-deciduous forest and lower lake levels (Van Zinderen Bakker and Coetzee, 1988; Sonzogni et al., 1998; Olago, 2001; Chalié and Gasse, 2002). Late Pleistocene sediments from Rumuiku Swamp on Mount Kenya and the Lake Emakaat record show stratigraphic changes as well as increases in wetland fringe taxa (Cyperaceae, Poaceae and Typha), indicating lower water levels (Ryner et al., 2006; Rucina et al., 2009). However, the Lake Challa record indicates increased precipitation due to intensification of the south-easterly Indian Ocean Monsoon from ~16 500 cal a BP to the Early Holocene, interrupted during the Younger Dryas between ~13 300 and 11 700 cal a BP (Verschuren et al., 2009). The varied responses among palaeo-vegetation data from different highland areas of equatorial eastern Africa suggest heterogeneity in hydroclimate-vegetation interactions from the Late Pleistocene to the present day. This is not surprising, given that local topographic-climate feedbacks and the position of a mountain in bioclimatic space are important controls on the response of the ecosystem through time (Hamilton, 1982; Loomis et al., 2017; Los et al., 2019). The drivers responsible for changes through time recorded by multiple palaeoenvironmental proxies across different mountains are uncertain; even for records characterized by similar climate change, there are localized factors such as topography, hydrology and soil development to take into account. Given the differential responses across the East African mountains, a comparison of available studies across mountain ranges would improve our understanding of coherent responses across different mountain ecosystems and lead to an understanding of typologies of ecosystem response to rapid climate transitions.

The Early and Middle Holocene: 240-100 cm

At Nyabuiyabui, the low Ti, Fe and Mn concentrations in the sediments from 256 to 213 cm could indicate increasing dryness (Burrows et al., 2016).
There is also a marked decrease in the number of pollen taxa identified, and in their abundance, until 190 cm, while the contribution of Poaceae pollen increases, implying a relatively warm and dry interval. The forest becomes increasingly open after 190 cm (the Early Holocene), and the subalpine Hagenia-Juniperus forest association is interpreted as an early successional phase following fire-induced disturbance (Bussmann, 2001). There is an increase in Apodytes, Celtis, Olea, Podocarpus and Erica, representative of montane forest. On Kilimanjaro, a warm and wet climate enabled the development and expansion of Afromontane forest around this period (Schüler et al., 2014). The Early Holocene experienced the largest increase in organic matter and decrease in carbonate content, accompanied by increased silt and clay content as well as increased bulk density, implying high sedimentation. The organic matter content peaking together with the sulphur content could indicate that the increased sedimentation was anoxic (Tierney and Russell, 2007). Compared to the Late Pleistocene, the climate of equatorial eastern Africa was generally warm and wet (Bonnefille and Riollet, 1988) until ~4000 cal a BP, when it became drier and there was more open grassland (Olago, 2001; Msaky et al., 2005; Garcin et al., 2012). Many eastern African lacustrine and palustrine sediment records are characterized by sedimentary hiatuses around the Early Holocene, reducing the number of available study sites that contribute to our understanding of vegetation changes during those temporal gaps. The hiatuses cover the African Humid Period, during which several eastern African lakes reached their highest levels (Hoelzmann et al., 2004; Foerster et al., 2015; Dallmeyer et al., 2019). Most of these lakes overflowed and merged with other lakes or rivers; for example, Lake Turkana temporarily overflowed into the White Nile (Garcin et al., 2012), and Lake Kivu overflowed via the Ruzizi River into Lake Tanganyika (Haberyan and Hecky, 1987). At Nyabuiyabui, organic matter gradually decreases, indicating an autochthonous source of sediment in the wetland (Burrows et al., 2016). The detrital elements (Si, Ti, Fe, Rb and Sr) decrease and reach a plateau as the organic matter content increases. The decline in elemental counts may reflect the ecosystem becoming more drought-adapted; lower precipitation leading to reduced sedimentation through runoff would account for the very low counts of Ti, Fe and Mn and the elevated Br levels (Burrows et al., 2016). An increase in Ti, Fe and Zn reflects a much wetter interval than previously; these elements peak and then reduce drastically, implying a wet interval that ended abruptly. The dark brown sediment from 350 cm to the top has no laminations; this could indicate sediment reworking, resuspension or benthic fauna mixing the sediments, as observed in Lake Tanganyika sediments (Haberyan and Hecky, 1987). The charcoal record for the Younger Dryas/Early Holocene shows slightly lower concentrations than during the Late Pleistocene, but concentrations increased significantly during the Middle Holocene to reach the highest values in the record.

The Late Holocene: 100 cm to the top

Increasingly arid conditions are recorded in the Nyabuiyabui record, with continued replacement of Afromontane taxa by arboreal taxa that tolerate drier conditions (Cordia, Hagenia and Podocarpus). Locally there is an increase in aquatic taxa (Cyperaceae/Typha), which could reflect the expansion of wetland margins.
Poaceae and the highest NPP counts (non-pollen palynomorph data not presented) suggest an intensification in the use of the wetland by herbivores. This would occur during an arid interval out of necessity and because of increased access due to lower water levels (Gelorini et al., 2012), or possibly as a result of greater use of the area for agro-pastoralism. The highland ecosystems of eastern Africa, such as Mount Kenya, experienced pronounced ecosystem shifts through the Late Holocene (Street-Perrott et al., 2007; Rucina et al., 2009) due to varying arid and mesic environmental conditions, as recorded by dramatic ecosystem and lake-level changes (Marchant et al., 2018). Some of these transitions were relatively slow, while others were more dramatic, with a wide spatial signature. For example, continent-wide aridity (Marchant and Hooghiemstra, 2004) is noted around 4000 cal a BP, after which eastern African lake levels fluctuate markedly in response to intervals of varying rainfall and aridity (Gasse, 2000; Verschuren et al., 2000; Cohen et al., 2005; Garcin et al., 2012). Lake Rukwa became and remained saline through most of the last 5000 cal a BP, with several probable dry periods marked by hiatuses in the core (Barker et al., 2002). Transformations of the landscape are hypothesized to have reached significant levels globally by ~3000 cal a BP (Ellis et al., 2013). Data from eastern Africa indicate a shift in the adoption of livelihood patterns, for example as indicated by Wright (2005) in exploring resource exploitation among Neolithic hunters and herders. The spread of agropastoralism in Africa is believed to have occurred from 4000 cal a BP, altering landscapes through grazing and cultivation (Archibald et al., 2012; Phelps et al., 2020). Different land-use activities began in Kenya around 4000 cal a BP: pastoralism at ~4000 cal a BP, extensive agriculture at ~1000 cal a BP and intensive agriculture at ~500 cal a BP, with foraging declining from ~250 cal a BP (Stephens et al., 2019). Archaeological studies in this landscape would improve our understanding of resource use in the catchment; unfortunately, the two closest archaeological sites (burial caves) have not unearthed information about forest and wetland resource use (Faugust and Sutton, 1966; Merrick and Monaghan, 1984). Several severe arid events are recorded during the Late Holocene; a diatom-chironomid record from Lake Naivasha using salinity to infer lake levels over the last 1100 years shows several intervals more arid than any recorded in the 20th century (Verschuren et al., 2000). Sediments from Lake Tanganyika and Lake Kivu record salinity peaks, with Lake Kivu having salinity levels three times higher than the modern lake (Haberyan and Hecky, 1987). Within the Nyabuiyabui record, geochemical analysis shows a drop in all element concentrations except Cl, Ar, Cu, Hg and Pb. Increased Cl often results from chloride precipitated as NaCl during dry conditions (Kristen, 2009). The Ti/Rb ratio is useful for understanding sediment source (Arnaud et al., 2014): this peaks twice in the Late Holocene (Fig. 6), at the same time as the Ti/Zn ratio and increases in sand, and could indicate sedimentation of eroded material in the area. Lake Bogoria records a decline in high-altitude forest taxa (Kiage and Liu, 2006), whereas there is an increase in drought-tolerant taxa on Mounts Kenya and Elgon (Hamilton, 1982; Vincens, 1986), which implies the establishment of more arid conditions.
This loss of high-altitude taxa could also be a signal of the widespread clearance recorded in several montane forest sites (Bessems et al., 2008; Kiage and Liu, 2009; Rucina et al., 2009; Gelorini et al., 2012) attributed to land-use activities across eastern and central Africa. Significant changes to forest composition have been inferred from the Eastern Arc Mountains during the past 2000 cal a BP (Heckmann, 2014; Finch et al., 2017) that potentially relate to intensified mountain forest resource use (Iles et al., 2018). A sediment record from the Mara basin records a significant increase in sedimentation as well as increasing Hg levels from the late 1700s (Dutton et al., 2019), which resonates with our insights from the sedimentary record. In the Nyabuiyabui record, there are high levels of the introduced Cupressus and Pinus taxa, and increased levels of Asteraceae, Acanthaceae and Vernonia. Cupressus and Pinus provide markers for the onset of colonial forestry operations in the early part of the 20th century (Finch et al., 2014). There is an increasing trend in element concentrations, with spikes in Cu, S, Hg, Ar and Pb towards the top of the sediment record (~25 cm). The onset of the increase corresponds to the advent of industrialization, and thus to deposition of heavy elements, while the later changes would be due to increased human presence and mechanized agroforestry (Troup, 1932). The Upper Mara was identified as a major source of sediment in the Mara wetland (Dutton et al., 2019), so increased sedimentation would correspond to increased erosion and runoff in the upper Mara. In recent history (~200 years), eastern African montane forests were either reserved, industrially logged or maintained locally as culturally valued spaces under colonial government rule. Many Afromontane forests were converted to agroforests by the 1930s (Troup, 1932; Wood, 1965), with the Kenya Forest Service managing the higher elevation forests for timber production in public-private partnerships following independence. Recent land-cover change within the Mau Forest (Landsat data from 1986) indicates that forest cover has consistently shrunk in areal extent, with an increase in cropland and grassland cover. These changes correlate strongly with rapid population increase (Odawa and Seo, 2019). Increased population pressure leading to deforestation and illegal logging, together with land-cover conversion, mainly to agriculture and settlement, in the Mau escarpment have been identified as contributing to the overall degradation of the Mara basin and to its changed hydrological and sedimentation regimes (Defersha and Melesse, 2012). The upper catchment of the Mara River drains the south-east of Mau, with 65% of the catchment area located in Kenya and the rest in Tanzania (Defersha and Melesse, 2012; Mwangi et al., 2016a), covering both the Maasai Mara and Serengeti wildlife areas and important wetland areas near the river's confluence with Lake Victoria, such as Lake Masirori (Dutton et al., 2019). The two main Mau land-cover types are forest and grassland, and rapid conversion to cropland and bare land is taking place. Controlling for slope, organic matter content and extreme rainfall events, a study combining field and modelling data to understand the effects of land-cover change in the Mara basin identified deforestation as a significant cause of change in water quality and watershed degradation, particularly in the upper sections of the Amala and Nyangores tributaries of the Mara River (Defersha and Melesse, 2012).
A study by Mwangi et al. (2016b), using the Soil and Water Assessment Tool (SWAT) to understand the effects of agroforestry on Mara hydrology, concluded that agroforestry impacts cannot be generalized across a catchment without considering climate variability within the watershed. Increased deforestation and conversion to agriculture in the Mau Highlands increase the variability of catchment water flows. As observed in a paired study in the Kapchorwa catchment, peak discharge increased significantly after deforestation of the Nandi/Kakamega tropical rainforest, due to the loss of ground cover that regulates surface runoff (Mwangi et al., 2016a,b). However, converting areas under current agricultural production to agroforestry leads to a decrease in runoff while increasing groundwater uptake; this often depends on the choice of tree species, such as Eucalyptus, which has a massive water demand (Hubbard et al., 2020). Land use, predominantly deforestation and transition to agriculture, was found to influence nitrous oxide levels in the lower elevations of the Mara River network (Mwanake et al., 2019). The loss of forest cover, which reduces the catchment's capacity to buffer heavy rainfall and mediate streamflow, was identified as a major disruptor of economic activities to the south-west of Mau (Otuoma et al., 2012). Various streams connecting into the Sondu River catchment experience steep increases in flow (floods) after heavy downpours, followed by long intervals of very low streamflow. This has adverse effects on tea plantations and on the Sondu hydroelectric power station, which cannot adequately predict the water flow needed for planning (Otuoma et al., 2012). Despite the net loss in forest cover, some cropland and grassland areas have converted back to forest (Odawa and Seo, 2019). Natural regeneration coupled with reforestation and conservation efforts can be applied to curb and even reverse the impacts of forest loss (Marshall et al., 2020). Thus, Mau forest management efforts need to consider the impacts of forest activities not only on the forest ecosystem and the interactions of local populations with it, but also on wider hydrological issues across the basins at national and regional levels.

Conclusions

Nyabuiyabui is a unique montane forest wetland record providing insights into long-term forest dynamics, driven primarily by climate change, and into long-term wetland development for the largest remaining closed-canopy forest in eastern Africa, with significant impacts across the wider landscape, including the Mara river system. The Nyabuiyabui catchment has undergone significant changes in forest composition, from a highly diverse Afromontane forest to a more open forest ecosystem indicative of increased aridity in the region. Precipitation change is an important driver of vegetation composition over time, and fire has been an additional important component of Mau forests, glades and vegetated wetlands. At Nyabuiyabui, the significant increase in elements such as Pb and Hg over the last 200 cal a BP, combined with the rise of exotic taxa, is a clear indicator of industrial-scale human forestry activity that would impact forest ecosystems and associated service delivery across the catchment. Further focused studies on other wetlands located along the Mau Forest complex would greatly improve our understanding of the recent past and of how changes in forest cover affect hydrology and downstream systems.
Managing and ensuring an intact and functioning forest-hydrological system is vital for the Mau Highlands and the wider lowland savanna ecosystems and livelihoods. Long-term data on the vegetation change of highland watersheds provide useful context for current climate and land-use change debates and show how such insights can support management decisions for remediation efforts and future restoration outcomes.

Supporting information

Additional supporting information may be found in the online version of this article at the publisher's web-site.

Figure S1. A stratigraphic plot of the complete pollen taxa identified and the associated CONISS pollen assemblage zonation.
Adaptive Multi-Dimensional Taylor Network Tracking Control for a Class of Nonlinear Strict Feedback Systems

Nonlinear systems are very common in real life, but because they do not satisfy the superposition and homogeneity properties, there are many difficulties in controlling them. Therefore, an adaptive control method based on a multi-dimensional Taylor network (MTN) is proposed for a class of nonlinear systems with strict feedback so that the output of the system can track a given signal. In order to achieve the control effect, we define a new state variable and transform the strict feedback system; after transformation, the original feedback system has a standard form.

Introduction

Non-linear systems are almost ubiquitous in daily life and exist widely in various applications, such as motor [1], power [2], and electrical systems [3]. After more than half a century of development, the control of non-linear systems has made considerable progress, and various control methods and strategies have emerged, e.g., backstepping control strategies [4], neural networks [5], fuzzy-based control [6], and system identification [7]. For example, scenario-based model predictive control (MPC) approaches can mitigate the conservatism inherent in robust open-loop MPC. Reference [8] presents a method for evaluating the confidence intervals of RBNN predictions and determines the number of samples required to estimate the confidence interval for a given confidence level. The authors of [9] propose a security-model-based reinforcement learning approach to control nonlinear systems described by linear parameter variation models.

In order to improve control accuracy, these methods require feedback. For example, a state feedback Smith predictive controller was proposed for the effective temperature control of a cement rotary kiln precalcining furnace [10]. The authors of [11] present a fully distributed adaptive tracking control scheme for multi-agent systems with a strict feedback form. Generally, this type of state feedback control requires all of a system's internal state information, which is challenging to obtain in reality. Therefore, control methods based on system output have also been proposed. These control algorithms require only the system's output information to complete the control process. For example, the authors of [12] studied a linear-quadratic (LQ) control problem with irregular output feedback in which the state of a noisy linear system was measured. In [13], the authors discuss the collaborative design of output-dependent switching functions and full-order affine filters for discrete-time switched affine systems. This type of control method has a certain degree of versatility and has achieved good application results. However, the controlled systems in these works do not have a strict feedback form.
Strict feedback systems present a convenient lower-triangular form, and the control systems for flexible manipulators and some temperature processes have a strict feedback form. Commonly, the backstepping control method is used for such systems; e.g., in [14], a neural-network-based adaptive gain scheduling backstepping sliding mode control (NNAGS-BSMC) method is proposed for a class of uncertain non-linear systems with strict feedback. Reference [15] presents a novel tracking controller utilizing an event-triggering implementation for uncertain strict feedback systems. Adaptive fuzzy decentralized optimal control problems for a class of large-scale non-linear systems with a strict feedback form have also been studied [16]. A backstepping method usually has specific prerequisite requirements for the system or control strategy and needs to calculate higher-order derivatives, imposing a high computational complexity. Such problems have been improved to a certain extent by combining improved backstepping methods with adaptive control ideas. For example, [17] addresses the adaptive event-triggered control of non-linear continuous-time strict feedback systems. However, the overall calculation process of this method must meet trigger conditions before it is carried out, prohibiting it from meeting real-time performance requirements.

With the development of neural networks, new methods have been proposed for non-linear control problems, exploiting the appealing approximation characteristics of neural networks. For example, [18] addresses the compound learning control of a perturbed uncertain strict feedback system. In [19], the authors studied the data-based compound neural control of an uncertain strict feedback system using online recorded data within a backstepping framework. This type of algorithm provides a relatively general approach to non-linear control, but as the number of neurons increases, the computational complexity increases geometrically. The multi-dimensional Taylor network is a newly proposed control structure; due to its simple structure and convenient application, some promising results have been achieved. For example, the authors of [20] studied non-linear time-delay systems with uncertainties. However, the use of the MTN control algorithm for strict feedback systems has not been thoroughly studied.

Spurred by this, this paper proposes an output feedback control method for strict non-linear feedback systems based on an MTN so that the system's output can automatically track the desired signal. Our method initially transforms the original non-linear strict feedback system and redefines the state variables to obtain a new standard form. A state observer then completes the identification process of the adaptive system, exploiting the MTN's good approximation characteristics, and the adaptive control law completes the system's output tracking on this basis. Finally, a numerical simulation of a servo-hydraulic system model is carried out, verifying the effectiveness of the proposed algorithm.

The main contributions of this paper are as follows:
1. The traditional MTN control method relies on the unique performance of its basic structure and is designed for general controlled objects, so some characteristics of the controlled object itself are not fully considered and utilized. This paper applies the MTN to strict-feedback nonlinear systems for the first time, taking full advantage of the facts that different MTN parameter sets can produce the same output and that processing two sets of internal parameters at the same time can effectively improve control efficiency.
2. In the control process, a set of variable-representation rules is designed so that a general strict feedback system can be expressed in a standard form. On this basis, an adaptive parameter-adjustment rule based on a state observer is designed to bring the tracking error close to 0. Thanks to the simple structure of the MTN, the number of calculations can be reduced effectively compared with a neural network algorithm.

The remainder of this paper is organized as follows. Section 2 introduces the strict feedback system and transforms the original system into a standard form. Section 3 presents the design of the state observer, while Section 4 introduces the multi-dimensional Taylor network and its basic structure. Section 5 introduces a parameter identification method based on a multi-dimensional Taylor network, and Section 6 presents the controller's design and a stability analysis of the system. Section 7 illustrates the effectiveness of the proposed control scheme through a numerical simulation of a hydraulic control system. Finally, Section 8 concludes this paper.

System Model

Consider the following strictly non-linear feedback system:

$$\dot{x}_i = f_i(\bar{x}_i) + h_i(\bar{x}_i)\,x_{i+1}, \quad i = 1, \dots, n-1, \qquad \dot{x}_n = f_n(\bar{x}_n) + h_n(\bar{x}_n)\,u, \qquad y = x_1,$$

where $\bar{x}_i = [x_1, \dots, x_i]^T$, $u$ is the control input, $y$ is the output, and $f_i(\cdot)$ and $h_i(\cdot)$ are unknown non-linear mappings with $h_i(\cdot) \neq 0$. The main task of this paper is designing an MTN-based output feedback controller for the above-mentioned strictly non-linear feedback system, enabling the output of the system $y$ to track a given signal $y_d$.

Traditional control methods usually require all of the state variable information for such problems, that is, $x_1, x_2, \dots, x_n$. At the same time, a multi-step backstepping controller design suffers from error accumulation, and the process is complicated and cumbersome. Thus, this paper proposes a feedback algorithm based only on the output to simplify the control algorithm and reduce the calculation burden. In order to realize the control algorithm, the original strict feedback system needs to be transformed. We define the state variables as

$$z_1 = x_1, \qquad z_{i+1} = \dot{z}_i, \quad i = 1, \dots, n-1.$$

Then there is $\dot{z}_i = z_{i+1}$ for $i = 1, \dots, n-1$, and by analogy the last state equation can be expressed as

$$\dot{z}_n = A_n + B_n u,$$

so that, after the above changes, the original strict-feedback non-linear system can be expressed as

$$\dot{z}_i = z_{i+1}, \quad i = 1, \dots, n-1, \qquad \dot{z}_n = A_n + B_n u, \qquad y = z_1.$$

Here, $A_n$ and $B_n$ collect the unknown non-linear mappings $f_i$ and $h_i$ of the original system. Since $h_i$ in the original hypothesis is not equal to 0, it is assumed that the gain function $B_n$ is a bounded function greater than 0 and that $0 < B_{\min} \leq B_n \leq B_{\max}$, where $B_{\min}$ and $B_{\max}$ are constants greater than 0. After the transformation, the original strict feedback system has a general standard shape. Since $z_1 = x_1$, the system output is unchanged after the transformation, and the original control target is consistent. However, $A_n$ and $B_n$ are unknown, and except for $z_1$, the higher-order states $z_i$ are unavailable, so a state observer needs to be designed.
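To make the coordinate change concrete, the following sketch verifies it symbolically for a hypothetical second-order plant; the maps f1, h1, f2, h2 below are invented for illustration and are not from the paper. It confirms that in the new coordinates the second state equation collects into A2 + B2*u with B2 = h1*h2 > 0.

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')

# Hypothetical smooth maps for a second-order strict-feedback plant
f1 = sp.sin(x1)
h1 = 2 + sp.cos(x1)        # h1 > 0 everywhere
f2 = x1 * x2
h2 = 3 + sp.sin(x2)        # h2 > 0 everywhere

x1dot = f1 + h1 * x2
x2dot = f2 + h2 * u

# New coordinates: z1 = x1, z2 = z1dot
z2 = f1 + h1 * x2

# z2dot via the chain rule along the plant dynamics
z2dot = sp.diff(z2, x1) * x1dot + sp.diff(z2, x2) * x2dot

# Collect into z2dot = A2 + B2*u (u enters affinely)
B2 = sp.simplify(sp.diff(z2dot, u))       # equals h1*h2
A2 = sp.simplify(z2dot - B2 * u)

print(sp.simplify(z2dot - (A2 + B2 * u)))  # -> 0, the decomposition is exact
print(B2)                                  # (cos(x1) + 2)*(sin(x2) + 3) > 0
```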
State Observer

According to [21], for the above-mentioned strict feedback system, a state observer can be constructed to observe the higher-order states of $z$, in which $K_1, \dots, K_{n+1} > 0$ are the observation gains and $\hat{z}_1, \dots, \hat{z}_n$ are the estimates of the states $z_1, \dots, z_n$. It has been proven in the literature that this observer converges in a finite time.

Multi-Dimensional Taylor Network

The MTN can approximate any non-linear function with a finite number of points of discontinuity. A neat structure is the merit of the MTN, whose terms are easy to adjust. For further details on the MTN, the reader is referred to [22-28]. The basic structure of the MTN is illustrated in Figure 1. There exists a set of parameter vectors $w = [w_1, w_2, \dots, w_{N(n,t)}]^T$ such that the output of the MTN, $O_{ut}$, can be expressed as

$$O_{ut} = \sum_{i=1}^{N(n,t)} w_i \prod_{s=1}^{n} z_s^{\lambda_{s,i}},$$

where $N(n,t)$ is the total number of terms in the expansion, $w_i$ is the weight of the $i$th product term, and $\lambda_{s,i}$ is the power of $z_s$ in the $i$th product term, with $\sum_{s=1}^{n} \lambda_{s,i} \leq t$. Setting $\eta(z) = [1, z_1, z_2, \dots, z_n, \dots, z_1^2, z_1 z_2, \dots, z_n^t]^T$, we obtain $O_{ut} = w^T \eta(z)$. Similar to Reference [29], there is no fixed standard for the highest power of the MTN, but as the power increases, the number of internal terms grows, and it is usually appropriate to choose a highest power of three in practice.

Adaptive System Identification

In order to design an ideal feedback controller, $A_n$ and $B_n$ are required; thus, system identification is involved, with traditional identification methods usually considering $A_n$ and $B_n$ separately. Due to problems in $B_n$ such as zero crossing, it is easy to cause singularity problems like system divergence. To solve this difficulty, we modify the system as follows. We rewrite the last subsystem and obtain

$$\frac{1}{B_n}\dot{z}_n = \frac{A_n}{B_n} + u.$$

Therefore, the system can be identified through the two unknowns $1/B_n$ and $A_n/B_n$ to avoid the singularity problem. According to the basic structure of the MTN, we obtain

$$\frac{1}{B_n} = w_1^{*T}\eta(z) + \varepsilon_1, \qquad \frac{A_n}{B_n} = w_2^{*T}\eta(z) + \varepsilon_2,$$

where $\eta(z)$ is the polynomial combination of the MTN, $\varepsilon_1$ and $\varepsilon_2$ are the approximation errors, and $w_1^*$ and $w_2^*$ are the ideal MTN weight vectors. Since $z$ is unknown, it can be replaced by $\hat{z}$ in the foregoing equations; despite the error between them, it can be compensated through weight adjustment.
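As a concrete sketch of how the regressor η(z) can be generated in practice, the Python fragment below enumerates every monomial in z1..zn of total degree at most t (giving N(n,t) = C(n+t, t) terms, including the constant) and evaluates O_ut = wᵀη(z). The weights here are random placeholders, not identified values.

```python
from itertools import combinations_with_replacement
import numpy as np

def mtn_basis(z, t):
    """eta(z): all monomials of z = (z1..zn) with total degree <= t (incl. 1)."""
    n = len(z)
    basis = [1.0]
    for deg in range(1, t + 1):
        for idx in combinations_with_replacement(range(n), deg):
            term = 1.0
            for i in idx:
                term *= z[i]
            basis.append(term)
    return np.array(basis)

def mtn_output(w, z, t):
    """O_ut = w^T eta(z)."""
    return float(w @ mtn_basis(z, t))

z = np.array([0.5, -1.2])                            # n = 2 states
eta = mtn_basis(z, t=3)                              # N(2,3) = 10 terms
w = np.random.default_rng(0).normal(size=eta.size)   # placeholder weights
print(eta.size, mtn_output(w, z, 3))
```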
Unlike the MTN-based approach, traditional neural network methods require two sets of basis vectors, each calculated separately, imposing a significant computational burden. With the MTN, by contrast, a single suitable polynomial combination suffices, i.e., the identification effect can be achieved only by changing the parameter values.

By introducing the MTN, the system input can be rewritten as

u = w_1*ᵀ η(ẑ) ż_n − w_2*ᵀ η(ẑ) + ε,

where ε = ε_1 ż_n − ε_2 is the total error of the MTN. In the above formula, ż_n is unknown and can be replaced by dẑ_n/dt, so we can obtain

u = w_1*ᵀ η(ẑ) (dẑ_n/dt) − w_2*ᵀ η(ẑ) + ξ,

where ξ = ε + w_1*ᵀ η(ẑ)(ż_n − dẑ_n/dt) is the total system identification error. Since the unknown network weights w_1* and w_2* have not been estimated, the control strategy described below is designed.

In the above calculations, only ẑ_n is estimated, while the derivative of the estimate, dẑ_n/dt, is unknown. Thus, a low-pass filter 1/(1 + θs) is introduced, where θ is the filter constant. Using the inverse Laplace transform and without considering the influence of the initial value, the following formula can be obtained: ẑ_nθ = ẑ_n/(1 + θs), i.e.,

dẑ_nθ/dt = (ẑ_n − ẑ_nθ)/θ, (16)

letting the initial value of ẑ_nθ be 0, i.e., ẑ_nθ(0) = 0 (a sketch of this filtering step is given at the end of this subsection). Correspondingly, the low-pass filter can be applied to the other variables, with their initial values likewise set to 0.

Lemma 1. Consider the continuous function G composed of G_1(x) and G_2(x), where G_1(x) and G_2(x) are both continuous mappings. After applying the low-pass filter, the following conclusion can be drawn, where G_1θ and G_2θ are the functions of G_1 and G_2 passed through the low-pass filter and ρ is the high-order truncation error.

From Lemma 1, and substituting the previous formula, we obtain

u_θ = W*ᵀ Ψ_θ + λ,

where λ = ρ + ξ_θ is the lumped error, W* = [w_1*ᵀ, w_2*ᵀ]ᵀ is the generalized weight vector, and Ψ_θ = [(η(ẑ)·dẑ_n/dt)_θᵀ, −η(ẑ)_θᵀ]ᵀ is the generalized control vector of the input u.

Adaptive Control Law Design

By adjusting Ŵ to make it approach W* as closely as possible, we finally achieve the control purpose. To this end, we design an error-based adaptive adjustment rate, with the weight error defined as W̃ = W* − Ŵ. A filtered quantity F is defined, where β and γ are positive constants. We design two auxiliary variables, P ∈ R^{2N×2N} and Q ∈ R^{2N×1}, based on F. Since both β and γ are greater than 0, it can be guaranteed that P and Q are both bounded. We calculate the above formula to obtain the integral forms of P and Q. By setting δ = ∫_0^t e^{β(τ−t)} F λ dτ, we obtain a compact expression in which the norm of δ is a bounded function, that is, ‖δ‖ ≤ δ_max.

The error vector is then defined. By subtracting the above two formulas, we obtain the error dynamics. Then, after the low-pass filter, the auxiliary variables F, P, and Q are calculated and S is obtained. An adaptive rate based on S can be designed, where the adaptation step is a positive constant.
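Equation (16) is a first-order low-pass filter. The sketch below is illustrative only (the forward-Euler discretization, step size, and variable names are my assumptions rather than the paper's implementation) and applies the zero initial value prescribed in the text.

```python
import numpy as np

def lowpass(x, theta, dt):
    """First-order low-pass filter x_theta = x / (1 + theta * s),
    integrated by forward Euler from x_theta' = (x - x_theta) / theta."""
    x_theta = np.zeros_like(x)  # zero initial value, as required in the text
    for k in range(1, len(x)):
        x_theta[k] = x_theta[k - 1] + dt * (x[k - 1] - x_theta[k - 1]) / theta
    return x_theta

# Example: smoothing a noisy state estimate z_hat_n.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
z_hat_n = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
z_hat_n_theta = lowpass(z_hat_n, theta=0.1, dt=dt)
```

The same routine can be applied to the other filtered quantities (u_θ, η(ẑ)_θ, and the auxiliary variables), each starting from 0.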
Theorem 1. If the above-mentioned adaptive rate is used, the weight error vector finally converges to a neighborhood of the origin under the condition of persistent excitation of Ψ_θ.

Proof. We define the Lyapunov function as a quadratic form of the error variables; differentiating it along the error dynamics provides the bound to be analyzed. Since Ψ_θ is persistently exciting, there is a positive constant κ such that the excitation condition holds for ∀t > 0. It can be seen from the auxiliary variable P that its minimum eigenvalue is bounded from below; from Theorem 1, we know that λ_min(P) > κ, so the corresponding term is negative definite. By applying Young's inequality to W̃ᵀΦ|ν̃|, B_n γ_max|ν̃|, and ‖W̃‖δ_max, and substituting the result into the above formula, we obtain the final estimate. Equation (50) reveals that, by appropriately increasing the gain parameter k and the correction parameter β, the required minimum-eigenvalue condition can be ensured. From the Lyapunov theorem, we know that the errors ν̃ and W̃ are bounded and converge to a compact set near the 0 point. At the same time, from Equations (35) and (43), and since z̃ is bounded, we conclude that ν̃, e, and ê are bounded and that the weight vector Ŵ is bounded. From Equation (41), the control signal u is bounded. The proof is completed. □

In this paper, the adaptive law based on an MTN ensures that the estimated weight vector approaches the true weight vector with infinitely small errors. At the same time, compared with the traditional dual-neural-network identification method, the number of calculations is reduced, and the identification of the unknown dynamics of the entire system is completed.

Simulation Example

Consider the servo-hydraulic system of [28], as illustrated in Figure 2. The system has typical strict-feedback nonlinear characteristics, where x_q is the output displacement, F_a is the output driving force of the hydraulic drive, P_i is the pressure, m is the mass of the load, k_s is the spring coefficient, and c is the damping coefficient. The system model is as follows:

where f_22(x) = (4β_e/(V_t m)) C_t and f_23(x) = (4β_e ω/(V_t m)) χ. V_t is the total volume of the hydraulic cylinder, β_e is the elastic modulus of the hydraulic fluid, ω is the effective acting area of the piston in the hydraulic cylinder, and χ is the effective conversion ratio of the servo-valve input and output. In order to verify the validity, we select data close to reality.
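For orientation, the following sketch simulates only the load part of such a plant, the mass-spring-damper relation m·x_q'' + c·x_q' + k_s·x_q = F_a implied by the variables defined above; all numeric values and the driving force are placeholders, not the paper's parameters.

```python
import numpy as np

# Placeholder parameters: load mass, damping coefficient, spring coefficient.
m, c, k_s = 1.0, 0.5, 2.0
dt, T = 1e-3, 10.0

def step(x_q, v_q, F_a):
    """One forward-Euler step of m * x_q'' + c * x_q' + k_s * x_q = F_a."""
    a_q = (F_a - c * v_q - k_s * x_q) / m
    return x_q + dt * v_q, v_q + dt * a_q

x_q, v_q, trace = 0.0, 0.0, []
for k in range(int(T / dt)):
    F_a = 1.0 + 0.1 * np.sin(k * dt)  # assumed drive, shaped like y_d below
    x_q, v_q = step(x_q, v_q, F_a)
    trace.append(x_q)
```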
In Figure 3, the BPNN controller and RBFNN controller are given. As traditional control methods, the neural network controllers work well and have the ability to resist disturbance. These experimental results show that the method proposed in this paper is faster than the other two.

To accurately assess the performance of the three control methods, we employed three error metrics: (1) the Root Mean Square Error (RMSE), the square root of the mean of the squared differences between the actual and predicted values, which is sensitive to outliers in the data; (2) the Mean Absolute Error (MAE), which measures the average distance between the model's predicted values and the actual values and is less sensitive to outliers; and (3) the Mean Absolute Percentage Error (MAPE), a relative measure that quantifies the accuracy of predictions using relative error. The values of these metrics obtained using the three control methods are shown in Table 1. As shown in Table 1, the proposed method outperforms the NN and the RBF in terms of most metrics.

In order to verify the tracking performance of the system, y_d = 1 + 0.1 sin(t) was chosen as the desired signal, and the system outputs are illustrated in Figure 4. Similarly, the results of the three indicators are shown in Table 2. Figure 4 presents the system output and the ideal tracking signal, highlighting that the system has a good tracking performance, i.e., the effectiveness of the proposed method is confirmed by the simulation results.
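The three metrics reported in Tables 1 and 2 follow directly from their definitions above; a minimal sketch (the variable names and the synthetic data are mine, not the paper's):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error: sensitive to outliers."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute deviation, less outlier-sensitive."""
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error: mean relative error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Example against the tracking reference y_d = 1 + 0.1 sin(t).
t = np.arange(0.0, 10.0, 0.01)
y_d = 1.0 + 0.1 * np.sin(t)
y = y_d + 0.01 * np.random.default_rng(2).normal(size=t.size)  # stand-in output
print(rmse(y_d, y), mae(y_d, y), mape(y_d, y))
```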
Discussion

This paper proposes an output feedback control method based on the MTN that is appropriate for strict-feedback nonlinear systems, so that the system output can automatically track the desired signal. In the proposed method, the original strict-feedback system is first transformed, and the state variables are redefined to obtain the new standard form. A state observer is then designed, and the identification of the adaptive system is completed by exploiting the good approximation characteristics of the MTN. Based on this, the adaptive control law is designed to complete the system tracking process. Numerical simulations on a servo-hydraulic system model as the control object verify the effectiveness of the suggested method.

Author Contributions: Methodology, Q.S. and Y.Z.; Software, Q.S.; Validation, S.W.; Formal analysis, X.J.; Investigation, Q.S. and C.Z.; Writing-original draft, Q.S. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported in part by the Natural Science Foundation of the Higher Edu-

Table 1. The unit step response comparison among three metrics obtained using different control methods.

Table 2. The tracking response comparison among three metrics obtained using different control methods.
2023-12-03T16:06:05.990Z
2023-11-30T00:00:00.000
{ "year": 2023, "sha1": "93488284bdf8fb048952090651e88722437b6c91", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/13/23/12864/pdf?version=1701339792", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6edb477c205adaf100a08ae1cc713c804587bee9", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
256213607
pes2o/s2orc
v3-fos-license
The Effect of Citrus aurantium on Non-Small-Cell Lung Cancer: A Research Based on Network and Experimental Pharmacology Purpose To screen the main active components of Citrus aurantium through a network pharmacology approach, construct a component-disease target network, explore its molecular mechanism for the treatment of non-small-cell lung cancer (NSCLC), and validate it experimentally. Methods The active ingredients in Citrus aurantium and the targets of Citrus aurantium and NSCLC were collected through the Traditional Chinese Medicine Systematic Pharmacology Database and Analysis Platform (TCMSP), GeneCards, and OMIM databases. The protein interaction network was constructed using the STRING database, and the component-disease relationship network graph was analyzed using Cytoscape 3.9.1. The Metascape database was used for GO and KEGG enrichment analyses. The Kaplan-Meier plotter was applied for overall survival analysis of key targets of Citrus aurantium in the treatment of NSCLC. Real-time PCR (RT-PCR) and Western blotting were used to determine the mRNA and protein levels of key targets of Citrus aurantium for the treatment of NSCLC. Results Five active ingredients of Citrus aurantium were screened, and 54 potential targets for the treatment of NSCLC were found, of which the key ingredient was nobiletin and the key targets were TP53, CXCL8, ESR1, PPAR-α, and MMP9. GO and KEGG enrichment analyses indicated that the mechanism of nobiletin in treating NSCLC may be related to the regulation of the cancer signaling pathway, the phosphatidylinositol-3 kinase (PI3K)/protein kinase B (Akt) signaling pathway, the lipid and atherosclerosis signaling pathway, and the neurodegenerative signaling pathway. The experimental results showed that nobiletin could inhibit the proliferation of NSCLC cells, upregulate the levels of P53 and PPAR-α, and suppress the expression of MMP9 (P < 0.05). Conclusion Citrus aurantium can participate in the treatment of NSCLC through multiple targets and pathways.

Introduction Currently, lung cancer remains one of the most common cancers and has a very high mortality rate, accounting for approximately 18% of all cancer-related deaths [1]. According to the histological classification, lung cancer is divided into small-cell lung cancer (SCLC, 15% of all lung cancers) and non-small-cell lung cancer (NSCLC, 85% of all lung cancers) [2]. The process of NSCLC development is complex and diverse, involving multiple signaling pathways, such as the PI3K/Akt signaling pathway, human matrix metalloproteinase 9 (MMP9), cyclin-dependent kinase 1 (CDK1), and the Wnt/β-catenin signaling pathway [3][4][5][6]. According to recent research results, the treatment of NSCLC is mainly based on Western medicine, including surgery, chemotherapy, radiotherapy, targeted therapy, and immunotherapy [7,8]. However, these treatments are often accompanied by undesirable consequences such as susceptibility to recurrence and metastasis, poor prognosis, and high costs [9]. In addition, although many targeted drugs used to treat NSCLC (e.g., EGFR-TKI and ALK-TKI) can prolong the survival of patients with advanced disease, resistance to these drugs limits their long-term efficacy [10,11]. Studies have shown that Chinese medicine can be very helpful in the adjuvant treatment or prognosis of NSCLC, reducing the adverse effects of EGFR-TKIs, improving disease-free survival (DFS), and offering better tolerability [12,13].
Aurantii Fructus, also known as ZhiQiao, is the dried unripe fruit of Citrus aurantium L. and its cultivated variants, and is a traditional Chinese medicine. According to previous studies, Citrus aurantium has several potential pharmacological effects, such as promotion of intestinal motility [14], antidepressant effects [15,16], anti-kidney stone effects [17], and anti-hepatotoxicity effects [18]. In addition, studies have shown that Citrus aurantium also has potential therapeutic effects on cardiovascular disease and cancer [19]. Weifuchun tablet is a proprietary Chinese medicine containing three Chinese herbs, namely, red ginseng, Isodon amethystoides, and Citrus aurantium, which relieves precancerous lesions of gastric cancer by regulating intestinal microbial balance and treating atrophy and intestinal metaplasia (IM) [20]. The above studies show that Citrus aurantium is a good anticancer herbal medicine, but the therapeutic effect and mechanism of action of Citrus aurantium on NSCLC have not been reported yet and deserve further study. In this study, the protein interaction network between the active ingredients of Citrus aurantium, drug targets, and NSCLC-related disease genes was constructed through a network pharmacology approach to predict the potential targets and related pathways of Citrus aurantium for the treatment of NSCLC, which were then validated by cellular and molecular experiments, thus providing a theoretical basis for further clinical studies.

Collection of Active Ingredients and Targets of Citrus aurantium. The active ingredients of Citrus aurantium were obtained by searching "Zhiqiao" in the Traditional Chinese Medicine System Pharmacology Database and Analysis Platform (TCMSP), according to our previous study [21]. Then, we screened the list of ingredients with the criteria of oral bioavailability (OB) ≥ 30% and drug-likeness (DL) ≥ 0.18. Similarly, in the TCMSP database, the targets of the active ingredients were screened in the list of relevant targets, and a database of active ingredients and targets of Citrus aurantium was created. The target names were corrected using UniProt (https://www.uniprot.org/).

Collection of Disease Targets. Two online human-related gene databases, GeneCards (https://www.genecards.org/) and OMIM (https://omim.org/), were searched using the keyword "non-small-cell lung cancer" to obtain the NSCLC-related genes. The genes from the two databases were then integrated to create a database of NSCLC disease targets.

2.3. Venn Diagram of Genes Associated with Citrus aurantium and NSCLC. The target genes of Citrus aurantium and the target genes of NSCLC were uploaded to the Venny 2.1.0 online platform to map the Venn diagram and obtain the crossover genes of Citrus aurantium and NSCLC, i.e., the drug-disease cointeraction target genes (see the sketch at the end of this section).

Construction and Analysis of Protein Interaction Network. The information related to the Citrus aurantium active ingredients and NSCLC target genes was imported into the network visualization software Cytoscape 3.9.1 (https://cytoscape3.9.1.org//) to construct a network diagram of the Citrus aurantium active ingredient-NSCLC target gene relationship. The data were analyzed using the CentiScaPe2.2 plug-in in Cytoscape 3.9.1 to calculate the node parameters of each active ingredient in the network.

GO and KEGG Enrichment Analyses. The crossover genes of Citrus aurantium and NSCLC were entered into the STRING database (https://string-db.org/), and "Homo sapiens" was selected to map the protein interaction network.
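The screening and intersection steps above reduce to a filter and a set intersection. A minimal Python sketch follows; the table layout, column names, and all numeric values are illustrative assumptions, not actual TCMSP data.

```python
import pandas as pd

# Hypothetical ingredient table mimicking a TCMSP export (values are made up).
ingredients = pd.DataFrame({
    "molecule": ["nobiletin", "ingredient_a", "ingredient_b"],
    "OB": [61.7, 23.1, 35.0],   # oral bioavailability, %
    "DL": [0.52, 0.78, 0.10],   # drug-likeness
})

# Screening criteria from the text: OB >= 30% and DL >= 0.18.
active = ingredients[(ingredients["OB"] >= 30) & (ingredients["DL"] >= 0.18)]

# Drug-disease crossover genes = intersection of the two target sets.
drug_targets = {"TP53", "CXCL8", "ESR1", "PPARA", "MMP9", "CAT"}
disease_targets = {"TP53", "CXCL8", "MMP9", "EGFR"}
crossover = drug_targets & disease_targets
print(active["molecule"].tolist(), sorted(crossover))
```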
Subsequently, data analysis was performed using the CentiScaPe2.2 plug-in in Cytoscape 3.9.1 to screen for key targets of Citrus aurantium in NSCLC. Then, the crossover genes were entered into the Metascape database, "Homo sapiens" was selected, GO and KEGG enrichment analyses were performed, the data obtained from the analyses were stored, P values were calculated, and the relevant data were entered into the online platform Weishengxin (https://www.bioinformatics.com.cn/) to plot bubble plots.

2.6. Kaplan-Meier Analysis. The Kaplan-Meier plotter (https://kmplot.com/analysis/) is able to assess the impact of 54,000 genes on survival rates for 21 cancer types. Among them, the largest datasets include breast, ovarian, lung, and gastric cancers [22]. In this study, it was used to assess the prognostic value of the mRNA expression of the key targets of nobiletin in NSCLC. Each of the 12 key targets was uploaded to the database to obtain a Kaplan-Meier survival plot, where the number at risk is shown below the main plot. When P < 0.05, the results are considered significantly different. In this study, the threshold value with the best performance was used as the cutoff value, and "array quality control" was set to "no filtered array quality."

2.7. Drugs and Reagents. Dulbecco's modified Eagle medium (DMEM), fetal bovine serum (FBS), 0.25% trypsin-EDTA, and penicillin-streptomycin solution were purchased from Gibco (Logan, Utah, USA). Real-time PCR kits were purchased from Takara Co., Ltd. (Dalian, China). Rabbit anti-p53 and β-tubulin antibodies were purchased from CST Inc. (Boston, MA, USA). Chemiluminescent substrates were purchased from Pierce (Rockford, IL, USA).

2.9. CCK-8 Analysis. Cell viability was assayed using the Cell Counting Kit-8 (CCK-8). NSCLC cells in logarithmic growth phase were inoculated at 5 × 10^3 cells per well into 96-well microplates and cultured for 24 h. Then, the NSCLC cells were treated with different concentrations of nobiletin (0, 10, 20, and 40 μM) for 12, 24, and 48 h. CCK-8 solution (10 μL) was added to each well and incubated for another 1 h. Finally, the absorbance of each well was measured at 450 nm with a multivolume spectrophotometer system (BioTek Instruments Inc., USA).

2.10. Clone Formation Test. NSCLC cells (1 × 10^3 cells/well) in logarithmic growth phase were inoculated in 6-well culture plates and incubated at 37°C for 48 h. The cells were fixed with 4% paraformaldehyde (Solaibo, Beijing, China) for 10 min and stained with crystal violet (Sigma-Aldrich, China) for 30 min, and colonies containing more than 10 cells were counted under a microscope.

2.11. Real-Time PCR. Total RNA was extracted from cultured cells using TRIzol reagent (Invitrogen). The mRNA was subsequently quantified, cDNA was synthesized by reverse transcription, and the mRNA levels of the target genes were measured using a Bio-Rad quantitative PCR instrument. The specific primers used for RT-PCR are shown in Table 1, and GAPDH was used as an endogenous control. To ensure the validity and accuracy of the data, all reactions were performed three times.

2.12. Western Blotting. Total protein was extracted from the cells using RIPA lysis buffer, and the extracted protein was then quantified using the BCA protein quantification kit according to the manufacturer's instructions.
Subsequently, equivalent amounts of protein were separated on 10% SDS-PAGE and then transferred to poly(vinylidene fluoride) (PVDF) membranes, which were blocked against nonspecific binding. Primary antibodies were added and incubated for 1-2 h at room temperature, followed by incubation with secondary antibodies for 1 h at room temperature. Target protein expression was detected by chemiluminescence, and the bands were analyzed in grayscale with the ImageJ software.

2.13. Statistical Analysis. All data were expressed as mean ± standard deviation, and statistical analysis was performed using the SPSS 13.0 statistical software. One-way analysis of variance (ANOVA) was used for comparison of means between multiple groups. Differences were considered statistically significant at P < 0.05.

Establishment of a Database of Active Ingredients and Targets of Citrus aurantium. Through the TCMSP database, 17 active ingredients of Citrus aurantium were retrieved, and the list of ingredients was screened by oral bioavailability (OB) ≥ 30% and drug-likeness (DL) ≥ 0.18; finally, five highly active ingredients were obtained (Table 1). Similarly, 124 relevant targets of the five highly active ingredients were retrieved from the TCMSP database. Target names were corrected and deduplicated using UniProt to finally obtain 80 relevant target genes of Citrus aurantium. Based on the GeneCards and OMIM databases, a total of 6177 target genes related to NSCLC were obtained. The Venn diagram of Citrus aurantium and NSCLC was drawn using the Venny 2.1.0 online platform, and 54 crossover genes were obtained by analysis (Table 2 and Figure 1(a)).

Protein Interaction Network Analysis. The protein interaction network graph of Citrus aurantium with NSCLC was obtained by entering the crossover genes into the STRING online platform and hiding the free nodes outside the network (Figure 1(b)). After analysis, there are 52 nodes and 195 edges, with an average node degree of 7.22. In the network diagram, network nodes represent proteins, edges represent protein-protein associations, and edges of different colors indicate different meanings: light blue indicates associations from curated databases and purple indicates experimentally determined associations (these two are known interactions); green indicates gene neighborhood, red indicates gene fusion, and dark blue indicates gene co-occurrence (these three are predicted interactions); in addition, yellow indicates text mining, black indicates coexpression, and white indicates protein homology. The protein interaction network map was imported into Cytoscape 3.9.1, and the data were analyzed using the CentiScaPe2.2 plug-in; 12 key targets (Figure 1(c)) were obtained after filtering based on the betweenness centrality (BC), closeness centrality (CC), and degree centrality (DC) parameters. As can be seen from the figure, the top seven genes by node degree value are TP53, CAT, ESR1, MMP9, CXCL8, MAPK14, and PPAR-α, indicating that these genes occupy key positions in the protein interaction network.

Construction and Analysis of Drug Component-Disease Target Networks. The information related to the active ingredients of Citrus aurantium and the crossover genes between drug and disease was used to construct the component-disease target network (Figure 2). Using the CentiScaPe2.2 analysis, the key component of Citrus aurantium for the treatment of NSCLC was finally identified as nobiletin, based on the betweenness centrality (BC), closeness centrality (CC), and degree centrality (DC) parameters (Table 3).
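The BC/CC/DC screening described above can be reproduced with any graph library. The sketch below uses networkx on a toy network; the edges and the above-the-mean cutoff are assumptions for illustration, since CentiScaPe's exact thresholds are not specified in the text.

```python
import networkx as nx

# Toy protein-protein interaction network (placeholder edges, not STRING output).
edges = [("TP53", "MMP9"), ("TP53", "ESR1"), ("TP53", "CXCL8"), ("TP53", "CAT"),
         ("MMP9", "CXCL8"), ("ESR1", "PPARA"), ("CAT", "MAPK14"),
         ("MAPK14", "CXCL8")]
G = nx.Graph(edges)

bc = nx.betweenness_centrality(G)  # BC: how often a node bridges shortest paths
cc = nx.closeness_centrality(G)    # CC: inverse mean distance to all other nodes
dc = dict(G.degree())              # DC: number of direct interaction partners

def above_mean(scores):
    """Nodes scoring above the network-wide mean of a centrality measure."""
    mean = sum(scores.values()) / len(scores)
    return {node for node, value in scores.items() if value > mean}

key_targets = above_mean(bc) & above_mean(cc) & above_mean(dc)
print(sorted(key_targets))
```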
Based on P < 0.01, a minimum count of 3, and an enrichment factor > 1.5, GO analysis yielded 20 biological processes, 14 molecular functions, and 11 cellular components as component targets of Citrus aurantium for the treatment of NSCLC (Figures 3(a)-3(c)). The results showed that cellular responses to organic cyclic compounds (target number 14) and responses to exogenous stimuli (target number 14) were significantly enriched in Citrus aurantium for the treatment of NSCLC, indicating that Citrus aurantium is able to treat NSCLC through multiple biological pathways. 140 pathways were obtained by KEGG enrichment analysis (P < 0.01), and the 18 pathways with the highest correlation to NSCLC were obtained by screening (Figure 3(d)). The results showed the highest enrichment in the tumor-related signaling pathway (target number 15), the lipid and atherosclerosis signaling pathway (target number 10), the PI3K-Akt signaling pathway (target number 10), and the neurodegenerative signaling pathway (target number 10), indicating that Citrus aurantium can treat NSCLC through these pathways (a sketch of this type of enrichment test is given at the end of this section).

Kaplan-Meier analysis showed that high expression of several key targets was associated with significantly improved OS in all NSCLC patients (P < 0.05); i.e., survival was higher in the high expression group than in the low expression group. In addition, CDK1 lacked relevant data.

Effect of Nobiletin on the Proliferation of NSCLC A549 Cells. To assess whether nobiletin could inhibit the growth of NSCLC cells, we treated NSCLC cells with different concentrations of nobiletin (0, 10, 20, and 40 μM) for 12, 24, and 48 h and then measured cell viability using the CCK-8 assay. The results showed that nobiletin inhibited the proliferation of NSCLC cells in a dose-dependent manner (Figures 5(a) and 5(b)). In addition, clonogenesis assays confirmed that nobiletin inhibited the colony-forming ability of NSCLC cells in a dose-dependent manner compared to untreated cells (Figures 5(c) and 5(d)). The above results suggest that nobiletin inhibits the proliferation of NSCLC A549 cells.

The mRNA levels of the key targets were measured by RT-PCR (Table 4). The protein expression level of p53 was detected by the Western blotting method. The results showed that nobiletin dose-dependently increased the mRNA and protein expression of P53 (Figures 6(a) and 6(b)). In addition, we also found that nobiletin was able to increase the mRNA expression of PPAR-α and inhibit the mRNA level of MMP9 (Figures 6(e) and 6(f)). Moreover, nobiletin had no significant effect on the expression of CXCL8 and ESR1 (Figures 6(c) and 6(d)).

Discussion

Lung cancer is one of the malignant tumors with the highest incidence and mortality rates worldwide, but the current clinical treatment is not satisfactory [3]. In this study, we explored the potential therapeutic targets and major molecular mechanisms of Citrus aurantium for the treatment of NSCLC based on network pharmacology analysis and bioinformatics. Network pharmacology combines systems biology and pharmacology to analyze and explore the multichannel regulation of signaling pathways through high-throughput sequencing, genomics, and other technologies to find therapeutic targets and signaling pathways of diseases [23]. Based on network pharmacology, we collected 5 active ingredients, 80 drug targets, and 6177 disease targets in this study and obtained 54 crossover genes. The results of the protein interaction network combined with enrichment analysis were used to identify nobiletin as the key compound of Citrus aurantium for the treatment of NSCLC, together with five key targets: TP53, CXCL8, ESR1, PPAR-α, and MMP9.
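Enrichment calls of the kind reported above are commonly based on a hypergeometric test. The sketch below is an assumption about the statistics (Metascape's exact pipeline is not reproduced here, and all numbers are illustrative) and applies the stated thresholds: P < 0.01, at least 3 hits, and an enrichment factor > 1.5.

```python
from scipy.stats import hypergeom

def enrich(hits, term_size, query_size, background):
    """Hypergeometric enrichment of one GO/KEGG term.

    hits       : crossover genes annotated to the term
    term_size  : background genes annotated to the term
    query_size : size of the query list (e.g., the 54 crossover genes)
    background : total number of annotated background genes
    """
    p = hypergeom.sf(hits - 1, background, term_size, query_size)
    factor = (hits / query_size) / (term_size / background)
    return p, factor

# Illustrative call for a single pathway-like term.
hits = 10
p, factor = enrich(hits, term_size=350, query_size=54, background=20000)
keep = (p < 0.01) and (hits >= 3) and (factor > 1.5)
print(p, factor, keep)
```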
GO biological process analysis showed that Citrus aurantium may treat NSCLC through biological processes such as the cellular response to organic cyclic compounds, the cellular response to lipids, and the negative regulation of intracellular signal transduction. KEGG signaling pathway enrichment analysis showed that the main pathways of Citrus aurantium for the treatment of NSCLC were the tumor-associated signaling pathway, the lipid and atherosclerosis signaling pathway, the PI3K-Akt signaling pathway, the neurodegenerative signaling pathway, and the MAPK signaling pathway. The development and progression of NSCLC are synergistically regulated by multiple extracellular and intracellular signals. The TP53 gene is located on human chromosome 17 and encodes the p53 protein, which inhibits cancer formation by interacting with various related signaling pathways [24]. Activated p53 protein transcriptionally regulates hundreds of genes involved in multiple biological processes, including DNA repair, cell cycle arrest, cell growth, cell division, apoptosis, senescence, autophagy, and metabolism, thereby mediating cancer suppression [25][26][27][28]. NF-κB and Bax proteins are upstream and downstream targets of p53, respectively, and play important roles in cell growth. Guo et al. [29] showed that PAQR3 inhibits the development and progression of NSCLC through the NF-κB/p53/Bax signaling pathway. In addition, it was found that p53 expression levels positively correlated with apoptosis in NSCLC tissues and that p53 inhibited the proliferation of lung cancer cells by increasing apoptosis, thereby inhibiting tumor growth and delaying the development of NSCLC [30]. MMP9, a member of the matrix metalloproteinase (MMP) family, is located on human chromosome 20 and is one of the most important enzymes in the breakdown of the extracellular matrix, playing a key role in the invasion and metastasis of cancer [31]. Previous studies have shown that MMP9 expression can be downregulated by reducing the AKT/mTOR levels that promote H3K27Ac and H3K56Ac on the MMP9 promoter region, thereby inhibiting the proliferation and metastasis of triple-negative breast cancer [32], and that MMP-9 can also promote the invasion and migration of gastric cancer cells through the ERK pathway [33]. In addition, a study by Zhang et al. [34] found that the expression level of MMP9 was significantly higher in lung cancer patients than in normal subjects, and the higher the expression of MMP9, the worse the survival status of lung cancer patients. In the present work, we found that nobiletin significantly increased the expression level of P53 and inhibited the expression of MMP9 in A549 cells, suggesting that P53 and MMP9 may be downstream targets of nobiletin in regulating NSCLC. CXCL8 is a proinflammatory CXC chemokine, also called interleukin-8 (IL-8), involved in tumor angiogenesis and in promoting tumorigenesis and metastasis [35]. The Kaplan-Meier survival analysis showed that high IL-8 expression was associated with a poorer prognosis in NSCLC patients (P < 0.05, Figure 4(b)); a sketch of this kind of survival comparison is given below. CXCL8 interacts with CXCL1 and CXCL2 to promote the secretion of multiple proinflammatory, angiogenic, and immunomodulatory factors (including MMPs and VEGF) by neutrophils, thereby promoting tumor metastasis in patients with NSCLC [36,37]. Yan et al. [38] showed that IL-8 is highly expressed in lung cancer and suggested that it could be a potential biomarker for lung cancer with strong diagnostic properties.
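Survival contrasts such as the IL-8 result above are typically made by splitting patients at an expression cutoff and comparing Kaplan-Meier curves with a log-rank test. A minimal sketch with the lifelines package on synthetic data (not the Kaplan-Meier plotter's actual cohort or pipeline) follows.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)

# Synthetic cohort: expression values, follow-up months, event flag (1 = death).
expr = rng.normal(size=200)
high = expr > np.median(expr)                      # split at an assumed cutoff
time = np.where(high, rng.exponential(30.0, 200), rng.exponential(50.0, 200))
event = rng.random(200) < 0.7

kmf_high = KaplanMeierFitter().fit(time[high], event[high], label="high expression")
kmf_low = KaplanMeierFitter().fit(time[~high], event[~high], label="low expression")

result = logrank_test(time[high], time[~high], event[high], event[~high])
print(result.p_value)  # P < 0.05 would indicate significantly different survival
```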
In another study, the methyltransferase SETD2 was found to inhibit tumor growth and metastasis through STAT1-IL-8 signaling-mediated epithelial-mesenchymal transition in lung adenocarcinoma [39]. ESR1 is localized on chromosome 6 and belongs to the transcriptional activator superfamily; its protein product, the estrogen receptor, is a transcription factor. It was shown that hypermethylation of ESR1 was detected only in lung tumors but not in adjacent normal lung tissue, suggesting that ESR1 hypermethylation may be associated with the development of lung cancer. Assessment of p16 and ESR1 methylation in blood facilitates early diagnosis of lung cancer, and these methylated genes may be biomarkers for early lung cancer [40,41]. It has been shown that ESR1 signaling plays a biological role in both epithelial and mesenchymal cells in the lung and that ESR1 may promote lung cancer by acting directly on precancerous or tumor cells or indirectly on lung fibroblasts [42]. However, in the present study, we did not find that nobiletin was able to affect the levels of ESR1 and CXCL8. Peroxisome proliferator-activated receptor alpha (PPAR-α) is a ligand-activated nuclear receptor that regulates the transcription of target genes associated with lipid homeostasis, differentiation, and inflammation in a variety of ways [43][44][45]. Luo et al. [46] showed that intestinal PPAR-α deficiency in mice increased azoxymethane- (AOM-) induced colon tumorigenesis and tumor growth. In addition, they demonstrated by IHC staining that the PPAR-α-DNMT1/PRMT6-p21/p27 regulatory pathway may also play a role in the early stages of human colorectal carcinogenesis. Previous studies found that apatinib exerts its antitumor effects through the induction of ketogenesis and demonstrated that this tumor-suppressive effect is PPAR-α-dependent [47]. In another study, fenofibrate, a PPAR-α agonist, was found to alleviate resistance to gefitinib in NSCLC cell lines by modulating the PPAR-α/AMPK/AKT/FoxO1 signaling pathway, and the increased antiproliferative effect of fenofibrate was abolished when PPAR-α was silenced [48]. In addition, several studies have shown that activation of PPAR-α can inhibit lung cancer growth and metastasis by downregulating cytochrome P450 arachidonic acid epoxygenase (Cyp2c) [49,50]. In the present study, we found that nobiletin significantly increased the expression of PPAR-α, suggesting that PPAR-α may also be involved in the regulatory role of nobiletin in NSCLC.

Conclusion

In conclusion, we found through our study that nobiletin in Citrus aurantium may be a key active ingredient in the treatment of NSCLC. In addition, further analysis showed that nobiletin inhibited the development of NSCLC through targets such as TP53, CXCL8, ESR1, PPAR-α, and MMP9 and related signaling pathways. Through pharmacological experiments, we verified that nobiletin inhibited the proliferation of the NSCLC A549 cell line and affected the expression of P53, PPAR-α, and MMP9. In future experiments, we will further clarify the downstream targets of nobiletin that regulate the proliferation of NSCLC through gene overexpression and silencing. This study provides a reference for further research on the mechanism of action of Citrus aurantium in the treatment of NSCLC.

Data Availability

The data used to support the findings of this study are available from the corresponding authors upon reasonable request.
Conflicts of Interest The authors have declared that no conflict of interest exists.
2023-01-25T16:18:57.976Z
2023-01-23T00:00:00.000
{ "year": 2023, "sha1": "4e2c285d66f7d9854ee6f12afd666192d29a4426", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/bmri/2023/6407588.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "63221620099a57cd6269cf8ca1a7110547f4822c", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
52982027
pes2o/s2orc
v3-fos-license
Thymosins in multiple sclerosis and its experimental models: moving from basic to clinical application

Background: Multiple sclerosis (MS) afflicts more than 2.5 million individuals worldwide and this number is increasing over time. Within the past years, a great number of disease-modifying treatments have emerged; however, efficacious treatments and a cure for MS await discovery. Thymosins, soluble hormone-like peptides produced by the thymus gland, can mediate immune and non-immune physiological processes and have gained interest in recent years as therapeutics in inflammatory and autoimmune diseases. Methods: PubMed was searched with no time constraints for articles using a combination of the keywords "thymosin/s" or "thymus factor/s" AND "multiple sclerosis", MeSH terms, with no language restriction. Results: Here, we review the state-of-the-art on the effects of thymosins on MS and its experimental models. In particular, we describe what is known in this field on the roles of thymosin-α1 (Tα1) and -β4 (Tβ4) as potential anti-inflammatory as well as neuroprotective and remyelinating molecules and their mechanisms of action. Conclusion: Based on the data that Tα1 and Tβ4 act as anti-inflammatory molecules and as inducers of myelin repair and neuronal protection, respectively, a possible therapeutic application in MS for Tα1 and Tβ4, alone or combined with other approved drugs, may be envisaged. This approach is reasonable in light of the current clinical usage of Tα1 and data demonstrating the safety, tolerability and efficacy of Tβ4 in clinical practice.

Multiple sclerosis afflicts more than 2.5 million individuals worldwide (Ascherio and Munger, 2007a, b). This enigmatic immune-mediated disease of the central nervous system (CNS) displays progressive neurodegeneration due to immune attacks directed against myelin constituents that cause myelin destruction (Lassmann et al., 2007) in genetically susceptible individuals. Treatment development in the past years has been extremely active, and a great number of potential new therapies have emerged for patients with MS.
MS therapeutic approaches have focused for the past three decades on strategies to reduce inflammation and immune activation, acting on immunomodulation, on the migration of specific inflammatory cell subsets, or on agents directly mediating neuroprotection and neurorestoration. Disease-modifying therapies have shown beneficial effects in patients with relapsing-remitting MS (RRMS). The majority of these drugs are, however, of little benefit during progressive MS, where axonal degeneration following demyelination outweighs inflammation. For obvious reasons, this discrepancy in the therapeutic efficacy of approved drugs has sparked great interest in the development of new remyelination therapies aimed at providing neuroprotection and functional recovery. The monotherapeutic strategy of suppressing immune cells from attacking the CNS is insufficient to prevent or ameliorate permanent and accumulating MS deficits (Kremer et al., 2015). Additional therapies for MS are urgently required to enhance remyelination and to reduce axonal damage in order to improve functional recovery. Demyelination is the loss of myelin sheaths from axons, which results from damage to and death of myelin-producing oligodendrocytes (OLs). Remyelination requires oligodendrocyte progenitor cells (OPCs) to differentiate into mature myelinating OLs, because mature OLs in the adult mammalian CNS are postmitotic and, thus, unable to proliferate in response to injury and to form new myelin sheaths (McTigue and Tripathi, 2008; Keirstead and Blakemore, 1997). The failure of newly generated OLs to remyelinate axons and to preserve axonal integrity impedes functional recovery in MS (Kremer et al., 2015). Cell therapies, including stem cell transplantation, have potential for CNS repair and may be able to provide protection from inflammatory damage caused after injury (Rice et al., 2013). The glial cell communication network, including astrocytes and microglia, also plays important roles in de/remyelination (Domingues et al., 2016). Thus, it is important to investigate a treatment strategy for MS that would pair immunomodulation with neuroprotection and neurorestoration. Many new molecules or previously approved drugs identified by high-throughput screens appear to have a positive impact on remyelination and neuroprotection and are in the therapeutic pipeline, with their use envisaged alone or as part of a combination therapy including immunomodulatory drugs for the treatment of MS (Harlow et al., 2015). Although the beneficial effects of these drugs on CNS cells are encouraging, careful study of off-target effects will need to be undertaken, given that many of these drugs were originally utilized for non-CNS targets; this also applies to the thymic hormone-like molecules, the thymosins.

The use of thymosins in MS

Evidence is emerging that thymic peptide hormones, such as thymosins, have anti-inflammatory potential in inflammatory as well as autoimmune diseases (Lunin and Novoselova, 2010). In this context, in early studies conducted using thymosin fraction V to treat old mice (Endoh and Tabira, 1990) or guinea pigs (Woyciechowska et al., 1985) subjected to experimental autoimmune encephalomyelitis (EAE), the animal model of MS, the treatment showed no suppressive effect on the incidence and severity of disease. In the first study (Endoh and Tabira, 1990), however, EAE was induced in aged mice, in which susceptibility to disease may be significantly reduced.
Indeed, in the same setting, treatment with another thymic protein, serum thymic factor, also showed no effect, in contrast to the observations of other groups describing an intensive suppression of EAE symptoms (Nagai et al., 1982). Moreover, thymosin fraction V is a partially purified mixture of 10 major and at least 30 other polypeptides from the thymus gland with extremely varied and important biological properties, which may act individually, sequentially, or in concert to influence the development of T cell subsets and which could also mediate inhibitory effects not observed in experiments conducted with single purified thymosins or mixtures thereof (Hoch and Volk, 2016; Goldstein and Badamchian, 2004). Indeed, therapeutic benefits were observed in the damaged CNS in neurological disorders, including MS, when the synthetic form of thymosin β4 (Tβ4), a single peptide purified from thymosin fraction V that is able to pass the blood-brain barrier (Mora et al., 1997), was exogenously administered. Using animal models of neurological injury, studies have demonstrated that Tβ4 can target multiple neural cells (including neurons, oligodendrocytes and microglia) and can also provide neuroprotection, immunosuppression, and neurorestoration, including remyelination, synaptogenesis, and axon growth (Zhang et al., 2016a,b; Chopp and Zhang, 2015; Santra et al., 2012; Santra et al., 2016; Wang et al., 2015; Wang et al., 2012; Cheng et al., 2014). Furthermore, studies showed that purified prothymosin α (ProTα), the precursor of thymosin α1 (Tα1), is able to regulate the defective phenotype of monocytes in MS, impacting T cell activation (Reclos et al., 1987; Baxevanis et al., 1990). More recently, the anti-inflammatory potential of Tα1 on the differentiation of regulatory subsets of lymphocytes was also studied in MS (Giacomini et al., 2017). In this review, we describe in detail what is known on the role of Tβ4 and Tα1 in the promotion of remyelination and anti-inflammatory responses in MS and their potential mechanisms of action.

Search strategy and selection criteria

A literature search was carried out on the PubMed/Medline database. The authors deemed it important not to miss any potentially relevant study; therefore, a comprehensive search strategy was set, and no limits were fixed as to language or date of publication. Based on the review topic, the search strategy included controlled vocabulary and free-text words, synonyms and MeSH terms relating to thymosin ("thymosin/s", "thymus factor/s", "thymus peptide/s", "thymic peptide/s", "thymosin fraction 5", "thymosin beta 4", "thymosin alpha 1", "prothymosin"), combined with the main search filter (multiple sclerosis). The final reference list was generated considering titles relevant to the review topic based on the online searches updated to June 2018, as well as citations from other bibliographies or authors' suggestions.

Thymosins: the old and the new

The thymus gland produces soluble hormone-like peptides that can mediate immune and non-immune physiological processes. Thymosin was originally prepared as a crude extract of mouse or rat thymus gland in 1966. In the following decades, the first biologically active thymic extract was purified and called thymosin fraction V, the fractionation of which led to the isolation of a series of immunoactive polypeptides, thus named thymosins (Goldstein, 2007). However, these molecules are genetically unrelated, are distributed widely throughout most tissues, and play important, yet very different, roles in cells.
The active peptides are typically short, highly charged, with no or few aromatic amino acids, and are therefore intrinsically unstructured proteins under natural conditions (Hoch and Volk, 2016). Starting with fraction V, several main peptides (ProTα, Tα1, polypeptide β1 and different Tβ peptides) were isolated and tested for biological activity (Table 1). Thymosins are divided into 3 main groups based on the isoelectric focusing pattern of thymosin fraction V: α-thymosins below pH 5.0, β-thymosins between pH 5.0 and 7.0, and γ-thymosins above pH 7.0. The numerical subscript simply denotes the chronological order of isolation. The first two peptides isolated from fraction V were Tα1 and polypeptide β1. In general, the peptides isolated from the β region of thymosin fraction V do not appear to be thymus-specific products. The most predominant band in the β region is polypeptide β1, which did not show any biological activity, though it was the most prominent component of thymosin fraction V. Later, it was identified as a 74-amino-acid-residue fragment of ubiquitin, lacking two glycine residues at the C-terminus (Hoch and Volk, 2016). The next thymosin to be isolated and sequenced was termed Tβ4. Subsequently, Tβ8, Tβ9, Tβ10 and Tβ15 were isolated (Hoch and Volk, 2016). Many orthologs of human thymosin genes sharing chromosome location and/or sequence similarities have been characterized in different species. For a summary of the main characteristics and tissue distribution of the thymosin genes in human and mouse, see Table 2. Several attempts were made to characterize the cellular receptors involved in the recognition of thymosins; however, so far, none have been identified (Rinaldi Garaci et al., 1985; Brelinska and Warchol, 1982). To date, the main members of the thymosin family are considered to be Tα1 and its precursor ProTα, as well as the β-thymosins (Mosoian, 2011). ProTα is a 12.5-kDa, highly acidic protein, widely distributed in different cell types and present at both intracellular and extracellular levels. In humans, a family of 7 genes, 6 of which are considered pseudogenes, encodes ProTα (Mosoian, 2011). The major intracellular functions of ProTα are linked to chromatin remodeling, cell proliferation, differentiation and apoptosis, and ProTα was also found to be overexpressed in different cancer types (Mosoian, 2011). Extracellular ProTα shares many features with interleukin (IL)-1α, thus representing an important endogenous stimulator of the innate immune system with anti-viral, anti-cancer, anti-fungal and anti-ischemic activities, as well as an adjuvant for vaccines (Mosoian, 2011). Of note, ProTα signaling via Toll-like receptor (TLR) 4 is required for its potent anti-human immunodeficiency virus (HIV) activity in macrophages via type I IFN induction (Mosoian, 2011). The Tα1 peptide, which is only 28 amino acids long, is contained at the N-terminus of its precursor ProTα, which is roughly 100 amino acids long. It was shown that a lysosomal asparaginyl endopeptidase (the so-called legumain) is able to proteolytically process the asparaginyl-glycine residues Asn28-Gly29 of ProTα to generate Tα1 (Sarandeses et al., 2003). Since its discovery, investigations of Tα1 were mainly performed in the area of infectious diseases. Tα1 administration was studied in a wide variety of animal and human settings, and its pharmacologic effects were shown to enhance cellular immunity and inhibit the replication of different viruses.
Based on these data, Tα1 was then used for the treatment of chronic hepatitis B and C (Iino et al., 2005; You et al., 2006), cytomegalovirus infection (Bozza et al., 2007) and invasive aspergillosis (Romani et al., 2004). In addition, the post-marketing data on Zadaxin® (SciClone) clearly confirmed the immunomodulatory activities of Tα1 and its related therapeutic potential also in cancer (such as hepatocellular carcinoma, lung cancer, and melanoma) and infectious diseases (sepsis, infections after bone marrow transplant, lung infections including chronic obstructive pulmonary disease, severe acute respiratory syndrome, and HIV) (King and Tuthill, 2016; Liu et al., 2016a; Matteucci et al., 2017; Jia et al., 2015), as well as in the improvement of immune responses in elderly immunocompromised patients (for example, enhancement of the response to vaccines) (Tuthill et al., 2012). The Tβ family is composed of 20 short (40-44 amino acids) peptides; among these, only three have been characterized in detail: Tβ4, Tβ10 and Tβ15. However, while Tβ15 was found upregulated only in different malignancies, such as human prostate cancer, representing a potential biomarker (Bao et al., 1996), in a healthy human body only Tβ4 and Tβ10 are expressed (Huff et al., 2001). These peptides play numerous different functions. Among others, they affect the processes of carcinogenesis, differentiation and angiogenesis, influence metalloproteinase activity and accelerate wound healing.

The effect of Tβ4 on neuroprotection in the MS animal model

Neuroprotection is a well-investigated effect of Tβ4 (Santra et al., 2016; Xiong et al., 2012). In spinal cord injury and traumatic brain injury models, Tβ4 treatment significantly improved locomotor and sensorimotor functional recovery and spatial learning, as well as increased the survival of neurons and OLs and reduced cortical lesion volumes after neurological injuries (Cheng et al., 2014; Xiong et al., 2012). In vitro, exogenous Tβ4 treatment significantly reduced apoptosis of neural progenitor cells subjected to oxygen-glucose deprivation (Santra et al., 2016). Moreover, Tβ4 may also drive neuroprotection in EAE. The EAE model is widely employed in investigations of MS, since it exhibits significant neurological functional deficits as well as obvious pathological changes, including demyelination and inflammatory infiltration (Procaccini et al., 2015; Eng et al., 1996). When Tβ4 was administered as a prophylactic treatment on the day of proteolipid protein peptide (PLP139-151) immunization, it significantly delayed EAE onset and evoked a significantly improved neurological functional recovery. Since mature OLs and myelin support axonal integrity and function (Nave, 2010), further investigation found that the robust functional improvement was accompanied by increased numbers of mature OLs in the CNS of the EAE mice (Zhang et al., 2009). Since the EAE model is induced by an autoimmune response, the anti-inflammatory and immunomodulatory properties of Tβ4 (Badamchian et al., 2003; Girardi et al., 2003; Sosne et al., 2007), exerted via the suppression of nuclear factor-kappa B (NF-κB) activation (Sosne et al., 2007), may protect OLs from damage and death. In addition to inflammatory cells (which infiltrate from the peripheral circulation), microglia, the resident innate immune cells of the CNS, are activated by neuroinflammation and play a pivotal role in the onset and pathological changes of the disease.
Thus, the effect of exogenous Tβ4 treatment on the inhibition of microglial activation after damage may contribute to reducing the secretion of inflammatory mediators (Zhang et al., 2016b; Zhou et al., 2015), and thereby prevent and/or reduce damage to OLs in EAE by attenuating the immune onslaught (neuroprotection). The most important aspect of our study lies in the novel evidence showing that prophylactic Tβ4 treatment may contribute to remyelination, since this treatment was able to promote an increase in new OLs generated from OPC proliferation and differentiation (Zhang et al., 2009).

The effects of Tβ4 on OPC differentiation and remyelination in the demyelination models

To further investigate the remyelination effects of Tβ4, the treatment window was delayed to permit the full development of demyelination damage in the CNS of EAE animals. When the therapeutic Tβ4 treatment was administered after the onset of EAE symptoms instead of on the day of PLP immunization, functional outcomes revealed that this treatment approach evoked significant functional benefit (Zhang et al., 2016a) and concurrently increased OLs in the demyelinating CNS. At the early stage of EAE (day 7 after onset), the protein level of myelin basic protein (MBP) and the numbers of OLs were strongly decreased. Conversely, OLs were significantly increased after Tβ4 treatment, suggesting that the newly generated OLs formed new myelin and contributed to improved functional recovery. This hypothesis was supported by results showing that OPC differentiation was also significantly induced, axons were re-wrapped, and axonal damage was reduced after Tβ4 treatment at the late stage of EAE (day 30 after onset), and that OPC differentiation significantly correlates with the neurological functional score (Zhang et al., 2016a). This Tβ4 treatment approach increased myelin areas and improved functional outcome in the late disease stage, demonstrating a remyelination effect of Tβ4 in addition to its neuroprotective and anti-inflammatory effects. Complementing the EAE model, an additional demyelination model, induced by a cuprizone diet, was employed to further confirm that the remyelination effect of Tβ4 derives from its direct action on OPCs. The cuprizone model is extensively used to study in vivo toxicity-induced demyelination (Zhang et al., 2017a), with fewer infiltrating immune cells compared with EAE (Procaccini et al., 2015). Thus, this experimental model has utility for directly investigating the effects of a therapeutic agent on OPC differentiation and remyelination. Cuprizone-fed mice with demyelination treated with Tβ4 exhibited a significant increase in remyelination, accompanied by a robust increase in newly generated OLs and an elevation of MBP density in the demyelinating corpus callosum (Zhang et al., 2016b). In concert, these results obtained from both the EAE and cuprizone models indicate that the in vivo effects of Tβ4 on OPC differentiation and remyelination are independent of its systemic anti-inflammatory effect. Data obtained from other models of neurological injury that induce demyelination and white matter damage and result in neurological deficits, including stroke, peripheral neuropathy and traumatic brain injury, likewise show the benefits of Tβ4 on OPCs and myelin after delayed Tβ4 treatments.
Rats with middle cerebral artery occlusion treated with Tβ4 demonstrated a significant overall improvement in functional outcome (Morris et al., 2010). Although lesion volumes were not reduced, Tβ4 treatment increased myelinated axons in the ischemic boundary and augmented remyelination, which was associated with an increase of OPCs and myelinating OLs (Morris et al., 2010). In a model of diabetes-induced peripheral neuropathy, extended Tβ4 treatment of diabetic mice significantly improved neurological function, which was closely associated with increased axonal regeneration and remyelination in peripheral nerves (Wang et al., 2015). Traumatic brain injury remains a leading cause of mortality and morbidity worldwide with no effective pharmacological treatments. Tβ4 treatment of traumatic brain injury in rats amplified endogenous remyelinating processes including oligodendrogenesis, neurogenesis and axonal remodeling, which appeared to drive functional recovery (Xiong et al., 2012). In vitro experiments provide evidence of the direct effects of Tβ4 on OPCs and neurons. After Tβ4 treatment of primary cultured OPCs, the differentiation of OPCs into mature OLs, identified by the protein level of MBP, significantly increased (Zhang et al., 2016a; Santra et al., 2014). Thus, data generated from these in vivo and in vitro studies, in concert, confirm the remyelination effect of Tβ4. Due to the combined benefits in neuroprotection and remyelination (summarized in Fig. 1), Tβ4 is an excellent candidate for the treatment of demyelinating diseases, also based on data demonstrating the safety, tolerability and efficacy of Tβ4 in clinical practice (Crockford, 2007; Vasilopoulou et al., 2015; Marks and Kumar, 2016). In vivo and in vitro studies have also given mechanistic insights into the therapeutic effects of Tβ4. The epidermal growth factor receptor and the TLR signal transduction pathways may contribute to CNS remyelination and recovery of function induced by Tβ4 (Zhang et al., 2016a; Santra et al., 2014). The Hedgehog signaling pathway was shown to be involved in the activation of stem cells after Tβ4 treatment, which may contribute to therapeutic benefit (Kim et al., 2017). Furthermore, bearing in mind that microRNA (miRNA)-146a may directly induce OPC differentiation into OLs (Zhang et al., 2017a; Liu et al., 2016b), Tβ4 treatment may prominently stimulate the expression of this miRNA in primary cultured OPCs, thus promoting the proliferation and differentiation of OPCs and OLs (Santra et al., 2014), which may serve as a common remyelination therapeutic mechanism. In addition to increasing miR-146a expression in OPCs, Tβ4 also remarkably increased miR-146a expression in microglia and significantly inhibited secretion of pro-inflammatory mediators (Zhou et al., 2015), all of which may promote remyelination (Crockford, 2007). Tα1 as a pleiotropic immunoregulator Tα1 has been shown to have beneficial effects on numerous immune system parameters, related to both innate and adaptive immune cells, including macrophages, neutrophils, natural killer cells and DC, in addition to the well-characterized effects on the differentiation and maturation of T cells (Serafino et al., 2012). Owing to these features, Tα1 has been used as an adjuvant or immunotherapeutic agent to treat disparate human diseases, including viral infections, immunodeficiencies and malignancies (Romani et al., 2012).
Tα1, however, has been demonstrated to be a powerful pleiotropic molecule able not only to induce anti-viral and pro-inflammatory responses, but also to promote a regulatory milieu depending on the context, as our own and others' data have shown (Romani et al., 2006; Giacomini et al., 2015). These observations are consistent with the fact that Tα1 can negatively control inflammation during immunopathological pro-inflammatory infections and diseases, exerting an interesting and, until now, incompletely understood role as a regulator of homeostasis (Romani et al., 2007). In this context it is important to underline another interesting property of Tα1, namely the modulation of regulatory T cell (Treg) function by acting on signals delivered through TLR in response to pathogen-associated molecular patterns (Romani et al., 2007; Montagnoli et al., 2006). This characteristic was also recently tested in cystic fibrosis, where Tα1 improves the altered maturation of the cystic fibrosis transmembrane conductance regulator and reduces the chronic inflammation caused by excessive activation of the innate immune response (Romani et al., 2017). Indeed, owing to its ability to activate the tolerogenic pathway of tryptophan catabolism via the immunoregulatory enzyme indoleamine 2,3-dioxygenase 1 (Puccetti and Grohmann, 2007), Tα1 specifically potentiates immune tolerance in the lung, breaking the vicious circle that perpetuates chronic lung inflammation in response to a variety of infectious noxae (Romani et al., 2006). Collectively, these data suggest that Tα1 represents a promising molecule to control inflammation, immunity and tolerance in a variety of clinical settings, including organ transplantation and tumors as well as autoimmune diseases such as MS (Serafino et al., 2012). Tα1 and the anti-inflammatory effect in MS: focus on B cells Due to the ability of Tα1 to establish a regulatory environment balancing inflammation and tolerance, we recently hypothesized a possible role for this molecule in novel therapeutic applications towards autoimmune diseases, and in particular MS (Giacomini et al., 2017). The concentration of Tα1 in human serum is very high in fetuses and newborns, when the immune system is first developing, but rapidly drops in early childhood, coincident with the maturation of T cells in the body, and remains at a steady-state level throughout adulthood. However, deregulation of Tα1 serum concentrations was found in different types of cancers and infectious diseases and, more recently, in patients affected by chronic inflammatory autoimmune diseases such as psoriatic arthritis, rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE), who have much lower levels of serum Tα1 compared to healthy controls (Pica et al., 2016). Similarly, we also recently found that serum Tα1 is significantly lower in RRMS patients than in matched controls (Giacomini et al., 2017), and we hypothesized that the deficient endogenous Tα1 level found in patients' sera could be related to the MS-associated altered inflammatory status. Just as Tα1 has traditionally been studied as a modulator of T cell responses, MS was long considered a predominantly T cell-mediated autoimmune disease. Nonetheless, over the past few years B cells have been increasingly recognized as disease-relevant in MS, and recent evidence of their immunopathogenic contribution to MS development has accumulated, highlighting their central role (Franciotta et al., 2008), historically overshadowed by the emphasis on T cell research.
Traditionally, B cells have been implicated in MS for their ability to produce pathogenic antibodies (Abs) or auto-Abs, found to be present in the CSF and brain tissue of MS patients. Accordingly, characteristic oligoclonal IgG bands in their CSF are thought to be a quite specific marker for MS (Walsh et al., 1985). Numerous B lymphocytes were also described, together with T lymphocytes, DC and plasma cells, in white matter lesions, with a very high frequency in acute lesions (Nyland et al., 1982). However, new impetus on the central role of B cells in MS pathogenesis came from the identification, in the meninges of SPMS patients, of ectopic lymphoid follicles enriched with B and plasma cells, whose establishment in the brain of these individuals could provide a microenvironment in which B cell expansion and maturation, and hence local Ig production, may occur (Serafini et al., 2004). Nowadays, however, important Ab-independent pathogenic roles for B cells are emerging, also in light of the successful results of B cell-depleting therapies in MS (Barun and Bar-Or, 2012). In particular, selective depletion of B cells via monoclonal Abs against the B cell lineage-specific surface marker CD20 (i.e. rituximab, ocrelizumab, and ofatumumab) proved to be remarkably effective in the induction of long-lasting suppression of lesion activity and clinical relapses, but showed no effect on plasma cell differentiation and Ab production (Barun and Bar-Or, 2012). Importantly, B cells can efficiently present antigens to T cells and modulate local immune responses by secreting soluble factors. In humans, different B cell subsets were described producing distinct effector cytokines. In particular, CD19+CD27- naïve B cells release mainly the anti-inflammatory IL-10, while CD19+CD27+ memory B cells largely express pro-inflammatory factors, such as lymphotoxin (LT), TNF-α and IL-6. In human B cells the effector cytokine profile is stringently context-dependent, with a reciprocal regulation of pro- and anti-inflammatory responses. In MS, this cytokine network is dysregulated, with a much lower production of the anti-inflammatory factor IL-10 (Duddy et al., 2007). Furthermore, B cells of MS patients exhibit aberrant pro-inflammatory responses, with an increased LT:IL-10 ratio and exaggerated LT and TNF-α secretion, that may mediate 'bystander activation' of disease-relevant pro-inflammatory T cells, resulting in new relapsing MS disease activity (Bar-Or et al., 2010). The term "regulatory B cells" was first introduced by Mizoguchi and Bhan to indicate a subset of B cells with the ability to produce IL-10 and to suppress inflammatory cellular immune responses (Mizoguchi and Bhan, 2006). Since then, the family of B reg has been expanding, with many subsets identified and shown to arise at different stages of B cell differentiation in a context-dependent manner (Mauri and Menon, 2015). In particular, among many others, two main B reg populations have been deeply characterized: CD24+CD38hi transitional-immature B reg arising from the CD27- B cell compartment (Blair et al., 2010) and CD24low/negCD38hi plasmablast-like B reg cells differentiating from CD27+ memory B cells (Matsumoto et al., 2014).
While much evidence indicates that B cell-derived IL-10 production is strongly downregulated in MS patients (Duddy et al., 2007; Bar-Or et al., 2010), only a few studies have so far characterized B reg populations in MS, using different Ab cocktails and leading to contrasting results (Knippenberg et al., 2011; Michel et al., 2014; Li et al., 2015; Habib et al., 2015). Based on this background, and bearing in mind the strong immunomodulatory and pleiotropic activity of the Tα1 molecule as well as our previous data showing a TLR7-driven dysregulation of the B cell response in MS patients (Giacomini et al., 2013; Rizzo et al., 2016), we set up an in vitro PBMC-based experimental procedure to study the differentiation of B reg subsets and the impact of Tα1 in this context (Giacomini et al., 2017). In particular, differently from studies on purified B cells, the mixed cell population of PBMC resembles the in vivo scenario and, upon stimulation with a specific TLR7 agonist (the so-called Imiquimod), can account for cytokine production as well as triggering of CD40 signaling, generating an in vitro B cell-proliferating milieu unique to either healthy individuals or patients. By using this setting, our study demonstrated a striking difference in the ability of B cells from RRMS patients to differentiate into both CD24+CD38hi transitional-immature and CD24low/negCD38hi plasmablast-like B reg subsets, as compared to matched controls (Giacomini et al., 2017). Very interestingly, in vitro exposure to Tα1 drastically reduced the TLR7-induced production of the pro-inflammatory cytokines IL-1β, IL-8 and IL-6, while increasing expression of the anti-inflammatory mediators IL-10 and IL-35. Accordingly, Tα1 treatment restored the ability of RRMS B cells to differentiate into IL-10-producing transitional-immature B reg and regulatory plasmablasts to the level found in healthy controls (Fig. 2). Furthermore, the B reg subsets expanded by Tα1 treatment displayed a suppressive activity, reducing both the IFN-γ and IL-17 production found in TLR7-treated PBMC cultures derived from MS patients (Giacomini et al., 2017). Unfortunately, there is no evidence on the role of Tα1 in EAE. However, Janeway and colleagues first observed that B10.PL mice lacking B cells suffered an unusually severe and chronic form of EAE, suggesting that anti-inflammatory B cells responsible for negatively regulating inflammatory reactions may be depleted (Wolf et al., 1996). Therefore, we envisage that treatment with Tα1, impacting the induction of both B and T regulatory cell subsets, may exert protective effects during EAE by reducing disease severity. In concert, these findings highlight the therapeutic potential of Tα1 in MS as well as in other autoimmune conditions that show reduced differentiation of B reg subsets and chronic immune activation. Concluding remarks Ongoing research in MS therapeutics seeks strategies to target or modulate the pro-inflammatory responses towards a more anti-inflammatory scenario, in which antigen-specific immune tolerance may be induced and, in this regard, manipulating B reg subsets may be successful. However, approved treatments for MS work by reducing immune system activity or blocking entry of immune cells into the CNS, thereby reducing relapse rate and severity of attacks, but they do not repair immune-mediated damage to the myelin sheaths surrounding axons; drug repurposing or the development of new treatments to promote myelin repair and neuronal protection is therefore desirable.
In this scenario, we reviewed the state-of-the-art on thymosins and MS, with particular attention to the neuroprotective and remyelinating action of Tβ4 and to the more recently characterized anti-inflammatory activity of Tα1. The past clinical usage of Tα1 in cancer or chronic infections, or as an adjuvant in vaccine formulations (Garaci et al., 2007), settings in which immune potentiation is needed, may seem difficult to reconcile with its use in autoimmune conditions where an anti-inflammatory action is desirable. However, our results on Tα1 potentiation of B reg differentiation in MS (Giacomini et al., 2017), together with its well-characterized pleiotropic activity able to shift inflammation toward a more anti-inflammatory milieu depending on the stimuli or pathological conditions (Romani et al., 2012; Giacomini et al., 2015) and its capacity to induce Treg generation and IL-10 production (Romani et al., 2006; Romani et al., 2017), indicate functions that are beneficial and desirable for a drug to be used in autoimmune diseases. These data may pave the way for the repurposing potential of Tα1, alone or in combination with other approved drugs, for the treatment of MS or other autoimmune conditions that display reduced differentiation of B reg subsets and chronic immune activation. In addition, Tβ4 has a broad net of protective and restorative effects on neurological degeneration and injury in the CNS. Recent studies investigated the remarkable capacity of Tβ4 not only to promote OPC proliferation, but also OPC differentiation and remyelination, and demonstrated significant improvement of functional and behavioral outcomes in animal models of MS. The ability of Tβ4 to target many diverse processes via multiple molecular pathways that drive oligodendrogenesis and axonal remodeling may also be mediated by miRNAs, particularly miR-146a (Santra et al., 2014). Thus, Tβ4 has substantial potential for clinical translation as a multiple-target therapy for MS/EAE or other neurological demyelinating diseases, with the remyelination effect supplementing its anti-inflammatory and neuroprotective roles, as previously found in studies involving prophylactic Tβ4 treatment (Zhang et al., 2009). Since low levels of endogenous Tα1 are present in the sera of patients affected by chronic inflammatory autoimmune diseases such as psoriatic arthritis, RA and SLE (Pica et al., 2016) as well as in RRMS patients (Giacomini et al., 2017), an interesting approach might also be to quantify the levels of serum Tβ4 in the same categories of patients, to investigate whether a deregulation of both these thymic peptides may be related to MS-associated autoimmune responses. The Tβ4 level in the MS population has been reported to be down-regulated in CSF (Liguori et al., 2014), and this down-regulation is a potential biomarker of MS. Furthermore, one may also envisage that treatment with currently approved disease-modifying therapies would affect a dysregulated endogenous thymosin system in MS. Thus, it may be interesting to investigate whether endogenous Tα1 and/or Tβ4 expression levels in cells or sera are modified in patients undergoing treatment with disease-modifying therapies. In particular, immune-modulatory therapies, such as IFN-β, glatiramer acetate or teriflunomide, could impact the low circulating levels of Tα1 and/or Tβ4 in MS patients.
Alternatively, the combination of synthetic Tα1 with B cell-directed therapeutics, such as the specific anti-CD20 mAbs ocrelizumab and ofatumumab, or mAbs affecting both T and B lymphocytes, including natalizumab and alemtuzumab, may complement their ability to affect the B cell compartment. The profound and rapid depletion of B cells that arises after treatment with these mAbs is then followed by repletion of lymphocytes with a more regulatory phenotype. However, in some cases (for example following CD52 depletion) there is a hyper-repopulation of immature B cells that may be responsible for secondary autoimmunity, a major drawback of alemtuzumab therapy (Baker et al., 2017). As in the combination regimens of chemotherapeutics plus Tα1 adopted for the treatment of many cancer types (Garaci et al., 2015), in MS a combination of disease-modifying therapies with synthetic Tα1 may help in addressing and correctly modulating the repopulation of the B cell compartment towards a more regulatory and anti-inflammatory phenotype. Furthermore, the induction of miR-146a mediated by Tβ4 can modulate the innate and adaptive immune response (Wu and Chen, 2016). For example, an increased level of miR-146a can suppress IFN-γ-dependent Th1 responses and reduce Th1-mediated lesions through Treg inhibition of signal transducer and activator of transcription 1 (Lu et al., 2010), and diminish adhesion of T cells to endothelial cells (Wu et al., 2015). Currently, FDA-approved disease-modifying therapies for MS mainly focus on immunomodulation, and there is a lack of remyelination treatments. In this review, Tβ4 has been shown to promote remyelination, synaptogenesis and axon growth, in addition to its immunosuppressive effect. In future clinical treatment, Tβ4 may be employed as a monotherapy or combined with other disease-modifying therapies to treat MS patients. Thus, therapeutic approaches should attempt to decrease disease severity by immunosuppression, in order to reduce the damage to myelin and axons, as well as to promote repair of damaged myelin and axons and, thereby, enhance functional recovery of MS patients. We believe that further studies will be needed to address this very interesting aspect of thymosin biology, such as the effects of Tβ4 on MS patients and those of Tα1 on the EAE model, bearing in mind that treating MS patients with Tα1 and Tβ4, alone or together with other approved drugs, may meet the requirements for a desirable but still unavailable therapy for MS acting on both immune dysregulation and CNS damage. Funding Our work was in part supported by grant GR-2016-02363749 from the Italian Ministry of Health (to MS).
Effect of Periodontal Disease on Alzheimer's Disease: A Systematic Review The aim of this review was to evaluate the relationship between periodontal disease (PD) and the onset and progression of Alzheimer's disease (AD) and to determine whether patients with PD would be at greater risk of developing AD compared to periodontally healthy subjects. This systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. An electronic search for cross-sectional, cohort, or case-control studies was conducted on five databases (PubMed, ScienceDirect, EBSCO, Web of Science, and Scopus). No restrictions were applied to the language and year of publication. Exposure was PD, and the outcome of interest was the onset and/or progression of AD. The risk of bias of the included studies was assessed using the Newcastle-Ottawa Scale (NOS) designed for non-randomized studies. Six studies fulfilling the selection criteria were included in this systematic review. Four of the studies were of cohort design and two were of case-control design. All except one showed a significant association between PD and the risk of AD onset and progression. According to the NOS bias risk assessment, three studies were found to be of good quality, and three other cohort studies were of low quality. Data from this systematic review indicate that patients with PD present a significantly higher risk of AD compared to individuals with healthy periodontium. However, results should be interpreted with caution given the methodological limitations found. For future research, powerful and comparable epidemiological studies are needed to evaluate the relationship between PD and AD. Introduction And Background Periodontal disease (PD) is an inflammatory oral disease that affects the supporting structures of the teeth (periodontium), including gingiva, periodontal ligament, cementum, and alveolar bone [1]. The disease results from complex interactions between the dental biofilm and host defense mechanisms. Bacteria and their components, like lipopolysaccharides, present in the biofilm induce an intensified host inflammatory response. This cascade of inflammatory response damages periodontal structures and ultimately leads to bone loss [1]. Being responsible for disability, speech impairment, low self-esteem, and reduced quality of life, PD has become a major public health concern that burdens the global healthcare system [2].
Alzheimer's disease (AD) is a progressive neurodegenerative disorder with a prevalence increasing exponentially with age [3,4]. It is characterized by an irreversible degeneration of neurons and neural connections beginning in the hippocampus and extending to the rest of the brain. People affected by Alzheimer's gradually lose cognitive abilities and autonomy. These symptoms consequently lead to advanced dementia and eventually to death. The cognitive decline that leads to AD has been related to two cardinal neuropathological features, beta-amyloid plaques (Aβ) and neurofibrillary tangles [5,6]. The amyloid plaques consist of deteriorating neuronal processes, or neurites, surrounding deposits of a central core protein called amyloid beta (or beta-amyloid). This protein is derived from a larger molecule called amyloid precursor protein, which is a normal component of nerve cells. The neurofibrillary tangle consists of abnormal accumulations of a phosphorylated protein called tau, located within nerve cells. This protein is normally present in neurons. Abnormal chemical changes cause tau molecules to form tangles inside neurons. There is a growing body of evidence in the literature suggesting a potential association between PD and systemic diseases, particularly atherosclerotic vascular disease, pulmonary disease, diabetes, pregnancy-related complications, osteoporosis, and kidney disease, as well as AD [2]. The periodontal medicine concept has been proposed to study the interrelationship between oral and systemic health. The presence of periodontal pathogens and their metabolic by-products may contribute to the body's overall inflammatory burden, thus promoting the development of systemic conditions [2,7,8]. It has been suggested that periodontal pathogens could promote the initiation or exacerbation of systemic diseases either by entering the bloodstream (bacteremia) or by stimulating an immuno-inflammatory response through the systemic release of toxins and local inflammatory mediators such as cytokines, prostaglandins, and serum antibodies into the bloodstream [2,7,8]. Most studies have indicated that Alzheimer's patients suffer from impaired oral health, a high incidence of PD, and an affected quality of life. These oral manifestations have been attributed to the cognitive and motor deficits related to AD, which compromise dental care and the maintenance of proper dental hygiene [9-18]. Authors have also suggested that PD may be a risk factor for AD. Hypotheses are mainly based on the involvement of periodontal pathogens and their virulence factors in the pathogenesis and progression of Alzheimer's, either by direct invasion of brain tissue or by indirect action of the bacterial load and proinflammatory mediators in the systemic circulation [19-22]. Thus, following the elevation of the systemic inflammatory response, periodontal infection would contribute to cerebral and vascular pathologies by altering brain function and, as a result, worsen the neurodegenerative process characterizing AD [19-29]. The purpose of our systematic review was to evaluate how PD is related to the onset and progression of AD and to determine whether patients with PD would be at greater risk of developing AD compared to periodontally healthy subjects.
Review Methods This systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines designed for reporting systematic reviews and meta-analyses [30,31]. The global review protocol was registered under the registration number INPLASY202080033. Our PICOS question was formulated as follows: P (patients): adults over 40 years old with AD and without AD; I (indicator): PD; C (comparator): absence of PD; O (outcome): onset or progression of AD; S (study design): observational studies. Search Strategy Five electronic databases were systematically searched: PubMed/Medline, ScienceDirect, EBSCO, Web of Science, and Scopus. The following search equation was used: ("periodontal disease" OR "chronic periodontitis" OR periodont* OR periodontitis [Mesh]) AND ("Alzheimer's disease" OR dementia OR "cognitive decline" OR "cognitive impairment" OR "cognitive dysfunction" OR "Alzheimer's disease" [Mesh]). No restrictions were applied to the language and year of publication. Some limited changes were made to the search strategy due to technical restrictions in ScienceDirect, Web of Science, EBSCO, and Scopus. Eligibility Criteria To be included in this systematic review, studies had to be observational of cross-sectional, cohort (retrospective or prospective), or case-control design. We included studies that explored whether periodontitis was associated with AD. Exposure was PD, and the outcome of interest was the onset and/or progression of AD. We excluded interventional studies that assessed the effect of periodontal treatment on AD; studies that examined mild cognitive impairment, cognitive impairment, cognitive decline, and dementia without explicitly mentioning AD; studies that evaluated other types of dementia; studies that assessed the impact of dementia and AD on oral health; studies that addressed the number of lost teeth without reference to AD; and studies that analyzed levels of periodontal inflammatory markers as the exposure of interest instead of PD. Experimental studies, case series, letters, comments, editorials, communications, and reviews were also excluded. Data Extraction and Collection Two reviewers (S.L. and A.B.) independently extracted information from each article on the first author and year of publication, country of origin, study design, sample size, age, number of years of follow-up, methods used to assess periodontitis and Alzheimer's, covariates, and results. All the selected studies were independently read by the reviewers. Every phase of data extraction was thoroughly discussed, and disagreements between researchers were resolved by consensus in consultation with a third author (L.A.). Quality Assessment of Studies The methodological quality of the included studies was assessed using the Newcastle-Ottawa Scale (NOS) by the two authors (S.L. and A.B.) individually. This scale has been developed to evaluate the quality of non-randomized studies [32].
The qualitative evaluation criteria comprised eight items belonging to three broad domains, namely, (1) sample selection of study groups, (2) comparability of groups, and (3) the ascertainment of either the exposure or outcome of interest for case-control or cohort studies, respectively [33]. A series of options is provided for each item, and a "star system" allows a semi-quantitative assessment of the study according to the three main domains already cited. Studies of the highest quality receive a maximum of one star for each item, with the exception of the one related to comparability, which allows the assignment of two stars. A study of good quality requires three or four stars in the selection domain, one or two stars in the comparability domain, and two or three stars in the outcome/exposure domain. A study of fair quality requires two stars in the selection domain, one or two stars in the comparability domain, and/or two or three stars in the outcome/exposure domain. Finally, a study of low quality requires zero or one star in the selection domain, zero stars in the comparability domain, and/or zero or one star in the outcome/exposure domain. Therefore, the NOS assigns up to a maximum of nine points for the least risk of bias across the three domains. Studies with fewer than five points are identified as representing a high risk of bias. Discrepancies in scores were resolved through discussion by the authors (a minimal sketch of this scoring logic is given a few paragraphs below). Selection of Articles The initial electronic search on all databases provided 5400 citations. After examining the titles, 4474 irrelevant articles were excluded owing to the following reasons: duplication, animal studies, reviews, letters, editorials, communications, studies in which the data did not answer the research question, and articles reporting the effect of dementia and Alzheimer's on oral and periodontal health. The abstracts of the 926 articles selected for potential eligibility were then read and examined. Thus, 910 non-eligible articles were eliminated. They include articles assessing the oral health status of Alzheimer's patients and dementia, articles studying cognitive impairment, cognitive decline, and mild cognitive impairment without reference to AD or dementia, studies addressing the effect of tooth loss or loss of posterior dental occlusion on cognitive impairment, and articles that did not comply with our inclusion criteria.
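As a concrete illustration of the NOS star thresholds quoted under "Quality Assessment of Studies" above, the following is a minimal Python sketch of that scoring logic. It is not part of the review's methodology: the function names are hypothetical, and the reading of the ambiguous "and/or" clauses in the fair- and low-quality rules is our own assumption rather than the NOS manual.

```python
def nos_quality(selection: int, comparability: int, outcome: int) -> str:
    """Classify a study from its Newcastle-Ottawa Scale star counts.

    Domain maxima: selection 4, comparability 2, outcome/exposure 3.
    Thresholds mirror the rules quoted in the text; the handling of the
    "and/or" clauses is an assumption, not the official NOS manual.
    """
    if not (0 <= selection <= 4 and 0 <= comparability <= 2 and 0 <= outcome <= 3):
        raise ValueError("star counts out of range")
    if selection >= 3 and comparability >= 1 and outcome >= 2:
        return "good"
    if selection == 2 and (comparability >= 1 or outcome >= 2):
        return "fair"
    return "low"


def high_risk_of_bias(selection: int, comparability: int, outcome: int) -> bool:
    # Studies scoring fewer than five of the nine possible stars are
    # identified in the text as representing a high risk of bias.
    return (selection + comparability + outcome) < 5


# A case-control study with the full score (nine stars) is of good quality:
assert nos_quality(4, 2, 3) == "good"
assert high_risk_of_bias(1, 0, 1)
```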
The full text of the 16 remaining articles was read and evaluated according to the predefined eligibility criteria. Ten studies were subsequently excluded for the following reasons: the exposure of interest was the evaluation of periodontal antibody levels (n = 1) [34] or of plasma tumor necrosis factor-alpha (TNF-α) and antibodies to periodontal bacteria in AD and cognitively normal patients without assessment of PD (n = 1) [35]; the aim of the study was to investigate the prevalence of oral infections and blood cytokine levels in patients with AD, mild cognitive impairment, and elders without dementia (n = 1) [19] or the determination of serum antibodies to PD bacteria in participants with AD compared to antibody levels in control subjects (n = 1) [36]; verification of the presence of periodontal pathogens and pathogen-specific antibodies was made between patients with AD and subjects having other types of dementia instead of cognitively healthy patients (n = 1) [37]; the primary outcome was serum levels of inflammatory biomarkers in AD [38] or levels of expression of AD-related genes in gingival tissues without addressing the research question (n = 2) [39]; the study compared the risk of dementia in individuals undergoing various types of periodontal treatments during the follow-up period (n = 2) [40,41]. As a result, six articles fulfilling the inclusion criteria were finally included in this systematic review. The selection process is reported in the flow diagram according to the PRISMA guidelines (Figure 1). FIGURE 1: PRISMA flow diagram of search processes and results. PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses. Characteristics of the Included Studies The characteristics of the six included studies are presented and summarized in chronological order in Table 1. They were conducted in five different countries, including France (n = 1) [42], Spain (n = 1) [43], the United Kingdom (n = 1) [44], Taiwan (n = 2) [45,46], and Sweden (n = 1) [47], and were published in English between 2012 and 2018. Four of the studies were of cohort design [42,44-46], and two were of case-control design [43,47]. Five studies identified the association between PD and the risk of developing cognitive impairment and/or various subtypes of dementia, including AD [42,43,45-47]. One study examined whether periodontitis would be associated with an increased rate of cognitive decline, regardless of the severity of dementia [44]. Arrivé et al.'s study demonstrated that PD, assessed with the community periodontal index (CPI), was not associated with the risk of dementia [42]. In contrast, Gil-Montoya et al. showed that patients with severe periodontitis had almost a three-fold risk of developing dementia in comparison with subjects without periodontitis or those having mild periodontitis [43]. This association was more significant with clinical attachment loss in all patients with dementia regardless of severity. Furthermore, Ide et al. reported that during a six-month follow-up, the presence of periodontitis was linked to an increase in cognitive decline in AD patients independently of the baseline cognitive state [44]. In agreement with this, Tzeng et al. found that patients with chronic periodontitis and gingivitis were 2.5 times more likely to develop dementia [45]. Similarly, Chen et al. showed an association between chronic periodontitis and AD, which became significant after 10 years of exposure [46]. Finally, Holmer et al.
found that patients with AD had more marginal alveolar bone loss compared to cognitively healthy controls [47]. NOS Assessment Results Assessment results according to the NOS are shown in Table 2 for cohort studies and in Table 3 for case-control studies. NOS scores ranged from two to nine points. Three studies had a low risk of bias: one of cohort design [44] and two of case-control design [43,47]. Three other cohort studies were of low quality [42,45,46], while one of the case-control studies achieved the full score [47]. Arrivé et al. used the CPI to assess exposure to PD [42]. Although the index includes periodontal pocket depth, it has been considered methodologically deficient as a screening and diagnostic tool [48,49]. In addition, Tzeng et al. and Chen et al. did not provide any description of periodontal assessment [45,46]. The diagnostic criteria for PD in Gil-Montoya et al.'s study were not appropriate, since the severity of the disease was defined by the percentage of sites with a clinical attachment loss > 3 mm, while the extent of PD differs from severity [43]. Regarding AD evaluation, Chen et al. did not report the criteria and tools used to diagnose AD [46]. In terms of follow-up, Ide et al.'s study was of short duration (six months) [44]. Regarding Arrivé et al.'s study, the follow-up from the exposure until the end was not clear [42]. Concerning the adequacy of the follow-up of cohorts, the follow-up in Arrivé et al.'s study was unclear when compared to the number of visits reported by the authors [42]. Furthermore, Chen et al. did not elucidate the loss-to-follow-up rate in the unexposed cohort [46]. Discussion Despite the high risk of bias found in most studies included in this systematic review, almost all results demonstrated that patients with PD present an increased risk of developing AD. With the exception of Arrivé et al.'s study, which showed that PD was not associated with the risk of dementia [42], all other studies proved a statistically significant association between PD and the risk of onset and/or progression of AD [43-47]. These findings corroborate the theory that chronic inflammatory diseases such as PD could play an important role in the pathogenesis of AD by progressively inducing neurodegenerative changes leading to disease development. From an immuno-inflammatory perspective, some authors have hypothesized that there is a positive association between periodontal inflammation and dementia, particularly AD. PD could lead to the development and progression of AD by increasing pro-inflammatory cytokine levels. These pro-inflammatory factors are suggested to be able to cross the blood-brain barrier during inflammatory conditions and to generate neurodegenerative changes, and thus the development of dementia [22,25,26,28,36,50,51]. In Kamer et al.'s studies, it has been shown that TNF-α and a high number of antibodies targeting periodontal bacteria are associated with AD [35], and that periodontal inflammation could affect cognitive function [20]. In addition, an association between PD and amyloid Aβ has been demonstrated for the first time in humans; the clinical parameters of PD in elders with normal cognitive status were positively associated with an increase of amyloid β accumulation in the brain [50]. In their last review, Kamer et al.
explored the hypothesis that PD is causally related to AD by reviewing the available evidence using Hill's criteria for causation (consistency of association, strength of association, specificity, temporality, biological gradient, plausibility and analogy, coherence, and experimental evidence) [52]. Although the strength of evidence was not sufficient to confirm causality, they concluded that PD, through its inflammatory and bacterial burdens, could be a biologically plausible risk factor for AD. Laugisch et al. suggested that local production of antibodies to Porphyromonas gingivalis and other periodontal pathogens in cerebrospinal fluid may occur in patients with dementia [37]. Nevertheless, no specific association has been found between periodontal infection and the onset of AD. Sparks et al. also demonstrated an increase in antibodies to PD bacteria in subjects for several years prior to cognitive impairment and suggested that PD could potentially contribute to the risk of onset/progression of AD [36]. In this sense, it has been proposed that poor oral health and PD could be considered modifiable risk factors for cognitive impairment and dementia, and early treatment of periodontitis may limit the severity and progression of cognitive lesions [22,24,26]. According to Olsen and Singhrao, oral organisms such as spirochetes, Chlamydia pneumoniae, Helicobacter pylori, herpes simplex virus type I, and Candida species could be involved in causing AD, especially anaerobic bacteria such as treponemes, P. gingivalis, Fusobacterium and Actinomyces, but also facultatively anaerobic Candida species [27]. Some reports confirm the existence of a potential infectious invasion of P. gingivalis or its components in the brain. In transgenic AD-sensitive mice, P. gingivalis was able to enter the brain and cause an inflammatory response [53]. In addition, a positive immunofluorescent reaction of P. gingivalis lipopolysaccharide has been found in four out of six brain tissue samples from AD patients [54]. More recently, a mouse model with P. gingivalis-induced experimental periodontitis exhibited memory impairment and a significant increase in amyloid plaque loads, as well as high levels of interleukin-1β and TNF-α in the brain [55]. According to the authors, periodontitis caused by this bacterium could exacerbate the deposition of Aβ in the brain, leading to worsening of cognitive impairments, by a mechanism involving brain inflammation. In their cohort study, Lee et al. found that the risk of developing dementia was significantly higher in elderly participants with periodontitis compared to patients without periodontitis [56]. They suggested that PD may be a modifiable risk factor for dementia. However, it appears from the value of the HR (1.16) that this association is not clinically significant. This study was not included in our systematic review because AD was not explicitly included among the forms of dementia. Previous systematic reviews and meta-analyses investigating the relation between oral health and dementia were based on tooth loss as the exposure of interest instead of periodontal variables (clinical attachment loss, periodontal probing depth). Shen et al. found that tooth loss is a risk factor that could be positively associated with an increased risk of dementia in older people [57], and Oh et al. revealed that more residual teeth were associated with a lower risk of developing dementia later in life [58]. In the same context, the results of the systematic review by Tonsekar et al.
[59] were inconclusive regarding tooth loss and PD as risk factors for dementia or cognitive impairment, owing to the divergent results between studies. Furthermore, the systematic review conducted by Wu et al. found inconsistent results on the association between oral health and cognitive decline in the elderly: some studies showed that oral health measures such as the number of teeth and PD were associated with an increased risk of cognitive decline or incident dementia, while no association was found in others [60]. In the systematic review and meta-analysis carried out by Leira et al. for dementia [61], the relative risk associated with severe periodontitis was higher than that for moderate periodontitis. However, the insufficient evidence does not permit fulfillment of the temporality criterion for causality between PD and dementia of the Alzheimer's type [52]. The significance of the level of severity at a specific moment might not be as critical as the Alzheimer's patient's exposure to the periodontal microorganisms and their potential entry into the brain over the course of the patient's life, facilitated by different circulatory, lymphatic, neural, and other routes [62]. The review of Leira et al. [61] included all the studies on the association between PD and AD independently of the direction (which disease impacts the other). In the present review, we have focused our review question on the effect of PD on AD. Dioguardi et al. focused on the etiopathogenetic role of PD and periodontal bacteria in the onset and progression of AD compared to unaffected patients [63]. Their systematic review was based on recent literature reviews, in vitro experiments, and clinical studies investigating the role of PD and bacteria such as P. gingivalis, Aggregatibacter actinomycetemcomitans, Fusobacterium nucleatum, and Treponema denticola in the onset and progression of AD. They deduced that there is no definitive evidence to consider periodontitis a risk factor. The systematic review and meta-analysis of observational studies carried out by Nadim et al. to investigate the influence of PD on dementia found an overall significant and positive association of PD with increased dementia, with no evidence of an impact of PD on AD [64]. The pooled RR of dementia in relation to PD from all high-quality studies included was 1.38 (95% CI: 1.01-1.90). Nevertheless, the NOS quality assessment of that review reported good quality for three cohort studies [42,45,46], while we evaluated them as having low quality. Furthermore, the authors studied the risk of dementia with its subclassification. In the present review, we opted to focus on AD for more precision. The present systematic review confirms the findings of Kaliamoorthy et al. [65], despite some differences in eligibility criteria and search strategy leading to the inclusion of other studies [42,45-47]. Recent reviews focusing on the treatment of neuro-inflammation in AD have shown interesting therapeutic results, especially at the early stages of the disease. However, data remain controversial regarding the benefits of non-steroidal anti-inflammatory drugs (NSAIDs) in reducing the risk and progress of AD [66]. Data related to targeting insulin resistance and antibiotic use have been inconclusive as well [67,68].
The restoration and maintenance of healthy gut microbiota could play a key role in the prevention and treatment of AD [67]. It has been proposed that management of the gut microbiota by probiotics may prevent or alleviate the symptoms of these chronic diseases and contribute to reducing neuroinflammation in AD. However, studies that evaluate the potential benefit of probiotic supplements in the course of AD are still needed [67,68]. Nutrition has been suggested to be a good measure for the optimization of cognition and prevention of AD's progress [67,69]. In fact, it was shown that adopting the Mediterranean dietary pattern can play a positive role in cognitive health among healthy individuals and reduce their risk of developing AD [69]. Furthermore, studies included in a recent systematic review have introduced some evidence suggesting a direct relation between diet and changes in brain structure and activity, and that higher adherence to a Mediterranean-type diet was associated with decreased cognitive decline [70]. When investigating the relationship between dental treatment and AD, it was shown that periodontal treatment contributed to reducing comorbidities associated with AD [71]. In an analysis that emulates a trial, a moderate to strong effect of periodontal treatment and subsequent maintenance treatment was found on an imaging marker of AD [72]. Hence, PD may be considered a modifiable risk factor for the onset of AD [72,73]. A structured search approach and quality assessment were applied. However, the presence of bias in the studies requires caution while interpreting the results. Several methodological limitations were found while conducting this review. First, there was a large diversity regarding sample size among studies as well as the existence of different follow-up periods. Although the review is based mainly on cohort studies [42,44-46], which have a better level of evidence than cross-sectional and case-control studies, two of the selected studies were case-control [43,47]. This study design prevents the establishment of a causal relationship and cannot support the inference that PD is a risk factor for AD. For example, in the study by Holmer et al., it is impossible to determine whether the destruction of periodontal tissue occurred before the actual onset of AD [47].
Although PD is a chronic progressive disease and bone loss usually takes many years to occur, the risk of potential reverse causality must be considered, that is, that declining cognitive function leads to poor oral health. In addition, three cohort studies [42,45,46] were of low quality, of which two were retrospective [45,46]. This design has the same level of evidence as case-control studies. Studies that are based on medical records for the diagnosis of dementia have a higher risk of bias than prospective population-based studies, which recruit samples according to adapted evaluation methods and follow-up [74]. Similarly, the lack of consistency in periodontal assessment among studies and in the definition of chronic periodontitis makes it difficult to properly assess the effect of PD on AD. As part of the 2017 global workshop on the classification of periodontal and peri-implant diseases, case definitions for periodontitis, which can be implemented in clinical practice, research, and epidemiological surveillance, have been developed [75]. There was also considerable heterogeneity regarding the criteria and cognitive assessment tools used to diagnose AD and assess the cognitive status of the participants. Another limitation was the incomplete adjustment for confounding factors. An analysis of confounders in observational studies evaluating the association between AD and PD indicated a lack of consideration of potential bias due to confounding [76]. This interferes with the confirmation that PD is a risk factor for AD and with its causality or specific role. Although studies have attempted to take these factors into account, residual confounding (due to unmeasured or unknown confounders) is still an impediment to the validity of the results [77]. It has been established that genetics can play a role in the expression of AD. Among the various phenotypes considered, the apolipoprotein E (ApoE) e4 allele is most strongly associated with AD and is considered a major risk factor [78]. In this review, no article evaluated the ApoE genotype of participants as a possible confounding factor in the association between PD and AD. Smoking was reported in two studies [42,47], whereas no information concerning this variable was given in the other studies. One study included alcohol consumption [42], and three reported a history of depression [42,45,46], while only one took into account oral hygiene habits [43]. In addition, participants' education levels were included in only three studies [42,43,47]. Potential selection bias may exist, as participants with a high level of education are more likely to request a dentist's assessment for gingivitis or periodontitis and to seek medical care for a cognitive impairment. In addition, other relevant information regarding other residual confounding factors, such as dietary factors or drug intake, was not included in these studies. These factors may have affected the reported results. It should also be noted that in Arrivé et al.'s study, the statistical calculation of the HR of the periodontal state in association with AD took the number of pockets ⩾4 mm as the reference category instead of healthy periodontal status [42]. This did not make it possible to determine the HR of the ⩾4 mm category, which remains the most relevant in comparison with gingival bleeding and the healthy state.
A meta-analysis was not performed because of heterogeneous study protocols and the low number of studies with association measures. For future research, longitudinal studies with a formal diagnosis of AD would be imperative. In addition, standardization of cognitive status assessment would ensure better interpretation of results. These studies must be well-designed and exhaustively define PD (stage and grade). The currently recommended criteria would serve to unify periodontal assessment. They should also include homogeneous and representative study samples and have an adequate follow-up period. The use of standard covariates involved in the interaction between these two diseases, as well as a global integration of confounding factors in the statistical analysis, would minimize the risk of bias. Randomized clinical trials on the effect of periodontal treatments on the course of AD are also required. Thereafter, more effective methods of patient management could be developed. TABLE 3: Quality assessment results of case-control studies with the Newcastle-Ottawa Scale. Serious limitations were found regarding inconsistency and publication bias in most studies [43,46]. The exposed cohort of Arrivé et al.'s study was not representative of the population given the lack of information regarding the characteristics of exposure to PD [42]. In addition, this study was without a control group (unexposed cohort). In the studies of Tzeng et al. and Chen et al. [45,46], cohort selection was considered unrepresentative because samples were selected from a national registry (Taiwan National Health Insurance Research Database), which is a database usually used in descriptive studies. The selection of exposed and unexposed cohorts in Tzeng et al.'s study was affected by the inclusion of young subjects (<40 years old) [45]. Control subjects in Gil-Montoya et al.'s study were from a primary health center rather than the community, thus representing only one particular group and not the general population [43].
CYP2B6 genetic polymorphisms influence chronic obstructive pulmonary disease susceptibility in the Hainan population Introduction Chronic obstructive pulmonary disease (COPD) is a lung disease closely related to exposure to exogenous substances. CYP2B6 can activate many exogenous substances, which in turn affect lung cells. The aim of this study was to assess the association of single-nucleotide polymorphisms (SNPs) in CYP2B6 with COPD risk in a Chinese Han population. Materials and methods Genotypes of the five candidate SNPs in CYP2B6 were identified among 318 cases and 508 healthy controls with an Agena MassARRAY method. The association between CYP2B6 polymorphisms and COPD risk was evaluated using genetic models and haplotype analyses. Results In the allele model, we observed that rs4803420 G and rs1038376 A were related to COPD risk. Genotypes rs4803420 G/T and G/T-T/T were related to a decreased COPD risk compared to the G/G genotype in the co-dominant and dominant models, respectively. Compared with the A/A genotype, rs1038376 A/T and A/T-T/T were associated with an increased COPD risk in the co-dominant and dominant models, respectively. Further gender-stratified analysis under the co-dominant and dominant models showed that genotypes G/T and G/T-T/T of rs4803420, and genotypes A/T and A/T-T/T of rs1038376, were significantly associated with COPD risk compared to the wild type in both males and females, while allele C of rs12979270 was only associated with COPD risk in females. Smoking-status-stratified analysis showed that rs12979270 C was significantly associated with an increased COPD risk under the allele model compared with allele A in the smoking subgroup. Haplotype analysis showed that haplotypes GTA and TAA were related to COPD risk. Conclusion Our data are the first to demonstrate that CYP2B6 polymorphisms may exert effects on COPD susceptibility in the Chinese Han population. Introduction Chronic obstructive pulmonary disease (COPD) is a leading cause of morbidity and mortality worldwide, characterized by persistent airflow limitation that is usually progressive and associated with an enhanced chronic inflammatory response in the airways and the lung to noxious particles or gases. 1,2 It has been reported that COPD affects 210 million people worldwide and causes more than 4 million deaths per year, 90% of which occur in low- and middle-income countries. 3 In China, 9-10% of adult patients and 8.2% of the population aged 40 years and older suffer from COPD. 4-6 Based on data from the World Health Organization, COPD will become the fourth most prevalent disease and the fourth most common cause of death by 2035. 7 At the same time, COPD also imposes a substantial and increasing economic and social burden. 8 The pathophysiology of COPD involves complex interactions among several factors. These factors include tobacco, marijuana, indoor pollution (eg, biomass cooking and heating in poorly ventilated dwellings), and occupational exposure to organic and inorganic dust, chemicals and smoke, as well as genetic vulnerability. 9-11 It is worth noting that COPD is a lung disease intimately linked to exposure to exogenous substances. Although many of these exogenous compounds are not inherently dangerous, they have a deleterious effect on lung cells after enzymatic activation. These activation reactions are mainly catalyzed by enzymes of the cytochrome P450 (CYP) superfamily. 12
Cytochrome P450, family 2, subfamily B, polypeptide 6 (CYP2B6) is a member of the cytochrome P450 superfamily. It has been reported that the CYP2B6 enzyme plays an important role in the biological transformation of many xenobiotics, such as cyclophosphamide, ifosfamide, ketamine, propofol, bupropion, nevirapine, efavirenz, and some carcinogens such as aflatoxin B1. 13-15 Moreover, if xenobiotics are converted into their bioactive, procarcinogenic forms, they may react irreversibly with DNA, leading to mutations and chromosomal aberrations. 16 At present, CYP2B6 mRNA 17 and protein expression 18 have been detected in human lung. An immunohistochemistry assay detected that CYP2B6 is also expressed in human Clara cells, the non-ciliated secretory cells of the bronchiolar epithelium of the mammalian lung. 19 Another report found its mRNA to be expressed in bronchial and peripheral lung tissue as well. 20 The catalytic activity associated with CYP2B6 has also been detected in human lung. 21 To sum up, these studies suggest that CYP2B6 may be closely related to the occurrence of lung disease by activating some exogenous compounds inhaled into the lungs. Hainan is a multiethnic province, in which Han and Li are the two main ethnic groups, and the incidence of COPD in the Hainan population is higher than that in other regions of the People's Republic of China. 2,22 COPD is closely related to exposure to exogenous substances, and the association between CYP2B6 variants and COPD susceptibility has not been investigated in the Hainan population so far. Therefore, in the present study, we investigated for the first time the association of five CYP2B6 gene polymorphisms (rs2099361, rs4803418, rs4803420, rs1038376, and rs12979270) with the risk of COPD in the Hainan population. Our study is expected to provide more evidence for the role of CYP2B6 in COPD pathogenesis and to contribute to early COPD risk estimation among individuals of Chinese ancestry. Ethics statement The study was approved by the ethics committee of Hainan Province People's Hospital of China, in accordance with the principles of the Helsinki Declaration and following national and international guidelines. Each participant was informed both in writing and verbally of the procedures and purpose of our research and signed a written informed consent before donating 5 mL of venous blood for further analyses. Research participants This case-control study consisted of 318 patients and 508 controls, consecutively recruited between July 2016 and July 2018 at the Hainan Province People's Hospital of China. All subjects in our study were unrelated individuals recruited from Hainan province, China. All patients were diagnosed with COPD based on patient history, physical examination, and pulmonary function, according to the criteria of the National Heart, Lung and Blood Institute (NHLBI)/WHO Global Initiative for Chronic Obstructive Lung Disease (GOLD). 23 The inclusion criteria for COPD patients were a post-bronchodilator forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC) ratio of less than 70% and an FEV1 of less than 80% predicted. COPD severity was classified according to the GOLD guidelines based on FEV1 percent predicted: mild (>80%), moderate (50-80%), severe (30-50%), or very severe (<30%).
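For illustration, the GOLD cut-offs just quoted translate into a small decision rule. The following is a minimal Python sketch, not part of the original study protocol; the handling of values falling exactly on the cut-points (80, 50, 30) is our own assumption, since the text leaves the boundaries open.

```python
def gold_stage(fev1_fvc_ratio: float, fev1_pct_predicted: float) -> str:
    """Grade COPD severity from post-bronchodilator spirometry.

    fev1_fvc_ratio      -- FEV1/FVC as a fraction (e.g. 0.56)
    fev1_pct_predicted  -- FEV1 as a percentage of the predicted value

    Airflow limitation consistent with COPD requires FEV1/FVC < 0.70;
    severity is then graded on FEV1 % predicted per the GOLD cut-offs
    quoted above. Boundary handling at 80/50/30 is an assumption.
    """
    if fev1_fvc_ratio >= 0.70:
        return "no airflow limitation (COPD criteria not met)"
    if fev1_pct_predicted >= 80:
        return "mild"
    if fev1_pct_predicted >= 50:
        return "moderate"
    if fev1_pct_predicted >= 30:
        return "severe"
    return "very severe"


# Example: a patient with the mean case ratio in this study (FEV1/FVC = 0.56)
# and an FEV1 of 45% predicted would be graded as severe.
print(gold_stage(0.56, 45))  # -> severe
```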
The exclusion criteria for COPD patients were as follows: 1) patients unable to perform lung function tests; 2) patients with other significant respiratory diseases such as asthma, congestive heart failure, cystic fibrosis, and tuberculosis. The control group consisted of randomly selected healthy individuals, excluding anyone suffering from any organic disease as determined by clinical examination, chest X-ray examination, and laboratory investigations. Written informed consent was obtained from each participant prior to enrollment in the study.

Data collection
Five milliliters of peripheral blood were collected from each participant by a specialized technician and stored in tubes containing ethylenediaminetetraacetic acid (EDTA) for anticoagulation. Genomic DNA was isolated from whole blood samples using the GoldMag DNA Purification Kit (GoldMag Co. Ltd, Xi'an City, China) according to the manufacturer's instructions, and the purity and concentration of DNA were measured with a NanoDrop 2000 (Thermo Scientific, Waltham, MA, USA). The isolated DNA was stored at −80°C until analysis. All subjects signed a written informed consent in this study.

SNP selection and genotyping
This study focused on the relationship between CYP2B6 SNPs and COPD risk in the Chinese Han population. Based on the 1000 Genomes database (http://www.internationalgenome.org/), we identified five candidate polymorphisms (rs2099361, rs4803418, rs4803420, rs1038376, and rs12979270) for the present case-control study. Each SNP had a minor allele frequency (MAF) >0.05 in the global population of the 1000 Genomes Project. Agena MassARRAY Assay Design 4.0 software was used to design the primers for the amplification and extension reactions. Genotyping was carried out by two laboratory personnel in a double-blinded fashion using the Agena MassARRAY system (Agena, San Diego, CA, USA). 24 The primers used for the five SNPs are shown in Table 1.

Statistical analysis
Statistical analysis was performed with Microsoft Excel and SPSS 18.0 (SPSS, Chicago, IL, USA). Welch's t-tests (for continuous variables) and the chi-square test (χ2 test, for categorical variables) were used to assess differences in the distribution of demographic characteristics between the case and control groups. SNP frequencies in the control subjects were evaluated for departure from Hardy-Weinberg equilibrium (HWE) by Fisher's exact test. Allele and genotype frequencies of each SNP in the patient and control groups were compared by a χ2 test. The associations between these SNPs and COPD risk were assessed by calculating odds ratios (ORs) and 95% confidence intervals (CIs) using logistic regression analysis with adjustments for age and gender. Multiple inheritance models (codominant, dominant, recessive, and log-additive) were generated by PLINK software (http://www.cog-genomics.org/plink2/) to estimate the relationship between each SNP and COPD risk. 25,26 Finally, we used the Haploview software package (version 4.2) for pairwise linkage disequilibrium (LD) analysis and haplotype construction, and SHEsis software for haplotype association analyses. All P-values were two-sided, and P<0.05 was considered statistically significant.

Demographic characteristics
The basic characteristics of cases and controls are summarized in Table 2. The study included 318 patients with COPD (238 males and 75 females; age at diagnosis: 64.56±8.49 years) and 508 healthy controls (337 males and 171 females; age at diagnosis: 64.52±10.40 years).
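For readers who want to reproduce the per-SNP statistics described above, the following minimal Python sketch computes a chi-square version of the HWE check on control genotypes and an allele-model odds ratio with a Woolf 95% CI. Note the assumptions: the study itself used Fisher's exact test for HWE and adjusted ORs for age and gender by logistic regression, and the genotype and allele counts below are hypothetical, not the study's data.

# Sketch of the per-SNP checks: HWE on control genotypes and an
# allele-model odds ratio with a 95% CI. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2

def hwe_chi2_p(n_aa, n_ab, n_bb):
    """Chi-square HWE test from genotype counts (minor/major = a/b)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # minor allele frequency
    exp = np.array([n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2])
    obs = np.array([n_aa, n_ab, n_bb], dtype=float)
    stat = ((obs - exp) ** 2 / exp).sum()
    return chi2.sf(stat, df=1)               # 3 classes - 1 - 1 estimated parameter

def allele_or_ci(case_a, case_b, ctrl_a, ctrl_b):
    """Allele-model OR (minor vs major) with a Woolf 95% CI."""
    or_ = (case_a * ctrl_b) / (case_b * ctrl_a)
    se = np.sqrt(1 / case_a + 1 / case_b + 1 / ctrl_a + 1 / ctrl_b)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
    return or_, lo, hi

# Hypothetical counts for one SNP (controls: aa/ab/bb; allele counts per group)
print(hwe_chi2_p(14, 132, 362))
print(allele_or_ci(140, 496, 160, 856))

Dominant and recessive genotype models follow the same pattern, with genotype counts collapsed (aa+ab vs bb, or aa vs ab+bb) before the 2x2 odds ratio is computed.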
There was a statistically significant difference in the distributions of gender and age between the case group and the control group (P<0.05). The average BMI of patients was slightly higher than that of the controls, but the difference was not statistically significant (P>0.05). Pulmonary function (FEV1/FVC) was 0.56 (±0.05) in COPD cases and 0.79 (±0.04) in controls; that is, pulmonary function was worse in COPD cases than in control subjects. Nonsmokers outnumbered smokers, but the distribution of smoking status did not differ significantly between groups (P=0.082).

HWE and SNP alleles
Detailed information on the candidate SNPs in CYP2B6 is presented in Table 3. All SNPs conformed to HWE in the controls (P>0.05) except rs4803418, which was excluded from subsequent analyses. The call rate for all SNPs was >98.40% among the COPD cases and the controls, which was considered high enough to perform association analyses. The minor allele of each SNP was assumed to be a risk factor for COPD compared to the wild-type allele. Using χ2 tests and ORs (95% CIs), we identified two SNP variants significantly associated with the risk of COPD. Among them, the frequency of allele "G" of rs4803420 (0.158 vs 0.234) and of allele "A" of rs1038376 (0.220 vs 0.124) differed significantly between the COPD patients and the healthy control group. Compared to allele "T", the minor allele "G" of rs4803420 was related to a decreased risk of COPD (OR=0.61, 95% CI=0.47-0.80, P<0.001). Similarly, allele "A" of rs1038376 was significantly correlated with an increased risk of COPD compared to allele "T" (OR=2.00, 95% CI=1.53-2.61, P<0.001).

Associations between genotype frequencies and COPD risk
We also assessed the association between these SNPs and COPD risk using four genetic models (codominant, dominant, recessive, and log-additive) by logistic regression analysis. The results of the various genetic models are displayed in Table 4. Compared with the homozygous "G/G" genotype, genotypes "G/T" and "G/T-T/T" of rs4803420 were associated with a decreased COPD risk in the co-dominant and dominant models, respectively.

Stratification analysis by gender
In the analysis stratified by gender, we found that the CYP2B6 variants rs4803420 and rs1038376 were significantly correlated with COPD risk in both males and females, and that rs12979270 was significantly correlated with COPD risk only in females, as shown in Table 5. Rs4803420 allele "T" was associated with a reduced COPD risk compared with allele "G" in the allele model.

Stratification analysis by smoking status
In the analysis stratified by smoking status, the correlations between the candidate SNPs and COPD risk are listed in Table 6. We found only that allele "C" of rs12979270 was significantly associated with an increased risk of COPD under the allele model compared with allele "A" in the smoking subgroup (OR=1.49, 95% CI: 1.08-2.07, P=0.016).
Abbreviations: SNP, single-nucleotide polymorphism; Alleles A/B, minor/major alleles; MAF, minor allele frequency; OR, odds ratio; 95% CI, 95% confidence interval.

Discussion
In this hospital-based case-control study, we genotyped five polymorphisms of CYP2B6 and evaluated their correlations with the risk of COPD in the Hainan population of China. A notable result was that rs4803420 was associated with a reduced risk of COPD, while rs1038376 and rs12979270 had adverse effects on COPD risk.
Further gender stratification analysis revealed that rs4803420 and rs1038376 were significantly correlated with COPD risk in both males and females, while rs12979270 was related to COPD risk only in females. Our findings indicate that polymorphic sites in CYP2B6 may play a crucial role in the development of COPD. The CYP2B6 gene is located on chromosome 19q13.2 and comprises 9 exons. 27 The CYP2B6 enzyme catalyzes many reactions involved in drug metabolism and in the synthesis of cholesterol, steroids, and other lipids. In previous studies, CYP2B6 was mainly expressed in the liver and lung. 17,18 We also used the HaploReg v4.1 database to predict the function of the selected SNPs in the CYP2B6 gene, revealing that rs4803420 and rs1038376 may affect the expression of CYP2B6 (Table S1).

Figure 1. The haplotype block map constructed from the candidate SNPs in CYP2B6. Notes: Block 1 includes rs4803420, rs1038376, and rs12979270; the linkage disequilibrium between two SNPs is indicated by standardized r2 (red boxes).
Figure 2. The haplotype block diagram constructed from the candidate SNPs in CYP2B6 in males. Notes: Block 1 includes rs4803420, rs1038376, and rs12979270; the linkage disequilibrium between two SNPs is indicated by standardized r2 (red boxes).
Figure 3. The haplotype block diagram constructed from the candidate SNPs in CYP2B6 in females. Notes: Block 1 includes rs4803420, rs1038376, and rs12979270; the linkage disequilibrium between two SNPs is indicated by standardized r2 (red boxes).

Recently, diseases associated with CYP2B6 have included poor drug metabolism, acute myeloid leukemia, acute lymphoblastic leukemia, and acute frontal sinusitis. 15,28 The CYP2B6 gene is highly polymorphic; a total of 63 alleles have been identified so far. Previous studies have shown that some CYP2B6 gene variants have certain effects on its expression and activity and are also related to the biotransformation of different drugs. For example, rs3745274 in the CYP2B6 gene is related to decreased gene expression and CYP2B6 activity in the liver. 29,30 The CYP2B6 516G>T SNP changes a glutamine residue to histidine, 31 which may lead to decreased CYP2B6 enzyme activity in the liver. 30 CYP2B6*6 is the most common variant associated with functional changes in CYP2B6; it causes aberrant splicing, with adverse effects on mRNA and protein function. 30 The GT genotype frequency of the CYP2B6 G15631T polymorphism was higher in acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) patients. In our study, we found for the first time that rs4803420 had a protective effect on the development of COPD, whereas rs1038376 and rs12979270 may be risk factors for the incidence of COPD. rs12979270 was correlated with an increased COPD risk only in females after gender stratification analysis. Smoking status stratification analysis showed that rs12979270 was associated with an increased COPD risk in smokers, while this SNP was not related to COPD risk in nonsmokers. Given that CYP2B6 gene variants have certain effects on its expression and activity, we speculate that CYP2B6 SNPs might affect the occurrence of COPD by changing the expression or activity of the CYP2B6 gene. Larger samples and functional experiments, including bioinformatics tools, are needed to confirm the overall results and to further elaborate the function of CYP2B6 in the development of COPD, contributing to the early diagnosis and targeted treatment of COPD.
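The standardized r2 values shown in the haplotype block figures are pairwise LD statistics. A minimal sketch of their computation is given below; it assumes phased, 0/1-coded haplotypes for brevity, whereas Haploview in practice estimates haplotype frequencies from unphased genotypes via an EM algorithm. The simulated data are illustrative only.

# Sketch of the pairwise LD statistic (r^2) behind the haplotype
# blocks in Figures 1-3, estimated from phased haplotypes.
import numpy as np

def ld_r2(hap_a, hap_b):
    """r^2 between two biallelic SNPs coded 0/1 on phased haplotypes."""
    hap_a, hap_b = np.asarray(hap_a), np.asarray(hap_b)
    pa, pb = hap_a.mean(), hap_b.mean()
    pab = (hap_a * hap_b).mean()              # haplotype frequency of '11'
    d = pab - pa * pb                         # disequilibrium coefficient D
    return d ** 2 / (pa * (1 - pa) * pb * (1 - pb))

rng = np.random.default_rng(0)
snp1 = rng.integers(0, 2, 1000)
snp2 = np.where(rng.random(1000) < 0.9, snp1, 1 - snp1)  # correlated SNP
print(ld_r2(snp1, snp2))   # close to (2*0.9 - 1)^2 = 0.64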
Conclusion
In summary, our study is the first to provide substantial basic evidence that gene polymorphisms in CYP2B6 are related to COPD susceptibility in the Hainan population of China. These results may provide a new perspective for the assessment, prevention, and prognosis of COPD risk.
Allosteric activation mechanism of the cys-loop receptors.

Binding of a neurotransmitter to its ionotropic receptor opens a distantly located ion channel, a process termed allosteric activation. Here we review recent advances in the molecular mechanism by which the cys-loop receptors are activated, with emphasis on the best-studied nicotinic acetylcholine receptors (nAChRs). With a combination of affinity labeling, mutagenesis, electrophysiology, kinetic modeling, electron microscopy (EM), and crystal structure analysis, the allosteric activation mechanism is emerging. Specifically, the binding domain and gating domain are interconnected by an allosteric activation network. Agonist binding induces conformational changes, resulting in the rotation of a beta sheet of the amino-terminal domain and outward movement of loop 2, loop F, and the cys-loop, which are coupled to the M2-M3 linker to pull the channel open. However, there are still some controversies about the movement of the channel-lining domain M2. A nine-angstrom-resolution EM structure of a nAChR imaged in the open state suggests that channel opening is the result of rotation of the M2 domain. In contrast, recent crystal structures of bacterial homologues of the cys-loop receptor family in an apparently open state have implied an M2 tilting model with pore dilation and a quaternary twist of the whole pentameric receptor. An elegant study of the nAChR using protonation scanning of the M2 domain supports a similar pore-dilation activation mechanism with minimal rotation of M2. This remains to be validated with other approaches, including high-resolution structure determination of the mammalian cys-loop receptors in the open state.

Introduction
The cys-loop receptor family of ligand-gated ion channels has a signature cysteine loop in the amino-terminal domain. This family includes nicotinic receptors (nAChRs), the serotonin receptor type 3 (5-HT3R), γ-aminobutyric acid receptors type A and C (GABAA/C), glycine receptors, the zinc-activated cation channel, and invertebrate glutamate/serotonin-activated anionic channels or GABA-gated cation channels [1][2][3]. Recently, prokaryotic proton-gated ion channels have also been considered part of the same family, although they are devoid of the signature cysteine loop [4]. All cys-loop receptors are allosteric proteins, in which binding of agonist to the binding pocket in the subunit interface of the extracellular amino-terminal domain controls the distantly located channel domain to open the pore [1]. This long-range coupling between the binding pocket and the gating machinery requires an interconnected allosteric network, through which the binding energy can be transduced into gating energy to open the channel [5]. Accumulating evidence suggests that the activation mechanisms of this receptor family are likely to be very similar. Thus, we review the activation mechanism of the cys-loop receptor family in general, with emphasis on nAChRs.

Kinetic models for channel activation
Activation of a ligand-gated ion channel includes binding and gating steps (Figure 1A) [6]. The Hill slope of the dose-response relationships of most heteromeric cys-loop receptors is greater than one, suggesting at least two binding steps for receptor activation (Figure 1B) [7]. However, radioligand binding studies revealed that receptor binding affinities are usually in the nanomolar range, whereas current activation requires agonist concentrations in the micromolar range.
This long-puzzling discrepancy between binding and functional studies is now known to result from the difference in the functional states of receptors under distinct experimental conditions. Binding measures receptor affinity mainly in the high-affinity desensitized state, where the receptor-ligand interaction has reached equilibrium, whereas electrophysiological recording measures the receptor response at the time of peak current [8]. In fact, binding and gating are mutually coupled [9], except when the receptor is in the desensitized state. Binding can influence channel gating, and gating can also alter binding affinity. In the case of a non-desensitizing GABAC receptor, channel opening can increase binding affinity to such an extent that it appears to lock the agonist in the binding pocket [10]. Desensitization can increase the binding affinity even further for desensitizing receptors [8]. In addition, the influence of gating on binding is further supported by the fact that mutations of gating residues in the pore-lining domain can alter the agonist sensitivity of receptors [11][12][13]. Thus, channel gating involves global conformational changes from the binding pocket to the gating machinery. This phenomenon can be well described by the Monod-Wyman-Changeux (MWC) allosteric activation model [1,14,15] (Figure 1C). The excellent agreement of experimental data with the MWC model further strengthens this conclusion [13]. A recent study comparing the effects of partial and full agonists on channel gating suggested that the increase in binding affinity and channel gating do not occur in a single step; activation can be further divided into two sequential steps: a conformational change in the binding domain and then channel opening [16]. That is, there is an intermediate state, termed the "flip state", in which the receptor binding domain switches to a high-affinity state before channel opening (Figure 1D). In summary, kinetic studies revealed that binding and gating are coupled and mutually influence each other. However, the coupling between binding and gating is not a single step through a single rigid body. Instead, activation involves binding, sequential conformational change(s), and gating.

Functional domains of the cys-loop receptors
Binding pocket
Structure-function studies in the last two decades with combined techniques such as site-directed mutagenesis, photoaffinity labeling, and structural analysis based on electron microscopic (EM) images of tubular arrays of receptors have shaped a structural model for the cys-loop receptors. To date, the best-studied cys-loop receptor is the nAChR. The agonist binding sites of the muscle-type nAChR are formed in the extracellular amino-terminal domain at subunit interfaces between α and non-α subunits, whereas the binding sites of neuronal-type nAChRs are formed in the subunit interface between α and β subunits in heteromeric receptors or between two α subunits in homomeric receptors [17]. Affinity labeling and site-directed mutagenesis have provided extensive evidence about the agonist binding site. Six loops, designated A through F, appear to participate in the formation of the agonist binding pocket [17]. Residues from loops A [18], B [19], and C [20][21][22] of the α subunit and residues from loops D [23,24], E [22,25], and F [26][27][28] from another subunit contribute to the formation of the binding pocket in the subunit interface.
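To make the kinetic schemes discussed above concrete, the following minimal sketch simulates the MWC two-state model: the receptor equilibrates between closed and open conformations, and agonist binds the open conformation more tightly, shifting the equilibrium toward opening. All parameter values are illustrative assumptions, not fitted constants for any particular receptor.

# Minimal MWC simulation: two global conformations (closed/open) with
# n identical agonist sites that bind more tightly in the open state.
import numpy as np

def mwc_popen(x, n=2, L=1000.0, k_closed=50e-6, k_open=0.5e-6):
    """Open probability vs agonist concentration x (molar).

    L        : closed/open equilibrium constant with no agonist bound
    k_closed : agonist dissociation constant of the closed state
    k_open   : agonist dissociation constant of the open state (tighter)
    """
    open_w = (1 + x / k_open) ** n          # statistical weight, open
    closed_w = L * (1 + x / k_closed) ** n  # statistical weight, closed
    return open_w / (open_w + closed_w)

for x in np.logspace(-8, -3, 6):
    print(f"{x:.1e} M -> Popen = {mwc_popen(x):.3f}")
# With no agonist, spontaneous openings occur with probability 1/(1+L);
# gate mutations that lower L reproduce the spontaneously opening
# channels described in the text.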
The model of the agonist binding pocket was further validated and extended by high-resolution crystal structures of the homologous acetylcholine binding proteins (AChBPs) [29][30][31][32] and by the 4 Å resolution EM structure of the Torpedo nAChR. In this structural model, the receptor has five subunits, with the agonist/antagonist-binding pocket located at the subunit interface (Figure 2A). In the heteromeric nAChRs, there are two binding pockets, located in the two subunit interfaces between α and non-α subunits [6], which is typical for all heteromeric cys-loop receptors.

Figure 1 (caption, in part): ... However, homomeric receptors have potentially 5 binding steps; in some homomeric receptors (if not all), such as the ρ1 GABAC receptor, three bindings are needed to induce significant channel openings [75]; (C) Monod-Wyman-Changeux allosteric activation (MWC) model [13,14]; (D) adapted flip model from Lape et al [16]. Note that the grayed-out states are rare and thus omitted in the original scheme. In all models (A-D), A represents agonist; R is the receptor; F is the flipped state of the receptor; R* is the receptor in the open state. All open states and flipped states are high-affinity states, whereas resting states are low-affinity states.

For heteromeric GABAA receptors, two binding pockets are located between the β and α subunits. Note that the β subunit of the GABAA receptor is equivalent to the α subunit of the nAChR, whereas the α subunit of the GABAA receptor is in the position of the β subunit of neuronal nAChRs. For homomeric cys-loop receptors, such as the α7 nAChR, ρ1 GABA receptor, α1 glycine receptor and 5-HT3A receptor, there are 5 potential binding pockets in all five subunit interfaces. All previously identified binding loops (A through F) can be mapped onto the structural model. Loops A, B, and C from one subunit form the principal face of the binding pocket. Loops D, E, and F, contributed by an adjacent subunit, form the complementary face of the pocket (Figure 2B).

Gating machinery
The channel domain is formed by the transmembrane domains (M1-M4). Studies using site-directed mutagenesis and ultrastructural analysis have identified the second transmembrane domain (M2) as the pore-lining domain of the cys-loop receptors. Hydrophilic substitutions of the conserved leucine at the mid-point of the M2 domain dramatically influence channel gating kinetics, increase agonist sensitivity, and create spontaneously opening channels in several members of the cys-loop receptor family [11][12][13][33][34][35][36][37][38][39]. Although earlier studies using the cysteine accessibility test suggested that the gate is at the intracellular end of M2 [39,40], it is now clear that the accessibility of these residues at the intracellular end of M2 in the absence of agonist is due to spontaneous opening of the M2 mutant channel [41]. The EM structure of the Torpedo nAChR at 4 Å resolution finally confirmed that the M2 domain lines the pore, and that the gate is formed by the hydrophobic interactions between amino acid residues in the middle of the M2 domains [42] (Figure 3). Mutagenesis studies also revealed that the structural elements controlling ionic selectivity and single channel conductance are located at the intracellular end (the beginning) of the M2 domain.

Coupling region
Using correlated mutational analysis, we have identified an allosteric network connecting the binding pocket to the gating machinery in the cys-loop receptor family (Figure 4A, 4B) [5]. Through this network, binding energy can be transduced into gating energy to open the channel.
The key coupling region in this allosteric network lies at the interface between the amino-terminal binding domain and the transmembrane gating domain of each subunit. A study using a chimeric receptor built from AChBP and the channel domain of 5-HT3R revealed that the coupling interface requires matching of three loops (loop 2, loop 7, and loop 9) from the amino-terminal domain and one loop (the M2-M3 linker) from the transmembrane domain for the receptor to be functional [43]. Additionally, a region in pre-M1 and the beginning of M1, which covalently links the amino-terminal domain to the transmembrane domain, is also important in channel gating (Figure 4C) [44].

Activation mechanism
As stated at the beginning, activation of the cys-loop receptor family includes binding, conformational changes, and gating steps. Briefly, agonist binding in the binding pocket of the amino-terminal domain initiates a conformational change, which then propagates to the gating machinery through the coupling region to open the channel. Propagation of the "conformational wave" from the binding site to the channel gate is not a single step. It has been studied with single channel analysis and linear free energy relationships of the gating rate constants for different mutations at each position. The results suggest that there is a gradient change along the activation pathway in the Φ slope factor of the linear free energy relationship, derived from the channel gating constants (opening and closing rates) of the mutations at each position [45]. Further analysis revealed that the gradient change in the allosteric activation network is not continuous. Instead, it can be divided into several clusters based on the values of the Φ slope factor. All positions in each cluster have similar Φ values, suggesting that the residues in each cluster influence channel gating similarly. In other words, each cluster probably moves as a rigid body, with synchronous movement of all residues in the cluster. It is also suggested that all residues in each cluster are coupled tightly, whereas conformational changes between clusters are coupled less tightly. In this scenario, the conformational change induced by agonist binding would propagate stepwise toward the channel through discrete modules in the amino-terminal domain and finally reach the gating machinery to open the channel [46]. This mechanism is further supported by a recent single channel analysis of partial agonist activation, which revealed that there is a conformational change, termed the flip conformation, preceding channel opening [16]. The flip conformation clearly changes binding affinity before channel opening. Partial agonists have less ability than full agonists to convert the receptor to the flipped high-affinity state. However, once the flip conformation occurs, partial agonists and full agonists gate the channel very similarly. In the following section, we present the detailed mechanism of this allosteric activation process.

Conformational changes in the amino-terminal domain
Based on the 4 Å EM structure of the Torpedo nAChR and a comparison between α and non-α subunits, Unwin and colleagues proposed that the activation mechanism of the receptor involves agonist-induced clockwise rotation (viewed from the extracellular end) of the inner sheets in the amino-terminal domains of the two α subunits.
This rotation of the amino-terminal domain is then translated into rotation of the M2 domain by direct coupling between the bottom of the inner sheet (loop 2) and the top of the M2 domain (or the beginning of the M2-M3 linker) (Figure 5). However, this proposed mechanism is not based on the agonist-induced structural change. The agonist-induced structural change in the amino-terminal domain is best demonstrated in the crystal structures of AChBPs. When an agonist is co-crystallized with the receptor, it induces an inward movement of loop C (also called loop C capping) to tighten the binding pocket (Figure 6) [31,32]. This could be related to the increased binding affinity during channel activation [10]. The formation of a new hydrogen bond between Y185 in loop C and K139 in the β7 strand (connecting to the cys-loop) in the nicotine-bound state of an AChBP may suggest initial coupling. In the case of the muscle-type nAChR, single channel analysis demonstrated that mutations of αY190 [47] or αD200 [48] can influence channel gating. Mutant cycle analysis further revealed that αY190 (homologous to Y185 in AChBP) is coupled to αK145 (homologous to K139 in AChBP) when an agonist binds to the receptor, after disruption of the salt bridge between αD200 in loop B and αK145 in the β7 strand in the resting state [49].

Figure 5 (caption): M2 rotation hypothesis [42]. Agonist binding induces a rotation of the inner sheet of the amino-terminal domain, which is coupled to the M2 transmembrane domain through the interaction between loop 2 in the amino-terminal domain and the M2-M3 linker from the transmembrane domain (created from 2BG9 chain A).
Figure 6 (caption, partly recovered): (A) ... 2BYN), the epibatidine (agonist)-bound state (green, from 2BYQ) and the ImI (antagonist)-bound state (blue, from 2BYP). The three crystal structures were loaded into Swiss PDB Viewer 3.7 and fitted with the magic fit function. The red box marks the location of loop C from one subunit in all three structures. (B) Close-up of the agonist- and antagonist-induced movement in loop C [32].

Interestingly, in the GABAA receptor, a similar charge interaction between the homologous residues (βE153 is at the position homologous to αK145, and βK196 at the position homologous to αY190) is critical for channel activation, although with the charges reversed [50]. Thus, while there are some variations in the detailed interactions, the general mechanism of activation is conserved in the cys-loop receptor family. In addition, in GABAA receptors, another negatively charged residue in loop B (βE155) is also an important determinant of channel gating. Mutation of this residue created spontaneously opening channels, suggesting that it may also serve as a trigger for channel activation [51]. Since the conformational change of the receptor can be divided into blocks, it is likely that E155 and E153 are in the same rigid body. Loop C also interacts with loop B in allosteric channel gating through backbone hydrogen bonding [52]. Another significant change in the crystal structure of AChBP upon agonist binding is in the binding loop F [31]. The conformational change of loop F during channel activation is further supported by increased photolabeling of loop F in the α1 subunit of the nAChR in the open state, although the direction of the movement is not completely clear [53]. Mutation of a loop F residue (εD175N) of the nAChR clearly influences channel gating, suggesting the importance of loop F in channel activation [54].
In the ρ1 GABAC receptor, the outward movement of the lower part of loop F is supported by cysteine accessibility tests and fluorescence detection, and this movement is partially coupled to channel gating, as assessed by the sensitivity of the agonist-induced fluorescence change to a non-competitive antagonist [55]. Since one arm of loop C is linked to the bottom of loop F, it is possible that the inward movement of loop C pries the bottom part of loop F in the same subunit outward. In addition, upon agonist binding, the backbone of αS191 in loop C can form a hydrogen bond with an aspartate residue (γD174/δD180) in loop F of the complementary face of the muscle-type nAChR [56]. This dynamic hydrogen bonding between loop C and loop F upon agonist binding could pull loop F outward toward loop C. The outward movement of loop F is then potentially coupled to loop 2 and the M2-M3 linker to pull the channel open. Although conformational changes in loops A, D, and E are not observed in the crystal structures of AChBPs in the presence of agonists, mutagenesis and functional studies in intact channels have suggested that loops A, D, and E are also involved in channel gating. For example, mutations of εW55/δW57 in binding loop D of the muscle-type nAChR dramatically reduce the channel opening rate [57]. Mutations of α7W55 in the homomeric α7 nAChR alter channel gating kinetics, with slowed desensitization [58]. In the case of the ρ1 GABAC receptor, a mutation of the homologous residue (Y102S) in loop D created spontaneously opening channels [59], further suggesting the importance of this aromatic residue in loop D in the initial conformational changes induced by an agonist. Similarly, in the ρ1 GABAC receptor, a mutation (F146C) in loop A and a mutation in loop E (Q160C) also create spontaneously opening channels [60]. Unlike AChBP, the amino-terminal domain of a cys-loop receptor is coupled to the transmembrane domain. It is likely that in the resting state the conformation of the amino-terminal domain differs from the resting state of the soluble AChBP protein. Thus, it is understandable that in an intact cys-loop receptor these three binding loops also undergo conformational rearrangement during channel function. This possibility is further supported by site-specific fluorescence monitoring during channel activation. For example, in the ρ1 GABAC receptor, a GABA-induced fluorescence change was detected in loop E (L166C) and at the top of the receptor (S66C), which can be partially (for L166C) or completely (for S66C) blocked by the non-competitive antagonist picrotoxin [61]. In summary, it appears that all six loops contribute to channel gating, which involves a global conformational change in the receptor. It is likely that the coordinated movement of all six binding loops causes the inner-sheet rotation.

Conformational changes in the gating machinery
EM imaging of the Torpedo nAChR in the open state at 9 Å resolution showed that channel opening involves a rotation of the pore-lining kinked rod structures [62]. These pore-lining rod structures were further confirmed to be the second transmembrane domain (M2) by the EM structure at 4 Å resolution. As mentioned above, the kinked M2 domains line the pore and form the channel gate through hydrophobic interactions in the middle of the transmembrane domains. The M2 rotation presumably disrupts the hydrophobic interactions of the gate-forming residues and thus widens the pore to allow ions to flow through (Figure 7A).
However, two recent studies using crystal structures of bacterial proton-gated ion channels, the bacterial counterparts of the mammalian cys-loop receptor family, have suggested a novel mechanism: pore dilation caused by tilting of the M2 and M3 domains as a rigid body about an axis parallel to the membrane [63,64]. The bacterial pentameric ligand-gated ion channel homologue from Erwinia chrysanthemi (ELIC) was apparently crystallized in the resting closed state. The outer segments of the M2 domains of this receptor interact with each other to form a hydrophobic barrier, the channel gate, which prevents ion flux. The bacterial Gloeobacter violaceus pentameric ligand-gated ion channel homologue (GLIC) was crystallized at high proton concentration (low pH) and was apparently in the open state. The major difference in the channel domains of the two structures is that the upper part of the M2-M3 domains tilts outward in GLIC. As a result, the pore diameter in the outer half of M2 becomes larger for ion conduction, and the intracellular end of the pore becomes smaller for ionic selectivity and single channel conductance (Figure 7B, 7C). Thus, activation of the bacterial ligand-gated ion channels mainly involves tilting of M2-M3 as a rigid body in the channel domain. Now, the question is whether this activation mechanism also applies to mammalian cys-loop receptors. For the nAChR, although a high-resolution structural model in the open state is still not available, single channel analysis with protonation scanning of the pore-lining domain suggests that M2 rotation in the open state is minimal, supporting the pore-dilation mechanism [65]. Although this mechanism in the cys-loop receptors needs to be further validated with other approaches, the gating mechanism of this receptor family is likely to be conserved across species.

Coupling between the amino-terminal domain and the gating machinery
As mentioned above, coupling between the binding and gating domains requires matching of three loops (loop 2, loop 7/cys-loop, and loop 9/loop F) from the amino-terminal domain and one loop (the M2-M3 linker) from the transmembrane domain [43], as well as pre-M1 and the beginning of M1 [44]. Since the M2-M3 linker is not conserved across the entire cys-loop receptor family, the detailed coupling residues could vary among subfamilies, although the general mechanism is likely to be conserved. In the muscle-type nAChR, αV46 in loop 2 is coupled to S269 and P272 in the M2-M3 linker, whereas αE45 in loop 2 is coupled to R209 in pre-M1 [66]. Since the pre-M1 domain is also directly linked to loop C, the authors suggested that loop C capping can directly cause rotation of pre-M1, which is coupled to loop 2 and in turn to the M2-M3 linker to open the channel. They proposed that this coupling between pre-M1 and loop 2 serves as the principal activation pathway. However, another study, also using single channel analysis, suggested that the coupling between pre-M1 and loop 2 is relatively weak and thus plays a less important role in channel gating [67]. αP272 in the M2-M3 linker is coupled not only to V46 in loop 2 but also to V135 in the cys-loop [68]. The homologous proline in the M2-M3 linker of the 5-HT3R controls channel opening and closing through backbone cis-trans isomerization [69]. However, this proline is conserved only in nAChRs and the 5-HT3R. In the GABAA receptor, the coupling of loop 2 and the cys-loop to the M2-M3 linker in both the α and β subunits occurs through a charge interaction [70,71].
However, based on the relative tolerance of charge reversal, neutralization, or introduction in several members of the cys-loop receptor family, Xiu et al concluded that it is the overall charge pattern in the coupling interface, rather than any specific charge interaction, that controls channel gating in the cys-loop receptor family [72]. The conserved arginine in pre-M1 of the GABAA receptor β subunit (R216) also plays a pivotal role in channel activation by both GABA and pentobarbital, suggesting a similar coupling mechanism at this level [73]. In the ρ1 GABAC receptor, the same arginine is coupled to E92 in loop 2 [74].

Concluding remarks
In summary, the amino-terminal binding domain is coupled to the channel gate through an interconnected allosteric network. Both the binding pocket and the gating machinery have a tendency to close. Thus, they are coupled under tension, so that closure of the binding pocket and closure of the gate are mutually exclusive, unless their coupling is disrupted, as in the case of desensitization. In the resting state, the gating machinery has a stronger tendency to close than the binding pocket, so the equilibrium shifts toward closing of the channel gate and opening of the binding pocket. However, when the gating machinery is loosened, as in the case of hydrophilic mutations of the gate-forming residues, the channel opens spontaneously with simultaneous closure of the binding pocket, as reflected by increased binding affinity. In the wild-type receptor, closure of the binding pocket, mainly induced by agonist binding, alters the energy landscape of the receptor to open the channel. The conformational change in the binding pocket rearranges the interactions between loop C and loop F/cys-loop, which potentially causes an outward movement of both loop F and the cys-loop. At the same time, the conformational change of loop C may also pry the pre-M1 and M1 domains outward. This outward motion pulls the M2-M3 linker directly (through M1) and indirectly (through the pre-M1 and loop 2 coupling). The conformational change also involves rotation of the inner β-sheet in the amino-terminal domain, probably through coordinated movement of all six binding loops, making loop 2 move toward the periphery. The outward movement of these three loops, together with pre-M1 and M1, is then coupled to the M2-M3 linker, pulling the channel-lining M2 open (more likely by pore dilation than by M2 rotation). The above summary also includes some speculations by the authors. The detailed coupling and gating mechanism still awaits future investigation with functional analysis (especially mutant cycle analysis) combined with real-time monitoring of conformational changes during channel activation by fluorescence techniques, guided by structural models once high-resolution crystal structures of mammalian cys-loop receptors become available.
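As an appendix to the Φ-value (linear free energy relationship) analysis discussed in the Activation mechanism section, the sketch below estimates Φ for one position as the slope of log(opening rate) against log(gating equilibrium constant) across a mutant series. The rate constants are invented for illustration; real analyses fit many mutants per position from single channel records.

# Phi-value estimation for one position from a hypothetical mutant series.
import numpy as np

def phi_value(k_open, k_close):
    """Slope of log10(k_open) vs log10(k_open/k_close) across mutants."""
    k_open, k_close = np.asarray(k_open, float), np.asarray(k_close, float)
    x = np.log10(k_open / k_close)        # log gating equilibrium constant
    y = np.log10(k_open)
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

# Hypothetical rates (s^-1) for four substitutions at one residue
k_open = [50_000, 12_000, 3_000, 700]
k_close = [2_000, 2_400, 3_000, 3_500]
print(f"Phi ~ {phi_value(k_open, k_close):.2f}")
# Phi near 1 implies the position moves early (open-like at the
# transition state); Phi near 0 implies it moves late.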
Accurate prediction of acute pancreatitis severity based on genome-wide cell free DNA methylation profiles

Patients with severe acute pancreatitis (SAP) have a high mortality, so early diagnosis and intervention are critical for improving survival. However, conventional tests are of limited value for acute pancreatitis (AP) stratification. We aimed to assess AP severity by integrating informative clinical measurements with cell free DNA (cfDNA) methylation markers. One hundred and seventy-five blood samples were collected from 61 AP patients at multiple time points, plus 24 samples from healthy individuals. Genome-wide cfDNA methylation profiles of all samples were characterized with reduced representation bisulfite sequencing. Clinical blood tests covering 93 biomarkers were performed on AP patients within 24 h. SAP prediction models were built based on cfDNA methylation and conventional blood biomarkers separately and in combination. We identified 565 and 59 cfDNA methylation markers informative for acute pancreatitis and its severity, respectively. These markers were used to develop prediction models for AP and SAP with areas under the receiver operating characteristic curve of 0.92 and 0.81, respectively. Twelve blood biomarkers were systematically screened to build a predictor of SAP with a sensitivity of 87.5% for SAP and a specificity of 100% in mild acute pancreatitis (MAP), significantly higher than existing blood tests. An expanded model integrating the 12 conventional blood biomarkers with the 59 cfDNA methylation markers further improved the SAP prediction sensitivity to 92.2%. These findings demonstrate that SAP can be accurately predicted by integrating conventional and novel blood molecular markers, paving the way for early and effective SAP intervention through a non-invasive rapid diagnostic test.

SAP is defined by persistent organ failure affecting one or more organs, together with local or systemic complications. Compared to MAP, SAP patients have a much worse prognosis: on average they require significantly longer hospital stays and more frequent re-admissions and, most notably, have a significantly higher mortality rate [4]. Existing widely used AP severity stratification systems for the early phase of AP are either imaging-based (for example, the Balthazar CT-enhanced scoring system and the computed tomography severity index (CTSI)) [5], clinical test-based (Ranson's score [6], the Acute Physiology and Chronic Health Evaluation (APACHE-II) [7], the bedside index of severity of AP (BISAP), etc. [8]), or based on a combination of clinical tests and patient self-reporting (the pancreatic activity scoring system (PASS)) [3,9]. However, while generally useful, all of them have so far been shown to predict SAP with only moderate accuracy, between 0.6 and 0.8 [10,11], and some perform better on specificity than sensitivity, or vice versa [12]. Some systems, such as APACHE-II, which requires 16 tests to be completed to predict AP severity, are complicated and hard to implement in typical clinical settings. Others, such as Ranson's score, require a minimum of 48 h after hospitalization to predict SAP, limiting the time window to initiate medical intervention [13]. Furthermore, imaging-based systems are less objective because their interpretation relies on the inspector's personal experience [5], and enhanced CT, which is essential to identify localized pancreas complications, may actually complicate treatment by worsening pancreatic microcirculatory disturbance [14].
Given the limitations of the current AP severity prediction systems, we sought to identify novel biomarkers and establish a scoring system to accurately and objectively predict SAP during the first 24 h of hospitalization. To this end, we selected peripheral blood as the source for marker discovery. AP severity can be assessed from the levels of several different types of molecules in blood: damage-associated molecules such as HMGB1 [15], cell-free DNA [16], nucleosomes [17] and histones [18] that signal tissue damage; proinflammatory cytokines such as IL-6 [19] or IL-10 [20], which correlate with inflammatory responses; and levels of small molecules such as glucose, Ca2+, C-reactive protein [21], triglycerides [22], etc. While each of these molecules may capture only one or a few aspects of the complications, organ damage or risk factors of SAP, we reasoned that integrating multiple measurements through machine learning might lead to a more accurate prediction of SAP. Another type of biomarker we considered was cfDNA methylation. cfDNA derives from genomic DNA released during cell death (apoptosis or necrosis) and thus carries cell-type-specific epigenetic signatures from its source tissues [23]. cfDNA methylation profiles have been shown to be informative for detecting cancer in plasma, but for non-cancer diseases their clinical applicability has only begun to be explored, for example in detecting acute myocardial infarction [24], type I diabetes and multiple sclerosis [23]. We reasoned that the complications and organ failures that characterize SAP cause inflammatory responses, cell death and tissue damage, which lead to substantial release of cfDNA species from damaged tissues that are normally at very low levels in healthy individuals' blood, thus generating cfDNA methylation profiles in AP patients that differ from those of healthy individuals. Moreover, MAP and SAP patients are likely to have distinct cfDNA methylation profiles, owing to the much higher degree of complications and/or organ damage in SAP than in MAP. The signature differences in the cfDNA methylation profiles of MAP and SAP could be informative for classifying AP based on severity. In this study, we first pursued cfDNA methylation markers that accurately classified AP or SAP in our study cohort of AP patients. We further screened conventional clinical measurements that had been performed on our study cohort and identified a subset that predicted SAP cases at an accuracy comparable to the revised Atlanta classification (RAC). By further integrating the informative clinical measurements with the cfDNA methylation markers, we derived an expanded model with significantly improved sensitivity and overall accuracy, providing a new strategy to identify SAP cases.

Patient characteristics and sample description
Sixty-one patients diagnosed with AP were included in the current study. The median age was 46.2 years (range, 22 to 65), and 62.3% were men. According to the RAC, the cohort comprised 17 patients with MAP, 7 with MSAP and 37 with SAP (Fig. 1, Additional file 2: Tables S1 and S2). For all patients, whole blood samples were drawn on days 1, 3, and 7 after hospital admission to identify DNA methylation markers while AP may be rapidly advancing; additionally, patients with CT grades indicating significant pancreas pathology had blood drawn on days 14 and 21 to monitor changes in the methylation markers after initial treatment.
To discover tissue-specific DNA methylation markers of organ injury, we first generated genome-wide DNA methylation profiles from 120 cfDNA samples collected at multiple time points from 45 AP patients. We also mapped DNA methylation profiles from cfDNA samples of 24 age- and sex-matched normal individuals as controls, to minimize the interference of random background DNA methylation signals with AP diagnosis. cfDNA was extracted from the plasma samples of both AP patients and healthy individuals, prepared into DNA methylation libraries and sequenced on the Illumina HiSeq X10 platform. The sequencing information is listed in Additional file 2: Table S3.

Identify DNA methylation markers in plasma that detect acute pancreatitis
We reasoned that MAP and SAP may share similar DNA methylation features, but that SAP samples likely show higher levels because of greater damage to internal organs or tissues in SAP cases, which leads to increased release of cfDNA into the blood compared with MAP. Therefore, we stood a better chance of identifying general AP markers by first contrasting the DNA methylation profiles of SAP samples with those of healthy individuals in the training phase. To improve the power of detecting subtle methylation differences in plasma DNA, we focused on a set of methylation haplotype blocks (MHBs), in which local CpG methylation states are coordinated along single DNA molecules, such that tissue-specific signals are easier to detect with a haplotype-based scoring scheme [25]. To this end, we randomly assigned half of the healthy control cases (12 cases) and half of the SAP cases (22 cases, 69 samples) to a training set for marker discovery. The remaining samples of either class, as well as all MAP cases (17 cases, 39 samples), were assigned to an independent test set for validation. After filtering out poorly covered MHBs, a total of 43,358 MHBs were used for the following analyses. We quantified DNA methylation patterns on MHBs using several metrics, such as methylated haplotype load (MHL) and average methylation frequency (AMF), as classifiers for AP diagnosis [25]. Eventually we determined that uMHL, a metric that quantifies the degree and linkage disequilibrium of unmethylated CpG sites in each MHB, is the most appropriate metric for deriving a classifier: indeed, we identified 565 MHBs that are hypermethylated (uMHL scores < 0.1) in over 50% of the training healthy samples and methylated to a lesser extent in AP samples (Fig. 2A, Additional file 2: Table S4). An AP-predicting model was then formulated, using the aggregated uMHL scores over these MHBs to quantify each training sample. By plotting the scores of healthy and SAP samples separately, we demonstrated that these markers can accurately separate the two classes of plasma samples (p = 0.00085, Welch's t-test) (Fig. 2B). The accuracy of classification was quantified using a receiver operating characteristic (ROC) curve, which achieved an AUC of 0.91 (sensitivity 95.7%; specificity 83.3%) on the training samples (Additional file 1: Figure S1A). To validate these markers, we applied the AP prediction model to the test samples using the same cutoff (0.215) as on the training samples, and achieved accurate prediction of AP (sensitivity 97.2%; specificity 75%), confirming the robustness of our uMHL-based model for AP diagnosis (Fig. 2C). To investigate the potential biological functions of these methylation markers, especially whether and how they are involved in the pathology of AP, we annotated the 565 MHBs using GREAT, a web portal for Gene Ontology (GO) annotation of regulatory regions [26].
We observed significant enrichments in several GO terms that are closely connected to AP pathology (Fig. 2D, Additional file 2: Table S5), including regulation of cellular response to insulin stimulus and regulation of peptide hormone secretion, which are associated with normal pancreas function, and regulation of metanephros development and foregut morphogenesis, which are associated with non-pancreas organs that are often damaged during SAP (the kidney and upper digestive tract, respectively). We also found enrichments in genes involved in myeloid differentiation and leukocyte degranulation, which are potentially related to SAP-caused local or systemic inflammatory responses. Overall, these enriched GO categories are consistent with the known pathology of AP, especially SAP.

Fig. 2 (caption): Acute pancreatitis-predicting MHBs were identified based on their uMHL scores in cfDNA samples. (A) 565 MHBs that were hypermethylated in healthy individuals' cfDNA samples but relatively hypomethylated in AP samples were identified as classifiers for AP plasma. The heatmap visualizes the differences in the uMHL scores of those MHBs between healthy controls and AP samples in the training set; samples are arranged by patient and by day. (B) Swarm plot of the aggregate uMHL scores of the 565 MHB sites, showing that they robustly separate healthy and AP plasma samples in both the training and test sets. (C) AP prediction accuracy of the (aggregated) uMHL scores of the 565 MHBs on test set AP samples versus healthy controls. (D) Genes associated with the 565 identified AP markers are enriched in pancreas- and kidney-related Gene Ontology categories. AP, acute pancreatitis; cfDNA, cell free DNA; MHB, methylation haplotype block; uMHL, unmethylated haplotype load.

Furthermore, we compared our AP markers (N = 565) and SAP markers (N = 59) with the markers reported by Guo et al. [27] and the Type-I (tissue-specific) markers identified by Sun et al. [28] to find overlapping markers that might be tissue-specific. Indeed, we found 46 markers from Guo et al. and 2 from Sun et al. that overlapped with our markers. The 46 markers from Guo et al. derived from several organs, including pancreas, liver, lung, kidney, and GI tract, all of which are known to be damaged in acute pancreatitis. The 2 overlapping markers from Sun et al. were from colon and liver, respectively. However, they did not overlap with the markers from Guo et al.

Identify cfDNA methylation markers to classify MAP or SAP cases
We next sought to identify SAP-specific DNA methylation markers in order to assess severity and distinguish SAP from MAP, which has immediate clinical utility. To this end, we randomly assigned roughly half of the MAP cases (9 cases, 18 samples) and half of the SAP cases (22 cases, 72 samples) to a training set for marker discovery, and the remaining cases (8 MAP cases, 21 samples, and 22 SAP cases, 64 samples) to an independent test set for validation. Based on the results from the AP-specific marker discovery, we also chose uMHL as the quantitative metric for SAP marker discovery and predictive model building. MHBs were filtered based on sequencing coverage to ensure statistical robustness. We performed multiple rounds of exploratory marker screening on these MHBs and their uMHL values. Initial attempts using a single uMHL score to identify MHBs that are differentially methylated in MAP and SAP samples did not yield the desired results. We then turned to an alternative strategy, looking for MHBs whose mean uMHL values differ between SAP and MAP samples.
After evaluating multiple cutoffs for the average uMHL values and for the maximal or minimal uMHL values, we discovered 59 MHBs, which are more methylated (max. uMHL < 0.7, mean uMHL < 0.5) in over 65% of MAP cases and less methylated (min. uMHL > 0.3, mean uMHL > 0.5) in over 65% of SAP cases, for distinguishing MAP and SAP plasma samples (Fig. 3A). We plotted the arithmetic average of the uMHL values of these MHBs for both MAP and SAP training samples for comparison (Fig. 3B); SAP samples had significantly higher average uMHL scores than MAP cases (p = 2.83 × 10^-11, Welch's t-test), demonstrating that these MHBs (Additional file 2: Table S6) are less methylated in SAP samples than in MAP samples, and that the average uMHL score can be used to differentiate MAP and SAP plasma samples. We then intersected the 565 markers (SAP + MAP vs control) with the 59 markers (SAP vs MAP) and found only 1 overlapping marker (PTPN1 (−317,389), CEBPB (+2126)). Indeed, we used the average uMHL scores to classify the MAP and SAP training samples. With a cutoff of 0.532, we were able to classify SAP with an area under the receiver operating characteristic curve (AUC) of 0.97 (sensitivity 87.5%; specificity 94.4%) on the training samples. We then applied the MHB classifiers to the independent test samples for validation. Using the same cutoff as in the training set, we were able to classify MAP and SAP samples at an AUC of 0.81 (sensitivity 85.9%; specificity 85.7%) (Fig. 3C). Such accuracy is comparable to the performance of several clinically used stratification systems for early assessment of AP severity, including APACHE-II, BISAP and Ranson's score [29].

Identify optimal clinical blood tests to predict SAP
We have demonstrated that a set of cfDNA methylation markers can predict the severity of AP with accuracy comparable to several commonly used clinical AP stratification systems. To further improve the accuracy of severity prediction, we sought to integrate conventional biomarkers from body fluids into the cfDNA methylation-based SAP prediction model. A number of traditional biomarkers are routinely used by clinicians either to diagnose AP (such as blood amylase or lipase levels) or to monitor AP patients' physiological condition (blood electrolytes, etc.), inflammatory responses (white blood cell levels, etc.) or organ damage (indicators of kidney, liver and lung function, etc.). We tried to identify a small subset of these markers that are most indicative of SAP symptoms and can be combined with the cfDNA methylation SAP markers to improve the overall SAP prediction accuracy. Furthermore, we aimed to select markers that can be measured during the first 24 h of AP patients' hospitalization, in order to inform treatment decisions in a timely manner. We surveyed 93 non-invasive clinical tests (Additional file 2: Table S1), which were performed on the 61 AP cases across a total of 175 samples. Samples from all collection dates were used in the analyses, thereby providing comprehensive and dynamic measurements of key biomarkers to assess the temporal progression of AP pathology and severity. The types of body fluids used in these tests included venous and arterial blood, and urine. We also included vital signs such as body temperature in our analyses. For benchmarking, a RAC grade was assigned to each case to evaluate the accuracy of SAP prediction from the selected clinical test results.
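Before turning to the clinical tests, it may help to make the haplotype-based scoring concrete. The sketch below computes uMHL for one MHB by weighting fully unmethylated CpG stretches by their length, following the MHL definition of Guo et al. [25] with the roles of methylated and unmethylated states swapped; the exact weighting used in the study may differ, and the read strings are hypothetical ('1' = methylated CpG, '0' = unmethylated CpG).

# Sketch of uMHL: coordinated unmethylated haplotypes score higher
# than scattered unmethylated sites.
import numpy as np

def umhl(reads):
    """Unmethylated haplotype load for one MHB from CpG-level reads."""
    max_len = max(len(r) for r in reads)
    num = den = 0.0
    for i in range(1, max_len + 1):
        substrings = [r[j:j + i] for r in reads for j in range(len(r) - i + 1)]
        if not substrings:
            continue
        frac = np.mean([s == "0" * i for s in substrings])
        num += i * frac                      # length-weighted fraction
        den += i
    return num / den

mhb_reads = ["0000", "0100", "0000", "1100"]   # mostly unmethylated block
print(f"uMHL = {umhl(mhb_reads):.3f}")
# Per-sample scores are then aggregated over the marker MHBs and
# compared with a fixed cutoff (0.532 above) to call SAP vs MAP.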
We first performed a proof-of-principle prediction of SAP samples using all available clinical test results. We trained this all-markers model on a training set (9 MAP cases, 18 samples; 22 SAP cases, 72 samples) and detected SAP cases at a reasonable level of accuracy (AUC = 0.8) in the test set (8 MAP cases, 21 samples; 22 SAP cases, 64 samples) (Fig. 4A). This suggests that a significant number of clinical tests in this all-tests SAP prediction model are informative about the pathologies that define SAP, so that even without any marker selection the all-test model was still capable of predicting SAP with moderate accuracy. We reasoned that by removing measurements that underperform with regard to detection accuracy, we should be able to further simplify and improve the predictor toward an accuracy comparable to RAC. Meanwhile, the large number of tests also allowed us to choose ones that can be completed within 24 h after the collection of body fluids, thus enabling early SAP diagnosis. We then focused on 66 tests that measure biomarkers in venous blood. This was mainly because venous blood contains the majority of measurable biomarkers and is safe and easy to collect, and many venous blood-based tests return results within 24 h after blood collection. Body temperature was also included because it is convenient to measure. We filtered the 66 clinical tests based on data availability, which left 57 tests for marker discovery. Using the recursive feature elimination algorithm of the python package "sklearn" on the training set samples, we screened those 57 venous blood-based tests by recursively and gradually pruning off the tests that contributed the least to the accuracy of SAP diagnosis, and identified the top 20 tests, which formed an SAP prediction model with an AUC of 0.99 (Additional file 1: Figure S1B and Additional file 2: Table S1), nearly as high as that of the RAC classification (1.0 by definition). However, this model underperformed on the test set (AUC = 0.78) (Additional file 1: Figure S1C), possibly due to overfitting. To improve the prediction model, we proceeded by first keeping the tests in the 20-test model that contributed the most to prediction accuracy and whose targets are known to be associated with the risk factors (for example, triglyceride level) or symptoms of AP (for example, urea nitrogen elevated by kidney damage and dysfunction).

Fig. 4 (caption): Blood levels of biomarkers measured by routine clinical tests can be used to accurately diagnose SAP during its early stage. (A) SAP prediction accuracy obtained by indiscriminately using 75 available measures from the 93 body fluid-based clinical tests. (B) The 12 venous blood-based tests identified from the training set built an SAP model that classified test set MAP and SAP samples with high accuracy. (C) Members of the 12-biomarker model may either positively or negatively predict SAP. (D) When incorporated into the SAP prediction model using aggregate uMHL scores of cfDNA, the 12 venous biomarkers significantly improved its overall prediction accuracy. SAP, severe acute pancreatitis; cfDNA, cell free DNA; MAP, mild acute pancreatitis; uMHL, unmethylated haplotype load.

We also added 5 tests to the prediction model based on their clinical significance in AP. Among them, globulin level represents the inflammatory response; creatinine, uric acid and estimated glomerular filtration rate all indicate kidney damage and dysfunction; and serum chloride has been reported to be indicative of SAP [30].
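The screening-and-refit pipeline described here and in the next paragraph, which the text attributes to the python packages "sklearn" and "statsmodels", can be sketched as follows. The feature matrix and labels are synthetic placeholders, not the study's actual measurements.

# Sketch of the biomarker screening pipeline: recursive feature
# elimination with sklearn, then a refit with statsmodels for
# interpretable coefficients.
import numpy as np
import statsmodels.api as sm
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 57))                 # 90 samples x 57 blood tests
y = (X[:, 0] - X[:, 1] + rng.normal(size=90) > 0).astype(int)  # SAP labels

# Recursively prune tests that contribute least to SAP prediction
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=12)
selector.fit(X, y)
kept = np.flatnonzero(selector.support_)
print("kept test indices:", kept)

# Refit the reduced model with statsmodels to inspect coefficients
X12 = sm.add_constant(X[:, kept])
fit = sm.Logit(y, X12).fit(disp=0)
print(fit.params.round(2))                    # sign shows direction of effect

In the study, domain knowledge was layered on top of this purely statistical pruning: clinically meaningful tests were retained or re-added before the final recursive refit.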
We then rebuilt a logistic regression model (python package "statsmodels") using the 25 tests and recursively removed the tests contributing least to SAP prediction accuracy, based on performance on the training set. The final model (Additional file 2: Table S7) contains 7 tests from the original 20 and all 5 new ones. It classified MAP and SAP samples in the training set with an AUC of 0.95 (sensitivity: 95.82%; specificity: 83.33%). We then validated the 12-biomarker SAP prediction model on the validation set, where it predicted SAP samples at an AUC of 0.97 (sensitivity: 87.5%; specificity: 100%) (Fig. 4B). Such accuracy is nearly as high as that of RAC; we therefore believe this model is likely sufficient for routine clinical diagnosis of SAP during the first 24 h of AP patients' hospitalization. The 12-biomarker model mainly measures markers indicative of organs known to be frequently damaged in SAP, especially the kidney (urea nitrogen, creatinine, etc.), or markers informative of inflammatory responses (levels of neutrophils, lymphocytes, erythrocytes, etc.); both categories are intimately connected to the main pathologies of SAP, which may explain their capacity to collectively predict SAP. Among them, two measurements of red blood cell level and volume have the highest overall weight in the prediction model (Fig. 4C), followed by markers indicative of inflammation (neutrophil and lymphocyte levels), and then by those of kidney function. Finally, we built an expanded SAP prediction model by combining the average uMHL scores of the predefined 59 cfDNA methylation markers with the 12 clinical tests, and performed logistic regression on these markers using the training set (9 MAP cases, 18 samples; 22 SAP cases, 72 samples). The expanded model (Additional file 2: Table S8) classified an independent test set comprising 8 MAP cases (21 samples) and 22 SAP cases (64 samples) with an AUC of 0.96 (sensitivity 92.2%; specificity 90.5%) (Fig. 4D). While this AUC is almost identical to that of the 12-biomarker-only model, the shape of the ROC curve differs slightly, such that the sensitivity improved from 87.5 to 92.2%. This is significant in pancreatitis clinics: the minor reduction in specificity from 100 to 90.5% is manageable, since it does not lead to adverse outcomes, whereas identifying SAP more sensitively and earlier allows timely adjustment of treatment options, such as fluid resuscitation, enteral nutrition, interventional endoscopy, continuous regional arterial infusion and surgical treatment, which are well documented to reduce the mortality of SAP patients.

Discussion

Early detection of SAP symptoms remains a challenge in the emergency care of AP patients, and is key to SAP patients' immediate survival and long-term prognosis. RAC, the gold standard of AP diagnosis, requires more than 48 h to assess the severity of AP cases, which limits its utility for SAP diagnosis. Other diagnostic protocols either require longer than 48 h to perform, or are challenging to perform and score, and hence are similarly limited for SAP diagnosis. We approached this challenge by first identifying cfDNA methylation markers as molecular classifiers for AP versus healthy individuals, and for SAP versus MAP cases, respectively. To our knowledge, this is the first set of epigenetic markers reported for AP and SAP diagnosis.
Our results showed that the methylation markers for AP prediction achieved a high degree of accuracy (AUC = 0.92), comparable to that of RAC, and the markers for SAP prediction have a sensitivity and specificity comparable to several of the most commonly used clinical SAP diagnosis protocols. Therefore, cfDNA methylation markers alone reach a level of prediction accuracy similar to their equivalent clinical protocols. The use of SAP cases naturally introduces markers derived from immune cells activated during systemic inflammatory response syndrome (SIRS), one of the hallmarks of SAP. It is therefore understandable that the most discriminating markers for AP and SAP come from cell sources such as immune cells activated during SIRS. Moreover, it should be noted that most of the markers did not show tissue specificity according to present tissue methylation databases, although further cell type-specific methylation data might help elucidate their origins. We further improved cfDNA methylation-based SAP prediction by adding 12 selected venous blood biomarkers to build an expanded prediction model. These markers are highly informative of systemic inflammatory responses and/or damage to organs such as the kidney. An SAP prediction model built solely on these 12 markers achieved an SAP prediction accuracy (0.97) in our test cohort nearly identical to that of RAC (1.0). Because the selected tests are routinely performed in the majority of hospitals, the number of tests is reasonably manageable, and neither their costs nor the required volume of blood is prohibitively high, we believe that our SAP diagnostic protocols can be implemented very widely. Practically, measuring the methylation status of the set of 59 methylation markers depends on targeted methylation sequencing, which may take 2-3 days to complete. However, it is worth exploring the possibility of shortening the turnaround time to fit into a 48-h detection window: for example, gradually reducing the 59 cfDNA methylation SAP markers by the RFE algorithm may reach a point where the number of remaining markers (for example, 10 or fewer) can accommodate PCR-based detection while maintaining accuracy. Then, even when cfDNA extraction and processing steps are included, SAP detection by PCR could be completed within 48 h. Additionally, detection of cfDNA methylation markers requires performing only a single assay, instead of multiple clinical tests, to diagnose SAP, so it might require fewer instruments and a simpler workflow. Our efforts in cfDNA methylation marker screening are just the beginning of identifying cfDNA signatures for AP and SAP, which may in the future lead to the discovery of organ- and/or tissue-specific markers in cfDNA and result in molecular diagnoses of damage to specific organs.

Conclusions

In this study, we developed a novel predictive model for AP severity based on the DNA methylation patterns of plasma DNA, a type of molecular marker that had never been explored for this clinical problem. With DNA methylation signatures alone, we demonstrated sensitive separation of AP patients from healthy controls, as well as accurate classification of MAP versus SAP. Furthermore, using a machine learning approach, we derived an expanded model with significantly improved sensitivity and overall accuracy by integrating informative clinical measurements with the cfDNA methylation markers, providing a new strategy to detect clinical SAP cases.
Study design and participants

This study was based on a case-control design, with participants randomly selected from the AP cohort organized by the Pancreatitis Unit of the First Affiliated Hospital of Wenzhou Medical University, a university-affiliated tertiary-care public hospital. The study was performed according to the Standards for the Reporting of Diagnostic Accuracy Studies guidance for observational studies. Patients diagnosed with acute pancreatitis (AP) were prospectively recruited from the First Affiliated Hospital of Wenzhou Medical University between July 2017 and November 2017. The research protocol was approved by the Ethics Committee of the First Affiliated Hospital of Wenzhou Medical University (2017-136), and written informed consent was obtained from each patient, or their next of kin, included in the study. The study was registered in the Chinese Clinical Trial Registry (ChiCTR-DDD-17012200). AP was defined as two or more of the following conditions: characteristic abdominal pain; serum amylase and/or lipase levels three or more times the upper limit of normal; and/or an imaging study (computed tomography (CT) or magnetic resonance imaging) demonstrating changes consistent with AP. Inclusion criteria were: first episode of acute pancreatitis as defined by the revised Atlanta classification; age 18 years or older; male or female; and availability of blood samples within 24 h of admission. Patients were excluded according to the following criteria: advanced pulmonary, cardiac or renal disease (chronic kidney disease stage 4-5), liver cirrhosis (Child-Pugh grade B-C) or malignancy; pregnancy; chronic pancreatitis or trauma as the etiology; non-pancreatic infection or sepsis caused by a second disease; or duration of abdominal pain before admission exceeding 24 h. Twenty-four healthy volunteers matched for sex and age were included as control subjects. The severity of AP was stratified as mild AP or moderately severe/severe AP according to the revised 2012 Atlanta criteria [3]. MAP and SAP cases were randomly selected from the pool of qualified patients to match age and sex. The primary outcome of this study was to identify the most effective predictive blood markers for SAP; the secondary outcome was to compare the new model to existing models currently used in clinics, including Ranson's score, APACHE-II and BISAP. The demographic, clinical and laboratory data (Additional file 2: Table S1) of all patients with AP on the 1st, 3rd and 7th days of admission were prospectively collected and maintained in an electronic database in accordance with the study protocol, including age, sex, vital signs, physical exam findings, serum levels of aspartate transaminase, alanine transaminase, alkaline phosphatase, gamma-glutamyl transferase, total bilirubin, lactate dehydrogenase, amylase, lipase, C-reactive protein, urea nitrogen (BUN) and white blood cells, etc. The severity of AP was classified by the standard RAC protocol. Peripheral venous blood samples were obtained from each patient and each healthy volunteer. Blood samples were transported to the clinical research center at 4 °C within 1 h. Plasma was obtained after centrifugation (3000 × g, 10 min, 4 °C) and stored at − 80 °C for further analysis.

cfDNA methylation sequencing

Cell-free DNA from plasma samples was extracted and purified using the QIAamp Circulating Nucleic Acid kit (QIAGEN, 55114). The quality of extracted cfDNA was determined by the DNA NGS 3 K Assay (PerkinElmer, CLS960013).
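The case definition above is a simple two-of-three rule; a toy encoding (field names are illustrative, not from the source) is:

```python
# Toy encoding of the AP case definition stated above: a patient qualifies
# when at least two of the three diagnostic conditions hold.
def is_acute_pancreatitis(characteristic_pain: bool,
                          enzymes_3x_upper_limit: bool,
                          imaging_consistent: bool) -> bool:
    return sum([characteristic_pain, enzymes_3x_upper_limit,
                imaging_consistent]) >= 2
```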
cfDNA samples were prepared into DNA methylation libraries for reduced-representation bisulfite sequencing (RRBS). Briefly, up to 20 ng of cfDNA was used as input for each preparation. Input DNA was ligated to customized adaptors compatible with the Illumina sequencing platform. CT conversion was performed after ligation using the MethylCode Bisulfite Conversion Kit (Invitrogen, MECOV50). After purification, DNA was amplified using PfuTurbo Cx Hotstart DNA polymerase (Agilent, 600412). Libraries were purified using AMPure Beads (Beckman Coulter, A63882), pooled, and size-selected using 6% TBE gels (Invitrogen, EC6265BOX). The purified library pools were quantified using the KAPA Library Quantification Kit for Illumina (Kapa Biosystems, KK4824) and sequenced on the Illumina HiSeq X Ten platform with paired-end 2 × 150 cycle runs. Sequencing reads were demultiplexed using the Illumina bcl2fastq Conversion Software (v2.20) and aligned to the bisulfite-converted hg19 reference genome using BWA (v0.7.12) for further downstream analyses.

Sequencing data processing

Fastq data were trimmed with Trim Galore (http://www.bioinformatics.babraham.ac.uk/projects/trim_galore/). After read trimming, paired-end reads were merged into single-end reads. The single reads were mapped against the Bismark-transformed hg19 genome [31] with bowtie 1 [32]. The mapped bam files were processed by in-house scripts to extract the methylation haplotype information.

Quantify DNA methylation patterns on MHBs

MHBs are defined as previously described [25], using a set of whole-genome bisulfite sequencing data from human tissues and cell lines. To perform quantitative analysis of the methylation patterns within individual MHBs, we calculated the unmethylated haplotype load (uMHL), which measures the consecutiveness of unmethylated CpGs within an MHB. Briefly, it sums the weighted fractions of consecutively unmethylated CpG haplotypes of each length within an MHB:

uMHL = Σ_{i=1}^{l} w_i P(UMH_i) / Σ_{i=1}^{l} w_i,

where l is the length of the haplotype (the number of CpGs within an MHB), w_i is the weight for haplotypes of length i (we select w_i = i³, putting higher weights on longer haplotypes), and P(UMH_i) is the fraction of consecutively unmethylated haplotypes among haplotypes of length i.

Identify DNA methylation markers for AP and SAP

Candidate cfDNA methylation markers for AP diagnosis were first screened by selecting MHBs that were hypermethylated in the majority of healthy individuals but less methylated in the SAP samples of a training set. These markers and the prediction model were further validated in a test set that included both MAP and SAP samples in addition to healthy controls. The sensitivity and specificity of classification on the test samples were calculated using the same cutoff as in the training set. Candidate methylation markers for SAP diagnosis were first screened by identifying MHBs that, as quantified by their uMHL scores, were hypermethylated in the majority of MAP training samples but hypomethylated in the majority of SAP training samples. The prediction model was built by averaging the uMHL scores of all identified MHBs and tested on the training set to ensure sufficient accuracy, using blood samples taken 1 day after admission. It was further validated in a test set, in which the sensitivity and specificity of classification were calculated using the same cutoff as in the training set.
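An illustrative computation of uMHL following the definition above is sketched below. The cubic weight (w_i = i³) follows the "l³" choice mentioned in the text, and the normalization by the total weight is an assumption; it keeps uMHL in [0, 1], consistent with the cutoffs quoted in the Results.

```python
# Illustrative uMHL: a weighted, normalized sum over haplotype lengths i of
# the fraction P(UMH_i) of fully unmethylated length-i sub-haplotypes.
def umhl(haplotypes):
    """haplotypes: read-level strings over {'U','M'} (un-/methylated CpGs)
    covering one MHB, all of the same length l."""
    l = len(haplotypes[0])
    num = den = 0.0
    for i in range(1, l + 1):
        windows = unmeth = 0
        for h in haplotypes:
            for s in range(l - i + 1):
                windows += 1
                unmeth += h[s:s + i] == 'U' * i  # fully unmethylated run?
        w = i ** 3                               # heavier weight for longer runs
        num += w * unmeth / windows
        den += w
    return num / den

print(umhl(['UUUU', 'UMUU']))  # mostly unmethylated -> high uMHL (~0.52)
print(umhl(['MMMM', 'UMMM']))  # mostly methylated  -> low uMHL (~0.001)
```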
Identify clinical tests to predict SAP

Clinical tests were filtered based on the availability of test results. AP cases were divided into training and test groups for model building and validation, respectively. A proof-of-principle SAP prediction model using all available body-fluid biomarkers was built with the Random Forest algorithm. For SAP models using only venous blood biomarkers, we used the Recursive Feature Elimination algorithm (python package "sklearn") to identify a preset number of tests that classified MAP and SAP samples with the highest accuracy in the training set, based on the blood biomarkers they measured. We used the Python package StatsModels to build the prediction model, and validated it on the test set AP samples. We then built an expanded SAP prediction model by combining the average uMHL scores of the predefined cfDNA methylation markers with the identified blood biomarkers, and performed logistic regression on the combined marker set using the training set. The combined model was then tested on the validation set (Table 1).
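A hedged sketch of the combined model described in this section is shown below: logistic regression (statsmodels, as named above) on the selected blood tests plus the average uMHL score of the cfDNA markers. Array names and shapes are placeholders; the explicit intercept column is simply statsmodels' convention.

```python
# Sketch: fit the expanded SAP model on combined features.
import numpy as np
import statsmodels.api as sm

def fit_expanded_model(blood_tests, mean_umhl, labels):
    """blood_tests: (n, 12) test values; mean_umhl: (n,) average uMHL of the
    cfDNA markers; labels: 1 = SAP, 0 = MAP."""
    X = sm.add_constant(np.column_stack([blood_tests, mean_umhl]))
    model = sm.Logit(labels, X).fit(disp=0)
    return model  # model.predict(X_new_with_constant) gives P(SAP)
```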
Probing spin hydrodynamics on a superconducting quantum simulator

Characterizing the nature of hydrodynamic transport in quantum dynamics provides valuable insights into the fundamental understanding of exotic non-equilibrium phases of matter. Experimentally simulating infinite-temperature transport on large-scale complex quantum systems is of considerable interest. Here, using a controllable and coherent superconducting quantum simulator, we experimentally realize an analog quantum circuit that can efficiently prepare Haar-random states, and probe spin transport at infinite temperature. We observe diffusive spin transport during the unitary evolution of the ladder-type quantum simulator with ergodic dynamics. Moreover, we explore the transport properties of the system subjected to strong disorder or a tilted potential, revealing signatures of anomalous subdiffusion accompanied by the breakdown of thermalization. Our work demonstrates a scalable method of probing infinite-temperature spin transport on analog quantum simulators, which paves the way to studying other intriguing out-of-equilibrium phenomena from the perspective of transport.

Introduction

Transport properties of quantum many-body systems driven out of equilibrium are of significant interest in several active areas of modern physics, including the ergodicity of quantum systems [1][2][3][4] and quantum magnetism [5][6][7]. Understanding these properties is crucial to unveiling the non-equilibrium dynamics of isolated quantum systems [8,9]. One essential feature of transport is the emergence of classical hydrodynamics in microscopic quantum dynamics, which manifests as a power-law tail of autocorrelation functions [8]. The rate of the power-law decay, referred to as the transport exponent, characterizes the universality classes of hydrodynamics. In d-dimensional quantum systems, in addition to the generally expected diffusive transport with exponent d/2 in non-integrable systems [10][11][12], much attention has been attracted by anomalous superdiffusive [5,[13][14][15][16] or subdiffusive transport [2,3,[17][18][19], with exponents larger or smaller than d/2, respectively.

In this work, using a ladder-type superconducting quantum simulator with up to 24 qubits, we first demonstrate that, in addition to digital pseudo-random circuits [35][36][37][38][39][40][41], a unitary evolution governed by a time-independent Hamiltonian, i.e., an analog quantum circuit, can also generate quantum states randomly drawn from the Haar measure, i.e., Haar-random states, for measuring infinite-temperature autocorrelation functions [42][43][44]. Subsequently, we study the properties of spin transport on the superconducting quantum simulator via measurements of autocorrelation functions using the Haar-random states. Notably, we observe a clear signature of diffusive transport on the qubit ladder, which is a non-integrable system [11,12,25].

Upon subjecting the qubit ladder to disorder, a transition from delocalized phases to many-body localization (MBL) occurs as the strength of disorder increases [45]. By measuring the autocorrelation functions, we experimentally probe anomalous subdiffusive transport at intermediate values of the disorder strength. The observed signs of subdiffusion are consistent with recent numerical results, and can be explained as a consequence of Griffith-like regions on the delocalized side of the MBL transition [2,3,[46][47][48][49]].
Finally, we explore spin transport on the qubit ladder with a linear potential; it is expected that Stark MBL occurs when the potential gradient is sufficiently large [28,[50][51][52][53][54]. With a large gradient, conservation of the dipole moment emerges [28,54], associated with the phenomenon known as Hilbert-space fragmentation [55][56][57]. Recent theoretical works reveal subdiffusion in dipole-moment-conserving systems [17,19]. In this experiment, we present evidence of a subdiffusive regime of spin transport on the tilted qubit ladder.

Experimental setup and protocol

Our experiments are performed on a programmable superconducting quantum simulator consisting of 30 transmon qubits arranged as a two-legged ladder, see Fig. 1a and b. Nearest-neighbor qubits are coupled by fixed capacitors, and the effective Hamiltonian of the capacitive interactions can be written as [22,23] (also see Supplementary Note 1)

Ĥ_I = ℏ Σ_{m∈{↑,↓}} Σ_{j=1}^{L−1} J∥_{j,m} (σ̂⁺_{j,m} σ̂⁻_{j+1,m} + H.c.) + ℏ Σ_{j=1}^{L} J⊥_j (σ̂⁺_{j,↑} σ̂⁻_{j,↓} + H.c.),   (1)

where ℏ = h/2π, with h the Planck constant (in the following we set ℏ = 1), L is the length of the ladder, σ̂⁺_{j,m} (σ̂⁻_{j,m}) is the raising (lowering) operator for qubit Q_{j,m}, and J∥_{j,m} (J⊥_j) is the intrachain (rung) hopping strength. For this device, the averaged intrachain and rung hopping strengths are J∥/2π ≃ 7.3 MHz and J⊥/2π ≃ 6.6 MHz, respectively. The XY and Z control lines on the device enable us to realize the drive Hamiltonian Ĥ_d = Σ_{m∈{↑,↓}} Σ_{j=1}^{L} Ω_{j,m} (e^{−iφ_{j,m}} σ̂⁺_{j,m} + e^{iφ_{j,m}} σ̂⁻_{j,m})/2 and the on-site potential Hamiltonian Ĥ_Z = Σ_{m∈{↑,↓}} Σ_{j=1}^{L} w_{j,m} σ̂⁺_{j,m} σ̂⁻_{j,m}, respectively. Here, Ω_{j,m} and φ_{j,m} denote the amplitude and phase of the microwave pulse applied to qubit Q_{j,m}, and w_{j,m} is the effective on-site potential.

To study spin transport and hydrodynamics, we focus on the equal-site autocorrelation function at infinite temperature, defined as

C_{r,r}(t) = Tr[ρ̂_r(t) ρ̂_r]/D,   (2)

where ρ̂_r is a local observable at site r, ρ̂_r(t) = e^{iĤt} ρ̂_r e^{−iĤt}, and D is the Hilbert-space dimension of the Hamiltonian Ĥ. Here, for the ladder-type superconducting simulator, we choose ρ̂_r = (σ̂ᶻ_{1,↑} + σ̂ᶻ_{1,↓})/2 (r = 1) [12], and the autocorrelation function can be rewritten as

C_{1,1}(t) = (1/4) Σ_{μ,ν} c_{μ;ν},   (3)

with c_{μ;ν} = Tr[σ̂ᶻ_μ(t) σ̂ᶻ_ν]/D (the subscripts μ and ν denote the qubit indices 1,↑ or 1,↓).

The autocorrelation function (2) at infinite temperature can be expanded as the average of C_{r,r}(|ψ₀⟩) = ⟨ψ₀|ρ̂_r(t) ρ̂_r|ψ₀⟩ over different z-basis states |ψ₀⟩. In fact, the dynamical behavior of an individual C_{r,r}(|ψ₀⟩) is sensitive to the choice of |ψ₀⟩ under some circumstances (see Supplementary Note 7 for the dependence of C_{r,r}(|ψ₀⟩) on |ψ₀⟩ in the qubit ladder with a linear potential as an example). To experimentally probe the generic properties of spin transport at infinite temperature, one can obtain (2) by measuring and averaging C_{r,r}(|ψ₀⟩) over different |ψ₀⟩ [15]. Alternatively, we employ a more efficient method to measure (2) without the need to sample different |ψ₀⟩. Based on the results in ref. [40] (also see Methods), the autocorrelation function c_{μ;ν} can be indirectly measured using the quantum circuit shown in Fig. 1c, i.e.,

c_{μ;ν} ≃ ⟨ψ_ν| σ̂ᶻ_μ(t) |ψ_ν⟩,   (4)

where |ψ_ν⟩ = N̂_ν|r⟩/‖N̂_ν|r⟩‖, N̂_ν projects the ν-th qubit onto the state |0⟩, and |r⟩ is a Haar-random state generated by a unitary evolution Û_R. For example, to experimentally obtain c_{1,↓;1,↑}, we choose Q_{1,↑} as Q_A and the remaining qubits as Q_R. After performing the pulse sequences shown in Fig. 1d, we measure qubit Q_{1,↓} in the z-basis to obtain the expectation value of the observable σ̂ᶻ_{1,↓}.
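The protocol just described can be mimicked numerically. The sketch below is a toy pure-state simulation of Eq. (4) with numpy/scipy: qubit ν is kept in |0⟩ while the remaining qubits start in a Haar-random state, the full system is evolved, and ⟨σ̂ᶻ_μ(t)⟩ is read out. Taking ν as the most-significant bit is an implementation convenience, not part of the protocol, and the demo Hamiltonian is a placeholder XY chain rather than the device ladder.

```python
import numpy as np
from scipy.linalg import expm

def haar_state(dim, rng):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def c_mu_nu(H, sz_mu, t, n_qubits, rng=np.random.default_rng(0)):
    """Typicality estimate of c_{mu;nu}: |psi> = |0>_nu (x) |Haar-random>."""
    psi = np.zeros(2 ** n_qubits, dtype=complex)
    psi[: 2 ** (n_qubits - 1)] = haar_state(2 ** (n_qubits - 1), rng)
    psi_t = expm(-1j * H * t) @ psi          # evolve under the full Hamiltonian
    return np.vdot(psi_t, sz_mu @ psi_t).real

# demo: 4-qubit XY chain, nu = qubit 0 (most significant), mu = qubit 1
sx = np.array([[0., 1.], [1., 0.]]); sy = np.array([[0., -1j], [1j, 0.]])
sz = np.diag([1., -1.]); I2 = np.eye(2)
def embed(op1, site, n):                     # 1-qubit operator on site `site`
    full = op1 if site == 0 else I2
    for k in range(1, n):
        full = np.kron(full, op1 if k == site else I2)
    return full
n = 4
H = sum(0.5 * (embed(sx, j, n) @ embed(sx, j + 1, n)
               + embed(sy, j, n) @ embed(sy, j + 1, n)) for j in range(n - 1))
print(c_mu_nu(H, embed(sz, 1, n), t=1.0, n_qubits=n))
```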
Observation of diffusive transport

In this experiment, we first study spin transport on the 24-qubit ladder consisting of Q_{1,↑}–Q_{12,↑} and Q_{1,↓}–Q_{12,↓}. Figure 2a shows the results for S_PE at different evolution times t_R. For the 23-qubit system, the probabilities p_k are estimated from single-shot readout with N_s = 3 × 10⁷ samples. The S_PE tends to the value for Haar-random states, i.e., S_PE^T = N ln 2 − 1 + γ, with N = 23 the number of qubits and γ ≃ 0.577 the Euler constant [36]. Moreover, for the final state |ψ_R⟩ with t_R = 200 ns, the distribution of the probabilities p_k satisfies the Porter-Thomas distribution (see Supplementary Note 4).

In Fig. 2b, we show the dynamics of the autocorrelation function C_{1,1} measured via the quantum circuit in Fig. 1c with t_R = 200 ns. The experimental data satisfy C_{1,1} ∝ t^{−α}, with a transport exponent α ≃ 0.5067 estimated by fitting the data in the time window t ∈ [50 ns, 200 ns]. Our experiments clearly show that spin transport on the qubit ladder Ĥ_I (1) is diffusive, and demonstrate that the analog quantum circuit Û_R(t_R) with t_R = 200 ns provides sufficient randomness to measure the autocorrelation function defined in Eq. (2) and probe infinite-temperature spin transport. We also discuss the influence of t_R in Supplementary Note 4, numerically showing that the results for C_{1,1} do not change substantially for longer t_R > 200 ns. Moreover, in Supplementary Note 4 we show that for a short evolution time t_R ≃ 15 ns, the values of the observable defined in Eq. (4) are incompatible with the infinite-temperature autocorrelation functions. Given that the chosen initial state for generating the Haar-random state has a high effective temperature with respect to the Hamiltonian Ĥ_R, the state |ψ_R⟩ asymptotically converges to a Haar-random state for sufficiently long t_R. However, for t_R ≃ 15 ns the time scale is too short to get rid of the coherence, and the value of S_PE for the state |ψ_R⟩ is much smaller than S_PE^T (see Fig. 2a), suggesting that |ψ_R⟩ with t_R ≃ 15 ns is far from a Haar-random state and cannot be employed to measure the infinite-temperature autocorrelation function (2). In the following, we fix t_R = 200 ns and study spin transport in other systems with ergodicity breaking.

Subdiffusive transport with ergodicity breaking

Having demonstrated that the quantum circuit shown in Fig. 1c can be employed to measure the infinite-temperature autocorrelation function C_{1,1}, we study spin transport on the superconducting qubit ladder with disorder, whose effective Hamiltonian can be written as

Ĥ_I + Σ_{m∈{↑,↓}} Σ_{j=1}^{L} w_{j,m} σ̂⁺_{j,m} σ̂⁻_{j,m},

with w_{j,m} drawn from a uniform distribution [−W, W], and W the disorder strength. For each disorder strength we consider 10 disorder realizations; the dynamics of the averaged C_{1,1} for different W are plotted in Fig. 3a. With increasing W, as the system approaches the MBL transition, C_{1,1} decays more slowly. Moreover, the oscillations in the dynamics of C_{1,1} become more pronounced for larger W, which is related to the presence of local integrals of motion deep in the many-body localized phase [58].

We then fit both the experimental and numerical data in the time window t ∈ [50 ns, 200 ns] to the power-law decay C_{1,1} ∝ t^{−α}.
As shown in Fig. 3b, we observe an anomalous subdiffusive region with transport exponent α < 1/2. For disorder strengths W/2π ≳ 50 MHz, the transport exponent α ∼ 10⁻², indicating the freezing of spin transport and the onset of MBL in the 24-qubit system [2]. Here, we emphasize that the estimated transition point between the subdiffusive regime and MBL is a lower bound, since for longer evolution times the exponent α obtained from the power-law fit becomes slightly larger (see Supplementary Note 6).

Next, we explore the transport properties of a tilted superconducting qubit ladder subjected to the linear potential Ĥ_L = Σ_{j=1}^{L} Δj Σ_{m∈{↑,↓}} σ̂⁺_{j,m} σ̂⁻_{j,m}, with Δ = 2W_S/(L − 1) the slope of the linear potential (see the tilted ladder in the inset of Fig. 4a). Thus, the effective Hamiltonian of the tilted superconducting qubit ladder can be written as Ĥ_T = Ĥ_I + Ĥ_L. Different from the aforementioned breakdown of ergodicity induced by disorder, the non-ergodic behavior induced by the linear potential arises from strong Hilbert-space fragmentation [55][56][57]. The ergodicity breaking in the disorder-free system Ĥ_T is known as Stark MBL [28,[50][51][52][53][54].

We employ the method based on the quantum circuit shown in Fig. 1c to measure the time evolution of the autocorrelation function C_{1,1} for different slopes of the linear potential. The results are presented in Fig. 4a and 4b. Similar to the disordered system, the dynamics of C_{1,1} still satisfies C_{1,1} ∝ t^{−α} with α < 0.5, i.e., subdiffusive transport. Figure 4c displays the transport exponent α for different strengths of the linear potential, showing that α asymptotically drops as W_S increases.

Two remarks are in order. First, adopting the same criterion for the onset of MBL as in the disordered case, i.e., α ∼ 10⁻², the results in Fig. 4c indicate that Stark MBL on the tilted 24-qubit ladder occurs for W_S/2π ≳ 80 MHz (Δ/2π ≳ 14.6 MHz). Second, on the ergodic side (W_S/2π < 80 MHz and W/2π < 50 MHz for the tilted and disordered systems, respectively), the transport exponent α decays rapidly with increasing W_S up to W_S/2π ≃ 20 MHz in the tilted system; as W_S increases further, the decay of α becomes slower. In contrast, for the disordered system, α decreases steadily with increasing disorder strength W. We note that the impact on spin transport of the emergent dipole-moment conservation at increasing slope of the linear potential, and its distinction from transport in disordered systems, remain unclear and deserve further theoretical study.

Discussion

Based on the novel protocol for simulating infinite-temperature spin transport using Haar-random states [40], we have experimentally probed diffusive transport on a 24-qubit ladder-type programmable superconducting processor. Moreover, when the qubit ladder is subject to sufficiently strong disorder, we observe signatures of subdiffusive transport, accompanied by the breakdown of ergodicity due to MBL.
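For reference, the power-law fits quoted above amount to a linear fit in log-log space; a minimal sketch (with placeholder data arrays) is:

```python
# Extract the transport exponent alpha from C_{1,1}(t) ~ t^{-alpha} by a
# linear fit of log C vs log t over the quoted window t in [50 ns, 200 ns].
import numpy as np

def transport_exponent(t_ns, C, t_min=50.0, t_max=200.0):
    m = (t_ns >= t_min) & (t_ns <= t_max)
    slope, _ = np.polyfit(np.log(t_ns[m]), np.log(C[m]), 1)
    return -slope  # ~0.5: diffusive; <0.5: subdiffusive; ~1e-2: (Stark) MBL

# synthetic check: exactly diffusive data recovers alpha = 0.5
t = np.linspace(20, 400, 200)
print(transport_exponent(t, t ** -0.5))  # -> 0.5
```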
It is worthwhile to emphasize that previous experimental studies of Stark MBL mainly focused on the dynamics of the imbalance [50,59,60]. Different from disorder-induced MBL, where a power-law decay of the imbalance is observed in the subdiffusive Griffith-like region [61], for Stark MBL there has been no experimental evidence of a power-law decay of the imbalance [50,59,60]. Here, by measuring the infinite-temperature autocorrelation function, we provide solid experimental evidence for subdiffusion in tilted systems, induced by the emergence of strong Hilbert-space fragmentation [55][56][57]. Theoretically, it has been suggested that for a thermodynamically large system, any non-zero tilted potential, i.e., Δ > 0, leads to subdiffusive transport with α ≃ 1/4 [17,62]. In finite-size systems, both the results shown in Fig. 4 and cold-atom experiments on the tilted Fermi-Hubbard model [63] demonstrate a crossover from the diffusive regime to the subdiffusive one. Investigating how this crossover scales with increasing system size is a further experimental task, which calls for quantum simulators with a larger number of qubits.

Ensembles of Haar-random pure quantum states have several promising applications, including benchmarking quantum devices [42,64] and demonstrating beyond-classical computation [35][36][37][38][39]. Our work displays a practical application of randomly distributed quantum states, namely probing infinite-temperature spin transport. In contrast to digital random circuits, where the number of imperfect two-qubit gates is proportional to the qubit number [36][37][38][39][40][41], the scalable analog circuit adopted in our experiments can also generate multi-qubit Haar-random states useful for simulating hydrodynamics. The protocol employed in our work can be naturally extended to explore non-trivial transport properties on other analog quantum simulators, including Rydberg atoms [42,[65][66][67], quantum gas microscopes [68,69], and superconducting circuits with a central resonator bus enabling long-range interactions [21,70,71].

Methods

Derivation of Eq. (4)

Here we present the details of the derivation of Eq. (4), which is based on typicality [12,40,72]. According to Eq. (2),

c_{μ;ν} = Tr[σ̂ᶻ_μ(t) σ̂ᶻ_ν]/D = 2 Tr[σ̂ᶻ_μ(t) N̂_ν]/D,

where we used σ̂ᶻ_ν = 2N̂_ν − 1 and Tr[σ̂ᶻ_μ(t)] = 0. We note that N̂_ν is an operator that projects the state of the ν-th qubit onto the state |0⟩. According to typicality [12,40,72], the trace of an operator Ô can be approximated by the expectation value in a pure Haar-random state |r⟩, i.e.,

Tr[Ô]/D ≃ ⟨r|Ô|r⟩,

with statistical fluctuations that are exponentially suppressed in the number of qubits N. This indicates that the infinite-temperature expectation value Tr[Ô]/D can be accurately estimated by the Haar-random-state expectation value ⟨r|Ô|r⟩. Thus,

c_{μ;ν} ≃ 2⟨r|N̂_ν σ̂ᶻ_μ(t) N̂_ν|r⟩ ≃ ⟨ψ_ν|σ̂ᶻ_μ(t)|ψ_ν⟩,  with |ψ_ν⟩ = N̂_ν|r⟩/‖N̂_ν|r⟩‖,

since ‖N̂_ν|r⟩‖² ≃ 1/2. Based on the definition of the projector N̂_ν, N̂_ν|r⟩ is a Haar-random state on the whole system except for the ν-th qubit, so in the experiment only an (N − 1)-qubit Haar-random state is required.

Numerical simulations

Here we present the details of the numerical simulations.
The Krylov subspace is spanned by the vectors {|ψ(t)⟩, Ĥ|ψ(t)⟩, Ĥ²|ψ(t)⟩, ..., Ĥ^{m−1}|ψ(t)⟩}. The Hamiltonian Ĥ in the Krylov subspace becomes an m-dimensional matrix H_m = K†_m H K_m, where H denotes the Hamiltonian Ĥ in matrix form and K_m is the matrix whose columns contain the orthonormal basis vectors of the Krylov space. Finally, the unitary time evolution can be approximately simulated in the Krylov subspace as |ψ(t + Δt)⟩ ≃ K_m e^{−iH_mΔt} K†_m |ψ(t)⟩. In our numerical simulations, the dimension m of the Krylov subspace is adaptively adjusted between m = 6 and 30, ensuring that the numerical errors are smaller than 10⁻¹⁴.

For the numerical simulation of Û_R(t_R) in Fig. 1c, based on the experimental data of the XY drive, the parameters in Ĥ_d are Ω_{j,m}/2π = 10.4 ± 1.6 MHz and φ_{j,m} ∈ [−π/10, π/10].

Details of generating Haar-random states

In this section, we present more details on the generation of faithful Haar-random states. The analog quantum circuit employed to generate Haar-random states is

Û_R(t_R) = e^{−iĤ_R t_R},  Ĥ_R = Ĥ_I + Ĥ_d,

where Ĥ_I is given by Eq. (1) and Ĥ_d is the drive Hamiltonian defined in the main text. Here, we first numerically study the influence of the driving amplitude Ω_{j,m}. For convenience, we consider φ_{j,m} = 0 and an isotropic driving amplitude, i.e., Ω = Ω_{j,m} for all (j, m), on a fixed subset of qubits. The dynamics of the participation entropy S_PE for different values of Ω are plotted in Fig. 5a, and the values of S_PE at evolution times t = 200 ns and 1000 ns are displayed in Fig. 5b. For small Ω the growth of S_PE is slow, and it becomes more rapid with increasing Ω. In this experiment we chose Ω/J∥ ≃ 1.4, because the participation entropy then reaches S_PE^T within a relatively short evolution time t ≃ 200 ns. As Ω increases further, the time at which S_PE^T is reached does not become significantly shorter. Based on the above, Ω/J∥ ≃ 1.4 is an appropriate choice of driving amplitude.

Next, we numerically study the influence of randomness in the phases φ_{j,m} of the driving microwave pulses. In this experiment, owing to the crosstalk correction, the randomness of the phases is small, i.e., φ_{j,m} ∈ [−π/10, π/10]. Here, we also consider phases with large randomness, i.e., φ_{j,m} ∈ [−π, π]. The numerical results for the time evolution of S_PE with 5 samples of φ_{j,m} are plotted in Fig. 5c. With φ_{j,m} ∈ [−π, π], the participation entropy still tends to S_PE^T around 200 ns; only the short-time behaviors of the 5 samples differ slightly from each other (see the inset of Fig. 5c).

FIG. 2. Observation of diffusive transport. a, Experimental verification of the prepared states via the time evolution of the participation entropy. Here we chose Q_R = {Q_{1,↑}, Q_{2,↑}, ..., Q_{12,↑}, Q_{2,↓}, Q_{3,↓}, ..., Q_{12,↓}}, 23 qubits in total. The inset of a shows the corresponding quantum circuit. The dotted horizontal line represents the participation entropy of Haar-random states, i.e., S_PE^T ≃ 15.519. b, Experimental results for the autocorrelation function C_{1,1}(t) of the qubit ladder with L = 12, measured by performing the quantum circuit shown in Fig. 1c and d. Here, we consider the state generated by Û_R(t_R) with t_R = 200 ns, which approximates a Haar-random state. Markers are experimental data. The solid line is the numerical simulation of the correlation function C_{1,1} at infinite temperature. The dashed line represents the power-law decay t^{−1/2}. Error bars represent the standard deviation.
Supplementary Note 1. MODEL AND HAMILTONIAN

In this experiment, we use a ladder-type superconducting quantum processor with 30 programmable transmon qubits, identical to the device in ref. [22]. The optical micrograph and coupling strengths of the chip are shown in Fig. S1, and the device parameters are listed in Table S1. The Hamiltonian of the total system can be essentially described by a Bose-Hubbard model on a ladder,

Ĥ_BH/ℏ = Σ_{j=1}^{N} [h_j â†_j â_j − (E_{C,j}/2) â†_j â†_j â_j â_j] + Ĥ_I/ℏ,   (S1)

where ℏ is the reduced Planck constant, N is the total number of qubits, â† (â) denotes the bosonic creation (annihilation) operator, h_j is the tunable on-site potential, E_{C,j} is the on-site charging energy, representing the magnitude of the anharmonicity, and Ĥ_I is the Hamiltonian of the interactions between qubits. For qubits connected in a ladder with two coupled chains ('↑' and '↓'), the interaction Hamiltonian Ĥ_I mainly derives from the nearest-neighbor (NN) rung (vertical, '⊥') and intrachain (parallel, '∥') hopping couplings, namely

Ĥ_⊥ = ℏ Σ_{j=1}^{L} J⊥_j (â†_{j,↑} â_{j,↓} + H.c.),  Ĥ_∥ = ℏ Σ_{m∈{↑,↓}} Σ_{j=1}^{L−1} J∥_{j,m} (â†_{j,m} â_{j+1,m} + H.c.),

where L = N/2 is the length of each chain, and J⊥_j and J∥_{j,m} are the NN rung and intrachain coupling strengths. The mean values of J⊥_j/2π and J∥_{j,m}/2π are 6.6 MHz and 7.3 MHz, respectively. In addition, small next-nearest-neighbor (NNN) interactions are inevitably present, including hopping between the diagonal qubits of the upper and lower chains ('×': diagonal down '⧹' and diagonal up '⧸') and between NNN qubits on each chain ('∩'); the corresponding Hamiltonians are

Ĥ_× = ℏ Σ_{j=1}^{L−1} [J⧹_j (â†_{j,↑} â_{j+1,↓} + H.c.) + J⧸_j (â†_{j,↓} â_{j+1,↑} + H.c.)],  Ĥ_∩ = ℏ Σ_{m∈{↑,↓}} Σ_{j=1}^{L−2} J∩_{j,m} (â†_{j,m} â_{j+2,m} + H.c.),

where J⧹_j, J⧸_j and J∩_{j,m} are the strengths of the diagonal-down, diagonal-up and parallel NNN hopping interactions, respectively. In short, for the numerical simulations we consider Ĥ_I = Ĥ_⊥ + Ĥ_∥ + Ĥ_× + Ĥ_∩.

In our quantum processor, the anharmonicity (≥ 200 MHz) is much larger than the coupling strengths, and the model can be viewed as a ladder-type lattice of hard-core bosons [52], i.e., Eq. (1) in the main text. In principle, however, leakage to higher occupation states can be induced by the finite ratio of the averaged anharmonicity to the coupling strength, E_C/J. To qualitatively characterize whether the Bose-Hubbard model (S1) can be approximated by hard-core bosons, we consider the dynamics of the summed probability Σ_{max(s⃗)=1} p(s⃗), where s⃗ denotes a configuration of a product state. For instance, s⃗ = (1, 0, 1, 0, ..., 1, 0) corresponds to the Néel state |s⃗⟩ = |1010...10⟩. If the system were exactly a hard-core bosonic model, Σ_{max(s⃗)=1} p(s⃗) = 1. Here, we numerically simulate the dynamics of Σ_{max(s⃗)=1} p(s⃗) for the Hamiltonian of the superconducting circuit with the experimentally measured hopping interactions and anharmonicities. As an example, we adopt the system size L = 8 (N = 16 qubits) and a half-filling product state as the initial state |ψ₀⟩ (see the inset of Fig. S2a). The results are plotted in Fig. S2a. One can see that the summed probability of states with higher occupations, i.e., Σ_{max(s⃗)>1} p(s⃗) = 1 − Σ_{max(s⃗)=1} p(s⃗), only reaches a relatively small value ∼ 0.03 at the evolution time t ≃ 200 ns. Moreover, we numerically simulate the time evolution of the particle number ⟨n̂(t)⟩ ≡ ⟨ψ(t)|n̂|ψ(t)⟩ = Σ_i ⟨ψ(t)|n̂_i|ψ(t)⟩, with n̂_i ≡ |1⟩_i⟨1| counting occupation within the qubit subspace {|0⟩_i, |1⟩_i}, up to the experimental time scale t ≃ 200 ns. The results are displayed in Fig. S2b.
We emphasize that only the occupations of the states |0⟩ and |1⟩ are counted in the definition of n̂_i, while the finite E_C/J allows leakage to states with higher occupations, such as |2⟩. Consequently, the decay of ⟨n̂(t)⟩ shown in Fig. S2 quantifies the leakage induced by the finite E_C/J. The stable value of ⟨n̂(t)⟩/2L at t ≃ 200 ns is about 0.4966, indicating a moderate impact of the leakage on the conservation of the particle number. In short, the results in Fig. S2 suggest that the hard-core bosonic Hamiltonian (1) in the main text, with particle-number conservation, efficiently describes our superconducting quantum simulator.

Introducing the conjugate flux and charge operators Φ̂ and Q̂, respectively, the quantized Hamiltonian of a transmon is (the constant term is omitted)

Ĥ = ℏω â†â − (E_C/2) â†â†ââ,

where ω = (√(8 E_C E_J) − E_C)/ℏ denotes the qubit frequency, and E_C = e²/(2C) is the charging energy, representing the magnitude of the anharmonicity. For a single Josephson junction E_J is not tunable, while for a SQUID with two junctions it depends on the external flux Φ_ext applied to the junction loop. In the experiments, we adjust the qubit frequency ω via the fast flux bias applied through the Z control line.

When a time-dependent driving voltage V_d(t) is applied to a transmon qubit (Fig. S4), the driving current I_d splits into the qubit-capacitance term I_C and the Josephson-junction term I_J. Meanwhile, according to Kirchhoff's voltage law, the total voltage drop around either of the two branches must vanish. One thus obtains the equation of motion

C_Σ Φ̈ + Φ/L − C_d V̇_d = 0,   (S10)

where C_Σ = C + C_d and Φ = L I_J; here C, C_d and L are the qubit capacitance, the driving capacitance, and the nonlinear inductance, respectively. The above equation can be viewed as the Euler-Lagrange equation ∂L_driven/∂Φ − (d/dt) ∂L_driven/∂Φ̇ = 0 of the Lagrangian

L_driven = C Φ̇²/2 + C_d (Φ̇ − V_d)²/2 − Φ²/(2L),   (S11)

where C_d is the driving capacitance. In Eq. (S11), the first term represents the charging energy of C, the second term the charging energy of C_d caused by the induced electromotive force, and the last term the inductive energy of L. To obtain the Hamiltonian, we first compute the canonical momentum (charge) conjugate to the position (flux) Φ,

Q = ∂L_driven/∂Φ̇ = C_Σ Φ̇ − C_d V_d,

and perform the Legendre transformation. Using the canonical quantization procedure as in Eq. (S8), we introduce the ladder operators for Φ̂ and Q̂; the Hamiltonian then becomes the bare transmon Hamiltonian plus a drive term with time-dependent amplitude Ω(t). Setting the drive V_d(t) = −V_d sin(ω_d t + φ) = Im{V_d e^{−i(ω_d t + φ)}}, we have Ω(t) = iΩ[e^{i(ω_d t + φ)} − e^{−i(ω_d t + φ)}]/2, where Ω = εV_d is the so-called Rabi frequency and the parameter ε represents the Rabi frequency per unit drive amplitude.

To solve the time evolution governed by this time-dependent Hamiltonian, we move to the rotating frame at the drive frequency ω_d, in which Δ = ω − ω_d is the frequency detuning, and adopt the rotating-wave approximation by ignoring the fast oscillations at ±2ω_d.

With Δ = 0 and E_C ≫ Ω, the large anharmonicity makes the resonant drive act almost exclusively on the first two energy levels |0⟩ and |1⟩, without leakage to higher levels. Hence, restricting to the two-level qubit, we have

Ĥ_d = Ω (e^{−iφ} σ̂⁺ + e^{iφ} σ̂⁻)/2,

where σ̂⁺ (σ̂⁻) is the raising (lowering) operator. If the qubit starts in the ground state |0⟩, its time-dependent state during the unitary evolution is

|ψ(t)⟩ = cos(Ωt/2)|0⟩ − i e^{−iφ} sin(Ωt/2)|1⟩,
and the probability of the qubit being in |1⟩ is given by P₁(t) = sin²(Ωt/2) = [1 − cos(Ωt)]/2. Considering energy relaxation, the envelope of P₁(t) decays during dissipative evolution, and thus

P₁(t) ≈ [1 − e^{−t/T₁} cos(Ωt)]/2,   (S17)

where T₁ is the energy relaxation time, which depends on the qubit frequency ω. To obtain the Rabi frequency Ω, one can fit the data of P₁(t) with a function of the form A exp(−t/T₁) cos(Ωt) + B. Typical experimental data for calibrating the XY drive at different driving amplitudes are displayed in Fig. S5. The above results assume the resonance condition ω = ω_d. If the detuning Δ = ω − ω_d ≠ 0, the effective Rabi frequency becomes

Ω_R = √(Ω² + Δ²).

Therefore, to obtain the correct Rabi frequency at ω = ω_d, we must first find the Z-pulse amplitude that brings the qubit into resonance with the microwave before calibrating the XY drive. This step is easily accomplished via a spectroscopy experiment, or via Rabi oscillations while scanning the Z-pulse amplitude of the qubit.

B. Generation and manipulation

As shown in Fig. S6, we generate the XY drive pulse using an IQ mixer. The output driving pulse results from mixing the IQ signals with an intrinsic LO (Fig. S6). Although the Rabi frequency Ω is proportional to the actual driving amplitude V_d, the relationship between Ω and the input amplitude of the IQ signals, V_IQ, is not always linear, owing to the semiconductor nature of the IQ mixer (GaAs and similar materials). When V_IQ is relatively small, the IQ mixer is in its linear working region and V_d ∝ V_IQ holds. However, strong amplitudes lead to a nonlinear relationship between V_d and V_IQ, so that Ω ∝ V_IQ is not valid in the saturation region. This may be caused by velocity saturation of the carriers in the IQ mixer. To describe Ω versus V_IQ analytically, we impose a smooth piecewise function and its inverse (Eq. (S19)), where η, V_IQ^sat and Ω_max are the parameters to be fitted. Here, η is the slope in the linear region, representing the Rabi frequency per unit amplitude of the XY drive (IQ signals), V_IQ^sat denotes the critical amplitude before entering the saturation region of the IQ mixer, and Ω_max is the maximum Rabi frequency as V_IQ → ∞.

C. Origin of multi-qubit crosstalk

Now consider two driven qubits Q_i and Q_j in the circuit (see Fig. S7). The total Lagrangian can be expressed in analogy with L_driven above, with an additional coupling term involving the coupling capacitance C_ij. From the corresponding canonical momenta one obtains the conjugate charges.
Substituting Eq. (S23) into Eq. (S21), we obtain the Lagrangian in terms of the effective capacitance parameters, and the total Hamiltonian follows from the Legendre transformation. Using canonical quantization, we introduce Q̂_q = i Q_zpf,q (â†_q − â_q) and Φ̂_q = Φ_zpf,q (â†_q + â_q), with q ∈ {i, j}, Q_zpf,q = [ℏ(C̃_{Σq}/L_{c,q})^{1/2}/2]^{1/2} and Φ_zpf,q = [ℏ(L_{c,q}/C̃_{Σq})^{1/2}/2]^{1/2}, where the effective parameters are given in Eqs. (S31), (S35) and (S36). Focusing on Eqs. (S31), (S35) and (S36), one notices that the local driving Hamiltonian of each qubit depends on both external drives V_{d,i}(t) and V_{d,j}(t), owing to the presence of the coupling capacitance. However, this crosstalk is usually very small. As an example, taking the typical values C_{d,i} = C_{d,j} = 30 aF, C_i = C_j = 85 fF and C_ij = 0.25 fF, the resulting signal crosstalk matrix M_V is very close to the identity.

D. Measurement and correction of crosstalk

To compensate for the crosstalk, we need to measure the total signal crosstalk matrix and apply the corrected drive amplitudes obtained with its inverse, M_V⁻¹. In practice, however, we cannot obtain M_V directly; instead, we characterize the crosstalk matrix of Rabi frequencies, M_Ω, and calculate M_V from it using ε = diag{ε₁₁, ε₂₂, ..., ε_NN}. The crosstalk matrix of Rabi frequencies is defined through the amplitude and phase crosstalk coefficients c_ij and φ_ij, which are to be measured.

In the linear region of the IQ mixer, we use Eq. (S19) to describe the relationship between the Rabi frequency and the input IQ signal, with η = diag{η₁, η₂, ..., η_N} and η_i the Rabi frequency of Q_i per unit amplitude of the IQ signals. We now introduce an efficient method for characterizing c_ij and φ_ij in the crosstalk matrix M_Ω. Take Q_i as an example. As shown in Fig. S9a, drive pulses are applied to the control lines of Q_i and Q_j. Meanwhile, Q_i is biased near the resonance frequency with detuning Δ_i = ω_i − ω_{d,i}. Due to the crosstalk, the drive on Q_i in the rotating frame acquires an additional crosstalk term, and the corresponding effective Rabi frequency is

Ω_R^{(i)} = √(Δ_i² + |Ω_i + Ω_ij e^{i(φ_ii + φ_ij)}|²),   (S48)

where Ω_ij = c_ij Ω_j denotes the crosstalk Rabi frequency from Q_j to Q_i, and φ_ii = φ_j − φ_i is the additional XY phase applied to Q_j relative to Q_i. By scanning φ_ii and measuring the probability of Q_i being in |1⟩ as a function of the XY-drive duration, we obtain Ω_R^{(i)}. Fitting the results for Ω_R^{(i)} with Eq. (S48), we determine the crosstalk coefficients c_ij and φ_ij. The procedure for determining c_ji and φ_ji is analogous, with the roles of Q_i and Q_j exchanged. The partial crosstalk matrix between the 24 qubits used in the experiments is shown in Fig. S10.

To quantify the effect of decoherence, we solve the Lindblad master equation

dρ/dt = −i[Ĥ, ρ] + Σ_j (L̂_j ρ L̂_j† − ½{L̂_j† L̂_j, ρ}),   (S49)

where ρ(t) = |ψ(t)⟩⟨ψ(t)| is the density matrix, Ĥ is the Hamiltonian of Eq. (1) in the main text, and L̂_j = σ̂⁻_j/√T₁ are the Lindblad operators for energy relaxation, with T₁ the energy lifetime. For the numerical simulation, we adopt T₁ = 32.1 µs based on the device information in Table S1. Here, we consider a ladder with N = 16 qubits and the same initial state shown in the inset of Fig. S2a. We employ the stochastic Schrödinger equation to efficiently solve the Lindblad master equation (S49). First, we study the dynamics of the particle number ⟨n̂(t)⟩ under decoherence; the results are plotted in Fig. S11a.
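As a rough illustration of this Lindblad simulation (the actual computation uses a stochastic Schrödinger equation on N = 16 qubits), the sketch below propagates a small 4-qubit hard-core chain with energy-relaxation collapse operators using QuTiP's mesolve; the coupling and T₁ approximate the quoted device values, and the chain stands in for one ladder leg.

```python
import numpy as np
from qutip import basis, destroy, qeye, tensor, mesolve

N_q, T1 = 4, 32100.0                      # qubits; T1 in ns (32.1 us)
sm = [tensor([destroy(2) if k == j else qeye(2) for k in range(N_q)])
      for j in range(N_q)]                # lowering operator on each qubit
J = 2 * np.pi * 7.3e-3                    # ~7.3 MHz hopping, in rad/ns
H = sum(J * (sm[j].dag() * sm[j + 1] + sm[j + 1].dag() * sm[j])
        for j in range(N_q - 1))          # hard-core hopping chain (toy model)
c_ops = [s / np.sqrt(T1) for s in sm]     # energy-relaxation collapse operators
psi0 = tensor([basis(2, j % 2) for j in range(N_q)])  # |0101>-like initial state
tlist = np.linspace(0.0, 200.0, 101)      # ns
n_ops = [s.dag() * s for s in sm]         # excitation number on each qubit
res = mesolve(H, psi0, tlist, c_ops, e_ops=n_ops)
total_n = sum(res.expect)                 # total particle number vs time
```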
With an evolution time t = 200 ns, the value of ⟨n̂(t)⟩/2L is around 0.497, suggesting that decoherence does not significantly affect the conservation of the particle number. We further demonstrate numerically that decoherence does not strongly affect the dynamics of the autocorrelation function C_{1,1}(t) for evolution times up to 200 ns: the dynamics of C_{1,1}(t) simulated by solving the Lindblad master equation (S49) are nearly identical to the unitary dynamics (see Fig. S11b).

To generate Haar-random states via the evolution Û_R in this experiment (see the main text or Fig. S12a), we bias the auxiliary qubit Q_A away from the resonance frequency and apply the XY drive pulses to all the remaining qubits Q_R participating in the resonance. The experimental pulse diagram is shown in Fig. S12b. After a time t_R, we perform a joint readout of Q_R with N_s single-shot measurements to obtain the joint probabilities p_k, and then calculate the participation entropy

S_PE = − Σ_{k=1}^{D} p_k ln p_k,

where D = 2^N is the total dimension of the Hilbert space.

Actually, in ref. [15], it has been shown that the infinite-temperature autocorrelation function can be expanded as the average C_{r,r}(t) = (1/D) Σ_{|ψ₀⟩} ⟨ψ₀|ρ̂_r(t) ρ̂_r|ψ₀⟩ over the z-basis product states |ψ₀⟩.

FIG. 1. Superconducting quantum simulator and experimental pulse sequences. a, Schematic of the ladder-type superconducting quantum simulator, consisting of 30 qubits (blue region), labeled Q_{1,↑} to Q_{15,↑} and Q_{1,↓} to Q_{15,↓}. Each qubit is coupled to a separate readout resonator (green region) and has an individual control line (red region) for both XY and Z controls. b, Schematic of the simulated 24 spins coupled in a ladder. The blue and yellow double arrows represent infinite-temperature spin hydrodynamics without preference for spin orientation. c, Schematic of the quantum circuit for measuring the autocorrelation functions at infinite temperature. All qubits are initialized in the state |0⟩. Subsequently, an analog quantum circuit Û_R(t_R) acts on the set of qubits Q_R to generate Haar-random states. This is followed by a time evolution of all qubits, Û_H(t) = exp(−iĤt), with Ĥ the Hamiltonian of the system whose spin-transport properties are of interest. d, Experimental pulse sequences corresponding to the quantum circuit in c, displayed in the frequency (ω) versus time (T) domain. To realize Û_R(t_R), the qubits in Q_R are tuned to the working point (dashed horizontal line) via Z pulses and, simultaneously, resonant microwave pulses (sinusoidal line) are applied to Q_R through the XY control lines. Meanwhile, qubit Q_A is detuned from the working point by a large frequency gap Δ. To realize the subsequent evolution Û_H(t) with the Hamiltonian (1), all qubits are tuned to the working point.
FIG. 3. Subdiffusive transport on the superconducting qubit ladder with disorder. a, The time evolution of the autocorrelation function C_{1,1}(t) for the qubit ladder with L = 12 and different values of the disorder strength W, ranging from W/2π = 3.5 MHz (W/J∥ ≃ 0.5) to W/2π = 70 MHz (W/J∥ ≃ 9.6). Markers (lines) are experimental (numerical) data. b, Transport exponent α as a function of W, obtained from fitting the data of C_{1,1}(t). Error bars (experimental data) and shaded regions (numerical data) represent the standard deviation.

FIG. 4. Subdiffusive transport on the superconducting qubit ladder with a linear potential. a, Time evolution of the autocorrelation function C_{1,1}(t) for the tilted qubit ladder with L = 12 and W_S/2π ≤ 20 MHz. b is similar to a, but for data with W_S/2π ≥ 24 MHz. Markers (lines) are experimental (numerical) data. c, Transport exponent α as a function of W_S. For W_S/2π ≤ 20 MHz and W_S/2π ≥ 24 MHz, the exponent α is extracted from fitting the data of C_{1,1}(t) in the time windows t ∈ [50 ns, 200 ns] and t ∈ [100 ns, 400 ns], respectively. Error bars (experimental data) and shaded regions (numerical data) represent the standard deviation.

FIG. S2. Demonstration of the hard-core bosonic model. a, Time evolution of Σ_{max(s⃗)=1} p(s⃗) for the Hamiltonian of the superconducting circuit described by the Bose-Hubbard model (S1), with a system size L = 8. The inset shows a schematic of the chosen initial state, where the sites marked with solid black circles are initialized in the state |1⟩ and the remaining sites in |0⟩. b, The dynamics of the particle number ⟨n̂(t)⟩ for the Hamiltonian of the superconducting circuit with a system size L = 8.

FIG. S4. Circuit diagram of a driven transmon qubit. The qubit is coupled to a time-dependent driving voltage V_d. The capacitances of the qubit and the drive are labeled C and C_d, respectively. The magnetic flux threading the loop is denoted Φ. The driving current I_d splits into I_C and I_J.

FIG. S5. Typical experimental data for measuring the relationship between the Rabi frequency and the XY drive amplitude. a, Experimental pulse sequence. The qubit is detuned from its idle frequency to the operating frequency ω_i. Meanwhile, we apply resonant microwave drives to this qubit while scanning the XY amplitude V_IQ and measure the vacuum Rabi oscillations shown in b. b, Heatmap of the probability of the qubit being in the state |1⟩ as a function of duration and XY amplitude. c, For each XY drive amplitude, we fit the vacuum Rabi oscillation using Eq. (S17) to obtain the experimental Rabi frequency, denoted by black hollow circles. The red solid line is a fit of the experimental Rabi frequencies with a smooth piecewise function, and the grey dashed line indicates the linear relationship between the Rabi frequency and the XY drive amplitude when the drive amplitude is below V_IQ^sat.
FIG. S6. Generation of the XY drive via frequency mixing. The intrinsic local oscillator (LO) signal is generated by a microwave source, while the input IQ signals are generated by two channels of an arbitrary waveform generator. The signals are mixed at room temperature and then sent into the cryoelectronics (dilution refrigerator). If the amplitude of the LO is fixed, the output pulse amplitude is proportional to the amplitude of the IQ signals for small amplitudes, where the IQ mixer operates in its linear region.

FIG. S8. Schematic of microwave signal crosstalk. Here, we take two qubits Q_i and Q_j as an example. Their individual driving voltages V_{d,i}(t) and V_{d,j}(t) induce two types of crosstalk. One type is due to the presence of the coupling capacitance C_ij, which causes crosstalk only in amplitude; the parameters ε_ij and ε_ji, explained in Eq. (S36), depend on the coupling capacitance C_ij between the two qubits. The other type is caused by the propagation of microwave signals through the medium on the chip; according to electrodynamics, it leads to crosstalk in both amplitude and phase. The parameters ξ and φ are the amplitude attenuation factor and the phase retardation of the microwave propagation, respectively.

FIG. S10. Partial crosstalk matrix of the XY drive. The heatmap represents the modulus of the crosstalk coefficients, |c_ij|. Here, we show the crosstalk between the 24 qubits of the ladder.

FIG. S11. The effect of decoherence. a, For the qubit ladder with length L = 8 (N = 16 qubits), the dynamics of the particle number ⟨n̂(t)⟩ with decoherence, i.e., energy relaxation, quantified by T₁ = 32 µs. b, The dynamics of the autocorrelation function C_{1,1}(t) with decoherence (dashed curve), in comparison with the unitary dynamics (solid curve).

FIG. S12. Generation and characterization of the XY-drive approach to preparing Haar-random states. a, Schematic of the quantum circuit. b, The corresponding experimental pulse sequence. We bias the auxiliary qubit Q_A away from the resonance frequency and apply the XY drive pulses to all remaining qubits Q_R participating in the resonance at frequency ω_ref ≈ 4.534 GHz, for a duration t_R. c, The evolution of the participation entropy S_PE versus the duration of the XY drive. The dashed line represents the participation entropy of an N-qubit Haar-random state. Here, we fix Q_{1,↑} as Q_A, and N is the total number of qubits in Q_R. d, The bitstring histogram of the measured D = 2^N joint probabilities. The solid line shows the ideal Porter-Thomas distribution. For N = 15 and N = 23, we performed N_s = 5 × 10⁵ and N_s = 3 × 10⁷ single-shot measurements, respectively.

FIG. S15. Impact of the finite-time effect. a, Numerical results for the time evolution of the autocorrelation function C_{1,1}(t) for the qubit ladder with L = 12 and two disorder strengths, W/2π = 32 MHz and 50 MHz. The evolution extends to a longer time, t = 600 ns. The dashed lines show the power-law fit C_{1,1} ∝ t^{−z}. b, For the disordered system with W/2π = 32 MHz, the transport exponent z obtained from the power-law fit to the numerical results over the time interval t ∈ [t_i, t_f], with t_i = 50 ns and different t_f. c is similar to b, but for the disordered system with W/2π = 50 MHz.
S16.Additional numerical results for the spin transport on the titled superconducting qubit ladder.a, Schematic diagram of three different product states |ψ0⟩ for the definition of the autocorrelation function C1,1 = ⟨ψ0|ρ1(t)ρ1|ψ0⟩.From the top to bottom, the domain wall number of product states |ψ0⟩ is ndw = 10, 4, and 2, respectively.b, Time evolution of the autocorrelation function C1,1 = ⟨ψ0|ρ1(t)ρ1|ψ0⟩ with the product states shown in a for the titled superconducting qubit ladder with WS/2π = 60 MHz.
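The captions above describe extracting the transport exponent by power-law fitting of the autocorrelation function over a fixed time window (e.g., C1,1 ∝ t^(−α) for t ∈ [50 ns, 200 ns]). A minimal sketch of such a fit is given below; this is not the authors' analysis code, and the synthetic data, window bounds, and noise level are illustrative assumptions only.

```python
import numpy as np

def transport_exponent(t_ns, c11, t_min=50.0, t_max=200.0):
    """Fit C11(t) ~ t**(-alpha) within [t_min, t_max] (ns) on log-log axes.

    The slope of log(C11) vs. log(t) is -alpha, so its negative is returned.
    """
    t_ns, c11 = np.asarray(t_ns, float), np.asarray(c11, float)
    mask = (t_ns >= t_min) & (t_ns <= t_max) & (c11 > 0)
    slope, _intercept = np.polyfit(np.log(t_ns[mask]), np.log(c11[mask]), 1)
    return -slope

# Synthetic subdiffusive decay with alpha = 0.4 plus multiplicative noise.
rng = np.random.default_rng(0)
t = np.linspace(10, 400, 200)
c = t ** -0.4 * (1.0 + 0.02 * rng.standard_normal(t.size))
print(f"fitted alpha ~ {transport_exponent(t, c):.2f}")  # close to 0.40
```

Repeating the fit over different windows [ti, tf], as in FIG. S15, is a simple loop over (t_min, t_max) pairs and shows how sensitive the extracted exponent is to the finite evolution time.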
2023-10-11T18:44:12.767Z
2023-10-10T00:00:00.000
{ "year": 2024, "sha1": "a67ff2c84c939ec8205e7d675a01e1059d72abcf", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "ArXiv", "pdf_hash": "bb550752c3433d750c442c002c1b5adfb01cfc0b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
263633396
pes2o/s2orc
v3-fos-license
Neuroendoscopy surgery for hypertensive intracerebral hemorrhage with concurrent brain herniation: a retrospective study of comparison with craniotomy Background Hypertensive intracerebral hemorrhage combined with cerebral hernia (HIH-CH) is a serious condition. Neuroendoscopy can effectively remove intracranial hematoma, but there is no research supporting its utility in patients with HIH-CH. The purpose of this study is to investigate the efficacy and safety of neuroendoscopy in patients with HIH-CH. Methods Patients with HIH-CH who received craniotomy or neuroendoscopic treatment were included and divided into a craniotomy (CHE) group and a neuroendoscopy (NEHE) group. Clinical data and follow-up outcomes of the two groups were collected. The primary outcome was hematoma clearance. Results The hematoma clearance rate (%) of patients in the NEHE group was 97.65 (92.75, 100.00), and that of patients in the CHE group was 95.00 (90.00, 100.00), p > 0.05. The operation time and intraoperative bleeding volume in the NEHE group were significantly lower than those in the CHE group (p < 0.05). There was no significant difference in the volume of residual hematoma or the incidence of rebleeding between the two groups (p > 0.05). The length of stay in the ICU in the NEHE group was significantly shorter than that in the CHE group (p < 0.05). Conclusion Neuroendoscopy can safely and effectively remove the intracranial hematoma in patients with hypertensive intracerebral hemorrhage and cerebral hernia, significantly shorten the operation time, reduce intraoperative hemorrhage, and shorten the ICU stay. Highlights
- What is already known on this topic: Drainage and craniotomy are the main treatments for hypertensive intracerebral hemorrhage with cerebral hernia (HIH-CH); there is a lack of research supporting the utility of neuroendoscopy in these patients.
- What this study adds: Neuroendoscopy can safely and effectively remove the intracranial hematoma in patients with hypertensive intracerebral hemorrhage and cerebral hernia.
- How this study might affect research, practice or policy: In future clinical practice and research, neuroendoscopy should be a reasonable choice for patients with HIH-CH.
Introduction Hypertension is a common disease worldwide, imposing a huge burden on humanity and causing about 10 million deaths every year (1). Epidemiological data show that there were about 1.39 billion hypertensive patients worldwide in 2010 (2). China has a large hypertensive population: according to previous studies, the prevalence of hypertension among Chinese adults (≥ 18 years old) is 27.9%, corresponding to 245 million adult hypertensive patients (3). Hypertensive intracerebral hemorrhage is one of the serious complications of hypertension, accounting for 28% of all strokes in European and American countries and 48% in China (3,4). Cerebral hernia is one of the most serious manifestations of hypertensive intracerebral hemorrhage and can lead to secondary brain stem injury with high mortality and poor prognosis (5,6). At present, drainage and craniotomy are the main treatments for hypertensive intracerebral hemorrhage with cerebral hernia (6). In recent years, neuroendoscopy has gradually been applied to the treatment of hypertensive intracerebral hemorrhage, with good results (7)(8)(9). Neuroendoscopy allows clear exploration of the intracranial structures and provides a degree of magnification, so that even large intracranial lesions can be removed through a small incision (10,11). Endoscopic neurosurgery can reduce the risk of bleeding and infection during the operation. It is also performed under direct vision, which protects brain tissue by reducing stretch and damage. Nishihara et al. showed that, compared with hematoma puncture and drainage, neuroendoscopy can remove the hematoma faster, relieve the compression of the hematoma on surrounding normal brain tissue, reduce brain edema, and shorten the length of stay in the ICU (10,12). Previous studies have shown that endoscopic neurosurgery is superior to microsurgery in terms of intraoperative bleeding, hospital stay, and lung infection rate, and in NIHSS score it is superior to craniotomy, significantly reducing disability and mortality (13,14). However, there is a lack of research supporting the utility of neuroendoscopy in hypertensive intracerebral hemorrhage combined with cerebral hernia. The purpose of this study is to investigate the efficacy and safety of neuroendoscopy in patients with hypertensive intracerebral hemorrhage combined with cerebral hernia through a preliminary retrospective analysis, so as to provide a reference for deciding whether to use neuroendoscopy in clinical practice and for future research.
Study population Hypertensive cerebral hemorrhage patients admitted to our department from January 2015 to June 2021 were enrolled. Inclusion criteria: (1) age ≥ 18; (2) supratentorial hemorrhage confirmed by CT in all cases; (3) dilated pupil on one side on clinical physical examination; (4) acute-onset hemorrhage with emergency operation performed; (5) follow-up for more than 3 months. Exclusion criteria: (1) bilateral mydriasis; (2) malignant tumor, rheumatic disease, arteriovenous malformation, aneurysm, moyamoya disease, severe cardiopulmonary disease, diabetes, renal insufficiency, or coagulation dysfunction; (3) incomplete follow-up data. In this study, cerebral hernia was defined as midline shift confirmed on CT together with an ipsilateral large fixed pupil. This study was approved by the Ethics Committee of the 909th Hospital, School of Medicine, Xiamen University (approval number: L2022011). As a retrospective study, patients were exempted from signing informed consent. This study strictly follows the STROBE statement and its checklist. After the patients were included in the final analysis, they were divided into a neuroendoscopic hematoma evacuation group (NEHE group) and a craniotomy hematoma evacuation group (CHE group). Treatment All cases underwent emergency operation under general anesthesia. Bedside head CT was reviewed routinely within 3 h after the operation. All patients received early rehabilitation. Neuroendoscopic hematoma evacuation Before the operation, the thickest section of the hematoma and the puncture point nearest to the body surface were located by conventional bedside CT, and the puncture path was planned to avoid functional areas. A 3.5-4.5 cm incision was made, centered on the puncture point. According to the size of the hematoma, a round bone window with a diameter of 2.0-2.5 cm was made. After tenting and incision of the dura mater, the cortex was punctured with a self-made transparent endoscopic channel under endoscopic monitoring, and once the hematoma cavity was reached, the hematoma was cleared under direct endoscopic vision. Obvious bleeding was controlled with unipolar electrocoagulation and an aspirator; for minor bleeding from the hematoma cavity wall, hemostatic gauze, gelatin sponge, or flowable gelatin was used. If the hematoma had broken into the ventricles, a drainage tube was placed at the hematoma site after the operation (when no bleeding was found on postoperative head CT, the drainage tube was removed within 3 days). The channel was then withdrawn and the bone defect was repaired. At the end of surgery, the skull cavity was closed tightly and the scalp was sutured carefully. The specific operative steps are shown in Figure 1, and perioperative CT images of typical cases are shown in Figure 2.
Craniotomy hematoma evacuation Craniotomy was performed with a bone flap under the microscope. A horseshoe-shaped incision was made routinely, and the bone flap was formed with a milling cutter. After opening the dura mater, the cortex was separated from the hematoma with a brain retractor, and the hematoma was removed under the microscope. If the hematoma had broken into the ventricles, a drainage tube was retained at the hematoma site after surgery (when no bleeding was found on postoperative head CT, the drainage tube was removed within 3 days); the bone flap was removed, a subcutaneous drainage tube was retained, the skull cavity was closed routinely, and the scalp was sutured. Outcome The primary outcome of this study was the hematoma clearance rate, calculated as: hematoma clearance rate = (preoperative hematoma volume − postoperative residual hematoma volume) / preoperative hematoma volume × 100%. Hematoma volume was calculated with 3D Slicer software (a minimal computational sketch of this calculation is given below). The secondary outcomes were operation time, intraoperative bleeding, postoperative bleeding, massive cerebral infarction, Glasgow Outcome Scale (GOS) score at 3-month follow-up, hospital stay and ICU stay, and the incidence of adverse events during the follow-up period. Follow-up All patients received regular long-term follow-up after surgery, with scheduled follow-up at 1 month and 3 months. If the condition changed outside the scheduled follow-up, patients could visit the outpatient clinic at any time. Each follow-up included, but was not limited to: blood cell count, routine urine and stool tests, liver function, kidney function, coagulation function, brain CT, GOS score, National Institutes of Health Stroke Scale (NIHSS) score, Activities of Daily Living (ADL) score, and Quality of Life (QOL) score. The NIHSS, ADL, and QOL scores were obtained in the outpatient clinic by physicians (ZY, ZX, WJ, FL, and WH). Adverse events during the follow-up period were also recorded.
Figure caption: CT comparison of typical cases of hypertensive intracerebral hemorrhage before and after endoscopic clearance. (A): Simple lobar hematoma; the hematoma almost breaks out of the cortex. Bedside CT locates the shallowest part of the hematoma as the puncture point, and a channel is placed to remove the hematoma. (B): A large basal ganglia hematoma reaching the temporal lobe cortex. Bedside CT locates the shallowest part of the hematoma as the puncture point, and a channel is placed to remove the hematoma. (C): A basal ganglia and thalamic hematoma breaking into the ventricles, extending from the cortex to the thalamus; the hematoma is cleared along its longest diameter through a temporal lobe fistula, so the channel does not need to swing much and the damage is small. (D): A giant fusiform hematoma in the basal ganglia removed under navigation via the long frontal-axis approach. (E): A basal ganglia hematoma breaking into the ventricle; the channel is placed through the ventricular puncture point under navigation guidance, and the basal ganglia hematoma and part of the intraventricular hematoma are cleared.
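As defined above, the clearance rate is a simple ratio of volumes. The following minimal sketch makes the calculation explicit; it assumes volumes (in mL) have already been measured from 3D Slicer segmentations, and the example numbers are hypothetical.

```python
def hematoma_clearance_rate(preop_volume_ml: float, residual_volume_ml: float) -> float:
    """Hematoma clearance rate (%) as defined in the Methods:
    (preoperative volume - postoperative residual volume) / preoperative volume x 100.
    """
    if preop_volume_ml <= 0:
        raise ValueError("preoperative volume must be positive")
    return (preop_volume_ml - residual_volume_ml) / preop_volume_ml * 100.0

# Hypothetical example: a 60 mL hematoma with 2 mL residual -> ~96.7% clearance.
print(round(hematoma_clearance_rate(60.0, 2.0), 2))
```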
Data collection All data were extracted from the electronic medical record system, including baseline data (demographic information: age, sex, history of cigarette or alcohol use, and medical history) and preoperative physical examination information (height, weight, systolic blood pressure, diastolic blood pressure, and consciousness). The operation time, intraoperative bleeding volume, residual hematoma volume, recurrent postoperative bleeding, and secondary massive cerebral infarction were recorded. Severe disability on the GOS was defined as: the patient is conscious, but function is extremely limited and long-term care is required. Massive cerebral infarction was defined as an infarct diameter > 3 cm involving more than 2 anatomical areas, or an infarct area > 20 cm² involving more than 2 anatomical areas (15). Ultra-early surgery was defined as hematoma evacuation conducted within 4 h after stroke onset. Statistical analysis SPSS 24.0 statistical software (IBM, United States) was used for statistical analysis. Continuous variables were represented by the median or mean ± standard deviation, and the Wilcoxon rank-sum test or Student's t test was used for comparisons between groups. Categorical variables were expressed as counts and percentages, and comparisons between the two groups were performed with Pearson's chi-square test or Fisher's exact test. A two-sided p < 0.05 was considered statistically significant (an illustrative code sketch of these tests appears below). Baseline characteristics According to the inclusion and exclusion criteria, 111 patients with hypertensive intracerebral hemorrhage combined with cerebral hernia were included: 60 patients underwent endoscopic surgery and 51 underwent craniotomy (Table 1). There was no significant difference between the two groups in baseline characteristics, preoperative GCS score, laboratory test results, or location of cerebral hemorrhage (p > 0.05). There was also no significant difference in the time from onset to surgical treatment (p > 0.05). Comparison of surgical efficacy and complications The hematoma clearance rate (%) in the NEHE and CHE groups was 97.65 (92.75, 100.00) and 95.00 (90.00, 100.00), respectively, with no significant difference between the two groups (p > 0.05). The operation time and intraoperative bleeding volume in the NEHE group were significantly lower than those in the CHE group (p < 0.05). There was no significant difference in the volume of residual hematoma or in the incidence of recurrent postoperative bleeding between the two groups (p > 0.05), and no recurrent postoperative bleeding occurred in patients who underwent ultra-early hematoma evacuation. One patient in the NEHE group was converted to CHE due to recurrent postoperative bleeding of 40 mL. The incidence of massive cerebral infarction in the NEHE group was lower than that in the CHE group, but the difference was not statistically significant (p > 0.05). There was no significant difference in the rates of pulmonary infection, gastrointestinal bleeding, or tracheotomy between the two groups (Table 2). No intracranial infection occurred in any patient.
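To make the statistical workflow concrete, the sketch below reproduces the two kinds of comparison described above using SciPy instead of SPSS. It is illustrative only: the group values and event counts are hypothetical, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical hematoma clearance rates (%) for the two groups.
nehe = np.array([97.7, 92.8, 100.0, 95.5, 98.2, 96.1])
che = np.array([95.0, 90.0, 100.0, 93.1, 96.4, 91.8])

# Continuous, non-normal variable: Wilcoxon rank-sum (Mann-Whitney U) test.
_, p_continuous = stats.mannwhitneyu(nehe, che, alternative="two-sided")

# Categorical variable (e.g., rebleeding yes/no per group): chi-square test
# on a 2x2 contingency table of hypothetical counts.
table = np.array([[5, 55],   # NEHE: events, non-events
                  [6, 45]])  # CHE: events, non-events
chi2, p_categorical, dof, _expected = stats.chi2_contingency(table)

print(f"rank-sum p = {p_continuous:.3f}, chi-square p = {p_categorical:.3f}")
```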
Outcomes The treatment results of the two groups were compared (Table 3). The length of stay in the ICU in the NEHE group was significantly shorter than that in the CHE group (p < 0.05). The GOS score in the NEHE group was significantly higher than that in the CHE group 3 months after the operation (p < 0.05). The vegetative state rate (7/60, 11.7%) and severe disability rate (19/60, 31.7%) in the NEHE group were lower than those in the CHE group (10/51, 19.6% and 21/51, 41.2%, respectively), but the differences were not statistically significant (p > 0.05). In the NEHE group, 2 patients died of pulmonary infection after discharge. Seven patients died in the CHE group: 2 with a large amount of recurrent postoperative bleeding, 3 with secondary massive cerebral infarction, 1 with secondary brain stem hemorrhage after surgery, and 1 with brain swelling after a second surgery. Discussion Neuroendoscopic treatment of hypertensive intracerebral hemorrhage has been supported by many studies and shows significant benefits compared with traditional craniotomy. However, because of the severity of the condition, few patients with hypertensive intracerebral hemorrhage combined with cerebral hernia have been treated with neuroendoscopy. The results of this study show that neuroendoscopy achieves the same hematoma clearance rate as craniotomy in the treatment of hypertensive intracerebral hemorrhage with cerebral hernia, with shorter operation time, less intraoperative hemorrhage, and a lower incidence of massive cerebral infarction. Moreover, the 3-month follow-up results showed that the outcomes (mortality, vegetative state rate, and severe disability rate) of patients treated with neuroendoscopy were not inferior to those of patients treated with craniotomy. Overall, these results show that neuroendoscopy can be used safely and effectively to treat patients with hypertensive intracerebral hemorrhage combined with cerebral hernia. Hypertension is the most important cause of cerebral hemorrhage, especially uncontrolled hypertension. Previous studies have shown that hypertensive patients have a 3.5- to 9-fold increased risk of cerebral hemorrhage compared with people with normal blood pressure (16,17). However, the global control of hypertension is not optimistic, especially in China, where only 9% of patients reach the treatment goal (systolic blood pressure < 140 mmHg) (18). Therefore, there is a large population of current and potential hypertensive intracerebral hemorrhage patients. There are still many disputes about the treatment of hypertensive intracerebral hemorrhage; in particular, for most supratentorial intracerebral hemorrhage, the effectiveness of surgery remains unclear (19,20). Conservative treatment is often adopted when the hematoma is small, and stereotactic drainage with injection of urokinase is also used to dissolve the hematoma (19,20). However, for patients with a large hematoma and progressive deterioration, especially those with brain hernia, surgical treatment is still needed to save lives (21). For patients with cerebral hernia, the conventional approach is craniotomy with a bone flap and removal of the hematoma under the microscope. Because of the relatively large surgical trauma, traction on brain tissue causes severe postoperative edema in the surgical area, and secondary cerebral infarction may occur, often requiring decompressive craniectomy
(22). After decompressive craniectomy, because the bone window no longer protects the brain, the brain tissue is dragged and sways, which aggravates the formation of a softening focus and is more likely to induce epilepsy (23,24). At a later stage the defect must be repaired, causing secondary injury and increasing the economic burden on patients. Neuroendoscopic treatment of intracerebral hemorrhage has the advantages of small trauma, high safety, fast recovery, and low cost (25). Neuroendoscopic removal of intracranial hematoma has been widely performed, but mostly in patients with relatively small hematomas and without brain hernia (26,27). The results of this study show that neuroendoscopic treatment of hypertensive intracerebral hemorrhage with cerebral hernia is safe and effective. We believe the effectiveness and safety of this strategy relate to the following points. First, the hematoma is effectively cleared under direct vision: our results showed no significant difference in the hematoma clearance rate between endoscopy and craniotomy. Second, the operation time is significantly shorter than that of craniotomy, which also shortens the duration of brain herniation. The key to treating brain hernia is to clear the hematoma as soon as possible and relieve the herniation; neuroendoscopic surgery takes less time from skin incision to hematoma removal than traditional craniotomy, and the shorter the duration of brain herniation, the smaller the secondary damage (28). Third, neuroendoscopic surgery is minimally invasive. A channel with a diameter of only about 2.0 cm is used to remove the hematoma under the endoscope, causing less traction damage to the cortex and brain tissue than traditional craniotomy. Moreover, the endoscope can reach the deep part of the hematoma directly, and an angled scope can visualize the surrounding hematoma without large swings of the channel or pulling on the brain tissue. The microscope, by contrast, provides a columnar field of vision; to expose deep and surrounding hematoma, the cortex must be retracted, which may cause more secondary damage (29). Fourth, intraoperative bleeding is minimal. Most of the bleeding in hypertensive intracerebral hemorrhage surgery comes from the scalp flap and the bone window edge, with little bleeding from clearance of the hematoma itself (30); a small bone window and small incision reduce the bleeding area, and a shorter operation also reduces the bleeding time.
The results of this study showed that, for patients with preoperative unilateral mydriasis, or bilateral mydriasis in which pupillary retraction had been normal before the operation, the incidence of secondary massive cerebral infarction after the operation was not high with either the microscope or the endoscope (7 patients in the craniotomy group and 3 in the neuroendoscopy group in this study). Three patients with massive cerebral infarction after neuroendoscopic surgery were treated conservatively, and one underwent decompressive craniectomy; thereafter the patient's condition stabilized and gradually recovered. The 3-month follow-up results showed that, compared with craniotomy, the vegetative state rate and the severe disability rate in patients undergoing endoscopic neurosurgery trended downward, while the mild disability rate trended upward, but none of these differences reached statistical significance. First, this may be related to the small sample size: with a larger sample, these differences might reach statistical significance. Second, as mentioned earlier, clearing the hematoma and relieving the brain hernia in a shorter time may help reduce the vegetative state rate and the severe disability rate, but it may be difficult to significantly affect disability caused by irreversible damage to brain tissue (31,32). Third, neuroendoscopic hematoma removal has its own difficulties, mainly the difficulty of hemostasis. In clinical practice, we found that most of the responsible vessels and other small ruptured vessels can be found after endoscopic aspiration of the hematoma in emergency operations for hypertensive cerebral hemorrhage with cerebral hernia. We speculate that this is related to the rupture of new blood vessels during hematoma enlargement and expansion. Some ruptured vessel ends were wrapped by blood clots, but most still bled slightly after the hematoma was removed. Bipolar electrocoagulation is difficult to introduce through the small endoscopic channel, so we usually use unipolar electrocoagulation combined with an aspirator to stop bleeding, and flowable gelatin or gelatin sponge for tiny bleeding on the wound surface. With experience, most bleeding has been stopped effectively. In this study, there were 5 cases with recurrent postoperative bleeding volume greater than 15 mL in the endoscopic group, and only 1 case, with 40 mL of rebleeding, was successfully converted to craniotomy. The potential causes of recurrent postoperative bleeding include re-rupture of the original culprit vessel, surgery-related injury to vessels around the hematoma, decompression of the tissues around the hematoma, and uncontrolled blood pressure. Whether neuroendoscopy can increase the mild disability rate still needs further research and observation. In this study, only 6 patients in each group received hematoma evacuation within 4 h after stroke onset (NEHE 10.0%; CHE 11.8%). Because of the small sample, we did not analyze the efficacy of neuroendoscopic hematoma evacuation in these ultra-early patients.
This study has some limitations. First, as mentioned above, the sample size is small; for the results showing differing trends, the current sample size is not sufficient to draw a final conclusion, which requires further observation in future research. Second, this is a retrospective study; inevitably, there is some bias between patients undergoing different operations, including differences in baseline data, disease condition, doctors' treatment plans, nursing care during hospitalization, and even doctors' experience. Third, this study did not further analyze the efficacy and safety of neuroendoscopy for hypertensive intracerebral hemorrhage with cerebral hernia at different locations, nor did it analyze patients treated at different times. Fourth, this study did not analyze the influence of various factors on the efficacy and safety of neuroendoscopic treatment, such as preoperative blood pressure level, GCS score, and NIHSS score. To further observe, summarize, and analyze the efficacy and safety of endoscopic treatment of hypertensive intracerebral hemorrhage with cerebral hernia, a multicenter prospective cohort study and a randomized controlled study are needed.
FIGURE 2. Endoscopic clearance of hypertensive intracerebral hemorrhage assisted by a small bone window. (A): The small incision is about 4 cm long; (B): The diameter of the small bone window is about 2 cm; (C): Small bone flap removed by milling cutter; (D): Clear-passage fistulation and endoscopic evacuation of the hematoma; (E): Healed incision.
TABLE 1. Baseline characteristics of patients who underwent hematoma evacuation.
TABLE 2. Comparison of surgical efficacy and complications.
TABLE 3. Follow-up outcomes.
2023-10-05T15:16:22.867Z
2023-09-29T00:00:00.000
{ "year": 2023, "sha1": "b54f0f2147d9e44f34031ca6d263d174fdd4748c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fneur.2023.1238283", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "88efee6b852f74393266839698f99011415860f5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
136158747
pes2o/s2orc
v3-fos-license
Residual stresses in shape memory alloy fiber reinforced aluminium matrix composite Process-induced residual stress in a shape memory alloy (SMA) fiber reinforced aluminum (Al) matrix composite was simulated with ANSYS APDL. The manufacturing process of the composite, named NiTi/Al, starts with a loading and unloading process on a nickel-titanium (NiTi) wire, the SMA, to generate a residual plastic strain. This plastically deformed NiTi wire is then embedded into Al to form the composite. Lastly, the composite is heated from 289 K to 363 K and then cooled back to 300 K. Residual stress is generated in the composite because of the shape memory effect of NiTi and the mismatch of thermal expansion coefficients between the NiTi wire and the Al matrix. ANSYS APDL was used to simulate the distribution of residual stress and strain in this process. A sensitivity test was performed to determine the optimum numbers of nodes and elements, which are 15680 and 13680, respectively. Furthermore, the residual stress and strain distributions of a nickel fiber reinforced aluminium matrix composite (Ni/Al) and a titanium fiber reinforced aluminium matrix composite (Ti/Al) under the same simulation process were also computed with ANSYS APDL for comparison with NiTi/Al. The simulation results show that compressive residual stress is generated in the Al matrix of Ni/Al, Ti/Al, and NiTi/Al during the heating and cooling process. The three composites show a similar trend in residual stress distribution but differ in magnitude: the maximum compressive residual stresses of Ni/Al and Ti/Al at 363 K differ by only 0.4%, whereas under the same conditions NiTi/Al reaches a residual stress about 425% higher than Ni/Al and Ti/Al. This implies that the shape memory effect of the NiTi fiber reinforcement generates a higher compressive residual stress in the Al matrix and is hence able to enhance the tensile properties of the composite. Introduction Shape memory alloys are a group of metallic alloys that can return to their original shape and size after being subjected to a memorization process between two temperature-dependent transformation phases; this transformation phenomenon is known as the shape memory effect [1]. The shape memory effect is used to design smart materials. This type of smart material is also known as an SMA composite [2], and the design concept was proposed by Watanabe et al. [3]; an illustration of the SMA composite design concept is shown in Figure-1. A metal matrix composite reinforced with SMA fibers has better material characteristics and is safer, because compressive stress is generally beneficial for a matrix composite: it can increase the fatigue life of the component by closing micro-cracks [2,4]. Therefore, knowledge and understanding of the residual stress in the matrix of an SMA composite is very important, because knowing the distribution of residual stress at the design stage helps researchers design better SMA composites and hence improve composite performance. Methodology This research focuses on the simulation of three composite models: NiTi/Al, Ni/Al, and Ti/Al. The flow chart of the simulation and the geometry of the composite models are shown in Figure-2 and Figure-3, respectively. ANSYS APDL is used to simulate the residual stress of the composites. Four materials are involved, i.e.,
Al and three types of fiber: Ni, Ti, and NiTi. Their material properties are listed in Table-1 and Table-2. The technique for determining the phase transformation temperatures of NiTi is briefly explained in Tan et al. [5]. A quarter model is used in this simulation, as shown in Figure-4; it is chosen because it greatly reduces the number of nodes and elements in the finite element model, and the model is symmetric in both loading and geometry. With symmetric boundary conditions, a symmetric solution is obtained. The quarter model is then subjected to load steps following the design method presented in Figure-1, and the simulation steps, which follow Figure-2 with some modification, are shown as a flow diagram in Figure-5. To simplify the model, the NiTi wire is embedded in the Al matrix using a technique similar to that of Jamian et al. [10,11]. After all load steps are solved, the residual stress data can be read and plotted. Before starting the simulation, however, a sensitivity test is conducted to determine the optimum numbers of nodes and elements. Results and Discussion Before simulating the NiTi/Al composite, the NiTi wire was modeled alone to determine the tensile load required to generate the maximum residual strain. The stress-strain relationship of NiTi is shown in Figure-6. As shown in Figure-6, the tensile load required to induce the phase transformation from multi-variant oriented martensite to single-variant oriented martensite is about 150 MPa. A residual strain can be generated in the NiTi wire by applying a tensile load above 150 MPa, but the residual strain reaches its maximum when the tensile load reaches 190 MPa. In this work, a tensile load of 240 MPa was chosen to generate the maximum residual strain in the NiTi wire before it was placed into the Al matrix; this maximum residual strain is used in the simulation of the NiTi model. Then, to determine the optimum number of elements, a sensitivity test was conducted: a test node was chosen, and the stress in the z-direction, Sz, at this node was compared across meshes (Table-3; a small sketch of this convergence check is given after the load-step walkthrough below). The optimum numbers of elements and nodes are 13680 and 15680, respectively, and these values were also used to simulate the other two composite models. The residual stress distribution of the NiTi/Al composite generated during the manufacturing process was then simulated with ANSYS APDL. There are seven load steps in the simulation, with a time interval of 1 s for each load step, so the total simulated time is 7 s. The time interval is set this way because the model variables are time-invariant, which makes the results easier to analyze in the discussion. The stress in the z-direction, Sz, for each 1 s load step is shown in Figure-7; the stress from the origin to 1.802 mm along the x-direction is shown in all graphs, and contour plots of each load step are shown in Figure-7(a) to (g) for closer inspection of the residual stress distribution. Figure-7(a) shows the stress distribution of the NiTi/Al composite at time = 1 s. The tensile load applied to the NiTi wire is 120 MPa, half of the full tensile load. As shown in Figure-7(a), the stress is constant at 120 MPa between 0 mm and 0.36 mm, equal to the tensile load applied to the NiTi wire.
The stress then declines slowly after 0.36 mm and becomes zero around 0.78 mm, because the modified Al matrix constrains the deformation of the NiTi wire. Figure-7(b) shows the stress increasing from 120 MPa to 240 MPa between 0 mm and 0.36 mm at time = 2 s; this value equals the full tensile load applied to the NiTi wire. The stress again declines after 0.36 mm and reaches zero at 0.78 mm. After loading is completed, unloading starts at load step 3; the residual stress distribution at time = 3 s is shown in Figure-7(c). Inspection of Figure-7(c) shows that the residual stress is very small and the state can be treated as stress-free after unloading; the stress is not exactly zero because the solver obtains the values numerically. Initially, the NiTi wire is pulled alone, without the Al matrix present, because ANSYS APDL cannot add Al matrix elements to the NiTi wire after the solver has run. Therefore, the complete NiTi/Al model is built first, even though the Al does not yet exist at the beginning of the simulation process. The Al elements are then deactivated at load step 4, and the first three load steps are run again to eliminate the load effect of the Al elements on the NiTi wire. After deactivation of the Al matrix, the stress distribution changes but remains very small, because the Al and the NiTi wire are supposed to be in a stress-free condition, as shown in Figure-7(d). The model then undergoes load step 5 to activate the Al matrix elements surrounding the NiTi wire. The Al matrix had been assigned a low Young's modulus to avoid simulation failure in load step 4; in this step the Al matrix is converted back to its original material properties after the elements are activated. The stress distribution shown in Figure-7(e) is the same as in load step 4, since no external load is applied to the composite. The simulation then proceeds to load step 6, where the temperature of the model is increased from 289 K to 363 K. Residual stress is generated in this load step although no external force acts on the model. Figure-7(f) shows the residual stress distribution of the NiTi/Al composite at time = 6 s. The residual stress generated in this step is due to the shape memory effect and the mismatch of thermal expansion between the Al matrix and the NiTi fiber reinforcement. As shown in Figure-7(f), the residual stress is around 520 MPa at the origin and increases to 585 MPa at 0.36 mm, within the region of the NiTi wire. The residual stress then decreases drastically from 585 MPa to a maximum compressive residual stress of −70 MPa around 0.78 mm, in the Al matrix region. The shrinkage of the NiTi due to the shape memory effect and the expansion of the Al matrix due to the increased thermal load cause the residual stress to decrease along the matrix-fiber interface through the pulling action of the NiTi on the Al matrix. This pulling action generates a compressive residual stress in the Al matrix surrounding the NiTi wire. The compressive residual stress slowly decreases with distance from the NiTi wire and eventually reaches a saturated value of −5.324 MPa. In load step 7, the model temperature is decreased from 363 K to 300 K (room temperature). As shown in Figure-7(g), the pattern of the residual stress distribution is unchanged, but the residual stress values are lower than in Figure-7(f): the maximum residual stress is about 210.8 MPa, a decrease of about 64%, and the saturated compressive residual stress is about −2 MPa, a decrease of about 62%.
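Returning to the sensitivity test mentioned above: the convergence runs themselves were performed inside ANSYS APDL, but the acceptance logic is easy to state separately. The sketch below, with hypothetical stress values at the test node, picks the coarsest mesh whose Sz changes by less than a chosen relative tolerance at the next refinement; only the finally adopted element count (13680) comes from the paper.

```python
# Hypothetical (element_count, Sz at test node in MPa) pairs from repeated runs.
runs = [(3400, 512.0), (6800, 548.0), (13680, 561.0), (27000, 562.5)]

def converged_mesh(runs, tol=0.005):
    """Return the first mesh whose test-node stress differs from the next
    finer mesh by less than `tol` (relative change)."""
    for (n_coarse, s_coarse), (_n_fine, s_fine) in zip(runs, runs[1:]):
        if abs(s_fine - s_coarse) / abs(s_fine) < tol:
            return n_coarse
    return runs[-1][0]  # fall back to the finest mesh tested

print(converged_mesh(runs))  # -> 13680 with these illustrative numbers
```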
Besides the stress distribution of the NiTi/Al composite, the stress distributions of the Ni/Al and Ti/Al composites were also simulated, and the maximum compressive stresses generated in the Al matrix of Ni/Al and Ti/Al were compared with NiTi/Al at 363 K and 300 K. Table-4 shows that NiTi/Al has the highest compressive residual stress: the compressive residual stress of NiTi/Al generated during heating is about 425% higher than that of Ni/Al and Ti/Al. This is because the residual stress in NiTi/Al is generated by the combination of the shape memory effect of NiTi and the thermal expansion mismatch between matrix and fiber, whereas Ni/Al and Ti/Al have no shape memory effect during heating and therefore develop a weaker compressive residual stress. Furthermore, the compressive residual stress of these composites is higher at higher temperature, because the thermal expansion of the Al matrix is larger when the composite temperature is higher (a back-of-envelope estimate of the mismatch contribution follows below). Conclusion The process-induced residual stress of NiTi/Al was simulated with ANSYS APDL version 16.2, together with two other composites, Ni/Al and Ti/Al. All three composites generate compressive residual stress in the Al matrix, which can enhance the material properties of the composite by closing micro-cracks in the Al matrix. Comparing the maximum residual stresses generated by the different composites, NiTi/Al, the SMA composite, is superior: its compressive residual stress is about 425% higher than that of Ni/Al and Ti/Al. This higher compressive residual stress arises because the residual stress of NiTi/Al is induced by the combination of the shape memory effect and the thermal expansion mismatch between matrix and fiber, whereas the residual stress of Ni/Al and Ti/Al is induced only by the thermal expansion mismatch.
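As a rough cross-check on the mismatch-driven part of the residual stress, a one-dimensional estimate σ ≈ E_eff·(α_matrix − α_fiber)·ΔT can be computed by hand. The sketch below uses generic handbook values for Al and Ni (not the values of Table-1 and Table-2, which are not reproduced here), so the result is an order-of-magnitude illustration only; it also ignores the shape memory contribution, which is exactly why NiTi/Al exceeds such an estimate.

```python
# One-dimensional thermal-mismatch stress estimate for a fiber in a matrix:
# sigma ~ E_eff * (alpha_matrix - alpha_fiber) * dT, with a series-spring
# effective modulus E_eff = E_m * E_f / (E_m + E_f).

E_AL, ALPHA_AL = 70e9, 23e-6   # generic aluminium: Young's modulus (Pa), CTE (1/K)
E_NI, ALPHA_NI = 200e9, 13e-6  # generic nickel fiber (illustrative values)

dT = 363.0 - 289.0             # heating step used in the simulation (K)

E_eff = E_AL * E_NI / (E_AL + E_NI)
sigma = E_eff * (ALPHA_AL - ALPHA_NI) * dT
print(f"mismatch-only stress estimate: {sigma / 1e6:.1f} MPa")
```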
2019-04-29T13:15:59.124Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "06b3ab0f5edb6aacdb0a9081042785537a0085c8", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/165/1/012002", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a526556da0a232734ab723d92319001782e5c19d", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
30395706
pes2o/s2orc
v3-fos-license
The Histone Chaperone Facilitates Chromatin Transcription (FACT) Protein Maintains Normal Replication Fork Rates* Ordered nucleosome disassembly and reassembly are required for eukaryotic DNA replication. The facilitates chromatin transcription (FACT) complex, a histone chaperone comprising Spt16 and SSRP1, is involved in DNA replication as well as transcription. FACT associates with the MCM helicase, which is involved in DNA replication initiation and elongation. Although the FACT-MCM complex is reported to regulate DNA replication initiation, its functional role in DNA replication elongation remains elusive. To elucidate the functional role of FACT in replication fork progression during DNA elongation in the cells, we generated and analyzed conditional SSRP1 gene knock-out chicken (Gallus gallus) DT40 cells. SSRP1-depleted cells ceased to grow and exhibited a delay in S-phase cell cycle progression, although SSRP1 depletion did not affect the level of chromatin-bound DNA polymerase α or nucleosome reassembly on daughter strands. The tracking length of newly synthesized DNA, but not origin firing, was reduced in SSRP1-depleted cells, suggesting that the S-phase cell cycle delay is mainly due to the inhibition of replication fork progression rather than to defects in the initiation of DNA replication in these cells. We discuss the mechanisms of how FACT promotes replication fork progression in the cells. The mechanisms of DNA replication at the level of naked DNA template in eukaryotes are essentially the same as those in prokaryotes (1,2). However, eukaryotes must replicate nucleosomes, which are the fundamental units of chromatin (3). The nucleosome is a histone octamer, comprising the histone (H3-H4)2 tetramer and two histone H2A-H2B dimers, which is wrapped by 146 bp of DNA (4). The nucleosome structure must be dynamically regulated during each step of DNA replication, including licensing, initiation, and elongation (5)(6)(7). Although the molecular mechanisms of these DNA replication processes have been extensively studied (5)(6)(7), their relationship to the concomitant changes in chromatin structure remains elusive. Eukaryotic DNA replication on a chromatin template should be regulated by a variety of chromatin-acting factors, such as histone modification enzymes, nucleosome remodeling complexes, and histone chaperones (8-12). Among these factors, histone chaperones are known to facilitate nucleosome assembly and disassembly by promoting specific histone-DNA and histone-histone interactions in an ATP-independent manner (9-13). In particular, at the elongation step of DNA replication, nucleosomes must be disassembled before and reassembled after the passage of replication forks (8-12, 14). Nucleosome reassembly is well known to be executed by the evolutionarily conserved histone chaperone, chromatin assembly factor-1 (CAF-1), which was originally isolated as a nucleosome assembly factor in the SV40 DNA replication system in vitro (15). CAF-1 is also required for the reassembly of nucleosomes into newly synthesized DNA duplexes in the cells (16,17). Based on functional and physical interactions between CAF-1 and proliferating cell nuclear antigen, a sliding clamp for eukaryotic DNA polymerases δ and ε, nucleosome reassembly is reported to be mechanistically coupled to DNA synthesis (18).
Recently, parental histone (H3-H4)2 tetramers have been reported to be transferred as either the tetrameric form or as histone H3-H4 dimers to daughter strands in a DNA replication-dependent manner in human cells (24). Because CIA/Asf1 has been shown to directly split histone (H3-H4)2 tetramers into histone H3-H4 dimers in vitro (25), CIA/Asf1 is the best candidate for splitting histone (H3-H4)2 tetramers in the cells. Thus, an understanding of the molecular mechanisms of histone transfer from parental to daughter strands and of nucleosome reassembly has begun to emerge (10-12, 14). However, the mechanism of parental nucleosome disassembly at the elongation step of DNA replication has not been studied. In addition to CAF-1 and CIA/Asf1, another evolutionarily conserved histone chaperone, facilitates chromatin transcription (FACT) (26), comprising Spt16/Cdc68 and structure-specific recognition protein 1 (SSRP1) (27), has also been reported to be involved in DNA replication (28-30). FACT has been shown to directly interact with key DNA replication enzymes and factors such as DNA polymerase α (Pol α), replication protein A (RPA), and the MCM complex, all of which are essential components for DNA replication (28, 30-33). FACT was also reported to facilitate the DNA helicase activity of the MCM complex on nucleosomal DNA in vitro (31). Furthermore, FACT was shown to be important for proper DNA replication initiation in human cells (31). Despite many studies on the involvement of FACT in DNA replication, the mechanistic roles of FACT in nucleosome disassembly, histone transfer, and nucleosome reassembly at the elongation step of DNA replication remain obscure. To elucidate the functional roles of FACT during the process of DNA replication on chromatin in the cells, we generated and analyzed chicken DT40 conditional SSRP1 knock-out cells. Here, we provide several lines of evidence to demonstrate that FACT maintains normal DNA replication elongation rates by primarily and preferentially disassembling prereplicative nucleosomes ahead of DNA replication forks. EXPERIMENTAL PROCEDURES Plasmid Construction and Gene Disruption-Two SSRP1 disruption constructs were generated from genomic polymerase chain reaction (PCR) products inserted with the puromycin (puro)- or blasticidin (bsr)-selection marker cassette. Chicken SSRP1 cDNA was prepared by reverse transcription PCR, and a FLAG tag was added to its C-terminal end by PCR. FLAG-SSRP1 was inserted into the expression vector carrying the tet-repressible promoter, pUHG 10-3 (34). DT40 cells were successively transfected with puro-SSRP1, FLAG-SSRP1/pUHG 10-3, and bsr-SSRP1. To prepare deletion mutants of SSRP1, SSRP1 fragments were amplified by PCR using appropriate primers on the SSRP1 cDNA template. The DNA fragments obtained were inserted into the vector pAneo (35), which carries the chicken β-actin promoter and the neomycin resistance gene driven by the SV40 promoter. Cells expressing SSRP1 fragments were obtained by subsequent transfection with these vectors. Cell Cycle Analysis by Flow Cytometry-Flow cytometry was performed as previously described (17). For two-dimensional cell cycle analysis, cells were cultured in the presence of bromodeoxyuridine (BrdU; BD Biosciences) for 10 min, fixed in 70% ethanol, and stained with FITC-labeled anti-BrdU antibody (BD Biosciences) and propidium iodide. DNA Fiber Assay-The DNA fiber assay was performed as previously described (37).
Fiber lengths were measured using ImageJ, and micrometer values were converted to kilobases using a conversion factor of 1 µm = 2.59 kb. Measurements were recorded from areas of the slides with untangled DNA fibers to prevent the possibility of recording labeled patches from tangled bundles of fibers. MNase Assay-The MNase assay was performed as previously described (39). Cells were pulse-labeled with 20 µM BrdU for 20 min and harvested. To isolate nuclei, cells were treated with 0.1% Nonidet P-40 in NB buffer (15 mM Tris-HCl, pH 8.0, 0.5 mM EDTA, 2 mM magnesium acetate, 2 mM CaCl2, 1 mM DTT, and protease inhibitor mixture (Sigma)). The isolated nuclei were washed twice with NB buffer and digested with 0.0074-0.20 units/ml of MNase (Sigma) at 37 °C for 8 min. Genomic DNA was then isolated using Easy DNA (Invitrogen), electrophoresed in a 2% agarose gel, stained with ethidium bromide, and transferred to a Hybond N membrane (GE Healthcare). BrdU-labeled DNA was detected using an anti-BrdU antibody. Generation of Conditionally SSRP1-depleted Cells-To investigate the functional role of the histone chaperone FACT in DNA replication in the cells, we generated a FACT knock-out cell line using gene-targeting constructs designed to replace exons 6-8 of the chicken SSRP1 gene, which encodes the small subunit of the FACT complex (27), with a puro or bsr selection marker cassette (Fig. 1A). We presumed that SSRP1 would be essential for the viability of chicken DT40 cells because gene targeting of SSRP1 in mice was lethal (40). To avoid the predicted lethality of SSRP1-deficient cells, we generated SSRP1 conditional knock-out cells. A FLAG-tagged chicken SSRP1 gene under the control of a tet-repressible promoter (37) was transfected into DT40 cells after one of the two SSRP1 alleles had been disrupted with the puro-SSRP1 targeting construct. SSRP1−/+ cells expressing FLAG-SSRP1 were selected and subsequently transfected with the bsr-SSRP1 targeting construct to disrupt the other allele, and we obtained "SSRP1−/−" cells. SSRP1 Is Essential for Cell Viability-As expected, the addition of doxycycline (Dox) to SSRP1−/− cell cultures efficiently suppressed the transgene expression of FLAG-SSRP1. The FLAG-SSRP1 mRNA (Fig. 1B) and protein (Fig. 1C) disappeared after the addition of Dox. Upon depletion of SSRP1, the amount of Spt16, the large subunit of the FACT complex, also gradually decreased (Fig. 1C). SSRP1−/− cells ceased to grow within 72 h and began to die in the presence of Dox (Fig. 1D). These findings suggest that SSRP1 plays an essential role in chicken cell viability. Domain of SSRP1 Required for Cell Viability-Considering the apparent essential role of SSRP1, we attempted to identify the functional domains of SSRP1 required for cell viability. Vertebrate SSRP1 is the bipartite orthologue of yeast Pob3 and Nhp6 (Fig. 2A) (26-30). Histone chaperones, including Pob3, usually contain acidic amino acid-rich region(s) (11,13) (Fig. 2A). To clarify the functional roles of the acidic region of Pob3 and the Nhp6 domain of SSRP1, constructs consisting of the corresponding domains were expressed in SSRP1−/− cells (Fig. 2A). As shown in Fig. 2B, the Nhp6 domain and the acidic region of the Pob3 domain of SSRP1 were not required for cell viability. These results are consistent with the previous findings that Pob3, but not Nhp6, is essential for yeast cell viability (30).
Taken together, the domain analysis of SSRP1 revealed that its Pob3 domain without the acidic region is sufficient for cell viability (Fig. 2, A and B). Depletion of SSRP1 Delays the Progression of S Phase-To next address which stages of the cell cycle are perturbed in SSRP1-depleted cells, we characterized the effect of SSRP1 depletion on cell cycle progression by flow cytometry (FACS) using an FITC-conjugated antibody against the thymidine analog BrdU. The proportion of S-phase cells incorporating BrdU decreased within 72 h after the addition of Dox (Fig. 3A, left panels).
FIGURE 1. A, schematic representation of the SSRP1 locus and gene-targeting constructs. Closed boxes indicate exons. puro and bsr indicate the puromycin and blasticidin S resistance genes, respectively. B, suppression of SSRP1 mRNA expression in SSRP1−/− cells by Dox. RNA was prepared from SSRP1−/− cells cultured in the presence of Dox for the indicated periods. SSRP1 cDNA was prepared by real-time reverse transcription PCR and electrophoresed in a 2% agarose gel. CDC7 was used as the loading control. It is worth noting that another group independently constructed a similar cell line using DT40 cells (65). C, depletion of the FLAG-SSRP1 protein. Whole cell lysates were prepared from wild-type (WT) or SSRP1−/− + FLAG-SSRP1 (SSRP1−/−) cells cultured in the presence of Dox for the indicated times. FLAG-SSRP1, Spt16, and histone H3 (loading control) were detected by Western blotting. D, growth curves. WT or SSRP1−/− cells (1 × 10^5) were inoculated in 1 ml of medium and passaged daily. Dox was added at time 0.
No hallmarks of checkpoint activation, such as the phosphorylation of Check1 (Chk1), were observed in SSRP1-depleted cells (supplemental Fig. S1). FACT Maintains Replication Fork Rates To test the other possibilities, the elongation rate of DNA replication (fork rate) and the frequency of DNA replication initiation were measured. The fork rate was examined using a DNA fiber assay (42). In this assay, cells were pulse-labeled with thymidine analogs CldU and IdU for 10 and 15 min, respectively, and the length of labeled DNA replication tracks on DNA fiber spreads were quantified by immunostaining. The fork rate in SSRP1-depleted cells was about one-quarter of that found in wild-type or SSRP1-undepleted cells (Fig. 4, A and B, and supplemental Fig. S2). Interestingly, the Pob3 domain without the acidic region, which supports cell viability (Fig. 2, A and B), is sufficient for replication fork progression (Fig. 4C). Because several components of the eukaryotic DNA replication machinery acting on replication forks (43) have been shown to be required for maintaining a normal replication fork rate (37,44), FACT could facilitate the fork rate through its concerted actions with DNA replication machinery. The DNA Replication Initiation May Be Intact in SSRP1-depleted Cells-The frequency of DNA replication initiation was next assessed by measuring origin-to-origin distances using a molecular combing assay (45), which allows the visualization and measurement of tethered strands of DNA. An increase or a decrease in the origin-to-origin distance indicates infrequent or frequent DNA replication initiation in a given stretch of DNA, respectively (46). The average of origin-to-origin distance ( Fig. 5A and supplemental Fig. S3) measured in DNA from SSRP1depleted cells decreased to about half of that measured in DNA from SSRP1-undepleted cells (Fig. 5B), indicating that the frequency of initiation of DNA replication was up-regulated in SSRP1-depleted cells. There are two possible explanations for the increase in the initiations of DNA replication found in SSRP1-depleted cells. First, FACT may repress the initiation of DNA replication such that the action of FACT also represses transcription initiation from cryptic sites (47,48). Alternatively, the decreased fork rate in SSRP1-depleted cells could up-regulate the initiation of DNA replication at the dormant origins, so that slowing the replication fork rate with aphidicolin or hydroxyurea would trigger the initiation of DNA replication at otherwise dormant origins (49 -51). Chromatin Loading of DNA Replication Proteins Was Proficiently Detected in SSRP1-depleted Cells-FACT interacts with Pol␣ and the MCM2-7 complex, both of which are essential for DNA replication (28,31,32) (Fig. 6A). To determine whether the fork progression defect in SSRP1-depleted cells was caused by a dysfunction of Pol␣ or the MCM2-7 complex, we examined for reductions in the amounts of Pol␣ and two MCM complex components, MCM2 and MCM4, under SSRP1-depletion conditions. The total amounts of these proteins in SSRP1 Ϫ/Ϫ cells, even 60 h after the addition of Dox, was not measurably different from those in SSRP1-undepleted cells (Fig. 6B). The chromatin-bound forms of Pol␣, MCM2, and MCM4 in SSRP1 Ϫ/Ϫ cells were further monitored after the release of cells from nocodazole-induced G 2 /M arrest in the presence of Dox. The recruitment of Pol␣, MCM2, and MCM4 onto chromatin was effectively detected in SSRP1-depleted cells (Fig. 
(Fig. 6C), although the recruitment of MCM2 and MCM4 seemed slightly delayed. The loaded MCM helicase appeared to be sufficient for the initiation of DNA replication in SSRP1-depleted cells, because the frequency of DNA replication initiation in SSRP1-depleted cells increased rather than decreased (Fig. 5B). This suggested that the fork progression defect in SSRP1-depleted cells was not due to a dysfunction in the chromatin loading of Polα, MCM2, and MCM4.

The Activity of FACT in Nucleosome Assembly on Newly Synthesized Daughter Strands Seems to Be Less Significant

In general, histone chaperones facilitate nucleosome assembly and disassembly (9-13). Indeed, FACT is reported to be involved in both nucleosome disassembly and reassembly in transcription elongation (52). Moreover, nucleosomal histones H2B and H3 are efficiently evicted by RNA polymerase II but do not re-form upon inactivation of Spt16, suggesting that Spt16 reassembles nucleosome structure during transcription elongation (53). We therefore tried to elucidate whether FACT is involved in nucleosome reassembly during DNA replication. To assess this possibility, a micrococcal nuclease (MNase) digestion assay was performed to monitor nucleosome reassembly on daughter strands. Newly synthesized DNA was pulse-labeled with BrdU and detected with an anti-BrdU antibody. MNase sensitivity was apparently unchanged in SSRP1-depleted cells compared with that in SSRP1-undepleted cells (Fig. 7A, lanes 1-5 versus 6-10). In contrast, p150 (the large subunit of another histone chaperone, CAF-1)-depleted DT40 cells, which were previously generated by Takami et al. (17), were more sensitive to MNase (Fig. 7A, lanes 11-15 versus 16-20). This observation is consistent with the previous findings that CAF-1 reassembles nucleosomes on daughter strands during DNA replication both in vitro and in vivo (15-17). These results suggest that the activity of FACT in nucleosome assembly on newly synthesized daughter strands is less significant than that of CAF-1. If a dysfunction of nucleosome reassembly on newly synthesized daughter strands led to slow DNA replication fork progression, the slowing of replication fork progression in p150 (CAF-1)-depleted cells would be expected to be more dramatic than that in SSRP1-depleted cells. However, the deceleration of DNA replication was not as severe in p150-depleted cells as in SSRP1-depleted cells (Fig. 7B). Taken together, the results suggest that the activity of FACT in promoting the rate of DNA elongation is more significant than that of CAF-1, whereas the activity of FACT in nucleosome assembly on newly synthesized daughter strands is relatively weak.

DISCUSSION

A pioneering study using Xenopus egg extracts in a cellular DNA replication system showed that the histone chaperone FACT is required for DNA replication (29). This finding was also supported by the S-phase perturbation caused by dysfunction of FACT in the previous (28, 30-33) and present studies (the data in Fig. 3). To further elucidate the functional roles of FACT in DNA replication, we examined the rates of DNA replication elongation (Fig. 4) and initiation (Fig. 5), the amounts of DNA replication enzymes/factors (Fig. 6), and nucleosome assembly activity on newly synthesized daughter strands (Fig. 7) under dysfunctional FACT conditions (Figs. 1 and 2). Below, we discuss our observations within the context of previous studies and suggest possible roles for FACT in DNA replication.
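Before turning to the Discussion, it may help to make the two single-molecule readouts used above concrete. The sketch below (Python) shows how fork rates are typically derived from labeled track lengths and pulse times, and how origin-to-origin distances are summarized. The stretching factor of roughly 2.59 kb per micrometer is a conventional value for DNA fibers and, like the toy measurements themselves, is our assumption for illustration, not a value taken from this study.

# Minimal sketch: summarizing DNA fiber / molecular combing measurements.
# All numbers below are illustrative, not data from this study.

KB_PER_UM = 2.59          # assumed stretching factor (kb per micrometer of fiber)
IDU_PULSE_MIN = 15        # IdU pulse length used in the assay described above

def fork_rate_kb_per_min(track_lengths_um, pulse_min=IDU_PULSE_MIN):
    """Convert measured track lengths (micrometers) into fork rates (kb/min)."""
    return [length * KB_PER_UM / pulse_min for length in track_lengths_um]

def mean(values):
    return sum(values) / len(values)

# Hypothetical track lengths for undepleted vs. SSRP1-depleted cells:
control_tracks_um = [8.5, 9.1, 7.8, 8.9]
depleted_tracks_um = [2.0, 2.3, 1.9, 2.2]   # roughly one-quarter of control

print("control fork rate (kb/min): %.2f" % mean(fork_rate_kb_per_min(control_tracks_um)))
print("depleted fork rate (kb/min): %.2f" % mean(fork_rate_kb_per_min(depleted_tracks_um)))

# Origin-to-origin distances (kb) from a combing experiment; a drop in the
# mean spacing indicates more frequent initiation, as argued above.
control_spacing_kb = [110, 95, 120, 105]
depleted_spacing_kb = [55, 60, 48, 52]
print("mean origin spacing, control vs depleted: %.0f vs %.0f kb"
      % (mean(control_spacing_kb), mean(depleted_spacing_kb)))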
Are the DNA Replication Defects Observed in SSRP1-depleted Cells Due to Transcriptional Defects?

Because FACT is known to be involved in transcription elongation (26, 27, 52-55), we first examined the possibility that transcriptional defects caused by FACT dysfunction lead indirectly to DNA replication defects in SSRP1-depleted cells by determining the amounts of several DNA replication factors in both SSRP1-depleted and -undepleted cells. However, the amounts of Polα, MCM2, and MCM4 in SSRP1-depleted cells were not different from those in SSRP1-undepleted cells (Fig. 6). Furthermore, DNA replication factors must have been present in sufficient amounts for DNA replication initiation, because origin firing in SSRP1-depleted cells increased compared with that in undepleted cells (Fig. 5). Therefore, inhibition of the rate of replication fork progression in SSRP1-depleted cells (Fig. 4) is probably not caused by a reduction in the transcription of genes encoding DNA replication factors but is related to a dysfunction of FACT in DNA replication. This is consistent with the previous observation that immunodepletion of FACT from Xenopus egg extracts, in which transcription does not occur, causes defects in DNA replication (29). Taken together, the results suggest that FACT dysfunction directly led to DNA replication defects in SSRP1-depleted cells.

Functional Roles of FACT in DNA Replication Initiation Versus Elongation

FACT is proposed to participate in the initiation stage of DNA replication (31, 33). It has been shown that FACT interacts and coexists with the MCM complex at a particular chromosomal DNA replication origin, as determined by chromatin immunoprecipitation (ChIP) analysis in human cells (31, 33). Furthermore, the initiation of DNA replication at a particular site was reduced by perturbation of the interaction between FACT and the MCM complex (31). Contrary to these observations, in SSRP1-depleted cells an increase in the frequency of initiation (Fig. 5) and a decrease in the rate of elongation (Fig. 4) were detected by direct analyses of DNA replication products. The apparent differences between the previous (31, 33) and present (Fig. 5) findings on the initiation of DNA replication might be due to different experimental assays and conditions. Further studies, such as chromatin immunoprecipitation analyses of replication enzymes and factors at individual origins in SSRP1-depleted DT40 cells, will be needed to clarify the functional roles of SSRP1 in DNA replication initiation. Despite these differences between the previous and present studies, the present results suggest that SSRP1 is involved in the elongation stage of DNA replication (Fig. 4).

Functional Roles of FACT in DNA Replication Elongation

Based on the present and previous findings, FACT may act as (i) a factor that stimulates the DNA replication machinery and/or as (ii) a histone chaperone in DNA replication elongation. As described under "Functional Roles of FACT in DNA Replication Initiation Versus Elongation," FACT interacts with Polα, RPA, and the MCM complex (28, 31, 32) and apparently facilitates replication fork progression via its functional interaction with these replication enzymes/factors. This possibility seems to be supported by a previous study (32) demonstrating that a point mutation introduced within a conserved surface cluster in the budding yeast Pob3-M domain, corresponding to the part of the Pob3 domain without the acidic region of the SSRP1 subunit of FACT
(Figs. 2 and 4), interferes with DNA replication (30). Because the DNA replication defect in the pob3 point mutant is suppressed by a mutation in RPA, it has been proposed that coordination between the functions of FACT and RPA is important during DNA replication (32). Because the Pob3 domain without the acidic region was sufficient for both cell viability and fork progression in this study (Figs. 2 and 4C), the defects in cell viability and fork progression in SSRP1-depleted cells would seem to be caused by loss of this coordination with RPA.

Histone-based Activities of FACT in DNA Replication Elongation

Because the Pob3 domain without the acidic region was sufficient for both cell viability and DNA replication (Figs. 2 and 4C), it is possible that defects in cell viability and DNA replication elongation are due to a dysfunction in the histone-based activity of the Pob3 domain lacking the acidic region. Yeast Pob3-M, which corresponds to a part of the Pob3 domain without the acidic region (Figs. 2 and 4), is reported to be genetically linked to the histone H2A C-terminal docking site and to have a function that overlaps with that of the Spt16 N-terminal "peptidase" domain (56). Because the Spt16 N-terminal peptidase domain directly interacts with histones H3 and H4 in vitro (57), the Pob3 domain without the acidic region may regulate chromatin-based DNA replication elongation through its functional interaction with core histones (H2A, H3, and H4). Thus, the histone-based activities of FACT are probably involved in efficient DNA replication elongation.

Functional Roles of FACT in Nucleosome Reassembly during DNA Replication Elongation

Mechanisms for the histone-based activities of FACT have been reviewed very recently (58). In the case of transcription elongation, FACT is proposed to facilitate transcription elongation by assisting in nucleosome disassembly and reassembly ahead of and behind, respectively, RNA polymerase II, which translocates along the DNA template during transcription elongation (26, 27, 52, 54, 55). Another study proposed that FACT promotes a reversible transition between two nucleosome forms that result in unchanged and dramatically increased accessibility of nucleosomal DNA, respectively (59). It is plausible that FACT also plays similar roles in DNA replication elongation. Because FACT has been proposed to be involved in nucleosome reassembly during DNA replication elongation (32), it is interesting to discuss the implications of our results (Fig. 7A) for the function of FACT in the DNA replication process. In this study, we could not detect a significant requirement for FACT in nucleosome reassembly on daughter strands (Fig. 7A), suggesting that, in contrast to a previous proposal (32), FACT does not seem to be involved in nucleosome reassembly on daughter strands. On the other hand, if FACT is involved in nucleosome reassembly on daughter strands as reported, our results (Fig. 7A) could be explained in two ways: (i) the reduced fork progression in SSRP1-depleted cells (Fig. 7B) masks the requirement for FACT by compensating for inefficient nucleosome reassembly, and (ii) nucleosome reassembly on daughter strands in SSRP1-depleted cells is performed by the remaining pool of the Spt16 subunit (Fig. 1C), which has been shown to function alone in DNA replication processes such as recovery from DNA replication stress (60). Spt16 alone binds to H2A-H2B dimers and nucleosomes in vitro (52) and redeposits histones during transcription elongation (53).
Thus, the remaining pool of Spt16 in SSRP1-depleted cells could redeposit histones, resulting in nucleosome reassembly during DNA replication elongation. The two possibilities may not be mutually exclusive. Taking all of this into account, we cannot fully conclude how important FACT is for nucleosome reassembly on newly synthesized DNA. In contrast to FACT, deficiency of CAF-1, which caused a defect in nucleosome reassembly (Fig. 7A) as previously reported (15-17), resulted in an almost normal replication fork rate (Fig. 7B). This suggests that a defect in nucleosome reassembly does not necessarily cause a reduction in replication fork progression. In other words, it seems that the activity of nucleosome reassembly does not significantly facilitate DNA replication elongation.

A Possible Mechanistic Action of FACT in Replication Fork Progression

Prereplicative nucleosomes of simian virus 40 (SV40) minichromosomes disassemble just ahead of replication forks (61, 62). Furthermore, in vivo studies on the dynamics of histone-DNA interactions have suggested that prereplicative nucleosomes are dissolved during the advancement of the replication fork, with the release of associated histones in the form of (H3-H4)2 tetramers and H2A-H2B dimers (63). Because the MCM helicase complex translocates along DNA and unwinds duplex DNA during the elongation phase of DNA replication (64), it is quite possible that prereplicative nucleosome disassembly occurs ahead of the MCM complex. The MCM complex and FACT were reported to be detected together in a large replication progression complex, the replisome, in budding yeast using a proteomics approach (43). Furthermore, FACT is known to facilitate the DNA helicase activity of the MCM complex on nucleosomal DNA in vitro (31, 33). Here, we demonstrated that FACT is required for efficient replication fork progression (Fig. 4). Considering all of these results, we favor a model whereby FACT facilitates DNA replication elongation in concerted action with the MCM complex by promoting efficient prereplicative nucleosome disassembly ahead of the replication fork.
Prognostic effect of different PD-L1 expression patterns in squamous cell carcinoma and adenocarcinoma of the cervix

Programmed death-ligand 1 (PD-L1) is expressed in various immune cells and tumor cells, and is able to bind to PD-1 on T lymphocytes, thereby inhibiting their function. At present, the PD-1/PD-L1 axis is a major immunotherapeutic target for checkpoint inhibition in various cancer types, but information on the clinical significance of PD-L1 expression in cervical cancer is largely lacking. Here, we studied PD-L1 expression in paraffin-embedded samples from two cohorts of patients with cervical cancer: primary tumor samples from cohort I (squamous cell carcinoma, n=156 and adenocarcinoma, n=49) and primary and paired metastatic tumor samples from cohort II (squamous cell carcinoma, n=96 and adenocarcinoma, n=31). Squamous cell carcinomas were more frequently positive for PD-L1 and also contained more PD-L1-positive tumor-associated macrophages as compared with adenocarcinomas (both P<0.001). PD-L1-positive tumor-associated macrophages were found to express CD163 and/or CD14 by triple fluorescent immunohistochemistry, demonstrating an M2-like phenotype. Interestingly, disease-free survival (P=0.022) and disease-specific survival (P=0.046) were significantly poorer in squamous cell carcinoma patients with diffuse PD-L1 expression as compared with patients with marginal PD-L1 expression (i.e., on the interface between tumor and stroma) in primary tumors. Disease-specific survival was significantly worse in adenocarcinoma patients with PD-L1-positive tumor-associated macrophages compared with adenocarcinoma patients without PD-L1-positive tumor-associated macrophages (P=0.014). No differences in PD-L1 expression between primary tumors and paired metastatic lymph nodes were detected. However, PD-L1-positive immune cells were found in greater abundance around the metastatic tumors as compared with the paired primary tumors (P=0.001 for squamous cell carcinoma and P=0.041 for adenocarcinoma). These findings point to a key role of PD-L1 in immune escape of cervical cancer, and provide a rationale for therapeutic targeting of the PD-1/PD-L1 pathway.

... adenocarcinomas. 8 At present, patients with cervical cancer are treated with radical hysterectomy and pelvic lymphadenectomy or chemoradiation, depending on tumor stage and tumor size. 8-10 Unfortunately, the number of patients with adenocarcinoma is still rising, and these patients seem to have a poorer survival rate than squamous cell carcinoma patients, especially if adenocarcinomas present with tumor-positive lymph nodes. 11-14 To improve the prognosis of cervical cancer patients, novel immunotherapeutic strategies need to be developed and established. In addition, histological subtype-specific treatment needs to be considered, which requires a detailed investigation of the tumor microenvironment in relation to the clinical outcome of these tumor types. Promising immunotherapeutic therapies targeting immune checkpoint molecules, such as cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) and programmed cell death protein 1 (PD-1) expressed on activated T cells, counteract the immunosuppressive cycle prevailing in the tumor microenvironment and have led to complete and long-lasting clinical responses. 15,16 Also, anti-programmed cell death ligand 1 (PD-L1) therapy has been associated with improved survival outcome in several types of cancer, including lung cancer, melanoma, renal cell cancer, and bladder cancer. 17,18
At present, in advanced cervical cancer, clinical phase I/II trials are ongoing examining the effects of ipilimumab (anti-CTLA-4; NCT01711515), pembrolizumab (anti-PD-1; NCT02054806), and nivolumab (anti-PD-1; NCT02488759); however, no study results have been reported yet. Recently, we identified a suppressive myeloid cell subset expressing PD-L1, together with high and interrelated rates of regulatory T cells, in metastatic lymph nodes of patients with cervical cancer. 19 Currently, information is largely lacking about PD-L1 expression patterns in primary and metastatic cervical tumors. Therefore, we investigated the expression of PD-L1 in primary and metastatic cervical cancer in relation to the two major histological subtypes (squamous cell carcinoma and adenocarcinoma), and studied the correlation with pathological and clinical characteristics in two patient cohorts. This study provides more insight into the role of PD-L1 in cervical cancer, and strengthens the rationale for blocking the PD-L1/PD-1 immunosuppressive axis.

Study Group

Formalin-fixed, paraffin-embedded material was collected from two different patient cohorts. 20 For triple immunofluorescence staining on four squamous cell carcinoma patients from cohort I, 1:100 rabbit anti-PD-L1 (clone SP142; Spring Bioscience, Pleasanton, CA, USA), 1:25 mouse IgG2a anti-CD14 (clone 7; Abcam, Cambridge, UK), and 1:100 mouse IgG1 anti-CD163 (clone 10D6; Novocastra, Milton Keynes, UK) were used and detected with Alexa Fluor 647 goat anti-rabbit, Alexa Fluor 546 goat anti-mouse IgG2a, and Alexa Fluor 488 goat anti-mouse IgG1 (all from Life Technologies, Grand Island, NY, USA), as described previously. 20

Imaging, Scoring, and Analysis

The immunohistochemically PD-L1-stained slides were analyzed and imaged using a bright-field microscope (Olympus BX50; Olympus, Center Valley, PA, USA). Tumor fields were distinguished from normal tissue by the use of nuclear staining with hematoxylin. Primary and metastatic tumor cells were designated PD-L1 positive when ≥5% of the tumor cells were positive for PD-L1. Moreover, in both primary and metastatic tumor samples, a distinction was made between diffuse (throughout the whole tumor) and marginal (peripheral staining, on the interface between tumor and stroma) expression of PD-L1 by the tumor cells; scores were given for the presence of PD-L1-positive tumor-infiltrating cells (yes/no), and for immune cells accumulated around tumor fields forming a PD-L1-positive cordon (yes/no). In primary cervical cancer samples, semiquantitative scores were given for PD-L1-positive stromal cells (low numbers/high numbers). In metastatic lymph node samples, scores were obtained for resident lymph node tissue adjacent to metastases (peritumoral) or distant from metastases (paracortical areas) (low numbers/high numbers). Stromal cells and histiocytes present in B-cell follicles were used as an internal control for PD-L1 positivity. The immunofluorescence was analyzed and imaged using a digital imaging fluorescence microscope (Axiovert-200M; Zeiss, Oberkochen, Germany). Tumor fields were distinguished from normal tissue by the use of DAPI staining.

Statistical Analysis

The statistical analyses were performed with IBM SPSS (IBM, Armonk, NY, USA) and GraphPad Prism 5 (GraphPad Software, La Jolla, CA, USA). Pearson's χ² or Fisher's exact tests were used for the comparison of PD-L1 expression between squamous cell carcinoma and adenocarcinoma, and with clinicopathological characteristics.
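As a concrete illustration of the two steps just described, the sketch below (Python, using scipy.stats as our tooling choice; the authors used SPSS and GraphPad Prism) applies the ≥5% tumor-cell cutoff and then compares PD-L1 positivity between the two histological subtypes with Pearson's χ² and Fisher's exact tests. The 2×2 counts are hypothetical, invented for illustration, and are not the cohort data.

from scipy.stats import fisher_exact, chi2_contingency

def pd_l1_positive(percent_positive_tumor_cells):
    """Apply the >=5% cutoff used to call a tumor PD-L1 positive."""
    return percent_positive_tumor_cells >= 5.0

# Hypothetical 2x2 table: rows = histology, columns = (PD-L1+, PD-L1-)
table = [[84, 72],   # squamous cell carcinoma
         [ 7, 42]]   # adenocarcinoma

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _ = chi2_contingency(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, P = {p_fisher:.2e}")
print(f"Pearson chi2  : chi2 = {chi2:.2f} (dof={dof}), P = {p_chi2:.2e}")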
Kaplan-Meier 5-year survival curves were generated and log-rank analyses were performed. Primary tumors and paired metastatic lymph nodes were compared with the McNemar test. P-values below 0.05 were considered statistically significant.

PD-L1 Protein Expression in Primary Cervical Cancer

Representative examples of different PD-L1 expression patterns in primary cervical tumors (patient cohort I, see Table 1) are depicted in Figure 1, and the results are summarized in Table 3. We observed PD-L1 positivity in tumor cells, in tumor-infiltrating immune cells, and in stromal immune cells. All tumor-infiltrating and the majority of stromal PD-L1-positive immune cells were identified as tumor-associated macrophages, being double positive for CD163 and PD-L1 and/or triple positive for CD163, CD14, and PD-L1 (Figure 2). PD-L1 positivity was observed in ≥5% (used as cutoff) of the tumor cells in 54% of the squamous cell carcinomas and in 14% of all adenocarcinomas (P<0.001). In addition, PD-L1-positive tumor-associated macrophages were present in 53% of the squamous cell carcinomas and in 12% of the adenocarcinomas (P<0.001) (Table 3). For stromal PD-L1-positive immune cells and aggregates of PD-L1-positive cells at the tumor-stroma interface (termed a PD-L1-positive cordon), no significant differences were found between squamous cell carcinomas and adenocarcinomas.

PD-L1 Expression in Relation to Clinicopathological Characteristics and Survival

PD-L1 expression was analyzed in relation to clinicopathological characteristics for patient cohort I. Interestingly, we found the majority of PD-L1-positive squamous cell carcinomas more often to be ... In adenocarcinoma, although the group sizes were small, patients with a PD-L1-positive cordon presented with a high FIGO stage (≥IB2) (P=0.010). No further significant correlations were found for PD-L1 positivity and clinicopathological characteristics (tumor size, parametrium invasion, vaginal involvement, and lymph node involvement). In addition, log-rank tests were performed and Kaplan-Meier plots were generated for disease-free survival and disease-specific survival of the two histological subtypes to assess the correlation with PD-L1 positivity. Squamous cell carcinoma patients with diffuse PD-L1 expression or patients with PD-L1-negative tumors had worse disease-free survival (P=0.022 and P=0.029, respectively) and disease-specific survival (P=0.046 and P=0.096, respectively) compared with patients with marginal PD-L1 expression in the primary tumor (Figures 3a and b). In squamous cell carcinoma patients, no significant association was found between PD-L1-positive tumor-associated macrophages and survival (Figure 3c), whereas adenocarcinoma patients with PD-L1-positive tumor-associated macrophages had a significantly worse disease-specific survival (P=0.014) compared with adenocarcinoma patients without PD-L1-positive tumor-associated macrophages (Figure 3d).

PD-L1 Expression in Primary Tumor and Paired Metastatic Lymph Node

Next, we studied PD-L1 expression by immunohistochemistry in patient cohort II, with samples available from primary tumors and paired metastatic lymph nodes from patients with squamous cell carcinoma and adenocarcinoma (Table 2). The results for cohort II are summarized in Table 4 and Supplementary Table 1. In the primary tumor, in correspondence with the results obtained in cohort I, squamous cell carcinomas were more often positive for PD-L1 (P=0.024) and more often had PD-L1-positive tumor-associated macrophages (P=0.012).
In addition, 25% of the squamous cell carcinomas had a strong PD-L1-positive cordon, compared with 3% of the adenocarcinomas (P=0.012) (Table 4). Next, we compared PD-L1 expression between primary tumors and paired metastases. Discordant tumor cell staining of PD-L1 between primary tumor cells and metastatic tumor cells was found for squamous cell carcinomas in 22 of 71 (31%) cases and for adenocarcinomas in 5 of 28 (18%) cases (Supplementary Table 1). Nevertheless, overall in squamous cell carcinoma and adenocarcinoma patients, no significant differences were found between primary tumors and paired metastatic lymph nodes in PD-L1 positivity of tumor cells, diffuse and marginal PD-L1 expression, the presence of PD-L1-positive tumor-associated macrophages, and the presence of a PD-L1-positive cordon (Figures 4c-e and g). In both squamous cell carcinomas and adenocarcinomas, denser cordons of PD-L1-positive immune cells were found surrounding the metastases compared with the paired primary tumors (P=0.001 for squamous cell carcinoma and P=0.041 for adenocarcinoma) (Figure 4f).

Discussion

New immunotherapies targeting the PD-1/PD-L1 axis have been reported to give very promising clinical responses in patients with various types of cancer. 17,21-24 However, until now no data are available on the clinical efficacy of blocking this checkpoint in cervical cancer. PD-L1 positivity has been reported previously in cervical intraepithelial neoplasias and cervical carcinomas, 25-27 and, recently, we have reported on the presence of PD-L1-positive immune cells in tumor-draining lymph nodes, including metastasis-free and metastatic lymph nodes. 19,20 However, extensive studies on PD-L1 expression in a large patient cohort of primary and paired metastatic cervical cancer samples, in relation to histological subtype and clinicopathological patient characteristics, are lacking. In the present study, we observed diverse, heterogeneous PD-L1 expression patterns among primary tumors from patients with cervical cancer. Although there are controversies concerning the use of different PD-L1 antibody clones, several studies have shown that the clones used in the present study are specific and validated for immunohistochemical assays. 28,29 Apart from the tumor cells, we also observed PD-L1 positivity in immune cells present in the tumor fields and in the stromal compartment. In more than 20% of the tumors, we observed a PD-L1-positive cordon, which has also been described in other tumor types. 30 ... costimulatory markers CD80 and CD86 as expressed on mature dendritic cells. 33 We identified these PD-L1-positive immune cells as CD163+ and/or CD14+ tumor-associated macrophages, whereas, remarkably, another study on cervical cancer claimed them to consist mainly of CD8+ T cells. 27 The presence of PD-L1+ T cells was also reported in other studies; however, in these studies compelling evidence in the form of double stainings was lacking, 17,34 and, therefore, it is more likely that PD-L1-positive tumor-infiltrating cells are of myeloid origin with an M2 macrophage-like phenotype, as observed by us, which is in accordance with multiple other studies. 31,35,36 Similar M2-like cells, conditioned by tumor-derived soluble factors, have been shown to be poor CD8+ T-cell primers, potent inducers of FoxP3+ regulatory T cells, and producers of proangiogenic and protumor-invasive factors facilitating tumor progression. 37,38
Although different myeloid cell subpopulations and a low CTL/regulatory T-cell ratio have been found to correlate with survival in the cervical tumor microenvironment, 39,40 the precise role of PD-L1-positive tumor-associated macrophages is yet to be fully elucidated. Nevertheless, in vitro observations by Heusinkveld et al. 41 suggest that cervical cancer-derived IL-6 and prostaglandin-E2 convert monocytes to T-cell-tolerizing macrophages with low levels of costimulatory molecules and IL-12p70, and high levels of IL-10 and PD-L1 expression, consistent with a poor ability to prime naïve T cells. In accordance, we have previously shown that high IL-6 in the tumor microenvironment of cervical cancer is associated with poor patient survival. 42,43 This is the first study to report on the difference in PD-L1 expression between squamous cell carcinoma and adenocarcinoma. Two previous publications on PD-L1 expression in cervical cancer did not include adenocarcinoma patients in the cohorts analyzed. 25,27 Strikingly, we observed prognostic differences for PD-L1 expression patterns between squamous cell carcinoma and adenocarcinoma patients; we found significantly more PD-L1 expression by tumor cells (cutoff ≥5%) and higher rates of PD-L1-positive tumor-associated macrophages in squamous cell carcinomas as compared with adenocarcinomas. Similarly, differential findings for PD-L1 expression in the two histological subtypes were reported in lung cancer patients. 35,44 Earlier studies have reported conflicting data on correlations between PD-L1 expression in different solid tumor types and both improved 30,45,46 and poor prognosis. 47-51 However, recent meta-analyses have shown a predominant correlation with poor survival. 52-54 We were not able to detect an association between PD-L1 expression per se and survival, which is in accordance with an earlier study in patients with cervical cancer analyzing the whole cohort through the use of tissue microarrays. 25 Of note, we did find an unambiguous survival benefit for squamous cell carcinoma patients with marginal PD-L1 tumor expression (at the tumor-stromal interface) as compared with patients with diffusely PD-L1-positive tumors. In head and neck cancer, diffuse PD-L1 expression was detected in only 1/14 tumors, whereas marginal PD-L1 expression was detected in 13/14 tumors, but no survival analysis was performed. 31 Marginal PD-L1 expression might be induced by extrinsic factors, such as IFNγ, TNFα, and IL-1β locally produced by juxtaposed T lymphocytes, whereas diffuse PD-L1 expression is more likely to result from constitutive expression due to underlying tumor-intrinsic molecular mechanisms such as PTEN loss and aberrant JAK/STAT signaling. 24,30,55 Importantly, the conjunction of marginal PD-L1 expression with infiltrating effector T cells and the release of type-1 effector cytokines might explain the observed association between marginal expression of PD-L1 and a more favorable prognosis (see Figures 3a and b). Recently, we reported a survival benefit for cervical cancer patients with high numbers of T-bet-positive T cells, indicative of high IFNγ production. 56 In adenocarcinoma, we observed a survival benefit for patients with tumors lacking PD-L1-positive tumor-associated macrophages.
These findings point to a difference in immunological microenvironments and tumor escape mechanisms between cervical adenocarcinoma and squamous cell carcinoma, in line with previous reports on histology-specific oncogenic mutations 3,4 and immunological profiles. 5-7 Our findings suggest that targeting the PD-1/PD-L1 pathway might be a promising immunotherapeutic approach in patients with cervical cancer, as PD-L1 is expressed in 54% of the squamous cell carcinomas. In addition, recent studies have shown that even patients with PD-L1-negative primary tumors, including lung cancer, gastric cancer, colorectal cancer, renal cell cancer, bladder cancer, and melanoma, respond to anti-PD-L1 treatment. 18,24,57 This might be due to the observed heterogeneous and discordant PD-L1 tumor cell staining between primary tumor cells and metastatic tumor cells with, in some cases, PD-L1-positive metastases originating from PD-L1-negative primary tumors (see Supplementary Table 1), which was also observed in clear-cell renal cell carcinoma. 58,59 Adenocarcinoma patients with PD-L1-positive tumor-associated macrophages had a poor survival; however, anti-PD-1 or anti-PD-L1 therapy might be successful, as objective responses were obtained after anti-PD-L1 therapy in bladder cancer patients with PD-L1-positive tumor-infiltrating immune cells. 18 Of note, patient stratification on the basis of PD-L1-positive tumor-associated macrophages is very important in this regard, as patients with adenocarcinoma infiltrated by PD-L1-positive tumor-associated macrophages represented a relatively small minority (see Table 4). In conclusion, this study showed that PD-L1 was more frequently expressed by squamous cell carcinoma than by adenocarcinoma. Diffuse PD-L1 expression in squamous cell carcinoma patients was correlated with poor disease-free survival and disease-specific survival compared with marginal PD-L1 expression, which was associated with a remarkably favorable prognosis. In adenocarcinoma patients, the presence of PD-L1-positive tumor-associated macrophages was associated with a poor disease-specific survival as compared with patients without PD-L1-positive tumor-associated macrophages. Our data thus suggest that targeting the PD-1/PD-L1 pathway may be therapeutically efficacious and should be considered in the treatment of cervical cancer patients.
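For readers who wish to reproduce the style of survival comparison used throughout this study, a minimal Kaplan-Meier/log-rank sketch follows. It relies on the third-party lifelines package (our implementation choice; the authors used SPSS and GraphPad Prism), and the event times are invented for illustration, not patient data.

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical disease-specific survival data (months; event=1 means death from disease)
diffuse_t  = [12, 20, 25, 33, 41, 60, 60]
diffuse_e  = [ 1,  1,  1,  1,  1,  0,  0]
marginal_t = [30, 45, 55, 60, 60, 60, 60]
marginal_e = [ 1,  0,  0,  0,  0,  0,  0]

kmf = KaplanMeierFitter()
kmf.fit(diffuse_t, diffuse_e, label="diffuse PD-L1")
ax = kmf.plot_survival_function()
kmf.fit(marginal_t, marginal_e, label="marginal PD-L1")
kmf.plot_survival_function(ax=ax)

result = logrank_test(diffuse_t, marginal_t,
                      event_observed_A=diffuse_e, event_observed_B=marginal_e)
print(f"log-rank P = {result.p_value:.3f}")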
Outbreak Investigation of Diarrheal Diseases in Jajarkot

Background: Diarrhea is a major public health problem in Nepal. Recently, there was an outbreak of diarrheal diseases in different districts of the mid- and far-western regions of Nepal, and the most affected district was Jajarkot. The objective of this study was to detect the causative organism and analyze the epidemic outbreak patterns of diarrhea in selected health institutions in Jajarkot district, Midwestern Region of Nepal, in terms of their demographic characteristics and laboratory findings of stool specimens.

INTRODUCTION

Diarrhea is a major public health problem in Nepal. It is characterized by the passage of three or more loose or liquid stools per day, or more frequently than is normal for the individual. 1 The infection may be spread through contaminated food or drinking-water, or from person to person as a result of poor hygiene. 1 It is an important cause of morbidity and mortality in many regions of the world, with more than 4 billion cases and 2.5 million deaths estimated to occur annually. 2 Intestinal infection with V. cholerae results in the loss of a large volume of watery stool, leading to severe and rapidly progressing dehydration and shock. Without adequate rehydration therapy, severe cholera kills about half of affected individuals. 2 Recently, there was an outbreak of diarrheal diseases in Jajarkot, in the mid-western region of Nepal. The objective of this study was to detect the causative organism and analyze the epidemic outbreak patterns of diarrhea in selected health institutions in Jajarkot district, in terms of demographic characteristics and laboratory findings of stool specimens.

METHODS

A descriptive study was conducted using secondary data from two health institutions of Jajarkot district from mid-March to July 2009. Purposive samples of stool and demographic data were collected from Khalanga District Hospital and Khagenkot (Dalli) health post. The stool samples were collected in Cary Blair transport media and brought to the National Public Health Laboratory, Kathmandu, for further microbiological analysis. The demographic profiles were entered using Microsoft Excel 2003. The statistical analysis was done in the Statistical Package for the Social Sciences (SPSS), version 13.0 for Windows.

RESULTS

A total of 425 cases admitted to the District Hospital, Khalanga, from mid-March 2009 to mid-July 2009 were analysed.
The first four weeks, starting from the third week of March 2009, showed a steady rise in morbidity with no reported deaths. From the first week of April 2009, however, the morbidity trend increased, reaching 41 cases in mid-May. There was a steady trend of low morbidity and mortality until the second week of June, followed by a sudden increase in both from the third week of June. Morbidity was still increasing by the end of July, whereas the mortality trend was decreasing. The maximum number of cases (97) was recorded in the second week of July 2009. In terms of mortality, the peak of 38 deaths was also observed in the second week of July 2009 (Figure 1).

Jajarkot district is widely inhabited by the Dalit group (ethnic code 1), the Disadvantaged group (ethnic code 2) and the Upper Caste group (ethnic code 6). Table 3 shows that, of the patients visiting the district hospital, the majority (49%) belonged to the Upper Caste group (Brahmin, Chhetri, Thakuri, etc.), while at the Dalli Health Post the majority of the affected population belonged to the Dalit group (50%). The Disadvantaged group also appeared to be among the more affected categories at both sites.

From the diarrhea mortality and morbidity data provided by the district health office for Jajarkot district as a whole, the attack rate (AR) was calculated to be 8.2% and the case fatality ratio (CFR) 1% (cases: 12,500; deaths: 128; Jajarkot population: 151,551). The most affected VDCs in terms of mortality were Majhkot and Kortang, where the numbers of deaths were 19 and 16, respectively. As reported, only 10 of the 128 deaths occurred in a health institution, which suggests that patients with access to public health institutions were less likely to die from diarrhea.
DISCUSSION

The clinical presentation of profuse watery stools, vomiting and rapidly progressing dehydration leading to death within a short time raised suspicion of cholera as the cause of the Jajarkot and Midwest diarrhea epidemic. Finding around 40% (5 out of 13) of stool samples positive for cholera strongly suggested that the Jajarkot diarrhea outbreak should be considered a cholera epidemic. This initial finding was later substantiated by further detection of cholera organisms in stool samples collected from Jajarkot, Rukum and Dailekh during the preparation of this article.

Cholera continues to be a major public health problem despite the fact that the public health aspects of the disease were described in detail over a century ago. 3 Tamang et al. reported 46 laboratory-confirmed V. cholerae cases out of 148 cases of watery diarrhea (31%) in a study conducted in the hilly district of Kavre. 4 In that study, only strain O1 (El Tor, Ogawa) was reported. This is also reflected in our findings, where Vibrio cholerae O1 biotype El Tor serotype Ogawa was found to be the predominant strain.

Shrestha et al. reported an outbreak of cholera in a similar seasonal period, but with a higher isolation rate of V. cholerae. 5 Our investigation showed that peak morbidity and mortality occurred from mid-June to mid-July. Similar observations have been reported by Shrestha et al. 6 Kansakar et al. reported V. cholerae O1 El Tor Ogawa to be responsible for a cholera outbreak in the Kathmandu Valley. 7 All these findings from studies carried out in the Kathmandu Valley, Kavre district and Jajarkot district confirm the presence of V. cholerae in the respective outbreaks. This suggests that cholera outbreaks are likely to continue to occur in coming years. Therefore, a reliable and rapid response mechanism needs to be put in place for all future epidemics, in the form of an epidemiology- and laboratory-based surveillance system.

The two health institutions were selected purposively because most of the other health institutions of the district were inaccessible, and hence wider data sampling was not considered feasible. A more robust and systematic investigation would allow closer study of such an outbreak in our context.

CONCLUSIONS

Diarrhea remains a major public health problem in Nepal. Its lethal potential in underdeveloped, remote and malnourished populations should not be underestimated. Cholera appears to have been the most important cause of mortality in the Jajarkot diarrhea outbreak. In view of the detection of cholera organisms, as in previous outbreaks in different districts of Nepal, any diarrhea outbreak in the country should be closely monitored for the possibility of a cholera epidemic. Cholera is likely to continue to take a death toll in diarrhea outbreaks in coming years. Therefore, a reliable and rapid response mechanism should be in place for all future outbreaks, and an epidemiology- and laboratory-based surveillance system is strongly recommended.

Figure 1. Diarrheal morbidity and mortality trends of Jajarkot district.
Table 1. Organisms detected in the stool samples.
Table 2. Age-wise distribution of diarrhea cases in Jajarkot.
Table 3. Ethnicity-wise distribution of diarrhea cases in Jajarkot.
Production of Yeast Invertase from Sauerkraut Waste

Sauerkraut waste was found to be a favorable medium for the production of invertase (β-D-fructofuranoside fructohydrolase, EC 3.2.1.26) by Candida utilis.

Sauerkraut waste presents a serious treatment problem because of its extremely high biochemical oxygen demand (BOD), low pH, and high NaCl content (4). Hang et al. (3) found sauerkraut waste to be a more favorable medium for cultivating yeasts than malt extract broth and as good as or better than peptone-dextrose broth. Of the species tested, Candida utilis grew most rapidly and gave the highest cell yield. Dworschack and Wickerham (2) demonstrated that C. utilis produced exceptionally large amounts of extracellular and total invertase (β-D-fructofuranoside fructohydrolase, EC 3.2.1.26) in a medium containing 3% sucrose, 0.5% peptone, and 0.3% yeast extract. The objective of this work was to study the capability of C. utilis to produce invertase in sauerkraut waste.

Sauerkraut waste was obtained from a nearby factory and contained the following, expressed as milligrams per liter: BOD, 12,400; total acid as lactic, 7,400; Kjeldahl nitrogen, 620; total phosphorus, 81; and NaCl, 18,600. Experiments were carried out in 500-ml Erlenmeyer flasks containing 100 ml of autoclaved sauerkraut waste incubated at 25°C on a rotary shaker at a speed of 200 rpm. All flasks were inoculated with a 22-h-old yeast culture at a 1% level (vol/vol). The methods used to determine the 5-day BOD, total acid as lactic, Kjeldahl nitrogen, total phosphorus, and NaCl were described previously (4). Dry cell weight was determined by filtering, washing with distilled water, and drying at 105°C overnight. Reducing sugar was measured by the method of Clark (1). Total invertase was determined directly on the whole culture. The supernatant fluid, after centrifugation at 2,000 rpm for 10 min, was used for the extracellular invertase assay. One unit of invertase is defined as the quantity of enzyme which catalyzes the formation of 1 µmol of reducing sugar per 5 min at pH 4.5 and 25°C. All samples were prepared in duplicate and the reported data are average values.

Invertase production approximately paralleled the amount of yeast growth (Fig. 1). The most rapid production of invertase occurred between 12 and 24 h, during the exponential growth phase. The ratio of total invertase to extracellular invertase was approximately 5 to 1. Dworschack and Wickerham (2) used peptone-sucrose-yeast extract broth to produce yeast invertase. However, we found in this work that 48-h-old washed cells of C. utilis grown in sauerkraut waste produced 672,000 U of invertase per g of dried yeast, whereas those grown in peptone-sucrose-yeast extract broth produced only 307,500 U of invertase per g of dried yeast. C. utilis thus produced twice as much invertase in sauerkraut waste as in peptone-sucrose-yeast extract broth. This could be attributed to the presence of some stimulating substances in the waste water. It is also possible that the lactic acid that is present is a more favorable carbon source for invertase production than is sucrose. Dworschack and Wickerham (2) showed that C. utilis produced high yields of invertase whether the carbon source was sucrose, glucose, maltose, or xylose, and still higher yields with lactic acid, glycerol, and ethyl alcohol.
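The unit definition above translates directly into arithmetic; the short sketch below (Python) converts a reducing-sugar assay reading into invertase units and compares the two specific activities reported here. The assay numbers in the example are invented for illustration; the two specific-activity values are the ones quoted above.

def invertase_units(umol_reducing_sugar, assay_minutes):
    """One unit = 1 umol reducing sugar formed per 5 min (pH 4.5, 25 C)."""
    return umol_reducing_sugar / (assay_minutes / 5.0)

# Hypothetical assay: 24 umol of reducing sugar formed in 5 min -> 24 units
print(f"{invertase_units(24.0, 5.0):.0f} units")

# Specific activities reported in this note (units per g of dried yeast):
sauerkraut_waste = 672_000
peptone_broth    = 307_500
print(f"ratio: {sauerkraut_waste / peptone_broth:.1f}x")   # ~2.2-fold higher in waste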
The yeast completely neutralized the waste acid, and reduced the BOD, Kjeldahl nitrogen, and total phosphorus to 1,400, 168, and 8 mg/liter, respectively. Our data thus indicate that C. utilis could be used to reduce the strength of sauerkraut waste with the concomitant production of yeast invertase.
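The waste-strength reductions reported above can likewise be expressed as percentage removals, using the influent values from the waste characterization earlier in this note; a minimal check (Python):

# (initial, final) in mg/liter, taken from the text above
parameters = {
    "BOD":               (12_400, 1_400),
    "Kjeldahl nitrogen": (   620,   168),
    "total phosphorus":  (    81,     8),
}

for name, (before, after) in parameters.items():
    reduction = (before - after) / before
    print(f"{name:18s}: {reduction:.0%} reduction")   # ~89%, ~73%, ~90%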
Issues on the Translation of Certain English Collocations into Arabic: from Collocations to Free Constructions in the Target Language

The translation of collocations between different languages is not always an easy task, but can at times be a problematic and challenging practice amongst linguists and translators/interpreters. The present paper argues that the translation of English collocations into Arabic can be a flexible practice if Arabic possesses the equivalent collocation while the literal meaning of the whole English collocation is intended. The translator can still find an appropriate equivalent collocation in Arabic, even if the literal meaning of the first word in the English collocation is not intended. This, however, requires the translator to find a word in Arabic that conveys the intended meaning of the word in English and collocates with the other Arabic word simultaneously. The paper also claims that the translator may resort to making use of a free construction in Arabic to stand for the English collocation concerned. This often takes place if Arabic does not possess an equivalent collocation to the English collocation because the literal meaning of the latter is not the intended meaning, the verbs in the former and the latter differ in terms of type and function, and/or the verb in the former can convey the intended meaning of the whole English collocation.

Introduction

The mastery of collocation is deemed an important part of foreign/second language learning as well as of translation/interpreting. Translators/interpreters in particular and foreign/second language learners in general should pay careful attention to foreign/second language collocations. This is owing to the fact that the L1 has a significant bearing on L2 production, including collocations, as investigated and confirmed by scholars such as Bahns & Eldaw (1993) and Hussein (1990). Such scholars have found that L2 learners commit multiple mistakes when making up L2 collocations. The same notion may apply to novice translators/interpreters, particularly when rendering texts from their mother tongue into a foreign language. Indeed, combining textual elements that do not collocate in a particular language may make the text look alien, foreign and exotic to the target reader, which may negatively affect its comprehensibility. This may also have a negative impact on the thematic structure of the text alongside the level of coherence and cohesion of the whole text. This is advocated by Hatim & Mason (1997), who contend that collocation plays a significant role in the cohesion and coherence of texts.

The present paper addresses the concept of collocation, discussing and analysing the translation of certain English collocations into Arabic. It at the outset offers certain definitions of the concept of collocation, drawing a clear line between collocations and other idiomatic expressions. A relatively succinct theoretical account of collocation will next be provided, showing how collocation was first introduced by Palmer (1938), how it was then developed as a technical term by Firth (1957), and how different linguists later proposed different theories to deal with the concept of collocation. The paper then presents two different classifications of collocation made by different linguists. The first is suggested by Benson et al (1986), who divide collocations into lexical and grammatical collocations.
The second classification is proposed by Newmark (1988), who asserts that collocation falls into three categories: verb plus object collocation, adjective plus noun collocation and noun plus noun collocation. A large section is next devoted to tackling the issues and problems in the translation of certain English collocations into Arabic. This section confirms that the problem in the translation of English collocations into Arabic arises when the receptor language does not possess an equivalent collocation to that of the source language. Two types of English collocations have been meticulously examined and thoroughly studied as rendered into Arabic: verb plus object collocations and adjective plus noun collocations. The paper discusses diverse cases of the aforementioned English collocations and how they can acceptably be translated into Arabic in all cases.

Finally, the present paper argues that the translation of English collocations into Arabic can be a flexible practice if Arabic possesses the equivalent collocation while the literal meaning of the whole English collocation is intended. The translator can still find an appropriate equivalent collocation in Arabic, even if the literal meaning of the first word in the English collocation is not intended. This, however, requires the translator to find a word in Arabic that conveys the intended meaning of the word in English and collocates with the other Arabic word simultaneously. The paper also claims that the translator may resort to making use of a free construction in Arabic to stand for the English collocation concerned. This often takes place if Arabic does not possess an equivalent collocation to the English collocation because the literal meaning of the latter is not the intended meaning, the verbs in the former and the latter differ in terms of type and function, and/or the verb in the former can convey the intended meaning of the whole English collocation.

Definition and Concept

Collocation forms an important part of any language. It refers to the patterns of lexical items that co-occur (Mahadi, Vaezian & Akbari, 2010, p. 25). Indeed, collocations are said to be words which come in sequence 'with a greater than random probability' (Bowker, 2002, p. 64). Collocation is a combination of two or more words that appear together regularly in different contexts of a language. This runs in line with Robins (1967), who asserts that collocation is a regular association of lexical items in a particular language (p. 63). This is seconded by Hoogland (1993), who points out that collocation is deemed a lexical relationship that involves two or more lexical items following each other in a sequence, as the use of a specific word, e.g. a noun, limits the choice of other words that can follow it, e.g. adjectives (p. 75). Brashi (2009) claims that collocation points to the tendency of specific words to come together in a particular language, as opposed to other words in the same language which do not tend to come in sequence (p. 21). He goes on to confirm that the meaning of any particular collocation can generally be derived from at least one of its constituents (p. 21). Such a crucial aspect of collocation is what draws a clear line between it and idioms, whose meanings are not deduced from the meanings of their constituents (Cruse, 1986, p. 37). For instance, the intended meaning of the idiomatic expression 'football is not my cup of tea' resides in the fact that 'I do not enjoy playing football'.
Needless to say, the literal meanings of the constituents of the above idiom are not intended and will never lead to the appropriate intended meaning of the idiom. However, the idiomatic meaning of the idiom, which has nothing to do with the literal meanings of its constituents, is the appropriate intended meaning. Conversely, the intended meaning of the collocation 'to perform a task' is 'to do a job' or the like. Evidently, the literal meanings of the constituents of the above collocation, which may read as 'do/make/carry out/etc.' and 'job/work/etc.', do unquestionably lead to the appropriate intended meaning of the collocation.

Brief Theoretical Account of Collocation

The importance of the concept of collocation springs chiefly from the claim made by a number of linguists that all languages possess fixed forms and known expressions that are familiar to native speakers and are viewed by them as chunks of lexical items rather than individual words. Such expressions are used in written as well as in spoken language without change. One pivotal subarea of these expressions is known as collocation (Brashi, 2009, pp. 22-23). A collocation is considered to be a link between lexical items. Such a relation is deemed arbitrary as it is not based on rules; rather, it is primarily grounded in common usage (Benson et al, 1986). The term 'collocation' was first introduced by Palmer (1938) in his well-known dictionary 'A Grammar of English Words'. It then became a technical term, i.e. entered the linguistics terminology, after the work of Firth (1957), who propounded the theory of 'meaning by collocation' (p. 194). He proposed in his theory that collocation has a lexical meaning "at the syntagmatic level not at the paradigmatic level" (Firth, 1957, p. 196; Brashi, 2009, p. 23). The difference between the syntagmatic relationship of lexis and the paradigmatic relationship resides chiefly in the notion that the former is concerned with the ability of a particular word to combine with a group of other lexical items, whilst the latter is composed of a set of lexical elements relating to the same word class which can at the same time be replaced by one another in a particular lexical and grammatical context (Brashi, 2009, p. 23). Hence, Firth's (1957) study of collocation focused primarily on the syntagmatic relationship, i.e. the meaning relationships between words, and not on the meaning of individual words, the main concern of the paradigmatic relationship. An example of the paradigmatic relationship, according to the explanation given by Firth (1957), may lie in the relationship between the following two words: 'achieve' and 'accomplish', which generally mean 'yuḥaqqiqu' in Arabic. It is noteworthy that both words belong to the same word class, 'verb', and can replace one another in certain linguistic contexts as they are deemed synonymous. However, this paradigmatic relationship does not at all address the ability of one of these words to combine with the other word, the main focus of the concept of collocation. A good example of the syntagmatic relationships between lexical items was provided by Firth (1968) as an adjective-noun collocation: 'dark night'. He points out that one of the meanings of 'dark' is its collocation with 'night' and one of the meanings of 'night' is its collocation with 'dark' (Firth, 1968, p. 182; Brashi, 2009, p. 23). Based on the foregoing, Firth (1968) confirms the importance of recognizing the set of words that accompany a particular word if its meaning is to be completely studied.
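Firth's 'meaning by collocation' also has a natural statistical reading, echoed in Bowker's phrase quoted earlier: a collocation is a word pair that co-occurs with greater than random probability. The sketch below (Python) scores bigrams by pointwise mutual information over a toy corpus built around Firth's own 'dark night' example; both the corpus and the choice of PMI are our illustrative assumptions, not part of the paper's framework.

import math
from collections import Counter

corpus = ("dark night fell on the town . the sky was dark . "
          "the moon lit the dark night . the town slept .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def pmi(w1, w2):
    """Pointwise mutual information: log of observed vs. chance co-occurrence."""
    p_pair = bigrams[(w1, w2)] / (n - 1)
    p1, p2 = unigrams[w1] / n, unigrams[w2] / n
    return math.log2(p_pair / (p1 * p2))

print(f"PMI(dark, night) = {pmi('dark', 'night'):.2f}")  # well above 0: a collocation
print(f"PMI(the, dark)   = {pmi('the', 'dark'):.2f}")    # near 0: chance co-occurrence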
Lyons (1966) at the outset rejected Firth's theory of 'meaning by collocation', arguing that a particular collocation cannot be comprehended from all the constituent parts that make up the expression in which the collocation is stated. He goes on to claim that part of the meaning of a single lexical item in the collocation may not hinge upon its association with other lexical item(s) in the same collocation. Indeed, Lyons's (1966) view of collocation is chiefly founded on a distributional theory, which is contrary to Firth's (1957) theory of 'meaning by collocation'. Lyons (1977), however, seems to have changed his views, as he asserts that there exists a high degree of interdependence between lexical items in the sense that such lexemes appear in texts and collocate with one another. He further adds that the potentiality of such lexemes to collocate with one another can rationally be analysed as part of the meaning of such lexemes (p. 613).

Halliday (1966) does not view lexis as part of grammatical theory, but rather as a complementary element thereto. He introduces the concept of 'set' as an additional dimension of word collocation. He views collocation as a co-occurrence relationship of words that come in sequence. On the other hand, he considers the set a group of members with an equal degree of occurrence in a particular collocation (Halliday, 1966, p. 153; Brashi, 2009, pp. 23-24). For instance, the word 'strike' and the word 'maintain' belong to the same lexical set as they both collocate with the noun 'balance'.

Sinclair (1966) views lexis and grammar from two distinct perspectives (p. 411). He asserts that patterns of language are generally dealt with in grammar in such a way that they are addressed through a system of options. Nonetheless, the key point that Sinclair (1966) is concerned with is the inclination of lexemes to come in sequence in the form of different collocations. Such inclinations ought to inform us of important facts concerning language that can never otherwise be obtained by grammatical analysis (p. 411). He goes on to demonstrate that the contrast between different lexemes is deemed more flexible than that of classes of grammar (Sinclair, 1966, p. 411; Brashi, 2009, p. 24). One pivotal aspect of Sinclair's (1966) theory lies mainly in his distinction between significant collocation and casual collocation. He explains that a significant collocation is one that occurs more often than would be expected on the basis of the frequencies of the particular lexemes.

Crystal (1992) contends that collocation forms a real obstacle to foreign language mastery. He then points out that the more fixed the form of a collocation is, the more we consider it an idiom, i.e. it needs to be learned as a whole, and not in part (p. 105). This is owing to the fact that the idiomatic collective meaning of idioms cannot usually be understood from the meanings of their constituent parts, as explained earlier. Halliday & Hasan (1976) regard collocation as 'the most problematic part' of text cohesion (p. 288). Indeed, collocations are mostly viewed as language-specific; hence, they lead to language mistakes. They may cause a problem to foreign/second language learners as well as novice translators/interpreters when the equivalent mother tongue construction makes use of different collocations (Brashi, 2009, p. 24). In fact, collocations do not only cause a problem to foreign/second language learners and novice translators/interpreters, but also to native speakers, as accentuated by Palmer (1979).
There is ample evidence that even some native speakers of English encounter problems in collocating specific lexemes in certain written contexts. Such problems vary depending on the writer's experience in academic writing alongside his/her education (Palmer, 1979; Hussein, 1990; Baltova, 1994; Brashi, 2009). Based on the foregoing, foreign/second language learners, including translators/interpreters, should not learn a word in isolation; rather, they should learn word collocations, i.e. common combinations of lexical items (Khuwaileh, 2000). This notion is in line with Faerch et al. (1984), who stress the importance of learning words by way of combining them with their collocates. They argue for the merit of presenting new words with their collocates, which would unquestionably scaffold the L2 learner to a great extent. Commenting on the relatively little knowledge of collocation L2 learners possess, Howarth (1998) holds the view that L2 learners are generally not informed of a considerable number of collocations. He continues to argue that it is not only L2 learners who are not cognisant of this category; it is also an area that has received little attention in both lexicography and language pedagogy (p. 162). Moreover, it is claimed that even the best L2 learners face certain difficulties in producing particular collocations correctly (Bonk, 2000). This is lent credence by McCarthy (1990), who contends that even advanced L2 learners may produce unacceptable or inappropriate collocations (p. 13). It seems evident that mother tongue influence is clearly responsible for many of the errors made in the production of foreign/second language collocations, as shown by Hussein (1990), Biskup (1992), and Bahns and Eldaw (1993). This ipso facto proves that L2 learners generally encounter real difficulties concerning collocations regardless of their level of language proficiency. The same applies to novice translators/interpreters, with varying degrees of error, particularly when these novice translators/interpreters render texts into a language which is not their own. Producing incorrect collocations in English by foreign/second language learners or novice translators/interpreters may well lead to the formulation of a text that sounds alien, foreign, exotic, unnatural, etc. Types of Collocation Collocations are commonly used in languages in general, but are frequently employed in English in particular, as demonstrated by Hill (2000). They represent the vast majority of English expressions (Lewis, 2000). In crude terms, collocation can be viewed as an umbrella that generally covers all fixed expressions and phrases in all types of texts (Brashi, 2009, p. 22). Benson et al. (1986) classify collocations under two main categories: lexical collocations and grammatical collocations. Such categories are further divided into subcategories. Lexical collocations comprise nouns, verbs, adjectives and adverbs. They usually do not include clauses, prepositions and infinitives. An example of a lexical collocation is 'rich imagination', which is translated into Arabic as 'khayālun wāsiʿun, literally (wide imagination)'. The second category stated by Benson et al. (1986) is grammatical collocations. These form a phrase that is normally composed of a noun, an adjective or a verb combined with a preposition. A grammatical collocation can also consist of a grammatical structure containing clauses or infinitives.
An example of a grammatical collocation, in Chomsky's (1965) terminology, is the phrase 'decide on a boat', which means 'choose to buy a boat', whereas the meaning of the said phrase as a free combination is 'pass a decision while on a boat' (Brashi, 2005). It is worth noting that grammatical collocations will be beyond the scope of the present paper, which will focus chiefly on lexical collocations. Newmark (1988) argues that the major difficulties in the work of translation are lexical rather than grammatical. This is owing to the fact that translating deals mainly with words, phrases, collocations and idioms (p. 32). Another classification of collocations is provided by Newmark (1988), who classifies collocations into three categories. The first is an adjective followed by a noun, whilst the second is a noun followed by a noun. The last category represents a verb plus object collocation (p. 212). This runs in line with Ghazala (1995), who points out that collocation in English can appear in the form of an adjective followed by a noun, such as 'blind confidence', a noun followed by a noun, such as 'job security', and a verb followed by a noun, such as 'do a job' (p. 108). These are deemed the most commonly used types of collocations in the English language (Newmark, 1988; Ghazala, 1995; Brashi, 2005). It is worth pointing out that the present paper will deal only with the first and the third categories, as the second does not usually pose problems when rendered from English into Arabic. Noun plus noun collocation is rendered into Arabic with the use of either one of the following grammatical constructions. The first is a noun followed by a noun (genitive construction), whereby 'research proposal' is rendered into Arabic as 'muqtarahu baḥthin'. The second is a noun followed by an adjective, whereby 'state university' is translated into Arabic as 'jāmiʿatun ḥukūmiyyatun'. Issues and Problems in the Translation of Collocations A number of linguists, translators and interpreters have recognised the concept of collocation as an obstacle or a problem; amongst those are Palmer (1979), Hussein (1990), Bahns and Eldaw (1993), Baltova (1994), Brashi (2009), and so on. Collocational competence, which refers to acquaintance and knowledge of collocation, is unequivocally deemed a pivotal requirement for foreign/second language mastery and, therefore, for translation/interpreting. Speaking and writing a foreign/second language in the same way as its native speakers requires observing collocations and applying them in language production. Hence, collocational competence may well stand as a high level of language proficiency that translators/interpreters as well as foreign/second language learners should achieve (Brashi, 2009, p. 22). The translation of collocations has long been a constant problem in the field of translation theory and practice. Translators encounter a real challenge in matching the appropriate nouns with the appropriate nouns, the appropriate adjectives with the appropriate nouns, the appropriate verbs with the appropriate nouns, etc. Such a problem emanates from the fact that different languages realise and configure collocations in different ways. Furthermore, the equivalents of words that collocate in a particular language may not necessarily collocate in another (Brashi, 2005; Zughoul, 1991).
A number of translation scholars have addressed the concept of collocation as a problematic area in translation, amongst them Beekman and Callow (1974), Emery (1988a; 1988b), Newmark (1988), Hatim and Mason (1990), Baker (1992), Smadja (1993) and others. Newmark (1988) claims that realising and recognising a collocation are amongst the most pivotal problems in translating. He goes on to argue that translation is at times an ongoing struggle until the translator finds the appropriate collocations in the receptor language (p. 213). Brashi (2009) claims that collocation can be viewed as a concept falling between syntax and lexis. Such a notion supports the view that language competence is best described as a process of interaction between syntax and lexis (p. 22). Beekman and Callow (1974) view the translation of collocations as an appealing feature of the translator's work and a criterion against which the competence of translators is assessed. The translation of collocations usually demands deep knowledge, as there is often no equivalence between collocational structures in different languages (p. 163). Hatim and Mason (1990) assert that one crucial problem that often confronts translators resides chiefly in the production of the appropriate collocations in the receptor language. They further add that the effect of the source language will always be there on the target text, even if produced by expert translators, a matter which would ultimately result in unnatural collocations (p. 204). Baker (1992) claims that collocations across languages are generally arbitrary. Heliel (1990) believes that collocations cause real problems in translating. He goes on to explain that while free combinations are flexibly constructed, collocations pose a real challenge for translators when rendering texts from English into Arabic and vice versa. Lexical items that collocate with many other words create obstacles for translators. Translators encounter major problems in finding the appropriate equivalent in the receptor language, which may not exist in ordinary bilingual dictionaries (Brashi, 2005). As indicated above, the present paper will discuss two types of English collocations and how they can appropriately be rendered into Arabic. This will also show the types of problems associated with the translation of collocations. The two types of collocations discussed in the current study are verb plus object and adjective plus noun. Verb Plus Object This is a commonly employed collocation in English. Generally, English verb plus object collocations are translated into Arabic with the use of the same grammatical structure if Arabic possesses the same equivalent collocations. For instance, 'to perform a task' is translated into Arabic as 'yuʾaddī mahammatan, literally (perform a task)'. Evidently, the English collocation consists of the verb 'perform', which has the Arabic equivalent verb 'yuʾaddī', and the object 'task', which has the Arabic equivalent noun 'mahammatan'. Hence, such an English collocation is easily rendered into Arabic as it possesses an equivalent Arabic collocation which keeps the same grammatical structure intact. At times, an English verb plus object collocation may be rendered into Arabic with the use of a collocation employing the same grammatical structure, but using a verb which is not the literal rendition of the English verb (Brashi, 2005).
This requires full understanding of the intended collective meaning of the collocation in question, so that the translator can arrive at a TT collocation which conveys the same intended collective meaning as that of the ST collocation. This confirms the fact that translating English verb plus object collocations into Arabic is not always an easy task (Brashi, 2005). An example of this situation can be found in the English collocation 'to run an engine', which is rendered into Arabic as 'yushaghghilu muḥarrikan, literally (operate an engine)'. Having examined the previous example, it is obvious that the verb in the English collocation, 'run', has not been translated literally into Arabic. This is due to the fact that the literal rendition of the verb 'run' into Arabic, which reads as 'yajrī', does not collocate with the Arabic noun 'muḥarrikan', nor does it convey the same intended meaning as that relayed by the verb 'run'. Consequently, the English verb 'run' has been rendered into Arabic with the verb 'yushaghghilu' to mean the same and to collocate well with the noun 'muḥarrikan', which literally means 'engine'. In certain instances, an English verb plus object collocation may be rendered into Arabic with the use of a different grammatical structure. In other words, English verb plus object collocations can sometimes be translated into Arabic using verb plus preposition plus noun (Ghazala, 1995). This is a clear example of domestication, albeit at a structural level. It is claimed that any translation task involves a domesticating exercise (Venuti, 2007). Such a notion clearly presents the significance of adaptation in the process of establishing multilingual communication (Vandal-Sirois & Bastin, 2012, p. 21). An example of this case can be shown in the collocation 'to pay a visit', which can be translated into Arabic as 'yaqūmu biziyāratin, literally (pay a visit)'. It seems obvious that the foregoing example involves two essential points that merit discussion. The first lies chiefly in the fact that the verb 'pay' in the English collocation has not been translated verbatim into Arabic as 'yadfaʿu'. This springs from the fact that the verb 'yadfaʿu' does not collocate with the Arabic prepositional phrase 'biziyāratin'. Moreover, the intended meaning of the English verb 'pay' in the collocation concerned is not the literal meaning. The second point associated with the example above resides mainly in the grammatical transposition that has occurred as a result of translating the noun 'visit' into a prepositional phrase, 'biziyāratin'. Such grammatical transformation has come about as a result of the difference in type and function between the English verb and its Arabic equivalent verb. The former is transitive, i.e. it requires an object, while the latter is intransitive, i.e. it does not require an object. Hence, the exercise of grammatical transposition on the previous example has been obligatory to arrive at the appropriate Arabic collocation. Dickins et al. (2002) point out that literal translation often demands the exercise of grammatical transposition. An English verb plus object collocation may also be rendered into Arabic by way of omission. Ghazala (1995) asserts that an English verb plus object collocation can exceptionally be translated into Arabic with the use of a verb alone. This is known as translation by omission (Dickins et al., 2002). An example of this situation is 'to shake hands', which is rendered into Arabic as 'yuṣāfiḥu, literally (shake hands)'.
This example presents the difference between languages in terms of expressing different propositions. While English has required two lexical items, i.e. verb plus object, to convey that a particular person uses his/her hand to salute another, Arabic has needed only one word, i.e. a verb, to express the same proposition. The example also shows that the verb in the English collocation, 'shake', has not been translated verbatim into Arabic as 'yahuzzu'. This is owing to the fact that the literal meaning of the English verb is not what is intended in this very situation; rather, the collective meaning of the English collocation is what is meant. Having examined the above, it seems evident that the translation of English verb plus object collocations into Arabic is not always a simple practice; rather, it requires full awareness and deep knowledge of the meaning and structure of the collocations in both languages. An English verb plus object collocation can be flexibly rendered into Arabic with the use of the same grammatical structure if Arabic possesses the same equivalent collocation. In this very instance, the English verb and object are rendered verbatim into Arabic, as the literal meaning of the English collocation is what is intended. However, in some other instances, the literal meaning of the verb in the English collocation is not intended, while the English object may correctly be translated literally into Arabic. In this case, the translator needs to find a verb in Arabic that conveys the intended collective meaning of the English collocation as well as collocating well with the Arabic object. In doing so, the English verb plus object collocation will be translated into Arabic with the use of the same grammatical structure even though the English verb has not been translated literally. At times, the grammatical structure of the English verb plus object collocation may be sacrificed on the grounds that the translator needs to exercise grammatical transposition on the Arabic collocation and/or he/she adopts the strategy of translation by omission. These two cases may emerge if the literal meaning of the English verb is not intended, if the verbs in the English and Arabic collocations differ in terms of type and function, and/or if the Arabic verb on its own conveys the intended collective meaning of the whole English verb plus object collocation. If the first two conditions are met, English verb plus object collocations are often translated into Arabic with the use of a verb plus prepositional phrase. However, if the last condition is met, an English verb plus object collocation is acceptably rendered into Arabic using a verb alone. Adjective Plus Noun English adjective plus noun collocations are often rendered into Arabic with the use of noun plus adjective collocations (Ghazala, 1993; Brashi, 2005). This is due mainly to the fact that English and Arabic differ in terms of their word order and how different parts of speech are syntactically distributed and organized in the two languages. Hence, English adjective plus noun collocations are translated into Arabic with the use of a noun plus adjective collocation if Arabic possesses the same equivalent collocation. An example of this case can be illustrated by the phrase 'hard labour', which can be rendered into Arabic as 'ʿamalun shāqun, literally (hard labour)'.
Needless to say, the English adjective 'hard' has been rendered verbatim as 'shāqun', and the same applies to the English noun 'labour', which has been rendered literally as 'ʿamalun', though the word order of the Arabic collocation is reversed as a result of the difference in syntax between the two languages. English adjective plus noun collocations can still be rendered into Arabic while keeping the same Arabic grammatical structure as the one shown above, even if the intended meaning of the English adjective is not the literal meaning. For instance, 'great pleasure' is translated into Arabic as 'saʿādatun ghāmiratun, literally (covering pleasure)'. Having examined the example concerned, it is obvious that the English adjective 'great' has not been rendered verbatim into Arabic as 'ʿaẓīmatun'. One reason for this is that the literal meaning of the term is not the intended meaning in this very situation. What is more, if the literal meaning had been employed, it would not collocate with the Arabic noun 'saʿādatun', nor would it produce an idiomatic, natural target text. Conversely, the English noun 'pleasure' has been literally translated into Arabic as 'saʿādatun', as this is the intended meaning in this very situation. As indicated earlier, the word order of the Arabic collocation is reversed for syntactic reasons. At times, English adjective plus noun collocations may be rendered into Arabic with the use of the same grammatical structure, even though Arabic usually places the adjective after the noun. This is known as a 'causative adjective' (Brashi, 2005). In such a grammatical structure, the adjective is added to the noun in a genitive addition. It is noteworthy that the typical Arabic noun plus adjective collocation can still be utilised. An example of this case is the English adjective plus noun collocation 'nice weather', which can be rendered into Arabic with the typical noun plus adjective collocation 'ṭaqsun ʿalīlun, literally (nice weather)'. Likewise, the previous English collocation can be translated into Arabic using the same grammatical structure through genitive addition, as in 'ʿalīlu alṭaqsi, literally (nice weather)'. Clearly, the English adjective 'nice' has been rendered into Arabic with the Arabic adjective 'ʿalīlu'. Similarly, the English noun 'weather' has been translated into Arabic using the noun 'alṭaqsi'. It is worth pointing out that such an Arabic adjective plus noun collocation is not as commonly used as the Arabic noun plus adjective collocation. What is more, Arab linguists resort to Arabic adjective plus noun collocations when special emphasis on the adjective is required. English adjective plus noun collocations can sometimes have no equivalent collocations in Arabic. In this case, the translator has no choice but to resort to descriptive paraphrase to convey the intended meaning of the English collocation in question. In certain instances, even if Arabic does not possess an equivalent collocation to the English collocation concerned, the translator can employ literal translation of the lexical elements that constitute the English collocation if the literal meaning of such elements is the intended meaning. In doing so, the translator may create a construction which is not deemed a collocation, yet which relays the intended meaning of the English collocation concerned.
An example of this case can be shown in the English collocation 'bad news', which can be literally rendered into Arabic as 'ʾakhbārun sayyiʾatun, literally (bad news)'. Having considered the previous example, it seems evident that the English adjective 'bad' has been translated verbatim as 'sayyiʾatun'. Similarly, the English noun 'news' has been literally rendered as 'ʾakhbārun', though the word order in Arabic is reversed to fit Arabic language norms. Nonetheless, although the Arabic phrase is a precise translation of the English collocation in question, it can never be deemed an Arabic collocation. Having meticulously studied the present section, evidence suggests that the translation of English adjective plus noun collocations into Arabic is not always an easy task, but requires some restructuring. At the outset, it requires word order reversal, as Arabic usually places the adjective after the modified noun. This is a clearly significant syntactic difference between English and Arabic of which translators should take account. In crude terms, if Arabic possesses an equivalent collocation to the English adjective plus noun collocation where the literal meaning of the English collocation is the intended meaning, the translator can easily provide the equivalent Arabic collocation as a rendition of the English collocation, with the Arabic word order reversed. Even if the literal meaning of the English adjective plus noun collocation is not intended, the translator can still find an Arabic collocation which is composed of noun plus adjective and which wholly conveys the intended collective meaning of the English collocation concerned. English adjective plus noun collocations can in certain instances be translated into Arabic with the use of the same grammatical structure if the Arabic context demands placing special emphasis on the adjective by foregrounding it in the collocation. In this case, Arabic makes use of the genitive construction to structure such a collocation. Nevertheless, the translator can still reverse the Arabic collocation to the typical noun plus adjective collocation while retaining the same intended meaning. However, the special emphasis that has been given to the adjective will then be lost. In certain linguistic situations, the English adjective plus noun collocation has no equivalent collocation in Arabic, a situation that requires the translator to exercise descriptive paraphrase of the intended collective meaning of the English collocation concerned. However, the translator may adopt literal translation of the lexical items that form the English collocation if the literal meaning is what is intended. In doing so, the translator is viewed to have managed to translate the English collocation in question with an Arabic construction that is not considered an Arabic collocation, but a free construction. Concluding Remarks Evidence suggests that the concept of collocation forms a major part of any language. It generally points to the lexical elements that coexist in a particular discourse on condition that the meaning of the whole collocation can be deduced from at least one of its lexical elements. The mastery of foreign/second language collocations by foreign/second language learners in general, and by translators/interpreters in particular, is of paramount importance as L1 clearly affects the production of L2.
Furthermore, combining lexical elements that do not collocate in a particular language results in a target text that looks exotic to the target reader and may ipso facto be incomprehensible. Collocation is deemed a subarea of the fixed expressions that each language enjoys, and is also considered a link between lexical items. It was first introduced by Palmer (1938) and was then developed as a technical term by Firth (1957) in his proposed theory of 'meaning by collocation'. Different linguists have since propounded distinct theories to deal with collocation. Likewise, different classifications of collocations have been made by different linguists, such as Benson et al. (1986), who categorise collocations under two categories: lexical and grammatical collocations. On the other hand, Newmark (1988) divides collocations into three divisions: verb plus object collocation, adjective plus noun collocation and noun plus noun collocation. The translation of collocation has long been a real challenge for translators, particularly in finding the equivalent collocation in the receptor language that best conveys the intended meaning of that of the source language. The process of matching the appropriate nouns with the appropriate nouns, the appropriate verbs with the appropriate nouns, and so on is a formidable and arduous task, and sometimes poses major problems for the translator. This, of course, is due chiefly to the different and diverse ways in which languages configure collocations. Also, the equivalents of words that do collocate in a specific language may not necessarily collocate in another. An English verb plus object collocation is rendered into Arabic following the same grammatical structure if Arabic possesses the equivalent collocation and the literal meaning of the English collocation is the intended meaning. Even if the literal meaning of the English verb is not intended, the same grammatical structure is kept intact in Arabic; however, the difficulty that emerges here is to find the appropriate Arabic verb that can relay the intended meaning of the English verb and collocate with the Arabic object concurrently. An English verb plus object collocation can at times be translated into Arabic with the use of a verb plus prepositional phrase if the literal meaning of the English verb is not intended and/or the English and Arabic verbs differ in terms of type and function. Also, an English verb plus object collocation can be rendered into Arabic using a verb alone if the literal meaning of the English verb is not intended and the Arabic verb can relay the whole intended collective meaning of the English collocation. In the previous two cases, the translator has no choice but to adopt an Arabic free construction to stand for the English verb plus object collocation. On the other hand, an English adjective plus noun collocation is translated verbatim into Arabic, with the Arabic word order reversed, if Arabic possesses an equivalent collocation. Even if the literal meaning of the English adjective is not the intended meaning, the same Arabic grammatical structure is used. Again, the difficulty that arises here resides primarily in the collocational competence of the translator in finding an Arabic adjective that not only conveys the intended meaning of the English adjective, but also collocates properly with the Arabic noun.
English adjective plus noun collocations can at times be rendered into Arabic with the use of the same grammatical structure if the Arabic context places special emphasis on the adjective by foregrounding it in the sentence concerned through the use of the genitive construction. Nevertheless, the typical Arabic noun plus adjective collocation can still be used in this very situation, though the emphasis given to the adjective will unquestionably be lost if it is backgrounded in the sentence concerned. There are also other situations in which an English adjective plus noun collocation has no equivalent collocation in Arabic, though adopting literal translation in such a situation is acceptable if the intended meaning of the whole collocation is the literal meaning. Although the translator may, through literal translation, arrive at an appropriate and acceptable translation, the end result, i.e. the target text, will never be considered a collocation; rather, it will be deemed a free construction. Finally, the present paper argues that the translation of English collocations into Arabic can be a flexible practice if Arabic possesses the equivalent collocation and the literal meaning of the whole English collocation is intended. The translator can still find an appropriate equivalent collocation in Arabic even if the literal meaning of the first word in the English collocation is not intended. This, however, requires the translator to find a word in Arabic that conveys the intended meaning of the English word and collocates with the other Arabic word simultaneously. The paper also claims that the translator may resort to using a free construction in Arabic to stand for the English collocation concerned. This often takes place if Arabic does not possess an equivalent collocation to the English collocation because the literal meaning of the latter is not the intended meaning, because the verbs in the two collocations differ in terms of type and function, and/or because the verb alone can convey the intended meaning of the whole English collocation. This research paper has particularly addressed the translation of English verb plus object collocations and English adjective plus noun collocations into Arabic. It has been restricted to specific issues pertaining to the translation of these two types of English collocation into Arabic. Further research is needed to examine the reversed process and whether or not the results will be similar to those of the present research. Important research is also required to investigate the translation of English collocations into other languages and vice versa.
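As a compact recap of the decision procedures summarized in the concluding remarks, the following sketch encodes them as lexicon lookups with fallbacks. It is purely illustrative: the miniature lexicon holds only the paper's own examples (in simplified transliteration), the function names are invented, and a real system would require far richer lexical resources:

```python
# Verb+object entries: Arabic rendering for each English collocation.
VERB_OBJECT = {
    ("perform", "task"):  "yu'addi mahammatan",       # literal verb, same structure
    ("run", "engine"):    "yushaghghilu muharrikan",  # non-literal verb, same structure
    ("pay", "visit"):     "yaqumu biziyaratin",       # verb + prepositional phrase
    ("shake", "hands"):   "yusafihu",                 # translation by omission: verb alone
}

# Adjective+noun entries: Arabic reverses the order to noun + adjective.
ADJ_NOUN = {
    ("hard", "labour"):    ("'amalun", "shaqun"),
    ("great", "pleasure"): ("sa'adatun", "ghamiratun"),  # non-literal adjective
}

def translate_verb_object(verb, obj):
    rendering = VERB_OBJECT.get((verb, obj))
    # No equivalent Arabic collocation: fall back to descriptive paraphrase.
    return rendering if rendering else f"<paraphrase: {verb} {obj}>"

def translate_adj_noun(adj, noun, emphasize_adjective=False):
    pair = ADJ_NOUN.get((adj, noun))
    if pair is None:
        return f"<paraphrase or free construction: {adj} {noun}>"
    ar_noun, ar_adj = pair
    if emphasize_adjective:
        # The adjective-first genitive construction (e.g. ''alilu al-taqsi')
        # involves case and definiteness changes that plain concatenation
        # cannot model, so it is only flagged here.
        return f"<genitive construction of {ar_adj} + {ar_noun}>"
    return f"{ar_noun} {ar_adj}"  # typical noun + adjective order

print(translate_verb_object("shake", "hands"))  # -> yusafihu
print(translate_adj_noun("hard", "labour"))     # -> 'amalun shaqun
```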
Clinical and experimental study of oxaliplatin in treating human gastric carcinoma AIM: To evaluate the therapeutic effectiveness of oxaliplatin on human gastric carcinoma and to explore its mechanisms. METHODS: Twenty-two cases of stage IV gastric carcinoma received 4-6 (mean 4.6) cycles of first-line combined chemotherapy with oxaliplatin (oxaliplatin 85 mg/m², iv, gtt, 1 h, d 1; leucovorin 200 mg/m², iv, gtt, 1 h, d 1 and d 2; 5-FU 300 mg/m², iv, d 1 and d 2; 5-FU, continuous iv, gtt, 48 h; 1 cycle/2 wk). Response rate, progression-free survival (PFS), total survival time and toxic side effects were evaluated. The inhibitory effect of oxaliplatin on the human gastric cell line SGC-7901 was detected and the IC50 was calculated by MTT assay. Transmission electron microscopy, flow cytometry and TUNEL were performed to evaluate the apoptosis of the cell line induced by the drug. The expression of Caspase-3 mRNA was detected by RT-PCR. INTRODUCTION Gastric cancer is one of the most common carcinomas in human beings. Drug treatment draws more and more attention as an essential part of the comprehensive treatment of gastric malignancy. Gastric carcinoma is relatively sensitive to chemotherapy. It is generally considered that chemotherapy may prolong patients' lives and decrease relapse. Oxaliplatin (L-OHP) is an innovative third-generation platinum compound with powerful anti-neoplastic competence, a lack of cross drug resistance with CDDP, a synergistic effect with 5-FU and a satisfactory safety profile. This new anticancer drug provides more choices in fighting against malignancy, especially colon cancer. At present, treating gastric cancer with oxaliplatin and the relationship between chemotherapy and cancer cell apoptosis draw more and more attention. The discovery of the Caspase family (cysteine proteases), which is implicated in the execution of programmed cell death in organisms ranging from nematodes to humans, has invigorated research on malignant cell apoptosis. The Caspase family is large, and family members interact with each other to promote or inhibit the process of apoptosis. Caspase-3 lies downstream in the Caspase cascade. The proteolytic activation of Caspase-3 plays a key role in the apoptotic process. This article summarizes the effects and side effects of chemotherapy with oxaliplatin in 22 cases of stage IV human gastric cancer, and tries to elucidate the mechanisms of chemotherapy by detecting apoptosis of cancer cells and evaluating the role Caspase-3 plays in the apoptotic process. MATERIALS AND METHODS Patients A total of 22 patients with stage IV human gastric cancer who underwent chemotherapy in the Affiliated Xinhua Hospital of Shanghai Second Medical University from January 1999 to September 2002 were enrolled in this study. There were 17 men and 5 women, and their ages ranged from 25 to 70 years (mean, 60±10 years). Among the 22 patients, 16 had poorly differentiated adenocarcinoma and 6 had signet ring cell carcinoma. Methods Each case received a combination chemotherapy containing L-OHP (L-OHP 85 mg/m² by continuous intravenous infusion for 2 h on d 1, leucovorin 200 mg/m² by continuous intravenous infusion for 1 h on d 1 and d 2, 5-FU 300 mg/m² by bolus intravenous injection on d 1 and d 2, 5-FU 1200 mg/m² by continuous intravenous infusion for 48 h, one course lasting 2 wk, for 4-6 courses).
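The regimen above expresses doses per square metre of body surface area. As a hedged illustration of how such a prescription translates into absolute doses, the sketch below uses the Mosteller formula for body surface area; that formula is an assumption for demonstration only, since the paper does not state how doses were individualized:

```python
from math import sqrt

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula: sqrt(h * w / 3600)."""
    return sqrt(height_cm * weight_kg / 3600.0)

def cycle_doses(height_cm: float, weight_kg: float) -> dict:
    """Absolute per-administration doses (mg) for the regimen described above."""
    bsa = bsa_mosteller(height_cm, weight_kg)
    return {
        "oxaliplatin d1": round(85 * bsa, 1),
        "leucovorin d1/d2": round(200 * bsa, 1),
        "5-FU bolus d1/d2": round(300 * bsa, 1),
        "5-FU 48 h infusion": round(1200 * bsa, 1),
    }

# A hypothetical 170 cm, 65 kg patient has a BSA of about 1.75 m²,
# giving roughly 149 mg of oxaliplatin on day 1.
print(cycle_doses(170, 65))
```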
Cell culture The human gastric adenocarcinoma cell line SGC-7901, purchased from the Shanghai Institute of Cell Biology, Chinese Academy of Sciences, was routinely maintained in RPMI 1640 containing 100 mL/L fetal bovine serum (FBS), 100 U/mL penicillin and 100 U/mL streptomycin at 37 °C in a humidified atmosphere containing 50 mL/L CO₂. MTT assay Cells were seeded at a density of 5×10³ per well in 96-well plates in RPMI 1640 containing 100 mL/L FBS. After 24 h, fresh medium was added, containing oxaliplatin at concentrations of 0 to 10 mg/L. After 48 h of incubation, the MTT assay was performed: 150 µL of stock MTT (0.5 mg/mL) was added to each well, and the cells were further incubated at 37 °C for 4 h. The supernatant was removed and 150 µL DMSO was added to each well. An ELISA reader was used to measure the absorbance at a wavelength of 525 nm. Transmission electron microscopy The cells treated with 0.1 mg/L oxaliplatin were trypsinized and harvested after 24 h. Subsequently the cells were fixed in 40 g/L glutaraldehyde, immersed in Epon 821, embedded in capsules and polymerized for 72 h at 60 °C, then prepared into ultrathin sections (60 nm) and stained with uranyl acetate and lead citrate. Cell morphology was examined by transmission electron microscopy. Flow cytometry SGC-7901 cells were treated with oxaliplatin or oxaliplatin plus AC-DEVD-CHO at oxaliplatin concentrations of 0 to 10 mg/L for 30 min. Cells were digested with 2.5 g/L trypsin, washed in 0.01 mol/L PBS, fixed in cold alcohol at 4 °C and stained with Annexin-V (according to the instructions of the Annexin-V kit), and then analyzed by flow cytometry. TUNEL SGC-7901 cells were added at 6×10⁴ cells/well to 6-well plates containing glass coverslips, incubated with oxaliplatin or oxaliplatin plus AC-DEVD-CHO at oxaliplatin concentrations of 0 to 10 mg/L, and fixed in 40 g/L paraformaldehyde for 1 h. After being washed twice in 0.01 mol/L PBS, the cells were treated with reaction buffer, labeled with fluorescein-dUTP in a humid box for 1 h at 37 °C, then incubated with anti-fluorescein antibody and colorized with NBT/BCIP. Cells were visualized by light microscopy. The apoptotic index (AI) was calculated as follows: AI = (number of apoptotic cells/total number of cells) × 100%. RT-PCR Total RNA was extracted from cells using an RNA extraction reagent, TRIzol (Life Technologies, USA), according to the standard acid-guanidinium-phenol-chloroform method [17]. About 4 µg of total RNA was reverse transcribed at 42 °C for 60 min in a total reaction volume of 30 µL using a first-strand cDNA synthesis kit (Boehringer Mannheim, Germany). The cDNA was incubated at 95 °C for 5 min to inactivate the reverse transcriptase and served as template DNA for 28 rounds of amplification using the GeneAmp PCR System 2400 (Perkin-Elmer Applied Biosystems, CA, USA). PCR was performed in a standard 25 µL reaction mixture consisting of 1.5 mmol/L magnesium chloride (pH 8.3), 2.5 mmol/L dNTPs, 12.5 pmol each of sense and antisense primers and 2.5 U of Taq DNA polymerase (MBI, Canada). Amplification was performed for 1 min at 94 °C, 1 min at 62 °C and 1 min at 72 °C, after an initial 5-min denaturation. Finally, an additional extension step was carried out for 10 min at 72 °C. As a control, the DNA template of Caspase-3 was replaced by that of β-actin in the reaction. The amplification products were separated on 12 g/L agarose gels and visualized by ethidium bromide staining.
PCR primers for Caspase-3 were as follows: forward primer, 5'-ATG GAG AAC ACT GAA AAC TCA-3'; reverse primer, 5'-TTA GTG ATA AAA ATA GAG TTC-3', according to the Caspase-3 gene structure in GenBank. An 834 bp PCR product of Caspase-3 and a 315 bp product of β-actin were obtained. Statistical analysis Differences between groups were analyzed by ANOVA. P<0.05 was considered statistically significant. Clinical results Nine cases achieved objective responses (including 1 complete response and 8 partial responses); the response rate was 40.9%, progression-free survival (PFS) was 4.2 mo, and total survival time was 7.2 mo. The rates of accumulative neurotoxicity, vomiting and diarrhea, and bone marrow depression were 93.5%, 20% and 32.9%, respectively (Table 1). Inhibitory effect of oxaliplatin on SGC-7901 Taking the means of the data from the MTT assay, we obtained a smooth inhibition curve, which was a typical inverted 'S', and the IC50 was calculated to be 0.71 mg/L with GraphPad Prism software (Figure 1). The inhibition of L-OHP on the SGC-7901 cell line was typically dose dependent. The maximal inhibitory rate reached 85.3%. Apoptosis induced by L-OHP We used transmission electron microscopy, TUNEL and Annexin-V labeling flow cytometry to investigate the mechanism of its anti-neoplastic effect. After treatment of SGC-7901 cells with oxaliplatin (0.1 mg/L) for 24 h, some cells showed apoptotic characteristics including chromatin condensation, chromatin crescent formation, nucleus fragmentation and apoptotic body formation by transmission electron microscopy (Figure 2). The apoptotic index was 0.38% in the control group. In the experimental group, the apoptotic index determined by the TUNEL method was 7.35% after exposure to 1 mg/L L-OHP (slightly higher than the IC50) for 4 h, and 14.35% when the L-OHP concentration was increased to 5 mg/L. As time went on, the apoptotic index remained stable in the control group and was significantly increased in the two experimental groups (0.5 mg/L and 1 mg/L), reaching peaks of 7.93% and 10.15%, respectively, on the 4th d and decreasing slightly on the 7th d. The two experimental groups showed a similar trend. We could conclude that the increase in apoptotic index correlated with L-OHP concentration and time. The apoptotic level was positively correlated with L-OHP at concentrations within 0-2.0 mg/L, as detected by Annexin-V labeling flow cytometry. The apoptotic index reached a peak of 76.47% when the concentration of L-OHP was 2 mg/L. On the contrary, the apoptotic index dropped when the concentration reached 5 mg/L. Expression of Caspase-3 mRNA By means of RT-PCR, we detected an enhancement of Caspase-3 mRNA expression (0.48±0.47 vs 0.18±0.20, P<0.05) induced by L-OHP, which was also positively correlated with the apoptotic level. Role of activated Caspase-3 in the apoptotic process induced by oxaliplatin AC-DEVD-CHO, a Caspase-3-specific inhibitor, could significantly inhibit and delay apoptosis induced by L-OHP (Figures 3, 4). DISCUSSION Gastric carcinoma is one of the major causes of cancer morbidity and mortality in China. Its natural history shows a high metastatic potential, since many patients with gastric carcinoma at an advanced stage will relapse or initially present with metastasis. One of the first issues resolved by clinical research over the last decade is the value of chemotherapy in the metastatic setting. Indeed, chemotherapy has been shown to have a favorable impact on survival and quality of life compared with supportive care alone.
However, in this disease some traditional chemotherapy regimens were considered poorly tolerated or less effective [1-4]. Oxaliplatin is an innovative platinum compound indicated as first-line therapy in combination with 5-FU and folinic acid for metastatic colorectal cancer [5-8]. Oxaliplatin has powerful anti-neoplastic competence, little cross drug resistance with CDDP, a synergistic effect with 5-FU and a satisfactory safety profile [9-11]. We replaced CDDP with oxaliplatin in a traditional FLP protocol, trying to explore its anti-neoplastic activity and side effects in treating advanced gastric carcinoma. Of the 22 patients, 9 achieved objective responses (including 1 complete response and 8 partial responses); the overall response rate reached 40.9%, PFS was 4.2 mo, and overall survival time was 7.2 mo. The toxicity was tolerable; the rates of vomiting and diarrhea (1 case with grade III diarrhea) and bone marrow depression were 20% and 32.9%, respectively. No alopecia or skin toxicity was encountered. Although the incidence of accumulative neurotoxicity was as high as 93.5%, all cases were grade I-II. Acute symptoms manifesting as transient dysaesthesia and/or paraesthesia of the extremities were commonly observed, and their occurrence was triggered or enhanced by exposure to cold. No patient experienced pharyngolaryngeal dysaesthesia, characterized by a transient sensation of difficulty in breathing or swallowing without any objective evidence of respiratory distress, which had been encountered after 9 cycles in multi-center studies of colorectal cancer treatment [12-14]. In all the cases in this study, symptoms improved after treatment discontinuation. This suggests that oxaliplatin is effective and well tolerated in patients with stage IV gastric carcinoma. We chose the human gastric cancer cell line SGC-7901 for the experimental study. First, we used the MTT assay to determine whether L-OHP could inhibit SGC-7901 growth. Using the means of our data from the experiments, we obtained a smooth inhibition curve, which was a typical inverted 'S'. The IC50 was 0.71 mg/L and the maximal inhibitory rate reached 85.3%. The inhibition of L-OHP on the SGC-7901 cell line was typically dose dependent. Naturally occurring or programmed cell death can regulate cell number, facilitate morphogenesis, remove harmful or otherwise abnormal cells, and eliminate cells that have already performed their functions, both during development and in tissue homeostasis and aging. The role of apoptosis in the process of carcinogenesis, the development of cancer and the treatment of malignancy has drawn more and more attention in recent years [15-17]. In this study, we tried to evaluate the level of apoptosis induced by oxaliplatin. Transmission electron microscopy can reveal the changes in cell ultrastructure during the apoptotic process. The TUNEL assay is a traditional method for detecting apoptosis, but its selectivity is poor: it can hardly differentiate apoptotic cells from necrotic ones. Phosphatidylserine (PS) normally resides only on the cytoplasmic side of the cell plasma membrane, and externalization of PS occurs in the early stage of apoptosis. Annexin-V binds specifically to PS and can thus detect apoptotic cells [18]. We therefore combined traditional transmission electron microscopy and the TUNEL assay with the relatively highly selective Annexin-V labeling flow cytometry to detect SGC-7901 cell line apoptosis induced by oxaliplatin.
After treatment of SGC-7901 cells with oxaliplatin, some cells showed typical morphologic changes of apoptosis, including chromatin condensation, chromatin crescent formation, nucleus fragmentation and apoptotic body formation, under the transmission electron microscope. The TUNEL assay showed that the AI positively correlated with drug concentration and treatment time. Annexin-V-fluorescein labeling flow cytometry was much more sensitive than TUNEL in detecting early-stage apoptosis. The apoptotic level positively correlated with L-OHP at concentrations within 0-2.0 mg/L, as detected by Annexin-V labeling flow cytometry. The apoptotic index reached a peak when the concentration of L-OHP was 2 mg/L. On the contrary, the apoptotic index dropped when the concentration reached 5 mg/L. Putting these results together, we believe that the induction of apoptosis plays a key role in inhibiting malignant cells at low L-OHP concentrations, and that cytotoxicity and apoptosis coexist when the drug concentration is high. This finding may provide a theoretical basis for this type of treatment. The discovery of cytosolic aspartate-specific proteases, called Caspases, which are responsible for the deliberate disassembly of a cell into apoptotic bodies, has invigorated research on malignant cell apoptosis. The Caspase family is large, and dozens of family members interact with each other to promote or inhibit the process of apoptosis. Caspases are present as inactive pro-enzymes, most of which are activated by proteolytic cleavage. There are two pathways of Caspase activation, namely the cell surface death receptor pathway and the mitochondria-initiated pathway. In the cell surface death receptor pathway, activation of Caspase-8 following its recruitment to the death-inducing signaling complex (DISC) is the critical event that transmits the death signal. This event is regulated at several different levels by various viral and mammalian proteins. Activated Caspase-8 can activate downstream Caspases, such as Caspase-3, by direct cleavage or indirectly by cleaving Bid and inducing cytochrome c release from the mitochondria. In the mitochondria-initiated pathway, Caspase activation is triggered by the formation of an Apaf-1/cytochrome c complex that is fully functional in recruiting and activating pro-Caspase-9. Activated Caspase-9 then cleaves and activates downstream Caspases such as Caspase-3, -6, and -7 [19-23]. Thus, Caspase-3 lies downstream in the Caspase cascade. The proteolytic activation of Caspase-3 has been found to play a key role in the apoptotic process [24-26]. Caspase-3 may then cleave vital cellular proteins or activate additional Caspases by proteolytic cleavage. In this study, we detected an enhancement of Caspase-3 mRNA expression induced by oxaliplatin by means of RT-PCR. Caspase-3 mRNA expression was also positively correlated with the apoptotic level. AC-DEVD-CHO, a Caspase-3-specific inhibitor, could significantly inhibit and delay apoptosis induced by oxaliplatin. Taken together, we believe that Caspase-3 synthesis and activation play a key role in the apoptotic process of the SGC-7901 cell line induced by oxaliplatin. Our research demonstrates that oxaliplatin is effective and well tolerated in treating gastric cancer. Inducing cancer cell apoptosis may be one of its anti-neoplastic mechanisms. This apoptosis may be mediated by up-regulation of Caspase-3 synthesis and activation.
The efficacy and safety profile of oxaliplatin as a chemotherapeutic drug in combined chemotherapy regimens against gastric carcinoma should be further confirmed by double-blind, multi-center clinical studies.
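The IC50 above was obtained by fitting the MTT inhibition curve in GraphPad Prism. As a rough open-source analogue, the sketch below fits a four-parameter logistic (sigmoidal) model with SciPy; the data points are invented placeholders illustrating the shape of such a fit, not the study's actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical cell viability (fraction of control) at increasing
# oxaliplatin concentrations (mg/L) -- placeholder data only.
conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0])
viability = np.array([0.98, 0.92, 0.85, 0.60, 0.42, 0.20, 0.15])

params, _ = curve_fit(four_pl, conc, viability, p0=[0.1, 1.0, 0.7, 1.0])
bottom, top, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.2f} mg/L")

# The inhibition rate reported in the paper is simply 1 - viability;
# e.g. a maximal inhibition of 85.3% corresponds to a viability of 0.147.
```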
Indonesian Twitter Users' Language Attitude Towards English-Indonesian Code-mixing Social media users need language as a tool to communicate, in both verbal and written form. Over time, the language used in social media also shows significant growth. Concerning language on social media, there is an interesting recent phenomenon on social media that can be reached within the internet, namely the code-mixing of Indonesian and English (which in Indonesian is also popularly known as Jaksel language, Keminggris, Kemlondo, Indoglish, Englonesian, or Bahasa Gado-gado). This study aims to find out the attitudes of 500 Indonesian Twitter users towards English-Indonesian code-mixing using a questionnaire. This study uses a descriptive-quantitative approach. The result for each aspect shows a fairly positive indication: cognitive aspect 2.92, affective aspect 2.75, and conative aspect 2.79. When the three aspects are averaged, the result also indicates a fairly positive attitude towards the code-mixing, with a score of 2.82. The attitudes come from many reasons and factors, such as the need for understandability in communication, language prestige, making a good impression or attracting attention, confidence, and language aptitude. Introduction As a result of advances in science, technology, and culture, the use of foreign languages is increasingly entering the daily lives of Indonesian people, especially young people. So many young people nowadays are already exposed to new things on the internet. Technological advances, according to Widawati (2018), have also made language develop and spread very rapidly. Up to 2015, according to Kemkominfo (Indonesia Ministry of Communication and Information Technology), internet users in Indonesia reached 80 million people, an increase of as much as 300% over the previous 5 years. In the following years, internet users in Indonesia reached 130 to 150 million people. As a matter of fact, 60 million of them accessed the internet via mobile. Widawati (2018) further said that this is a sign of an extraordinary level of productivity in using language. Proficiency in English is starting to become an advantage or even a necessity in the modern era, and we can see young parents quickly recognizing this. Over time, the language used in social media also shows significant growth. Talking about language in general and English on social media, there is an interesting recent phenomenon on social media, namely the Jaksel speech style: a linguistic phenomenon of code-mixing between Indonesian and English (which in Indonesian is also popularly known as Keminggris, Kemlondo, Indoglish, Englonesian, or Bahasa Gado-gado). The use of this language is being discussed, especially on Twitter. Before the Jaksel language variety became a topic of conversation on other social media such as Instagram and YouTube, it was already popular on Twitter. This phenomenon came up around September 2018 and continued spreading quickly on the internet (Rusydah, 2020). Poernamasari (2019) stated that the mixing of the two languages is currently considered to represent the modernization process through social media, and this speech style is commonly used by young people in their daily social activities, whether at school, in work environments, when they hang out with friends, or through their social media accounts.
The usage of English mixed with Indonesian, or vice versa, does not only occur on these platforms and in these activities; it also happens widely in several other regions of Indonesia (Poernamasari, 2019). Ivan Lanin, a writer and language activist on Twitter, in an interview with the online media outlet Kompas Cyber Media (2018), states that this phenomenon has been going on for a long time and everywhere; it is not a seasonal phenomenon that has only just appeared, and it will continue to exist even when the jokes, the issues, and the Jaksel language itself are no longer widely discussed. These days, there are many kinds of social networks on the internet. Social network users generally use Twitter as a place to make jokes and to discuss certain social phenomena, one of which is the linguistic phenomenon of speakers of many languages. Prabowo (2018) wrote that recently there has been a lot of bullying on social media of those who mix Indonesian and English. Twitter (like other social media platforms) is considered a free and openly accessible medium; it provides a comfortable space for some people to exercise free speech, yet it is also a free medium for offending others, even though the platforms have their own regulations under which violations can be reported and followed up to certain extents. Poernamasari (2019) furthermore said that mixed language like this might be a symptom that spreads among many people on Twitter, regardless of the many jokes about the mixture of Indonesian and English being associated with people with better economic conditions or higher education, who have this tendency in everyday conversation. This kind of phenomenon inevitably becomes a source of stereotypes. The inherent use of this language seems to affirm an identity (Poernamasari, 2019). In the end, English is just a language; like race, religion, sexuality, or even haircut style, it is just a human variable. O'Neill and Massini-Cagliari (2019) stated that variables are quite sensitive to being associated with some type of social meaning or stereotyping, independently of any inherent or intrinsic qualities of the variable. They also stated that such stereotyping can be the cause of linguistic prejudice and even discrimination, which can result in social exclusion and have a serious impact on individuals' engagement with the education system and the establishment, and thus pose an economic and developmental challenge. Despite the fuss over the widespread use of mixed language and how people react to these issues and phenomena, the Betawi culturalist Ridwan Saidi, as quoted by Friana (2018) on TirtoID, views the mixed-language phenomenon as creative behaviour. According to him, this bilingual pattern in language is not destructive. In addition, Prabowo (2018) on TirtoID quoted the clinical psychologist Kasandra Putranto, who stated that if the Jaksel language variety is used as a method of learning English, it should not be viewed as something negative; according to Putranto, every era has its own style of language. Prabowo (2018) on TirtoID also quoted Ikwan Setiawan, a lecturer at the Faculty of Cultural Sciences, Universitas Negeri Jember (UNEJ), who stated that the phenomenon of young people using mixed languages is normal behaviour and, academically, is not a problem.
It is very interesting to see that while language can unify nations and carry other positive values, its use can also shift in a negative direction. Language is then used as a tool that can corner, offend, and oppress other people through interactions in person or through social media on the internet. Stereotyping a language, and beliefs held about a language, fall under the study of language attitudes, a sub-topic of sociolinguistics: for example, judging a language as ugly, beautiful, or harsh, and its speakers as rude, rich, sophisticated, or educated (Schüppert et al., 2015). According to Dragojevic (2017), every person has a different opinion about different things, including opinions about a language. He stated that language attitudes are evaluative reactions or responses to different language varieties. They can be the opinions, ideas, and prejudices that speakers hold with respect to a language. They reflect two sequential cognitive processes: social categorization and stereotyping. The writer became interested in exploring attitudes, as the writer himself has seen many biases and cases concerning multilingualism, especially on the internet. The writer has also been using Twitter for a decade and has seen many linguistic phenomena in which some people become angry and bitter merely from seeing other people's writing on social media in certain varieties or other foreign languages, while others are very supportive. These attitudes can be negative or positive, integrative or instrumental, and language attitude can be measured with several methods. Recent studies in language attitude, like Rossi and Saneleuterio (2016), found that language attitudes play an important role in bilingualism and second language learning. Among the many ways foreign languages are used, language mixing becomes inevitable, and code-switching can be perceived either as a problem or as a result of being open-minded in language learning. Their paper analyses the responses to a questionnaire by a sample of 53 bilingual subjects, most of them English and Spanish speakers, and it is relevant to the correlation between the number of languages a person knows and that person's language prejudice. When it comes to code-switching and code-mixing, Abdurahman (2020) found that the variety of English and Indonesian which is grammatically independent, i.e., alternation, is perceived as more socially attractive, while standardized English is perceived as more intelligent. The conclusion from his study is that ideological and social factors may affect people's perception of and overall attitude towards the use of English in Indonesia and the mixing or switching of these two languages. Also, Abdurahman, Gandana, and Novianti (2018) explored Indonesian university students' and faculty members' attitudes towards the use of English in both face-to-face and virtual contexts. The results show that there were mixed attitudes towards English among the respondents. This study suggests that while virtual domains can provide a space for learning and practicing English, a beneficial utilization of the language ultimately depends on how English language learning is planned and designed.
Last, in research by Naveed (2014), a questionnaire was used to collect data from 200 students in four Pakistani colleges and universities to find out the students' attitudes towards both educators' and students' code-switching, as well as the reasons for performing or not performing code-switching in EFL contexts. The results indicated positive attitudes of students towards using the target language themselves and hearing their educators use it. Meanwhile, using Urdu was perceived as beneficial for expressing ideas, explaining new vocabulary words, and optimizing the learners' chances to improve their English proficiency.

The studies above show similarities and differences with the current research: all of them focus on investigating and exploring people's attitudes towards language and language variations, and several of them investigate attitudes towards phenomena in multilingualism, including code-mixing and code-switching. However, those studies differ from the present study in the domain of the community that participated. Exploring language attitudes also makes it possible to explore the reasons why someone chooses a language (code) or even a variety of a language, including code-mixing and code-switching. In contrast to those studies, this study examines attitudes towards code-mixing, specifically English-Indonesian code-mixing, in the Twitter domain. The participants are Twitter users across Indonesia. Another difference is the data collection method used.

The writer finds this research interesting and useful to pursue, having seen few similar studies: there is not much research on language attitudes on social media in Indonesia, and the attitude of Indonesian social media users towards the use of English mixed into Indonesian remains largely unknown. Therefore, sociolinguistic research on language attitudes towards the use of English should be explored. This research focuses on the language attitudes of Indonesian Twitter users towards English-Indonesian code-mixing; the writer chose this topic to find out about language attitudes on the internet, specifically on Twitter. Based on the background above, this study is undertaken to answer the following question: 1) What are Indonesian Twitter users' language attitudes towards English-Indonesian code-mixing?

Method

In this study, the writer used a descriptive-quantitative approach. The population of this study was mainly Indonesian Twitter users, of whom there are a great many: Prihadi (2015, cited in Nugraheni, 2017) estimated that in 2015 the number of Twitter users in Indonesia had reached 50 million. Given the limitations in time, energy, and funds, the writer deliberately limited the research subjects and used a sample instead. The study used non-probability sampling: the writer collected 500 respondents with a self-selection sampling technique (also known as voluntary sampling). The participants were those who perform English-Indonesian code-mixing.
This study used a questionnaire to obtain data about the attitudes of Twitter users, with items structured as open-ended and closed-ended questions. The questionnaire was created using Google Forms and was then distributed and broadcast on Twitter. The items used and adapted questionnaire items from Al-Qaysi (2016), who had previously conducted research on attitudes towards code-switching; in this study, the term "code-switching" was changed to "code-mixing" to suit the topic of this research. Since Indonesian is the national language, the questionnaire was translated into Indonesian to avoid misunderstanding and to assist the participants in selecting the proper options and context.

The questionnaire survey includes 18 items categorized into three sections. The first section includes 2 items representing the respondents' demographic data: gender and age. The second section includes 1 question representing the respondents' usage of code-mixing in daily life and social media interaction. The last section consists of 14 statement items investigating the respondents' attitudes towards code-mixing.

For the data analysis, a Likert scale was used for the respondents' language attitudes. Every answer is given a score from one to four, ranging over 'strongly disagree', 'disagree', 'agree', and 'strongly agree'. The highest score is 4 (four) and the lowest score is 1 (one); the highest score indicates a highly positive attitude, and the lowest score indicates the opposite. The responses were analyzed in Microsoft Excel: the writer calculated the average of the answers, calculated the percentages, and classified the dominant percentage. Conclusions were drawn by calculating the average points obtained from the respondents. To calculate the percentage of the answers, the writer used this formula:

Percentage = (number of responses for an answer / total number of responses) × 100
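As a rough illustration of this scoring and percentage procedure, the short sketch below reproduces the calculations from the overall counts reported in the Results. It is only a sketch of the arithmetic; the variable names are our own and do not reflect the actual Excel analysis.

```python
# Minimal sketch of the Likert scoring used in this study (assumed layout:
# 4-point scale, 500 respondents x 14 items = 7000 responses in total).

counts = {"Strongly Agree": 2365, "Agree": 2285,
          "Disagree": 1410, "Strongly Disagree": 940}
scores = {"Strongly Agree": 4, "Agree": 3, "Disagree": 2, "Strongly Disagree": 1}

total = sum(counts.values())  # 7000 responses

# Percentage of each answer: (number of responses / total responses) * 100
for answer, n in counts.items():
    print(f"{answer}: {100 * n / total:.1f}%")

# Overall attitude score as the mean of the three aspect means reported
# in the paper (cognitive 2.92, affective 2.75, conative 2.79).
aspect_means = [2.92, 2.75, 2.79]
print(f"mean attitude score: {sum(aspect_means) / len(aspect_means):.2f}")  # ~2.82
```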
Results

The characteristics of the respondents are described by gender, age, and usage frequency, i.e., how they perform code-mixing in daily communication, especially on social media (Twitter).

Gender of the Respondents

From Chart 1, we can see that women participated the most in this research, with a total of 331 respondents (around 66.2%), followed by 119 male respondents (23.9%) and 50 undecided respondents (10%).

Age of the Respondents

From Chart 2, we can see that the dominant respondents come from the 17-25 age range, with a total of 321 respondents (64.2%), followed by the 26-35 range with 112 respondents (22.4%). From this, we can conclude that respondents from late teenagers to young adults participated the most in this research.

Usage Frequency of English-Indonesian Code-mixing

From Chart 3, we can see that the respondents in this research tend to perform English-Indonesian code-mixing: 287 respondents (57.4%) chose "Sometimes" and 120 respondents (24%) chose "Always". We can conclude that the respondents who participated in this research make high use of English-Indonesian code-mixing.

Cognitive Component

There are 221 (44.2%) respondents who chose Agree and 169 (33.8%) who chose Strongly Agree for the first item. Here, we can conclude that the respondents (78%) agree that English-Indonesian code-mixing can help to enhance their communication skills. Next, there are 185 respondents (37.0%) on Agree and 227 (45.4%) on Strongly Agree for item 2. Mainly, we can conclude that the respondents (81.9%) agree that code-mixing helps to develop their language skills.

For item 3, there are 157 respondents (31.4%) on Strongly Disagree, 167 respondents (33.4%) on Disagree, 132 respondents (26.4%) on Agree, and 44 respondents (8.8%) on Strongly Agree. The distribution of the responses is fairly even, but we can still see that most of the respondents (64.8%) do not agree that English-Indonesian code-mixing shows them off as a well-educated person, while the rest believe that it does.

Item 4 asked whether code-mixing is related to perceived language prestige, i.e., whether respondents think that code-mixing could make them feel more prestigious. This item revealed 220 respondents (44.0%) on Strongly Disagree and 185 respondents (37.0%) on Disagree. The distribution of the responses is strongly polarized towards negative answers: the respondents (81.0%) do not agree that code-mixing can make them look prestigious.

Item 5 revealed that 40.0% of the respondents stand on the Agree side and 26.0% on Strongly Agree, so 66.0% of them agree that code-mixing can make them understand better. Then, item 6 revealed that 37.0% of them agree and 47.6% strongly agree; hence, 84.6% of them agree that code-mixing can help to convey new words easily. For item 7, 30.8% of the respondents agree and 50.6% strongly agree, meaning that most of them (81.4%) agree that code-mixing can help them to practice the second language.

Item 8 asked whether code-mixing can help the respondents learn new words from friends or teachers (at school or university). This item indicated that 39.4% of the respondents agree and 43.8% strongly agree, meaning that most of them (83.2%) agree that English-Indonesian code-mixing can help them to learn or memorize new words.

Last, item 9 showed that 25.2% of the respondents agree and 65.0% strongly agree with the statement. From this, we can conclude that 90.2% of the respondents agree that the usage of English can help them express ideas that they cannot express in Indonesian. Item 10 has 34.8% of the respondents agreeing and 32.2% strongly agreeing with the statement, so 67.0% of the respondents agree that code-mixing might attract their attention.

Affective Component

Item 11 has 34.0% of the respondents agreeing and 21.8% strongly agreeing with the statement. On the other hand, there is an almost equal number of negative responses, with 30.4% of respondents disagreeing and 13.8% strongly disagreeing. Overall, the 55.8% of respondents who agree are dominant for this statement. On item 12, 29.0% of the respondents agree and 42.4% strongly agree; it can be concluded that the respondents (71.4%) tend to agree that they mix Indonesian with English due to the lack of Indonesian equivalents.

Item 13 has 32.2% of the respondents choosing to agree and 29.4% choosing to strongly agree. To sum up, the respondents (61.6%) consider using English-Indonesian code-mixing when there are complex words in their native language.
Last, for item 14, 54.2% of respondents admit that they do, with the following split: 32.6% agree and 21.6% strongly agree. Here, it can be concluded that only about half of the respondents admitted that they mix English and Indonesian when discussing lessons, lectures, and exams.

Average Result

For the overall score, the respondents (as Twitter users) indicate that they tend to agree with, and hold a positive attitude towards, English-Indonesian code-mixing. The details are as follows: Strongly Agree (4) has a count of 2365 (33.8%) across all 14 items of the questionnaire, followed by Agree (3) with a count of 2285 (32.6%), while Disagree (2) has a count of only 1410 (20.1%) and Strongly Disagree (1) a count of 940 (13.4%). Each aspect shows a fairly positive indication (cognitive aspect 2.92, affective aspect 2.75, and conative 2.79); summing up the three aspects, the attitude has a mean score of 2.82.

Discussion

This research sought to find out Indonesian Twitter users' language attitudes towards English-Indonesian code-mixing. From the age of the respondents, we can see that respondents from late teenagers to young adults participated the most. The young generations are digital natives: having grown up surrounded by digital technologies, they are native speakers of the digital language of computers, video games, and the Internet (Prensky, as cited in Jongbloed-Faber et al., 2016). We Are Social (wearesocial.com, in Ahmad and Nurhidaya, 2020) found that there was an increase in the use of social media compared to 2018 and that its use was dominated by young people in Indonesia's generations Y (millennials) and Z, namely those between 18 and 34 years old.

The overall mean indicated that the users are positive towards code-mixing, yet if we break down the specific statements, there are a few statements with which the respondents do not agree, as well as items that received an almost balanced response.

We can start by discussing the findings related to communication. They showed how English-Indonesian code-mixing might help the respondents to communicate better: improving communication skills, letting them understand more easily, conveying new words about current hot issues, and compensating for the lack of equivalent words in the native language or mother tongue when expressing ideas precisely.

The findings of this research include several items related to communication performance with a high percentage of positive values. Thus, this research supports various studies conducted in the sociolinguistics domain, especially on language alternation. Mujiono et al.
(2017) investigated how code-mixing can serve as a communication strategy performed by Outbound Call (OBC) Center agents. They found that English-Indonesian code-mixing occurred in different situations in order to appreciate the customer, to attract the customer's attention or persuade the customer, and to explain the weaknesses of the products. This is in line with Kim (2006), who described the reasons and social factors that influence the usage of code-mixing, such as participant roles and relationships, situational factors, message-intrinsic factors, as well as attitudes, dominance, and security. It is also in line with Bhatia and Ritchie (2008), who described the backgrounds and relationships of participants, the topic or content, and when and where a speech act occurs as triggers for people to mix their languages.

Furthermore, nowadays nearly everyone has a sophisticated mobile phone with a decent and accessible internet connection. The internet makes it easy for anybody, across countries, to access English-language content, which also opens opportunities for people to be exposed to English. We also know that English has become the language of instruction on social media. This can explain why English, and the mixing of it, is inevitably used on various social media, especially Twitter.

Twitter only allows users to write a status (called a tweet) with a limited number of characters. This leads Indonesians to consider switching some Indonesian words for English words or vice versa. This is in line with Abdurahman et al. (2018), whose study found that code-mixing enabled participants to write a shorter status on Twitter. Alternating with English can also be a way to avoid complexity, especially when conveying newly encountered words related to technology, politics, or medicine, such as gadget, social justice warrior (SJW), and swab test.

We can also turn to the items in the cognitive component that concerned language prestige; they were somewhat polarized in a negative direction. The users largely do not consider the English language, as an EFL in Indonesia, an educational benchmark: the majority think that knowing English and mixing in English words is no guarantee of being perceived as a well-educated person. This is in line with a study by Abdurahman (2020), in which only standardised English was found to be perceived as intelligent, not the mixed variety; the English-Indonesian code-mixing, on the other hand, was found to be perceived as socially attractive instead.
The users also do not consider that English-Indonesian code-mixing shows them as prestigious persons. This result is similar to Al-Qaysi (2016) and Wirojwaranurak (2017), where roughly half or fewer of the participants associated English and the mixed variety with any particular prestige or pride. At the same time, this finding does not support several studies on the Jaksel, Keminggris, or Indoglish varieties as forms of English-Indonesian code-mixing, which have demonstrated the existence of language prestige attached to English, especially in urban areas or cities, increasing the tendency to code-mix both languages (Rakhmawati et al., 2016; Saddhono and Sulaksono, 2018; Oktavia, 2019; Rachman, 2019). In this study, the writer assumes that this can happen because Twitter is seen as a very informal social medium, and because English has come to be considered a common language on the internet. The mixed variety is likely to be perceived as quite ordinary, especially on social media, since code-mixing of Indonesian with local languages such as Javanese, Bataknese, Sundanese, or Bugis also happens on Twitter. Everyone can write as they please, as freely as possible, with their own style of writing and expression, feeling no need to worry too much about their language choice or about how other users perceive their identity on the internet; a Twitter profile does not require users to disclose who they really are. Twitter thus functions much like a personal blog.

As for the items in the affective domain, we can conclude that they have fairly high values: the respondents mostly agree that English-Indonesian code-mixing can attract their attention and that they feel confident about it. Code-switching in a study by Mukti and Muljani (2016) shows that students are more interested in, and pay more attention to, teachers who code-mix when explaining lessons in class. Then, a study of code-mixing in advertisements by Leung (2010) found that code-mixing alone can attract the attention of readers and listeners, especially an audience with a higher educational background. Regarding educational background, this is also in line with research by Itmeizeh and Badah (2021), who found that people with higher educational backgrounds feel confident with code-mixing, which is also influenced by one's language competence or acquisition.
Last, there are the statements that link language attitudes to language learning, a topic that crosses over with language aptitude. According to Stansfield (1989), language aptitude refers to a prediction of how well a person can learn a foreign language in a given period of time and under given conditions. These items show a very strong indication that the respondents, when it comes to English, consider code-mixing a process of learning, of improving language skills, and of practicing the second language and the English language at once. Having a positive attitude towards a language often leads to a significant increase in language skills. Gardner (2014) stated that there are two types of motivation related to language attitude, namely integrative and instrumental motivation, and the statements about language learning touch on both. Integrative orientation concerns the reasons suggesting that an individual learns a particular language in order to learn about, or interact with, a community or other people, including users on the internet who happen to use the same variety; this will surely help the process of language acquisition. Jismulatif (2018) stated that successful language learning, especially of English, is heavily influenced by a positive attitude towards the language itself, and that motivation also plays a very important role in realising success in learning a language; he added that attitudes towards language learning itself contribute to the success rate. This is in line with the study by Oroujlou and Vahedi (2011), who also stated that in learning a language, it often helps to have a positive attitude towards that language. Riyanto et al. (2015) also found that one of the factors influencing students' vocabulary understanding is their attitude towards language learning. This research also seems to support Henni (2017), who found in her study that positive views towards code-switching appear to play a fruitful role in the learning process. Last, it supports the study by Omar and Ilyas (2018), who found that attitudes towards code-switching affected learners' academic performance, since the learners' attitudes towards the Arabic and English languages contributed to their learning and knowledge acquisition.
Conclusion

Based on the research findings, Indonesian Twitter users indicate that they have a fairly positive attitude towards English-Indonesian code-mixing, and each aspect of attitude likewise shows a fairly positive result. The users mainly agree that English-Indonesian code-mixing can enhance understandability in communication. Regarding language prestige, the users mainly disagree that English-Indonesian code-mixing makes them feel more prestigious. The users agree that English-Indonesian code-mixing can draw their attention, and they also tend to agree that they feel confident and comfortable with code-mixing. When it comes to English, the users mainly consider English-Indonesian code-mixing a process of learning, improving language skills, and practicing the second language and the English language (as an EFL in Indonesia) at once.

This research focuses on the attitudes of Indonesian Twitter users towards English-Indonesian code-mixing. It does not dig further into the respondents' language proficiency; regarding proficiency, this study does not aim to find out how language proficiency might influence attitude or vice versa. This study also does not examine language choice (reasons or factors) or compare language attitudes towards each language or variety, whether in daily interaction or on Twitter per se. For future studies, the writer therefore hopes that other researchers will address the same topics with more participants in different domains or specific groups of the population (i.e., people who do not perform English-Indonesian code-mixing) to gain different findings from different perspectives and to compare Indonesian, English, and English-Indonesian code-mixing. Besides that, it would be better if the next researchers could link the study to the latest sociolinguistic theories to ensure the research's recency. Furthermore, future researchers can use more varied data collection techniques, methods, and instruments, such as interviews and focus group discussions. Last, the writer also hopes that the next researchers will be able to construct more precise items with larger concepts and compare the attitudes of groups belonging to different generations, genders, educational backgrounds, and occupations to make the data richer.
Spectral and Angular Characteristics of the High-Contrast Dielectric Grating under the Resonant Interaction of a Plane Wave and a Gaussian Beam

The resonant interaction of a plane wave and a one-dimensional Gaussian beam with a high-contrast dielectric grating was analyzed. Rigorous coupled wave analysis (RCWA) was used to numerically model the diffraction of a plane wave by the grating, while RCWA combined with a discrete Fourier transform, applied under the conditions of the sampling theorem, was used to study the diffraction of the Gaussian beam. Under resonance, the grating can be considered a one-dimensional photonic crystal along which the waveguide mode propagates. Owing to the high contrast of the dielectric permittivity, the corresponding photonic crystal has both allowed and forbidden photonic bands for the resonantly propagating waveguide mode. When the waveguide mode falls in the forbidden photonic bandgap, there is no significant difference between the spectral and angular characteristics obtained with the plane wave and with the Gaussian beam, and the reflection coefficient of the grating is practically equal to unity in both cases. When the waveguide mode falls in the allowed photonic band, the resonant spectral and angular characteristics become wider for Gaussian beam diffraction than the resonance curves for the plane wave; the reflection coefficient of the grating becomes less than unity, and its value tends to unity as the Gaussian beam width increases.

Introduction

In recent years, intensive studies of refractive index sensors based on dielectric gratings on dielectric substrates have been carried out [1-4]. These sensors exploit the waveguide mode resonance [5-7] of dielectric gratings on dielectric substrates: under the resonant interaction of an incident plane wave with the grating, the reflection coefficient of the periodic structure is equal to unity [8-10]. The operating principle of such sensors is based on the change of the resonant wavelength of the incident wave, or of the resonant angle of beam incidence on the grating, with the change of the refractive index of the studied surrounding medium [3,10,11].

The predominantly numerical studies of plane-wave diffraction by gratings have been carried out using rigorous coupled wave analysis (RCWA) [12] with numerically stable algorithms [13-15]. RCWA is asymptotically accurate [13] and converges faster than other methods for dielectric periodic structures. It is used to analyze various periodic structures [16,17], including photonic crystals [18]. However, the results obtained using RCWA for the resonant interaction of a plane wave with a dielectric grating often did not coincide with the experimental data [2,19]. It is obvious that, in the experiments, the grating was irradiated not by a plane wave but by a beam of finite cross-section. It should be noted that the experimentally determined and numerically predicted resonant wavelengths coincide with satisfactory accuracy. At the same time, the experimentally determined reflection coefficients are significantly less than unity, yet significantly higher than the values (R > 0.1) that can be explained by Fresnel reflection; simultaneously, the widths of the experimentally measured spectral resonance curves are increased.
This discrepancy between the numerically predicted and experimental results can be explained by the following reasons: (a) the real grating does not strictly reproduce the groove shape and the period from groove to groove, which was especially important in [19]; (b) during the experimental studies, the grating was irradiated by a beam of finite cross-section. The beam width can be much smaller than the distance over which the resonant waveguide mode propagates in the grating, which matters for a slight refractive index modulation of the grating medium [2,16]. Therefore, it is important to numerically study the interaction of a finite cross-section beam with the dielectric grating under waveguide resonance.

This problem has been analyzed in a relatively small number of scientific works, apparently because, in the absence of resonance, the result of the diffraction of a finite cross-section beam by a grating almost coincides with the result of the diffraction of a plane wave [20]. The diffraction of a finite-size beam by gratings of three to 20 periods was studied with the finite-difference frequency-domain method in [21]. The influence of beams limited in transverse size was studied using the rigorous boundary element method in [22]. Within a simple scalar theory of diffraction, it was explained that the angular anomaly of the beam reflected from the grating is a direct result of the finite size of the beam [23]. In [24], beam diffraction by a metal grating of finite size, under excitation of surface plasmon-polariton resonances, was studied; theoretical modeling predicts a broadening of the resonances as the grating size decreases. A resonant filter for telecommunications based on finite-size gratings has been developed, tuned by changing the angle of incidence of the beam on the grating [25]. It was shown in [26] that the fields of Gaussian beams scattered by reflective gratings differ markedly from those predicted by geometrical considerations. A corrected theory of diffraction by a finite-volume grating, rather complicated for practical use, was proposed in [27]. The influence of the finite size of the incident Gaussian beam on the spectrum of anomalous reflection, and on the shape of the energy distribution in the beam reflected from a waveguide with a grating, was analyzed using an approximate theory developed in [28]. The authors of [29] attempted to extend the RCWA method to a grating with a finite number of periods using supercells. Guizal et al. [30] developed a method called aperiodic RCWA, in which the permittivity of the finite grating is represented by a Fourier integral, leading to an integro-differential equation. Lalanne and coworkers [31,32] introduced absorbing boundary conditions and perfectly matched layers at the ends of the unit cell to numerically analyze finite periodic structures. The simplest and rather effective theory of the reflection of a finite-size beam from a grating was given in [33], where the solution was provided in analytical form. This theory decomposes the limited beam into plane waves, using the direct and inverse transformations in the analysis. It does not, however, provide the spatial distribution of the amplitude, and accordingly the power, of the wave passing through the grating.
Therefore, it is impossible to check whether the law of energy conservation is fulfilled during diffraction by a purely phase grating. A further development of the theory of the diffraction of a limited light beam by a dielectric grating was presented in [34]. The method is based on representing such a beam as an expansion into plane waves using the Fourier transform. The amplitude reflection and transmission coefficients are then determined by RCWA for each plane wave, and the field distributions of the reflected and transmitted beams are obtained by the inverse discrete Fourier transform [35]. The number of plane waves into which the limited light beam is decomposed must satisfy the sampling theorem [36,37]. For a dielectric grating, the method developed in [34] obeys the energy conservation law at the diffraction of the finite cross-section beam [38]: the sum of the powers of the reflected and transmitted beams is equal to the power of the beam incident on the grating. With this method, results were obtained for the diffraction of finite beams by non-absorbing gratings produced by the holographic method in photosensitive media [20,34]; such gratings are characterized by a low modulation of the refractive index of the grating medium. In [20], a diffraction analysis of a one-dimensional Gaussian beam and of a beam described by the rect(x) function [35] was performed, and the experimental dependence of the reflection coefficient on the beam width [20] coincided well with the corresponding numerically calculated dependence.

A sensor based on a relief dielectric grating with a rectangular bar cross-section was shown to have unique properties [4]. In this grating, the lower refractive index was 1.333 (the refractive index of the tested medium) and the higher refractive index was 2.0, at a fill factor F = 0.5. Such a structure turned out to have some unexpected properties: for certain grating parameters (grating period and thickness, wavelength), the full width at half maximum (FWHM) decreased sharply (approximately from 20 nm to 0.15 nm) [4] at an almost constant sensitivity S ~ 250 nm/RIU, and the figure of merit (FOM), which can be defined as FOM = S/FWHM [39], correspondingly increased from 14 to 1620. These features of this periodic structure were explained in [40], where the diffraction of the plane wave and of the Gaussian beam was analyzed numerically. A comparison of the obtained results led to the conclusion that, under the waveguide mode resonance responsible for the high reflection coefficient, the grating should be considered a one-dimensional photonic crystal [41]. Such a high-contrast photonic crystal can have both forbidden and allowed photonic bandgaps [41,42]. There is no bandgap in the case of low dielectric contrast in the photonic crystal [41]; sensors based on such low-contrast gratings will therefore have a high FOM [2,20,34]. It was found that the FOMs in wavelength and in angle are equal, both for the plane wave [2] and for the beam limited in cross-section [20,34], for sensors based on holographic gratings (small modulation of the refractive index of the grating medium). In addition, it was shown that the FWHM for the plane wave is uniquely related, by an analytical expression, to the wavelength, the grating period, the beam angle, and the attenuation constant of the waveguide mode propagating in the grating [20].
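The beam-decomposition method outlined above lends itself to a compact numerical sketch. The code below is a minimal, illustrative implementation assuming a one-dimensional Gaussian profile; `rcwa_reflection` is a hypothetical placeholder for a rigorous RCWA solver that returns the complex amplitude reflection coefficient for a plane wave at a given incidence angle (assumed here to accept an array of angles), and the obliquity factors of the narrow angular spectrum are neglected.

```python
import numpy as np

def gaussian_beam_reflection(L, wavelength, theta0, rcwa_reflection,
                             n_inc=1.333, num=2**12, span=40.0):
    """Reflection of a 1D Gaussian beam of width L incident at angle theta0.

    The incident field is sampled on a grid wide enough (span * L) for the
    sampling theorem to hold, expanded into plane waves by an FFT; each
    spectral component is multiplied by the RCWA amplitude reflection
    coefficient for its own direction, and the reflected field is
    recovered by the inverse FFT.
    """
    x = np.linspace(-span * L / 2, span * L / 2, num, endpoint=False)
    k = 2 * np.pi * n_inc / wavelength                 # wavenumber in the cover medium
    field_in = np.exp(-(x / L) ** 2) * np.exp(1j * k * np.sin(theta0) * x)

    spectrum = np.fft.fft(field_in)                    # plane-wave amplitudes A(kx)
    kx = 2 * np.pi * np.fft.fftfreq(num, d=x[1] - x[0])

    # Amplitude reflection coefficient of each propagating component; the
    # evanescent part of the spectrum (|kx| > k) is discarded.
    r = np.zeros(num, dtype=complex)
    prop = np.abs(kx) <= k
    r[prop] = rcwa_reflection(np.arcsin(kx[prop] / k))

    field_r = np.fft.ifft(r * spectrum)                # reflected field E_r(x)

    # Power reflection coefficient of the whole beam (obliquity factors
    # of the narrow angular spectrum are neglected in this sketch).
    P_r = np.sum(np.abs(r * spectrum) ** 2) / np.sum(np.abs(spectrum) ** 2)
    return x, field_r, P_r
```

In this scheme the plane-wave limit is recovered automatically: as L grows, the angular spectrum narrows around theta0 and P_r approaches the single-plane-wave RCWA value.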
Moreover, the attenuation constant was determined at the irradiation of the grating by the Gaussian beam. However, it is not certain that these provisions remain valid for gratings with high-contrast changes of the dielectric permittivity over a period, in which allowed and forbidden photonic bandgaps of the one-dimensional photonic crystal are possible. Therefore, it is desirable to continue the numerical studies of such a grating irradiated by the plane wave and by the Gaussian beam. These studies should be aimed at obtaining the spectral and angular dependences of the reflection coefficient of the grating for various parameters (L, d, θ, λ), as well as the changes of the resonant wavelength and the resonant angle upon changes of the refractive index of the surrounding medium. Based on the results of these studies, it is important to determine the angular and spectral sensitivities, as well as the FWHM and FOM, and how they depend on the Gaussian beam width L, for the allowed and forbidden photonic bandgaps.

Results of Numerical Modeling and Discussions

The method described in detail in [20,34,40] was used to analyze the diffraction of the Gaussian beam by the dielectric grating; the notation of physical quantities in this paper is the same as in [40]. The studied periodic structure with the corresponding symbols is shown in Figure 1. The red arrow in the figure schematically shows the waveguide mode propagating from right to left, which loses its energy when interacting with the grating; as a result, the reflected and transmitted beams are formed.

The graphical materials (Figure 2) and the corresponding numerical data given in [40] were used for our study.
Each grating thickness d has its own resonant wavelength λ rez at which the reflection coefficient is equal to unity, as follows from Figure 2a. It can be seen that the resonant wavelengths at the incidence angle of the plane wave θ = π/18 are higher than at normal incidence. Figure 2b indicates that the reflection coefficient P r takes on minimum values at certain thicknesses corresponding to λ rez according to Figure 2a: d = 0.65 µm and d = 1.288 µm for θ = 0, and d = 0.78 µm and d = 1.52 µm for θ = π/18. Therefore, it can be argued that the corresponding λ rez values lie in the allowed photonic band according to the theory of photonic crystals. At the same time, the reflection coefficient P r is practically equal to unity for a wide range of thicknesses d, even for the Gaussian beam width L = 0.1 mm; that is, these thicknesses and the corresponding resonant wavelengths lie in the forbidden photonic bandgap.

The spectral dependences of the reflection coefficient of the grating are shown in Figure 3. At the thickness d = 1 µm (Figure 3a, forbidden photonic bandgap), the reflection spectrum for the Gaussian beam, i.e., the solid red curve, coincides with the blue circles corresponding to the plane wave.

The spectral curves under conditions when the waveguide mode is in the allowed photonic band, and can propagate a considerable distance in the grating, are shown in Figure 3b-d. It can be concluded that the reflection coefficient P r (λ) increases when L increases and approaches the spectral curve at L = ∞ (Figure 3b). There is also a clear correlation between the reflection coefficient at resonance for the Gaussian beam, in accordance with Figure 2a, and the width of the spectral curve for the plane wave: a smaller reflection coefficient P r for the Gaussian beam corresponds to a smaller width of the spectral curve for the plane wave.

The widths of the spectral resonance curves at the level of 0.5 for the incidence of the plane wave and of the Gaussian beam (δλ ≡ FWHM) were determined using Figure 3. The attenuation indices γ at the propagation of the resonant waveguide mode in the grating
However, it is absent for other cases. The widths of the angular dependences δθ are given in column 6 of Table 1. There is a clear correlation between the widths δλ and δθ for the plane wave and the attenuation index γ; a smaller γ results in a narrower δλ and δθ for the plane wave. were determined for the linear sections of the curves ln| ( )|. These data are presented in Table 1 (columns 4 and 5, respectively). The angular dependences of the reflection coefficient for the plane wave and the Gaussian beam are shown in Figure 4. It can be seen that angular dependence has a somewhat flat vertex (Figure 4a, red curve) at the normal incidence of the plane wave. However, it is absent for other cases. The widths of the angular dependences are given in column 6 of Table 1. There is a clear correlation between the widths and for the plane wave and the attenuation index ; a smaller results in a narrower and for the plane wave. The dependences of the change in the resonant wavelength Δ on the change in the refractive index Δ for the Gaussian beam and the plane wave at other constant parameters are also interesting. The corresponding dependencies are shown in Figure 5. These dependences are linear in nature, and they can be used to determine the spectral sensitivity = Δ Δ . ⁄ It can be argued that the sensitivities are the same for the Gaussian beam and the plane wave since the red and green circles lie on the same straight line (see Figure 5). This is consistent with the findings in [34], where the same result was obtained but for a low modulation dielectric constant of the grating medium. It can be concluded on the basis of the data in Table 1 that values are slightly larger at the normal incidence of the Gaussian beam or plane wave compared to the angle of incidence 18 ⁄ . The spectral sensitivities for some cases are shown in Table 1 The dependences of the change in the resonant wavelength ∆λ rez on the change in the refractive index ∆n 1 for the Gaussian beam and the plane wave at other constant parameters are also interesting. The corresponding dependencies are shown in Figure 5. These dependences are linear in nature, and they can be used to determine the spectral sensitivity S λ = ∆λ rez /∆n 1 . It can be argued that the sensitivities S λ are the same for the Gaussian beam and the plane wave since the red and green circles lie on the same straight line (see Figure 5). This is consistent with the findings in [34], where the same result was obtained but for a low modulation dielectric constant of the grating medium. It can be concluded on the basis of the data in Table 1 that S λ values are slightly larger at the normal incidence of the Gaussian beam or plane wave compared to the angle of incidence π/18. Materials 2022, 15, x FOR PEER REVIEW 7 of 11 Figure 5. Dependence of Δ on the change of refractive index Δ for the Gaussian beam and the plane wave at the normal incidence of the Gaussian beam and the plane wave (a), and for the Gaussian beam and the plane wave incidence at the angle of 18 ⁄ (b). Straight red lines are drawn between the two extreme points corresponding to the minimum and maximum value of . The dependences of the change in the resonance angle Δ on the change in the refractive index Δ for the Gaussian beam and the plane wave at other constant parameters are shown in Figure 6. The nature of the corresponding curves significantly depends on the initial value of the angle. 
If the angle of incidence of the beams at = 1.333 is zero, then the corresponding dependence is nonlinear (Figure 6a). If the angle of incidence of the beams is equal to 18, ⁄ , then the corresponding dependence is linear in the range of Δ from −0.002 to 0.005, allowing us to calculate = − Δ Δ . ⁄ Results of the calculation of for the angle of 18 ⁄ are included in column 9 of Table 1. The ratio FOM = ⁄ can be calculated knowing the values , which are given in column 10 of Table 1. It can be expressed that ⁄ ≈ ⁄ . (1) Equation (1) was obtained in [11] analytically for dielectric gratings based on photopolymer compositions, which are characterized by insignificant modulation of the refractive index of the grating medium. Equation (1) is also true for sensors based on a prism structure and based on metal gratings on the metal substrate in which surface plasmonpolariton waves are excited [11]. Moreover, Equation (1) is valid both for the plane wave (columns 8 and 10, lines 1 and 3) and for the cross-section limited beam (columns 8 and 10, lines 2 and 4). However, we can consider the at the beam incidence angle on the grating significantly different from zero, as evidenced by Figure 6. Therefore, some of the cells of Table 1 are not filled for cases where the normal incidence of the plane wave or the Gaussian beam is on the grating. The spectral sensitivities S λ for some cases are shown in Table 1, column 7. Knowing S λ and δλ, we determined FOM = S λ /δλ, as presented in Table 1 (column 8). It can be seen that FOMs are mostly larger among the studied cases at the beam angle of incidence on the grating of π/18. The dependences of the change in the resonance angle ∆θ rez on the change in the refractive index ∆n 1 for the Gaussian beam and the plane wave at other constant parameters are shown in Figure 6. The nature of the corresponding curves significantly depends on the initial value of the angle. If the angle of incidence of the beams at n 1 = 1.333 is zero, then the corresponding dependence is nonlinear (Figure 6a). If the angle of incidence of the beams is equal to π/18, then the corresponding dependence is linear in the range of ∆n 1 from −0.002 to 0.005, allowing us to calculate S θ = −∆θ/∆n 1 . Results of the calculation of S θ for the angle of π/18 are included in column 9 of Table 1. The ratio FOM θ = S θ /δθ can be calculated knowing the values S θ , which are given in column 10 of Table 1. It can be expressed that S λ /δλ ≈S θ /δθ. Equation (1) was obtained in [11] analytically for dielectric gratings based on photopolymer compositions, which are characterized by insignificant modulation of the refractive index of the grating medium. Equation (1) is also true for sensors based on a prism structure and based on metal gratings on the metal substrate in which surface plasmonpolariton waves are excited [11]. Moreover, Equation (1) is valid both for the plane wave (columns 8 and 10, lines 1 and 3) and for the cross-section limited beam (columns 8 and 10, lines 2 and 4). However, we can consider the S θ at the beam incidence angle on the grating significantly different from zero, as evidenced by Figure 6. Therefore, some of the cells of Table 1 are not filled for cases where the normal incidence of the plane wave or the Gaussian beam is on the grating. It follows from Table 1 (columns 8 and 10) that the FOM in the transition from the plane wave to the Gaussian beam decreases several times due to increasing δλ and δθ at constant sensitivities S λ and S θ (columns 7 and 9, respectively). 
However, the FOM is large enough at the L shown in column 11, except for the data of rows 9 and 10, which correspond to the photon bandgap. In this case, the FOM is very small (41.7) due to the high value of δλ = 11.5 nm. Materials 2022, 15, x FOR PEER REVIEW 8 of 11 Figure 6. Dependence of Δ on the change in refractive index Δ for the Gaussian beam and the plane wave at the normal incidence of the Gaussian beam and the plane wave (a), and for the Gaussian beam and the plane wave incidence at the angle of 18 ⁄ (b). A straight red line is drawn between the two extreme points corresponding to the minimum and maximum value of . It follows from Table 1 (columns 8 and 10) that the FOM in the transition from the plane wave to the Gaussian beam decreases several times due to increasing and at constant sensitivities and (columns 7 and 9, respectively). However, the FOM is large enough at the shown in column 11, except for the data of rows 9 and 10, which correspond to the photon bandgap. In this case, the FOM is very small (41.7) due to the high value of = 11.5 nm. Analytical expressions defining and for the plane wave for gratings with small modulation of the refractive index through Λ, , , and especially , which is determined for the Gaussian beam on the linear part of the ln| ( )| dependence, were presented in [33]. The corresponding equations are as follows: (2) ≈ 2 cos . (3) Figure 6. Dependence of ∆θ rez on the change in refractive index ∆n 1 for the Gaussian beam and the plane wave at the normal incidence of the Gaussian beam and the plane wave (a), and for the Gaussian beam and the plane wave incidence at the angle of π/18 (b). A straight red line is drawn between the two extreme points corresponding to the minimum and maximum value of n 1 . Analytical expressions defining δλ and δθ for the plane wave for gratings with small modulation of the refractive index through Λ, λ, θ, and especially γ, which is determined for the Gaussian beam on the linear part of the ln|r 0 (x)| dependence, were presented in [33]. The corresponding equations are as follows: Numerical experiments have confirmed the validity of these equations for threedimensional phase gratings with low modulation of the grating medium refractive index [20,34]. However, these relations are not fulfilled for our case when n 1 = 1.333 and n 2 = 2.0. Here, δλ = 0.037 nm, δθ = 0.0624 mrad, and δθ = 0.0624 mrad according to row 1 of Table 1 and δλ = 0.021 nm and δθ = 0.031 mrad according to Equations (2) and (3). However, Equations (2) and (3) can be useful in another aspect. If certain grating parameters (n 1 , n 2 , F, Λ, wavelength, and beam incident angle on the grating) correspond to a certain reflection coefficient, then the reflection coefficient does not change upon changing the wavelength λ, period Λ, grating thickness d, or beam width L K times. The value of K can be either larger or smaller than one. This statement is true for both the plane wave (L = ∞) and the beam of width L. However, numerical experiments have shown that γ will change 1/K times. Therefore, we can assume that, when the parameters change K times, δλ will also change K times, and δθ will be unchanged in accordance with Equations (2) and (3). This was confirmed by our numerical experiments for data rows 1-8 of Table 1. Numerical experiments also showed that S λ also will change K times, while S θ will remain unchanged. Therefore, the FOM with such a change in parameters will remain unchanged, which is consistent with Equation (1). 
The obtained relations can be called similarity rules. They can be useful for determining the grating parameters needed to obtain the resonance at a certain wavelength generated by a laser, if the resonance conditions are known at another specific wavelength; the coefficient of change K is then equal to the ratio of the two wavelengths.

Conclusions

The obtained numerical results confirm the concept of the propagation of the resonant waveguide mode in the grating as in a one-dimensional photonic crystal. If the waveguide mode is in the forbidden photonic bandgap, the values of the FWHM for the plane wave and the Gaussian beam (rows 9 and 10, column 4 of Table 1) are the same and relatively wide (Figure 3a). Otherwise, when the waveguide mode is within the allowed photonic band, the FWHM decreases with increasing L and approaches the FWHM for the plane wave.

Numerical studies have shown that the dependences of λ rez and θ rez on changes of n 1 are linear (see Figures 5b and 6b) at the beam incidence angle of θ = π/18 rad; however, θ rez depends on n 1 nonlinearly at the initial angle θ = 0 (see Figure 6a). On the basis of these linear dependences, the corresponding sensitivities S λ and S θ , as well as the FOMs for both cases, which satisfy Equation (1), can be calculated. However, Equations (2) and (3) are not valid for a large contrast of the dielectric permittivity changes in the grating; nevertheless, the right and left parts of these relations are within the same order of magnitude and, in this particular case, differ by about two times.

It is shown that the similarity rule is also valid for the beam limited in width. According to this rule, the reflection coefficient does not change when the wavelength λ, the period Λ, the grating thickness d, and the beam width L are changed K times. Accordingly, δλ changes K times and δθ remains unchanged, while S λ also changes K times and S θ remains unchanged. Therefore, the FOM remains unchanged with such a change of parameters, which is consistent with Equation (1).
Can gesture input support toddlers' fast mapping?

Forty-eight toddlers participated in a word-learning task to assess the effect of gesture input on mapping nonce words to unfamiliar objects. Receptive fast mapping and expressive naming for target object-word pairs were tested in three conditions: with a point, with a shape gesture, and in a no-gesture, word-only condition. No statistically significant effect of gesture on receptive fast mapping was found, but age was a factor: two-year-olds outperformed one-year-olds on both measures. Only one girl in the one-year-old group correctly named any items. There was a significant interaction between gesture and gender for expressive naming: two-year-old girls were six times more likely than two-year-old boys to correctly name items given point and shape gestures, whereas boys named more items taught with the word only than with a point or shape gesture. The role of gesture input remains unclear, particularly for children under two years and for toddler boys.

Introduction

Gesture and language development are "tightly coupled" (Iverson & Thelen, 1999, p. 20), and the parallel unfolding of gesture development and spoken language development may lie in their shared symbolism (Capone & McGregor, 2004). Gesture initially grounds spoken language through sensorimotor experiences (Perniss & Vigliocco, 2014). The emergence of specific gesture types in later infancy and early toddlerhood precedes children's language production milestones, including the onset of single words and two-word combinations (e.g., Crais, Watson & Baranek, 2009; Iverson & Goldin-Meadow, 2005). This is a developmental period during which children move from saying their first words to rapid vocabulary growth. After age 12 months and the onset of first words, children gradually add new words at a rate of about one to two new words weekly, and after 24 months of age, word learning accelerates, with children producing 10 new words within a 14-day period (e.g., Hirsh-Pasek, Golinkoff & Hollich, 2000; Mervis & Bertrand, 1994).
One explanation for children's rapidly accelerating word production is fast mapping, the process whereby children encode initial, incomplete word representations from brief exposures and incidental mappings of novel words to referents (Carey & Bartlett, 1978; Dollaghan, 1987; Gershkoff-Stowe & Hahn, 2007; Swingley, 2010). Children as young as 13 months show fast mapping, and there is substantial evidence indicating that typically developing toddlers are fast mapping successfully by age 2 years (Heibeck & Markman, 1987; Spiegel & Halberda, 2011). Fast mapping, however, is just one step in a word-learning process that may not involve the same mechanisms needed for children's development of full lexical representations (Bion, Borovsky & Fernald, 2013; Carey, 2010; Horst & Samuelson, 2008). Word learning can be considered a continuum, starting with a person's initial exposures and building to well-established understanding and use of words for effective communication. Fast mapping occurs early in word learning and can result from a single, incidental exposure to a word and referent. Following limited exposure, often in experimental tasks, fast mapping is usually assessed by immediate, forced-choice recognition or receptive identification. Next steps in word learning can be termed slow mapping, which is defined by repeated linkages between a referent's semantic information and the word form. Slow mapping, or extended word learning, may be assessed in recognition tasks following a time delay varying from minutes to days or weeks. Likewise, expressive naming can be viewed as evidence of slow mapping because it requires activating and speaking a stored representation (Capone & McGregor, 2005).

Our aim in this investigation was to test the role of gesture input in support of young children's word learning. Early gestures in the baby's environment, showing and pointing, can provide a foundation for first words. Caregivers engage in showing by shaking an object or moving an object up and down in front of the infant's face while synchronously naming the object (Matatyaho & Gogate, 2008), and researchers have reported a type of gestural motherese consisting of pointing gestures paired with talking (Iverson, Capirci, Longobardi & Caselli, 1999; Zammit & Schafer, 2010). By 13 months of age, typically developing infants demonstrate understanding that the deictic, point gestures produced by adults reference objects in the environment (Gliga & Csibra, 2009). Pointing can harness an infant's joint attention with the adult and an object, supporting the child's mapping of the word spoken by the parent to the object that has been ostensively indicated. In this sense, point gestures are considered a type of social/pragmatic cue, indicating the pointer's intended referent (Capone Singleton, 2012). Pointing by Italian mothers when interacting with their children aged 1;4 was positively correlated with their children's vocabulary skills at 1;8 (Iverson et al., 1999).

In addition to deictic gestures such as showing and pointing, gestures can be iconic. Iconic gestures reflect a characteristic of a concrete referent, manually representing some element of meaning: the shape, action, or function features of the referent (Capone Singleton & Saks, 2015; Goldin-Meadow & Alibali, 2013). An example is when a caregiver moves his hand to his mouth to represent eat when asking a toddler, "Do you want to eat?"
Gestures most often co-occur with speech, and the meaning conveyed by the gesture is often redundant with speech (Capone Singleton & Saks, 2015; Hostetter & Mainela-Arnold, 2015; Iverson et al., 1999). Zammit and Schafer (2010) reported an association between children's comprehension of target words (aged 11 months) and their mothers' verbal labeling of the items paired with iconic gestures at a time when the children (aged 9 months) had not yet acquired the words. In a meta-analysis of gesture studies, Hostetter (2011) concluded that listeners had better comprehension of speech when accompanied by gestures, but age was one of several moderating factors. Children benefitted more from gesture than adolescents or adults, and individuals who were considered less verbally proficient (i.e., listeners with Down syndrome or autism) were more likely to benefit from gestures than unimpaired learners. Also, the positive effects of an iconic gesture were measured when motoric or spatial information was being conveyed, but not for abstract information (Hostetter, 2011).

There is widespread belief in popular culture, as well as some research support, that children benefit from baby sign language or gesture instruction in the environment (Goodwyn, Acredolo & Brown, 2000; Lederer & Battaglia, 2015), and this has led to the development and marketing of Baby Signs® (see http://www.babysignstoo.com/), one of now several similar programs that encourage use of manual signs with typically developing infants and toddlers to increase their expressive communication abilities with caregivers as their speech skills develop. Founders of Baby Signs®, Goodwyn et al. (2000), reported that children whose parents were trained to combine words with iconic gestures outperformed a control group with no training on expressive and receptive language measures. No differences in language development were found for a no-training control group compared with a verbal training group whose parents were trained to increase speech-only labeling. Studies by Goodwyn and Acredolo (1993, 1998) using the same cohort as Goodwyn et al. (2000) were critiqued by Johnston, Durieux-Smith, and Bloom (2005), who conducted a systematic review of the baby sign language research literature. Johnston et al. (2005) pointed out that Goodwyn et al. did not test and report comparisons between the sign-training group and the verbal-training group. Additionally, Goodwyn et al.'s finding of differences was statistically significant only at ages 15 and 24 months, not at 19, 30, or 36 months. Based on their review of this study and others, Johnston et al.
(2005) concluded that there were no advantages in adding gestural communication to parental input beyond 24 months. Kirk, Howlett, Pine, and Fletcher (2013) reported no language development differences for children followed from ages 8 to 20 months whose mothers used baby sign language compared to control groups consisting of symbolic gesture, verbal training, or no intervention. The authors cautiously reported that there were three boys with relatively low ability whose expressive language learning appeared facilitated by participation in a group with sign language or symbolic gestures. Clearly, not all research has supported the premise that gesture enhances word learning. In a looking-paradigm study examining the fast-mapping abilities of infants, Puccini and Liszkowski (2012) found that children (aged 1;3) did not map words to referents accurately in the word-plus-gesture and gesture-alone conditions. The only statistically significant effect was found for the word-only condition. The authors concluded that spoken words alone are the optimal input for hearing children.

Differences among studies led us to conclude that any benefits derived by adding gesture to speech input may depend on the age and skills of the children as well as the type of gesture and its relationship to the referent. For example, in addition to the young age of their participants, Puccini and Liszkowski (2012) included American Sign Language gestures for yes and no. These gestures are arbitrary, not iconic, because there were no recognizable associations between the gestures and the study referents. Iconic gestures, as opposed to arbitrary gestures, are assumed to support word learning because they map semantic elements that can facilitate children's representation of the referent. Several studies, however, have indicated that children under the ages of 3;6 to 4 years cannot easily engage with iconic information (Lüke & Ritterfeld, 2014; Namy, 2008; Tolar, Lederberg, Gokhale & Tomasello, 2008). Lüke and Ritterfeld (2014) found no significant differences in typically developing preschoolers' receptive fast mapping of novel cartoon character labels paired with iconic gestures versus arbitrary gestures, but word learning was supported by both gesture conditions when compared to a no-gesture condition. Namy (2008) found that children at 14, 18, and 22 months did not consistently recognize iconic action gestures when selecting target objects in target trials, and at 14 months, the children did not choose target objects more often than expected by chance. Only one age group, children who were 26 months, showed consistent selection of target objects above chance levels when the stimulus was an action gesture that matched conventional actions performed on objects (e.g., spinning a top-like novel object or scooping with a familiar object, a spoon). Namy concluded that gesture iconicity was fragile and fluctuating for children under age 2 years. Tolar et al.
(2008) also concluded that iconicity recognition was fragile for children aged 3 years and younger. They studied the recognition of iconicity for sign language signs with hearing children aged 2;6 to 4;6 years. Only in the age groups 3;6, 4;0, and 4;6 did 50% of the children successfully identify pictures based on iconic signs. Children at ages 2;6 and 3;0 did not correctly associate iconic signs with pictures. Stimuli in their study varied iconicity such that some signs were considered pantomime or actions associated with the referent (e.g., "baby," "write") versus perceptual aspects (e.g., "house," "tornado"). Perniss and Vigliocco (2014) argued that the type of iconicity (action-based signs such as that for the word "push" versus a perception-based sign for "deer") is a factor in language development and language processing. Capone and McGregor (2005) compared iconic shape gestures versus iconic function gestures in a fast-mapping investigation with typically developing children aged 2;3 to 2;6. They hypothesized that an early visual or perceptual aspect such as the shape of a referent might be easily recognized and improve fast mapping compared to an action or function gesture that could require additional representational learning. Their hypothesis drew from literature proposing a shape bias as a mechanism supporting young children in learning words (Landau, Smith & Jones, 1988; Smith, 2000). Diesendruck and Bloom (2003) proposed that children's attention to perceptual aspects such as the shape of objects is not a specific linguistic mechanism but rather a more general means of concept creation for a category or kind. Toddlers' attention and selectivity to the perceptual feature of shape increases between 2;0 and 3;1 for generalizing novel object labels, and shape continues to be a significant factor for word learning beyond age 4 years (Davidson, Rainey, Vanegas & Hilvert, 2018; Diesendruck & Bloom, 2003; Landau, Smith & Jones, 1988; Smith, 2000). To test iconic shape gestures, Capone and McGregor (2005) contrasted three fast-mapping conditions: nonce words paired with shape gestures; nonce words paired with function gestures; and nonce words only as a no-gesture control condition. Results indicated that children fast mapped at levels above chance when the word was paired with a shape gesture: 68% of the novel item/nonce word pairs were fast mapped. In the function gesture and no-gesture conditions, performance was at chance. Retrieval of the labels for the novel items trained in both gesture conditions, shape or function iconic gestures, required fewer cues than for labels trained in the no-gesture condition. Capone Singleton (2012) extended these findings regarding shape cues to children who were two and three years old. When three novel words were taught in three gesture conditions (with a shape gesture, with a function gesture, and with a point), children's naming of words taught with shape gestures was significantly more frequent compared to the other conditions and resulted in better categorization and naming of untaught exemplars. It was iconic shape gestures, not deictic gestures such as pointing, that enhanced semantic representations underlying fast mapping and slow mapping processes for object naming.
Despite the evidence of gesture influences on language learning in typically developing toddlers, several unknowns remain. Given the conflicting findings of studies, one unknown is the extent to which an iconic gesture versus a deictic gesture might aid toddlers in fast mapping. A second unknown is whether fast mapping by children younger than 2;3 would be improved given adult input that combines spoken word labels with gesture aids. A clearer understanding of these factors is needed when advising parents of typically developing children regarding the use of gestural techniques to promote language learning. Of greater importance are implications for the use of gesture-speech input for language development and word learning in clinical populations with limited expressive language, including children with autism, Down syndrome, and even late talkers (Capone & McGregor, 2004; Capone Singleton & Anderson, 2020; Capone Singleton & Saks, 2015; Caselli, Vicari, Longobardi, Lami, Pizzoli & Stella, 1998; Özçalişkan, Adamson, Dimitrova, Bailey & Schmuck, 2016; Thal & Tobias, 1992; Vogt & Kauschke, 2017; Wang, Bernas & Eberhard, 2001; Ellis Weismer & Hesketh, 1993).

Research questions

Our purpose was to determine whether gesture input combined with speech facilitated toddlers' fast mapping of nonce words to unfamiliar objects. We examined how participants from two age groups, children aged 1;4-1;8 and children aged 2;0-2;4, responded in three input conditions: an iconic shape gesture combined with speech, a deictic point gesture combined with speech, and a speech-only, no-gesture control condition. The following research questions and hypotheses were posed:

1. Is there a significant effect of gesture input on receptive fast mapping of unfamiliar target objects by toddlers? We hypothesized that participants would demonstrate more correct responses in the gesture conditions than in the speech-only control condition. We also hypothesized that the iconic shape gesture condition would have a greater proportion of accurate responses than the point gesture condition.
2. Is there a significant effect of participant demographic variables on the receptive fast mapping skills of toddlers? We expected that the older toddler group would have more accurate responses than the younger toddler group.
3. Is there a significant effect of gesture condition on expressive naming of unfamiliar objects following a brief word learning task? We hypothesized that gesture input would support successfully naming newly learned objects. In particular, we expected that the shape gesture would support mapping some semantic information, thereby increasing the likelihood of successful encoding and retrieval of the newly learned name.
4. Is there a significant effect of participant demographic variables on toddlers' accurate expressive naming of nonce words paired with unfamiliar objects immediately following a brief word learning task? As with the receptive task, we anticipated that older toddlers would show more accurate naming than younger toddlers.
Participants

Recruitment and enrollment proceeded after Institutional Review Board human-subjects approval. Participants were 48 children from the northern Gulf Coast region of the United States who met eligibility criteria: ≥10th percentile on the MacArthur-Bates Communicative Development Inventories (MBCDI; Fenson, Marchman, Thal, Dale & Reznick, 2007); a reported gestation age of ≥37 weeks; monolingual English-speaking parents/caregivers; and no known hearing impairments. Sixty-one toddlers were initially seen, but 13 (21%) were not enrolled. Nine had MBCDI scores below the 10th percentile; three did not complete the experimental task; and one was not included due to investigator error. Two participant groups were formed: 24 toddlers (14 boys, 10 girls) aged 1;4 to 1;8 in the Younger Toddler group and 24 toddlers (10 boys, 14 girls) aged 2;0 to 2;4 in the Older Toddler group. See Table 1 for group demographic information (Note: MBCDI = MacArthur-Bates Communicative Developmental Inventories - Words and Sentences; Fenson et al., 2007). A one-way analysis of variance (ANOVA) was performed, and a significant difference in MBCDI Words Produced was revealed for Gender and Age (F[3, 48] = 41.98, p < .001). Post hoc analyses using Bonferroni procedures indicated that the mean raw score for Older Toddler girls (M = 515.36, SD = 113.14) was significantly higher than for Older Toddler boys (M = 344.20, SD = 134.90), which was significantly higher than for Younger Toddler boys (M = 64.07, SD = 60.65) and Younger Toddler girls (M = 157.10, SD = 144.32). Differences between Younger Toddler boys and girls were nonsignificant. Because gender differences were significant, gender was added as a participant demographic factor to our analyses. A one-way ANOVA revealed no significant differences in maternal education for the age groups or for gender.

Stimuli creation

Nine white objects (see Table 2), six unfamiliar objects and three familiar objects, were selected to be perceptually similar in color and general size except for their distinctive shapes. The six unfamiliar objects were divided into two subsets. Three unfamiliar objects were targets paired with iconic shape gestures: a triangular holder, an over-the-door hanger, and part of an onion blossom maker. To provide names for the unfamiliar target objects, the investigators studied monosyllabic, consonant-vowel-consonant nonce words from prior studies of word learning and fast mapping. Three nonce words, "tull" /tʌl/, "fim" /fɪm/, and "sep" /sɛp/, were selected because they had no phonemes in common with the familiar object names and they had high phonotactic probability (Vitevitch & Luce, 2004). Each word was randomly assigned to one unfamiliar target object (see Table 3). Iconic shape gestures for each unfamiliar target object (see Table 3) were created with two hands making contact and presented statically in similar gesture spaces. Three unfamiliar objects served as foils: a pastry blender, a plastic ridged tube, and a cable T-fitting.
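Phonotactic probability of the kind reported by Vitevitch and Luce (2004) is, at its core, a corpus statistic. The sketch below illustrates the positional-segment-frequency idea on a toy lexicon; the mini-lexicon, phoneme encoding, and function name are illustrative assumptions, not the authors' materials or the actual online calculator.

```python
from collections import Counter

# Toy lexicon of CVC words as phoneme triples (illustrative only; the
# Vitevitch & Luce calculator uses a large, frequency-weighted corpus).
lexicon = [
    ("k", "æ", "t"), ("d", "ɔ", "g"), ("s", "ʌ", "n"),
    ("t", "ɪ", "p"), ("f", "æ", "n"), ("m", "ɛ", "n"),
    ("t", "ɑ", "p"), ("s", "ɪ", "t"), ("f", "ʌ", "n"),
]

# Count how often each phoneme occurs at each of the three positions.
pos_counts = [Counter() for _ in range(3)]
for word in lexicon:
    for i, phoneme in enumerate(word):
        pos_counts[i][phoneme] += 1

def positional_probability(word):
    """Mean relative frequency of each phoneme at its position."""
    probs = [pos_counts[i][ph] / len(lexicon) for i, ph in enumerate(word)]
    return sum(probs) / len(probs)

for nonce in [("t", "ʌ", "l"), ("f", "ɪ", "m"), ("s", "ɛ", "p")]:
    print("".join(nonce), round(positional_probability(nonce), 3))
```

Real phonotactic probability estimates also include biphone probabilities and word-frequency weighting, which this toy version omits.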
Five of our six unfamiliar objects were previously established as unfamiliar, that is, un-nameable objects (i.e., Beverly & Estis, 2003). Pilot testing of the shape-gesture-to-object mapping occurred prior to object selection. Specifically, 20 consenting adults were asked to identify an object when presented with its associated shape gesture in an array of seven unfamiliar objects. The three items selected as the unfamiliar target objects were the only three items of the seven selected by 100% of the participants: the triangular holder, the over-the-door hanger, and the onion blossom maker part. Unfamiliar foil objects and the familiar objects (keys, a cup, and a sock) were white and sized to be similar to the target unfamiliar objects. Also, familiar objects were selected using a word frequency program based on lexical development data, the Lex2005 Database (Dale & Fenson, 1996), which generated proportion ratings indicating that >80% of toddlers are reported to comprehend these items.

Experimental design and procedures

The within-subjects design consisted of three word-learning conditions: Point, Shape, and Control. In the Point condition, the investigator (first author) pointed to the unfamiliar target object while saying the associated nonce word. The investigator pointed with the index finger of the right hand extended within approximately six inches of the object. In the Shape condition, the investigator produced the iconic shape gesture next to and within six inches of the unfamiliar target object while saying the nonce word. In the Control condition, the investigator said the nonce word for the unfamiliar target object but with no gesture. Objects were grouped and presented in a consistent order: the familiar object, the unfamiliar target object, and then the unfamiliar foil (see Figure 1). The administration of conditions (i.e., A = Point, B = Shape, and C = Control) associated with the unfamiliar target word-object pairing was systematically varied across participants using six unique sequences to attain complete counterbalancing: ABC, BCA, CAB, CBA, BAC, and ACB. The nonce words (tull, sep, and fim), each labeling an unfamiliar object, were presented in the same order, but the six presentation lists resulted in nine word-gesture condition pairings: "tull" + Point; "tull" + Shape; "tull" + Control; "sep" + Point; "sep" + Shape; "sep" + Control; "fim" + Point; "fim" + Shape; and "fim" + Control. In this manner, gesture condition differences would not be due to unexpected item effects.
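This counterbalancing scheme is small enough to generate and check mechanically. A minimal sketch in Python, assuming only the condition and word names given above:

```python
from itertools import permutations

conditions = ("Point", "Shape", "Control")   # A, B, C in the text
words = ("tull", "sep", "fim")               # presented in a fixed order

# All 3! = 6 orderings of the three conditions give complete
# counterbalancing (ABC, BCA, CAB, CBA, BAC, ACB in the text).
sequences = list(permutations(conditions))
assert len(sequences) == 6

# Pairing each fixed-order word with the condition at its list position
# yields nine unique word-condition pairings across the six lists.
pairings = {(w, seq[i]) for seq in sequences for i, w in enumerate(words)}
assert len(pairings) == 9

for n, seq in enumerate(sequences, 1):
    print(f"List {n}:", "; ".join(f"{w} + {c}" for w, c in zip(words, seq)))
```

Each of the nine word-condition pairings occurs in exactly two of the six lists, which is what decouples gesture condition from item effects.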
Experimental sessions were scripted (see Figure 1) and conducted live by the first author in one of three settings (i.e., 65% in participants' homes, 23% in a preschool/daycare, and 13% in a university-based lab setting) with a familiar adult present in the room. Participants were seated in a highchair, a booster seat, or in their mother's lap (see Figure 2). A brief 2- to 5-minute play period was used to establish rapport and to determine participants' ability to follow simple commands. The experimental procedure consisted of two phases, the fast-mapping phase and the testing phase, repeated for each of the three conditions. Within each fast-mapping phase, the investigator's utterances were scripted (see Figure 1) and included producing the object name (familiar or unfamiliar target) a total of four times. The fast-mapping script was designed to initially call attention to the novel object. Then, the participant manipulated each object for approximately 10 seconds before dropping it in a bucket. Once in the bucket, the investigator pretended to look for the object and then quickly found it, supporting one additional exposure.

A testing phase immediately followed each fast-mapping phase. During testing, the three objects were arranged in a line on a red mat placed on the highchair tray, the table, or the floor in front of the participant (see Figure 1). Object position was predetermined and counterbalanced, such that the unfamiliar target object position varied in the three testing phases. First, the investigator instructed the participant to get the familiar object, and for this trial the investigator provided training to participants who did not correctly select the familiar object. Training consisted of repetition and scaffolded cues: holding one of the participant's hands to promote a single-object selection, moving the familiar object closer to the participant, manipulating the familiar object while naming it, and hand-over-hand assistance to select the familiar object. Once the familiar object was removed, testing proceeded from a field of two unfamiliar objects (the target and the foil). Receptive assessment for the unfamiliar target object consisted of the name only, with no point or gesture. Noncontingent positive reinforcement was provided following each selection to promote continued participation. Lastly, an opportunity for naming was provided: the investigator held up the unfamiliar target object and asked, "What's this?" Upon completion of the testing phase, the next fast-mapping phase for the second condition in the counterbalanced sequence was conducted, followed by the associated testing phase. The third fast-mapping and testing phases completed the experimental procedure, which lasted approximately 8 minutes.

Receptive trials were scored as correct if the participant accurately selected the target object given the unfamiliar object name without prompting. Receptive trials were scored via the video recording, and reliability checks for 50% of the data revealed 100% inter-judge agreement and 100% intra-judge agreement. Expressive trials were scored as correct if the participant said the correct nonce word for the unfamiliar target objects. This was completed live, and the participant's responses were transcribed if needed. Reliability using the video recordings was 97% (35 of 36 reviewed decisions) for inter-judge agreement and 97% for intra-judge agreement.
Statistical analyses

The dependent variables were the fast-mapping responses: receptive fast mapping and expressive naming. These were dichotomous, categorical variables with correct responses coded as 1 and incorrect responses coded as 0. Participants contributed one receptive fast-mapping score and one expressive naming score for each of the three conditions: Point, Shape, and Control. Mean proportions and standard deviations were computed for the number of participants responding correctly in each gesture condition for each measure, receptive fast mapping and expressive naming.

To address the research questions, the factors of gesture input, age, and gender were examined. The generalized estimating equation (GEE; Liang & Zeger, 1986) approach in IBM SPSS (IBM Corp., 2021, Version 28) was used. GEE is subsumed under generalized linear mixed models and offers a framework for examining repeated categorical outcome variables that are nested within participants who are selected at random (Heck, Thomas & Tabata, 2012). A two-level data hierarchy was constructed in which receptive and expressive responses for three repeated gesture conditions (Level 1) were nested within each of the 48 toddlers (Level 2). Toddler participants were from a random sample of the population and, therefore, are considered random effects in the model. The intercept of the random effect was allowed to vary freely. Several models were constructed with main effect and interaction terms, including a full factorial framework. The models were compared using the quasi-likelihood under independence model criterion (QIC) to determine which model had the best quality of fit. QIC statistics with smaller values indicate a superior model fit relative to higher values; therefore, we used the statistical models that demonstrated the lowest QIC for each analysis. Six models were specified for the two dependent variables, receptive fast mapping and expressive naming. Table 4 displays the models, the fixed effects, and the QIC statistics for each.

Receptive fast mapping

In a repeated fashion, the 48 participants contributed one receptive fast-mapping score for each of the three conditions: Point, Shape, and Control. This resulted in a total of 144 receptive fast-mapping responses, 1 or 0 for correct and incorrect, respectively. In addition to the planned investigation of gesture conditions and participant age groups, data are depicted with age groups divided by gender because of a significant gender difference between the participant groups on the eligibility language assessment, the MBCDI. Figure 3 displays the mean proportions of correct fast-mapping responses over the gesture conditions, age groups, and genders. Group means and standard deviations indicated that Older Toddler girls were accurate for receptive fast mapping, and Older Toddler boys' receptive fast mapping was greatest for objects exposed in the Shape condition.
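As a concrete illustration of the model specification and QIC-based selection described under Statistical analyses, here is a hedged sketch using Python's statsmodels (the authors used IBM SPSS; the data file and column names below are placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long-format data: one row per toddler per gesture condition.
# 'correct' is the 1/0 outcome; 'child_id' defines the clusters.
df = pd.read_csv("fastmapping_long.csv")  # hypothetical file

candidates = {
    "gesture_only": "correct ~ C(condition)",
    "gesture_age": "correct ~ C(condition) + C(age_group)",
    "full_factorial": "correct ~ C(condition) * C(age_group) * C(gender)",
}

fits = {}
for name, formula in candidates.items():
    model = smf.gee(formula, groups="child_id", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    # Cells with all-zero responses (here, the younger boys' naming data)
    # can make a model fail to fit, mirroring the 'Failed' QIC entries
    # noted under Table 4.
    fits[name] = model.fit()

# Retain the model with the lowest QIC (smaller = better fit).
best = min(fits, key=lambda name: fits[name].qic()[0])
print(best)
print(fits[best].summary())
print(np.exp(fits[best].params))  # logit coefficients -> odds ratios
```

Note that exponentiating a logit coefficient yields an odds ratio (e.g., exp(1.047) ≈ 2.85), whereas the "1.5 times more likely" figure reported below corresponds to the ratio of the raw group proportions (.74/.49).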
To test for statistically significant differences, GEE was conducted. Model 2 (see Table 4) demonstrated the lowest QIC, indicating the best quality of fit. Using GEE, the effects of gesture condition, age, and gender on receptive fast mapping were evaluated. Age was a significant main effect, Wald χ²(1) = 5.551, p = .018, β = 1.047 (SE = .4446). The mean proportions of correct receptive fast-mapping responses in Older and Younger Toddlers were .74 (SD = .05) and .49 (SD = .10), respectively. When other factors were held constant, Older Toddlers were 1.5 times more likely to receptively fast map correctly than Younger Toddlers. The main effects of gesture input condition and gender were not statistically significant. The model was also repeated with the predictor factor of Presentation List to test for item effects. Presentation List was nonsignificant, and the significant main effect of age was unchanged.

During each testing phase, toddlers were first asked to select a familiar object. This task consisted of a forced-choice field of two. Means and standard deviations for the groups' selection of familiar objects are shown in Table 5 and compared with their means and standard deviations for correct selection of the unfamiliar target objects. Group differences were statistically significant based on independent t tests (p values < .01). Older Toddlers had more correct selections of familiar and unfamiliar objects than the Younger Toddlers did.

Expressive naming

Of the 48 total participants who were asked to expressively name newly learned unfamiliar object labels in three gesture input conditions, 10 responses were not collected, due either to toddler fatigue or to investigator error. Older Toddler girls contributed 41 responses, with 1 missing response in the Control condition. Older Toddler boys contributed 29 responses, with 1 missing response in the Point condition. Younger Toddler girls contributed 27 responses, with 1 missing response in the Shape condition and 2 missing responses in the Control condition. Younger Toddler boys contributed 37 responses, with 2 missing responses in the Point condition, 1 in the Shape condition, and 2 in the Control condition. This resulted in a total of 134 expressive naming responses, 1 or 0 for correct and incorrect, available for analysis. Figure 4 displays the proportions of correct expressive naming responses over the gesture conditions, age groups, and genders. Similar to the receptive fast-mapping responses, Older Toddler girls outperformed the other groups, and the Older Toddler boys had a higher proportion of correctly named objects in the Control condition.

To test for the statistical significance of hypothesized factors on toddlers' expressive naming, Model 5 demonstrated the best quality of fit based on the lowest QIC (see Table 4). In the GEE approach, the 10 missing data points are assumed to be random. Despite this limitation, which can impact model efficiency and parameter estimates (Heck et al., 2012), the selected GEE model adequately accommodated the available expressive naming data, and imputations were unnecessary. To address our research questions, the main effects of gesture condition, age, and gender as well as the interaction of gesture condition and gender were evaluated. To rule out any item effects, the model was repeated with Presentation List tested. Results indicated two statistically significant main effects: age, Wald χ²(1) = 9.369, p = .002, β = 3.145 (SE = 1.0276), and gender, Wald χ²(1) = 7.761, p = .005, β = 2.683 (SE = 1.1605). A main effect of gesture condition was not statistically significant; however, an interaction between gesture condition and gender was statistically significant, Wald χ²(2) = 9.822, p = .007, β = 2.388 (SE = 1.1857). Compared to boys, girls were nine times more likely to expressively name newly learned words in the Point and Shape conditions. While girls and boys performed similarly in the Control condition, in which speech with no accompanying gestures was used to teach the new words, boys expressively named more words in the Control condition than in the two gesture conditions. In contrast, girls expressively named fewer words in the Control condition than in the two gesture conditions. Because of the homogeneity in responding by the younger toddler boys (i.e., no younger toddler boys named any items correctly), a separate analysis of expressive naming by the Older Toddlers only was conducted. There were two instances of missing data, resulting in 70 expressive naming responses from 24
participants available for analysis. A GEE approach (QIC = 93.556) was used to test the main effects of gesture condition and gender as well as an interaction between gesture condition and gender. Results revealed a statistically significant main effect of gender: Wald χ²(1) = 6.286, p = .012, β = 2.485 (SE = 1.1844). A main effect of gesture condition was not statistically significant; however, the interaction between gesture condition and gender was statistically significant, Wald χ²(2) = 8.142, p = .017, β = 2.234 (SE = 1.2320). These findings mirrored those of the previous analysis: Older Toddler girls were six times more likely than the Older Toddler boys to expressively name newly learned words in the Point and Shape gesture conditions. The boys' and girls' performances were similar in the Control condition, but Older Toddler boys expressively named more words in the Control condition than in the two gesture conditions.

Demographic factors

Additional demographic factors were tested using the GEE approach. The analyses for the receptive fast-mapping responses and expressive naming (Models 2 and 5, respectively) were repeated with exposure to gesture communication (i.e., baby signs) and language skill (measured by the MBCDI percentile score for words produced) added as covariates. Neither factor was statistically significant in the models, and the significant main effects and interactions for the hypothesized predictors remained constant.

Discussion

In this experimental study of fast mapping by young toddlers, we systematically manipulated the gestures provided with the linguistic input. A group of typically developing one-year-olds was compared with a group of young typically developing two-year-olds for mapping object names to novel referents given brief interactions with an unfamiliar adult. Our research questions addressed the effect of gesture combined with speech on the word learning skills of the toddlers.

The first dependent measure was receptive fast mapping, and all participants contributed three data points, one unfamiliar object selection for each nonce word and paired gesture condition. There was no statistically significant effect of gesture condition on the two toddler groups' recognition of the target objects. We had hypothesized that participants would demonstrate more correct responses in the gesture conditions than in the word-only, no-gesture condition, and we proposed that the shape gesture would yield the greatest proportion of correct fast mapping. This hypothesized pattern for the shape condition emerged for the two-year-old boys, but the effect was not significant. Only a main effect of age was found to be a significant predictor of the receptive fast-mapping selections.

Our second dependent measure was expressive naming of the target unfamiliar objects. Again, we hypothesized that gesture input would support successful naming. In particular, we expected that the shape gesture would support this mapping through at least partial encoding of some semantic information, the shape of the novel object. Assuming mapping of the salient shape feature occurred, the likelihood of successful encoding and retrieval of the newly learned name might increase. Again, there was no significant main effect of gesture input. Instead, age and gender were both significant predictors, and there was a gesture-gender interaction primarily characterized by more correct naming by two-year-old boys in the word-only, no-gesture condition.
Toddler age and fast mapping

Age was a significant predictor of both receptive fast mapping and expressive naming. The two-year-olds outperformed the one-year-olds on both measures. The older toddlers were 1.5 times more likely to correctly select the target object receptively, and 12 of the 24 older toddlers receptively identified all three novel referents correctly regardless of the gesture condition. Like other researchers who report that typically developing toddlers fast map successfully by age two (Hiebeck & Markman, 1987; Spiegel & Halberda, 2011), our results did not uncover fast mapping by children under 20 months.

The nature of our fast-mapping task and the testing phases may be factors impacting our findings. The linguistic context was ostensive (e.g., "It's a X"; "See the X"), with words and phrases that serve a deictic function, and each nonce word and unfamiliar object pairing included four explicit labels, not just one or two exposures. The task was also multimodal in several ways. In addition to hearing the name and seeing the object, the investigator moved the target object to several locations (table, tray, and bucket) and allowed the child to briefly handle each object. Any benefits from this ostensive teaching and engaging interaction, however, may have been offset by the total number of targets presented in a sequence of brief exposures. That is, the fast-mapping task exposed the young toddlers to nine white objects (three named familiar objects, three target objects paired with nonce words and gesture conditions, and three foil objects labeled "it"). For each of the objects, the investigator engaged in a scripted interaction during which she said the labels for the nine objects four times, resulting in 36 labels total in the approximately eight-minute interaction. Given the very young age of the toddlers, this could be considered a challenging task compared to fast-mapping studies with fewer targets.

The first of our two measures, the receptive fast-mapping measure, was a forced-choice recognition task from a field of two that took place immediately after the brief interactive exposures for three objects. Naturalistic behavioral methods that relied on toddlers' object selections may have limited our findings. Specifically, the group of younger toddlers, aged 1;4 to 1;8, showed inconsistent skills for this assessment even when asked to select familiar objects with known names. Only 21% of the one-year-olds accurately selected all three familiar objects. As a group, the younger toddlers averaged fewer than two correct selections out of three opportunities, significantly fewer correct selections of familiar objects than the toddlers in the older group. Namy (2008) also found that toddlers at 14, 18, and 22 months of age demonstrated inconsistent performance compared to toddlers at 26 months of age in a gesture recognition study using direct, manual object selection. Studies that implement looking paradigms may be more effective than behavioral studies when it comes to assessing fast mapping in toddlers younger than 2;0 (Gliga & Csibra, 2009; Puccini & Liszkowski, 2012). Bion et al.
(2013) suggested that the forced-choice paradigm is truly a disambiguation task that may not be indicative of word learning. They reported looking-paradigm data differentiating the development of skills for disambiguation separate from word-object learning and retention for children from 1;6 to 2;6. Bion and colleagues concluded that recognition of familiar words, as well as disambiguating and learning new words, are skills that develop gradually.

Our second measure, expressive naming, was expected to be more challenging than the receptive fast-mapping task, because toddlers had to encode and retrieve the phonological elements of the nonce word to correctly name the object. This was still an immediate assessment, not a test of retention with any time delay. Across gesture conditions, older toddlers named more of the newly learned nonce words than younger toddlers. Only one toddler in the younger group (n = 24) named the nonce label when presented with the associated object, and she named two out of three of the objects. None of the one-year-old boys correctly named any target objects.

Interestingly, several older toddlers named the unfamiliar object, the triangular holder, a "triangle," during the fast-mapping phase and again during the expressive naming test. This was despite their correct selection of the object named "tull" during the receptive fast-mapping test. If children already had a name for this object, "triangle," then a mutual exclusivity assumption could interfere with mapping a new name to the object. Mutual exclusivity, the idea that an object has only one label, is one process hypothesized to support fast mapping new names to novel objects (Beverly & Estis, 2003).

Gender and word learning

Gender was found to be a significant predictor of toddlers' expressive naming of the newly learned object labels. Sex differences in early language development, with girls outpacing boys, are not an uncommon research finding, despite boys meeting age-level language expectations. Female toddlers achieve language milestones such as vocabulary and syntax use at earlier ages than males (Fenson, Dale, Reznick, Bates, Thal, Pethick, Tomasello, Mervis & Stiles, 1994), and Özçalişkan and Goldin-Meadow (2010) reported that girls' gesture productions, like their spoken language skills, emerged ahead of boys' gesture use. This was despite no significant differences in the gesture input by the mothers of the children (Özçalişkan & Goldin-Meadow, 2010). Our findings suggested that female toddlers have a referent mapping advantage that promotes word learning. Female toddlers over the age of two are geared to learn new words. They are flexible fast mappers whose performance may have been enhanced by, but was not dependent upon, gestural cues combined with linguistic input. Girls' and boys' vocabulary sizes become more similar in the preschool years; and yet, the fast-mapping advantage of female toddlers may be a factor that undergirds female language skills through adolescence and into adulthood (Özçalişkan & Goldin-Meadow, 2010).
In addition to a main effect of gender, with girls outperforming boys, there was a significant interaction between gender and gesture condition for the older toddlers. Two-year-old girls were six times more likely than the boys to correctly name objects that were paired with point and shape gestures in the exposure task; however, naming by the boys and girls in the word-only, no-gesture condition was similar. The boys were more accurate namers in the word-only, no-gesture condition. In fact, the gesture input appeared to interfere with the boys' naming. Puccini and Liszkowski (2012) concluded that multimodal input in the form of an arbitrary gesture paired with a spoken word is unnecessary for word learning, and potentially disruptive, for their participants, who were 1;3. They questioned whether children under the age of 2;2 to 2;6 can benefit from multimodal input for word learning, particularly when the gesture is not a deictic point that can support joint attention processes. Puccini and Liszkowski hypothesized that mapping gesture-speech input to a novel referent is more complex than mapping speech to a referent. The multimodal input results in a word plus gesture plus object, or three-way, mapping compared with the word plus object, a simpler two-term mapping. Furthermore, encoding a representational gesture with an object requires coordinating competing visual information, whereas the spoken word in the auditory modality can be mapped synchronously to the visual referent in the environment.

The nature of iconic gesture

Iconicity is only one semantic feature that could support referent representation during fast mapping and word learning, but iconicity varies for gestures and signs such that some require understanding of the word and its referent (e.g., a sign for "cat" that refers to a cat's whiskers) to understand the iconic relationship. The shape gesture, however, is an iconic cue consistent with research suggesting that young toddlers attend to the shape of objects in the process of rapidly learning new words (Landau et al., 1988; Namy, 2008; Smith, 2000), including some studies supporting the facilitative effect of a shape gesture for word learning (Capone & McGregor, 2005; Capone Singleton, 2012). Our results did not show a clear benefit for these toddlers from the co-occurring shape gesture. In addition to the younger ages of our participants, there were several study differences. Perhaps the most important one was the extended word learning or slow mapping nature of the Capone Singleton investigations. If, rather than being all-or-nothing, recognition of iconicity is developmental, requiring repeated associations, then a fast-mapping task might not capture its impact on semantic encoding.
In language, iconicity is contrasted with arbitrariness, a critical aspect of language symbolism (Nielsen & Dingemanse, 2021). And yet, words that have more iconic sounds (e.g., roar, choo-choo) are often learned earlier by young children than words with sounds that have no relationship to the referent. Iconicity is assumed to provide some perceptual-motor grounding or imagistic information (Nielsen & Dingemanse, 2021). In this sense, we hypothesized that iconicity depicted by hand shapes that mimicked object shapes would be facilitative for very young children. Hodges, Özçalişkan, and Williamson (2018), however, found that toddlers (mean age of 2;8) did not match one subtype of iconic gestures, attribute gestures, to object photos, but three-year-olds did recognize iconic attribute gestures significantly more often than predicted by chance. Hearing three-year-olds who participated in a study by Magid and Pyers (2017) of iconic shape gestures did not reliably map shape gestures to referents. Children who were Deaf learners of sign language recognized shape gestures matched to referents at age three. Novack, Filippi, Goldin-Meadow, and Woodward (2018) conducted a series of studies investigating the interpretation of iconic gestures by two-year-olds. They found that the toddlers correctly interpreted different handshape gestures in a reach gesture, but not when the same handshapes were gestured without the extended arm reaching toward the object referents. The investigators concluded that children have difficulty interpreting shape gestures as representational of objects.

Gesture and language learning

Multimodal motherese has been described in natural and experimental contexts for parents from several cultures (Cheung, Hartley & Monaghan, 2021; Gogate, Bahrick & Watson, 2000; Gogate, Maganti & Bahrick, 2015). This multimodal input, however, is characterized by gestures paired with showing, shaking, and moving unfamiliar objects, including sometimes touching the child with the object, and by the use of deictic gestures for scaffolding joint attention in the environment. This synchronized movement has been proposed to reduce cognitive load for preverbal children, particularly when there is referent ambiguity; however, these studies have not assessed the benefits of the multimodal motherese. After all, typically developing children effectively learn language when their parents do not specifically use gesture to enhance the spoken input.
What, then, is the role of gesture? A question often raised by parents and professionals is whether gesture programs, such as Baby Signs® and others, should be used to enhance word learning for typically developing toddlers. For spoken language learners, we know that gesture cannot be wholly sufficient for mapping words to referents. Children need linguistic input to hear and then learn vocabulary. We had anticipated that gesture in the form of a deictic point would direct attention to the object for mapping, and that the shape gesture would encode some representational information supporting the mapping of words to target objects. Our results, however, did not support a facilitative effect of gesture input for very young toddlers. Gesture input appeared beneficial to the two-year-old girls; however, these female toddlers were generally better at word learning than their male counterparts. So, although they were significantly more likely than the boys to successfully name the novel objects in the gesture conditions, their performance in the control condition was not significantly different. Instead, the two-year-old boys named more items taught in the word-only condition than in the gesture conditions. These findings are indicative of disrupted word learning for the multimodal input. As suggested in a body of work by Goldin-Meadow and colleagues (e.g., Breckinridge Church & Goldin-Meadow, 1986; Goldin-Meadow & Alibali, 2013; Goldin-Meadow, Kim & Singer, 1999; Goldin-Meadow, Nusbaum, Kelly & Wagner, 2001), gesture appears to lessen the cognitive burden when children are in the process of acquiring language. Our results, however, indicate that the role of gesture is mitigated by children's age-related language development and gender. Gesture is not necessary or sufficient, and very young language learners may not be able to capitalize on gesture cues in fast-mapping paradigms.

Limitations and implications

Primary limitations were the sample size, based on the number of total participants from a relatively homogenous background, and the few binomial data points evoked by our study design. Additionally, an experimental word learning study such as this is limited by the nature of the experimental tasks and assessments, which differ from naturally occurring exposures and interactions. During our study, participants were seated in a child chair with a tray that restricted their movement, and the investigator introduced the objects and associated language in a highly scripted manner. Any effect of the object position to the side of the face-to-face interaction is unknown. Similarly, the shape gesture was produced next to the object rather than directly between the child and adult or closer to the adult's body, as typically produced in sign languages. Another limitation was the lack of a familiarization phase. That is, participants' first exposures to the novel objects were during the experimental task. This lack of prior hands-on exposure to the objects could have interfered with participants' attention to the added gesture.
The question of clinical importance that remains is whether word learning by young children at risk for or exhibiting language disorders could be aided by spoken language combined with gesture. Ellis Weismer and Hesketh (1993) reported that manual gestures representing spatial concepts supported nonce word learning by kindergartners with specific language impairment. Vogt and Kauschke (2017) found that preschool children with specific language impairment performed similarly to typically developing children who were matched for age when novel words presented in a storybook were accompanied by gesture. In a single-subject intervention with four toddlers who were late talkers, Capone Singleton and Anderson (2020) showed that shape gestures paired with taught words increased learning for these words and supported generalization to untaught exemplars compared to words taught with deictic gestures of touching, showing, or eye gaze. These studies provide an emerging evidence base needing larger, well-designed investigations of gesture treatments for children with language disorders.

Conclusion

Results suggested that the role of gesture input was circumscribed. There was no statistically significant effect of gesture input when toddlers' receptive fast-mapping responses were measured. Only an effect of age emerged, such that the older toddlers outperformed the younger toddlers for identification of novel objects taught with nonce labels. When expressive naming was assessed, only one girl and no boys in the younger toddler group named any target objects. Gesture input significantly interacted with gender. Gesture was facilitative for the two-year-old girls but conversely appeared to interfere with naming for the two-year-old boys. There was not a statistically significant unique effect of the iconic shape gesture compared to a deictic point for the older toddler girls, and the boys demonstrated more naming for labels taught in the speech-only condition than in the two gesture-speech combined conditions. There remain unlimited opportunities for further investigation into the complex nature of language development in conjunction with child development factors, learning contexts, and the role of gesture input.

Figure 1. One example sequence is shown depicting the experimental procedure, including the three fast-mapping phases (one for each condition), the three testing phases (one for each condition), and the corresponding scripted input.

Figure 2. Experimental task arrangement depicting the unfamiliar object and shape gesture. Note the gesture presented near the object to support visual attention to the iconic relationship between the gesture and the object. The investigator's eye gaze remained predominantly on the child throughout the exposure and testing interactions.

Figure 3. The mean proportions of correct fast-mapping responses over the gesture conditions, age groups, and genders.

Figure 4. The mean proportions of correct expressive naming responses over the gesture conditions, age groups, and genders.

Table 1. Participant Characteristics for Total Sample and by Age Group

Table 2. Familiar objects, unfamiliar target objects, and unfamiliar foils

Table 3. Unfamiliar target objects paired with nonce word labels, shape gestures, and point gestures
Table 4. QIC Statistics for Six Models and Fixed Effects for Receptive Fast Mapping and Expressive Naming. Note: QIC statistics were labeled as Failed when statistical analysis procedures did not execute; this was due to one participant group, the younger male toddlers, all having 0 values (i.e., no correct responses) for expressive naming in all gesture conditions.

Table 5. Familiar and Unfamiliar Object Selections by Older and Younger Toddler Groups. Note: Means represent the group averages for correct object selection in three opportunities.
Active inference under visuo-proprioceptive conflict: Simulation and empirical results

It has been suggested that the brain controls hand movements via internal models that rely on visual and proprioceptive cues about the state of the hand. In active inference formulations of such models, the relative influence of each modality on action and perception is determined by how precise (reliable) it is expected to be. The 'top-down' affordance of expected precision to a particular sensory modality is associated with attention. Here, we asked whether increasing attention to (i.e., the precision of) vision or proprioception would enhance performance in a hand-target phase matching task, in which visual and proprioceptive cues about hand posture were incongruent. We show that in a simple simulated agent, based on predictive coding formulations of active inference, increasing the expected precision of vision or proprioception improved task performance (target matching with the seen or felt hand, respectively) under visuo-proprioceptive conflict. Moreover, we show that this formulation captured the behaviour and self-reported attentional allocation of human participants performing the same task in a virtual reality environment. Together, our results show that selective attention can balance the impact of (conflicting) visual and proprioceptive cues on action, rendering attention a key mechanism for a flexible body representation for action.

The above findings and their interpretation can in principle be accommodated within a hierarchical predictive coding formulation of active inference as a form of Bayes-optimal motor control, in which proprioceptive as well as visual prediction errors can update higher-level beliefs about the state of the body and thus influence action 32-34. Hierarchical predictive coding rests on a probabilistic mapping from unobservable causes (hidden states) to observable consequences (sensory states), as described by a hierarchical generative model, where each level of the model encodes conditional expectations ('beliefs') about states of the world that best explain states of affairs encoded at lower levels (i.e., sensory input). The causes of sensations are inferred via model inversion, where the model's beliefs are updated to accommodate or 'explain away' ascending prediction error (a.k.a. Bayesian filtering or predictive coding 35-37). Active inference extends hierarchical predictive coding from the sensory to the motor domain, in that the agent is now able to fulfil its model predictions via action 38. In brief, movement occurs because high-level multi- or amodal beliefs about state transitions predict the proprioceptive and exteroceptive (visual) states that would ensue if a particular movement (e.g., a grasp) was performed. Prediction error is then suppressed throughout the motor hierarchy 3,39, ultimately by spinal reflex arcs that enact the predicted movement. This also implicitly minimises exteroceptive prediction error; e.g., the predicted visual action consequences 34,40-42. Crucially, all ascending prediction errors are precision-weighted based on model predictions (where precision corresponds to the inverse variance), so that a prediction error that is expected to be more precise has a stronger impact on belief updating. The 'top-down' affordance of precision has been associated with attention 43-45.
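The precision-weighting scheme just described can be summarized with a generic single-level free-energy gradient descent. The equations below are a textbook-style sketch following standard predictive coding formulations, not equations reproduced from this paper:

```latex
% o: sensory input; \mu: belief about the hidden cause; \eta: prior
% expectation; a: action; \Pi_o, \Pi_\mu: expected precisions.
F = \tfrac{1}{2}\,\varepsilon_o^{\top}\Pi_o\,\varepsilon_o
  + \tfrac{1}{2}\,\varepsilon_\mu^{\top}\Pi_\mu\,\varepsilon_\mu,
\qquad \varepsilon_o = o - g(\mu), \quad \varepsilon_\mu = \mu - \eta
\\[4pt]
\dot{\mu} = -\partial_\mu F
          = \big(\partial_\mu g\big)^{\!\top}\Pi_o\,\varepsilon_o
            - \Pi_\mu\,\varepsilon_\mu,
\qquad
\dot{a} = -\partial_a F
        = -\big(\partial_a o\big)^{\!\top}\Pi_o\,\varepsilon_o
```

On this reading, attention corresponds to increasing the sensory precision \(\Pi_o\) of one modality relative to another, which scales how strongly that modality's prediction error drives both belief updating (\(\dot{\mu}\)) and action (\(\dot{a}\)).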
This suggests a fundamental implication of attention for behaviour, as action should be more strongly informed by prediction errors 'selected' by attention. In other words, the impact of visual or proprioceptive prediction errors on multisensory beliefs driving action should not only depend on factors like sensory noise, but may also be regulated via the 'top-down' affordance of precision; i.e., by directing the focus of selective attention towards one or the other modality. Here, we used a predictive coding scheme 6,38 to test this assumption. We simulated behaviour (i.e., prototypical grasping movements) under active inference, in a simple hand-target phase matching task (Fig. 1) during which conflicting visual or proprioceptive cues had to be prioritized. Crucially, we included a condition in which proprioception had to be adjusted to maintain visual task performance and a converse condition, in which proprioceptive task performance had to be maintained in the face of conflicting visual information. This enabled us to address the effects reported in the visuo-motor adaptation studies reviewed above and in studies showing automatic biasing of one's own movement execution by incongruent action observation 46,47. In our simulations, we asked whether changing the relative precision afforded to vision versus proprioception, corresponding to attention, would improve task performance (i.e., target matching with the respective instructed modality, vision or proprioception) in each case. We implemented this 'attentional' manipulation by adjusting the inferred precision of each modality, thus changing the degree to which the respective prediction errors drove model updating and action. We then compared the results of our simulation with the actual behaviour and subjective ratings of attentional focus of healthy participants performing the same task in a virtual reality environment. We anticipated that participants, in order to comply with the task instructions, would adopt an 'attentional set' 48-50 prioritizing the respective instructed target-tracking modality over the task-irrelevant one 51-53, by means of internal precision adjustment, as evaluated in our simulations.

Results

Simulation results. We based our simulations on predictive coding formulations of active inference 6,13,38,45. In brief (please see Methods for details), we simulated a simple agent that entertained a generative model of its environment (i.e., the task environment and its hand), while receiving visual and proprioceptive cues about hand posture (and the target). Crucially, the agent could act on the environment (i.e., move its hand) and thus was engaged in active inference. The simulated agent had to match the phasic size change of a central fixation dot (the target) with the grasping movements of the unseen real hand (proprioceptive hand information) or the seen virtual hand (visual hand information). Under visuo-proprioceptive conflict (i.e., a phase shift between virtual and real hand movements introduced via temporal delay), only one of the hands could be matched to the target's oscillatory phase (see Fig. 1 for a detailed task description).
The aim of our simulations was to test whether, in the above manual phase matching task under perceived visuo-proprioceptive conflicts, increasing the expected precision of sensory prediction errors from the instructed modality (vision or proprioception) would improve performance, whereas increasing the precision of prediction errors from the 'distractor' modality would subvert performance. Such a result would demonstrate that, in an active inference scheme, behaviour under visuo-proprioceptive conflict can be augmented via top-down precision control; i.e., selective attention [43][44][45]. In our predictive coding-based simulations, we were able to test this hypothesis by changing the precision afforded to prediction error signals (related to visual and proprioceptive cues about hand posture) in the agent's generative model. Figures 2 and 3 show the results of these simulations, in which the 'active inference' agent performed the target matching task under the two kinds of instruction (virtual hand or real hand task; i.e., the agent had a strong prior belief that the visual or proprioceptive hand posture would track the target's oscillatory size change) under congruent or incongruent visuo-proprioceptive mappings (i.e., where incongruence was realized by temporally delaying the virtual hand's movements with respect to the real hand). In this setup, the virtual hand corresponds to hidden states generating visual input, while the real hand generates proprioceptive input. Under congruent mapping (i.e., in the absence of visuo-proprioceptive conflict) the simulated agent showed near perfect tracking performance (Fig. 2). We next simulated an agent performing the task under incongruent mapping, while equipped with the prior belief that its seen and felt hand postures were in fact unrelated, i.e., never matched. Not surprisingly, the agent easily followed the task instructions and again showed near perfect tracking with vision or proprioception, under incongruence (Fig. 2). However, as noted above, it is reasonable to assume that human participants would have the strong prior belief, based upon life-long learning and association, that their manual actions generated matching seen and felt postures (i.e., a prior belief that modality specific sensory consequences have a common cause). Our study design assumed that this association would be very hard to update, and that consequently performance could only be altered via adjusting the expected precision of vision vs proprioception (see Methods). Therefore, we next simulated the behaviour (during the incongruent tasks) of an agent embodying a prior belief that visual and proprioceptive cues about hand state were in fact congruent. As shown in Fig. 3a, this introduced notable inconsistencies between the agent's model predictions and the true states of vision and proprioception, resulting in elevated prediction error signals. The agent was still able to follow the task instructions, i.e., to keep the (instructed) virtual or real hand more closely matched to the target's oscillatory phase, but showed a drop in performance compared with the 'idealized' agent (cf. Fig. 2). We then simulated the effect of our experimental manipulation, i.e., of increasing the precision of sensory prediction errors from the respective task-relevant (constituting increased attention) or task-irrelevant (constituting increased distraction) modality on task performance.
We expected this manipulation to affect behaviour; namely, by how strongly the respective prediction errors would impact model belief updating and subsequent performance (i.e., action). The results of these simulations (Fig. 3a) showed that increasing the precision of vision or proprioception, the respective instructed tracking modality, resulted in reduced visual or proprioceptive prediction errors. This can be explained by the fact that these 'attended' prediction errors were now more strongly accommodated by model belief updating (about hand posture). Conversely, one can see a complementary increase of prediction errors from the 'unattended' modality. The key result, however, was that the above 'attentional' alterations substantially influenced hand-target phase matching performance (Fig. 3b).

Figure 1. Task design and behavioural requirements. We used the same task design in the simulated and behavioural experiments, focusing on the effects of attentional modulation on hand-target phase matching via prototypical (i.e., well-trained prior to the experiment, see Methods) oscillatory grasping movements at 0.5 Hz. Participants (or the simulated agent) controlled a virtual hand model (seen on a computer screen) via a data glove worn on their unseen right hand. The virtual hand (VH) therefore represented seen hand posture (i.e., vision), which could be uncoupled from the real hand posture (RH; i.e., proprioception) by introducing a temporal delay (see below). The task required matching the phase of one's right-hand grasping movements to the oscillatory phase of the fixation dot ('target'), which was shrinking-and-growing sinusoidally at 0.5 Hz. In other words, participants had to rhythmically close the hand when the dot shrank and to open it when the dot expanded. We used a balanced 2 × 2 factorial design: the task was completed (or simulated) under congruent or incongruent hand movements; the latter were implemented by adding a lag of 500 ms to the virtual hand movements (Factor 'congruence'). Furthermore, the participants (or the simulated agent) performed the task with one of two goals in mind: to match the movements of the virtual hand (VH) or those of the real hand (RH) to the phase of the dot (Factor 'instructed modality'; written instructions were presented before each trial, and additionally represented by the fixation dot's colour). Note that whereas in the congruent conditions (VH cong, RH cong) both hand positions were identical, and both hands' grasping movements could therefore simultaneously be matched to the target's oscillatory phase (i.e., the fixation dot's size change), only one of the hands' (virtual or real) movements could be phase-matched to the target in the incongruent conditions, necessarily implying a phase mismatch of the other hand's movements. In the VH incong condition, participants had to adjust their movements to counteract the visual lag; i.e., they had to phase-match the virtual hand's movements (i.e., vision) to the target by shifting their real hand's movements (i.e., proprioception) out of phase with the target. Conversely, in the RH incong condition, participants had to match their real hand's movements (i.e., proprioception) to the target's oscillation, and therefore had to ignore the fact that the virtual hand (i.e., vision) was out of phase. The curves show the performance of an ideal participant (or simulated agent).
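As a concrete illustration of these timing relations, the following sketch generates the three signals described in Figure 1; the 0.5 Hz target frequency, the 500 ms lag, and the 32 s trial length come from the task description, whereas the sampling rate is an assumption for illustration.

```python
import numpy as np

# Sketch of the stimulus timing in the incongruent conditions.
fs, f, lag = 120.0, 0.5, 0.5                      # sampling (Hz), target (Hz), lag (s)
t = np.arange(0.0, 32.0, 1.0 / fs)                # one 32 s movement trial
target = np.sin(2 * np.pi * f * t)                # fixation dot size change
real_hand = np.sin(2 * np.pi * f * t)             # ideal RH tracking (in phase)
virtual_hand = np.sin(2 * np.pi * f * (t - lag))  # seen hand, delayed by 500 ms

# At 0.5 Hz (a 2 s period), a 500 ms lag is a quarter-period (90 degree) shift,
# so only one of the two hand signals can be in phase with the target at a time.
```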
Thus, increasing the precision of the instructed task-relevant sensory modality's prediction errors led to improved target tracking (i.e., a reduced phase shift of the instructed modality's grasping movements from the target's phase). In other words, if the agent attended to the instructed visual (or proprioceptive) cues more strongly, its movements were driven more strongly by vision (or proprioception), which helped it to track the target's oscillatory phase with the respective modality's grasping movements. Conversely, increasing the precision of the 'irrelevant' (not instructed) modality in each case impaired tracking performance. The simulations also showed that the amount of action itself was comparable across conditions (blue plots in Figs. 2 and 3; i.e., movement of the hand around the mean stationary value of 0.05), which means that the kinematics of the hand movement per se were not biased by attention. Action was particularly evident in the initiation phase of the movement and after reversal of movement direction (open-to-close). At the point of reversal of movement direction, conversely, there was a moment of stagnation; i.e., changes in hand state were temporarily suspended (with action nearly returning to zero). In our simulated agent, this briefly increased uncertainty about hand state (i.e., which direction the hand was moving), resulting in a slight lag before the agent picked up its movement again, which one can see reflected by a small 'bump' in the true hand states (Figs. 2 and 3). These effects were somewhat more pronounced during movement under visuo-proprioceptive incongruence and prior belief in congruence, which indicates that the fluency of action depended on sensory uncertainty. In sum, these results show that the attentional effects of the sort we hoped to see can be recovered using a simple active inference scheme, in that precision control determined the influence of separate sensory modalities (each of which was generated by the same cause, i.e., the same hand) on behaviour by biasing action towards cues from that modality.

Figure 2. Simulated behaviour of an agent performing the hand-target phase matching task under ideally adjusted model beliefs. Each pair of plots shows the simulation results for an agent with a priori 'ideally' adjusted (but unrealistic, see text for details) model beliefs about visuo-proprioceptive congruence; i.e., in the congruent tasks, the agent believed that its real hand generated matching seen and felt postures, whereas it believed that the same hand generated mismatching postures in the incongruent tasks. Note that these simulations are unrealistic in that the agent would not perceive visuo-proprioceptive conflicts at all. Each pair of plots shows the simulation results for one grasping movement in the VH and RH tasks under congruence or incongruence; the left plot shows the predicted sensory input (solid coloured lines; yellow = target, red = vision, blue = proprioception) and the true, real-world values (broken black lines) for the target and the visual and proprioceptive hand posture, alongside the respective sensory prediction errors (dotted coloured lines; blue = target, green = vision, purple = proprioception); the right plot (blue line) shows the agent's action (i.e., the rate of change in hand posture, see Methods). Note that target phase matching is near perfect and there is practically no sensory prediction error (i.e., the dotted lines stay around 0).

Figure 3. Simulated behaviour of a 'realistic' agent performing the hand-target phase matching task. Here we simulated an agent performing the incongruent tasks under the prior belief that its hand generated matching visual and proprioceptive information; i.e., under perceived visuo-proprioceptive conflict. (a) The plots follow the same format as in Fig. 2. Note that, in these results, one can see a clear divergence of true from predicted visual and proprioceptive postures, and correspondingly increased prediction errors. The top row shows the simulation results for the default weighting of visual and proprioceptive information; the middle row shows the same agent's behaviour when precision of the respective task-relevant modality (i.e., vision in the VH task and proprioception in the RH task) was increased (HA: high attention); the bottom row shows the analogous results when the precision of the respective other, irrelevant modality was increased (HD: high distraction). Note how in each case, increasing (or decreasing) the log precision of vision or proprioception resulted in an attenuation (or enhancement) of the associated prediction errors (indicated by green and purple arrows for vision and proprioception, respectively). Crucially, these 'attentional' effects had an impact on task performance, as evidenced by improved hand-target tracking with vision or proprioception, respectively. This is shown in panel (b): the curves show the tracking in the HA conditions. The bar plots represent the average deviation (phase shift or lag, in seconds) of the real hand's (red) or the virtual hand's (blue) grasping movements from the target's oscillatory size change in each of the simulations shown in panel (a). Note that under incongruence (i.e., a constant delay of vision), reducing the phase shift of one modality always implied increasing the phase shift of the other modality (reflected by a shift of red and blue bars representing the average proprioceptive and visual phase shift, respectively). Crucially, in both RH and VH incong conditions, increasing attention (HA; i.e., in terms of predictive coding: the precision afforded to the respective prediction errors) to the task-relevant modality enhanced task performance (relative to the default setting, Def.), as evidenced by a reduced phase shift of the respective modality from the target phase. The converse effect was observed when the agent was 'distracted' (HD) by paying attention to the respective task-irrelevant modality.

Empirical results. Participants practised and performed the same task as in the simulations (please see Methods for details). We first analysed the post-experiment questionnaire ratings given by our participants (Fig. 4) in answer to the following two questions: "How difficult did you find the task to perform in the following conditions?" (Q1, answered on a 7-point visual analogue scale from "very easy" to "very difficult") and "On which hand did you focus your attention while performing the task?" (Q2, answered on a 7-point visual analogue scale from "I focused on my real hand" to "I focused on the virtual hand"). For the ratings of Q1, a Friedman's test revealed a significant difference between conditions (χ²(3, 69) = 47.19, p < 0.001).
Post-hoc comparisons using Wilcoxon's signed rank test showed that, as expected, participants reported finding both tasks more difficult under visuo-proprioceptive incongruence (VH incong > VH cong, z(23) = 4.14, p < 0.001; RH incong > RH cong, z(23) = 3.13, p < 0.01). There was no significant difference in reported difficulty between VH cong and RH cong, but the VH incong condition was perceived as significantly more difficult than the RH incong condition (z(23) = 2.52, p < 0.05). These results suggest that, per default, the virtual hand and the real hand instructions were perceived as equally difficult to comply with, and that in both cases the added incongruence increased task difficulty, more strongly so when (artificially shifted) vision needed to be aligned with the target's phase. For the ratings of Q2, a Friedman's test revealed a significant difference between conditions (χ²(3, 69) = 35.83, p < 0.001). Post-hoc comparisons using Wilcoxon's signed rank test showed that, as expected, participants focussed more strongly on the virtual hand during the virtual hand task and more strongly on the real hand during the real hand task. This was the case for congruent (VH cong > RH cong, z(23) = 3.65, p < 0.001) and incongruent (VH incong > RH incong, z(23) = 4.03, p < 0.001) movement trials. There were no significant differences between VH cong vs VH incong, and RH cong vs RH incong, respectively. These results show that participants focused their attention on the instructed target modality, irrespective of whether the current movement block was congruent or incongruent. This supports our assumption that participants would adopt a specific attentional set to prioritize the instructed target modality. Next, we analysed the task performance of our participants; i.e., how well the virtual (or real) hand's grasping movements were phase-matched to the target's oscillation (i.e., the fixation dot's size change) in each condition. Note that under incongruence, better target phase-matching with the virtual hand implies a worse alignment of the real hand's phase with the target, and vice versa. We expected (cf. Fig. 1; confirmed by the simulation results, Figs. 2 and 3) an interaction between task and congruence: participants should show a better target phase-matching of the virtual hand under visuo-proprioceptive incongruence, if the virtual hand was the instructed target modality (but no such difference should be significant in the congruent movement trials, since virtual and real hand movements were identical in these trials). All of our participants were well trained (see Methods); therefore, our task focused on average performance benefits from attention (rather than learning or adaptation effects). The participants' average tracking performance is shown in Fig. 5. A repeated-measures ANOVA on virtual hand-target phase-matching revealed significant main effects of task (F(1, 22) = 31.69, p < 0.001) and congruence (F(1, 22) = 173.42, p < 0.001) and, more importantly, a significant interaction between task and congruence (F(1, 22) = 50.69, p < 0.001). Post-hoc t-tests confirmed that there was no significant difference between the VH cong and RH cong conditions (t(23) = 1.19, p = 0.25), but a significant difference between the VH incong and RH incong conditions (t(23) = 6.59, p < 0.001).
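For reference, the group-level tests reported here can be reproduced with standard tools; the sketch below uses hypothetical placeholder data (one value per participant and condition, n = 24), so variable names and numbers are assumptions.

```python
import numpy as np
from scipy import stats

# Placeholder data standing in for per-participant ratings or phase shifts.
rng = np.random.default_rng(0)
vh_cong, vh_incong, rh_cong, rh_incong = rng.normal(size=(4, 24))

# Friedman's test across the four repeated-measures conditions (as for Q1/Q2).
chi2, p = stats.friedmanchisquare(vh_cong, vh_incong, rh_cong, rh_incong)

# Post-hoc pairwise Wilcoxon signed-rank test, with Bonferroni correction.
w, p_pair = stats.wilcoxon(vh_incong, vh_cong)
n_comparisons = 4                      # depends on the comparisons actually made
alpha_corrected = 0.05 / n_comparisons
print(chi2, p, w, p_pair < alpha_corrected)
```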
In other words, in incongruent conditions participants aligned the phase of the virtual hand's movements significantly better with the dot's phasic size change when given the 'virtual hand' than the 'real hand' instruction. Furthermore, while the phase shift of the real hand's movements was larger during VH incong > VH cong (t(23) = 9.37, p < 0.001), corresponding to the smaller phase shift, and therefore better target phase-matching, of the virtual hand in these conditions, participants also exhibited a significantly larger shift of their real hand's movements during RH incong > RH cong (t(23) = 4.31, p < 0.001). Together, these results show that participants allocated their attentional resources to the respective instructed modality (vision or proprioception), and that this was accompanied by significantly better target tracking in each case, as expected based on the active inference formulation, and as suggested by the simulation results.

Figure 4. Self-reports of task difficulty and attentional focus given by our participants. The bar plots show the mean ratings for Q1 and Q2 (given on a 7-point visual analogue scale), with associated standard errors of the mean. On average, participants found the VH and RH task more difficult under visuo-proprioceptive incongruence, more strongly so when artificially shifted vision needed to be aligned with the target's phase (VH incong, Q1). Importantly, the average ratings of Q2 showed that participants attended to the instructed modality (irrespective of whether the movements of the virtual hand and the real hand were congruent or incongruent).

Discussion

We have shown that behaviour in a hand-target phase matching task, under visuo-proprioceptive conflict, benefits from adjusting the balance of visual versus proprioceptive precision; i.e., increasing attention to the respective task-relevant modality. Our results generally support a predictive coding formulation of active inference, where visual and proprioceptive cues affect multimodal beliefs that drive action, depending on the relative precision afforded to each modality 6,45. Firstly, a simulated agent exhibited better hand-target phase matching when the expected precision of the instructed 'task-relevant' modality (i.e., attention to vision or proprioception) was increased relative to the 'task-irrelevant' modality. This effect was reversed when attention was increased to the 'task-irrelevant' modality, effectively corresponding to cross-modal distraction. These results suggest that more precise sensory prediction errors have a greater impact on belief updating, which in turn guides goal-directed action. Our simulations also suggest that the effects of changing precision were related to a perceived visuo-proprioceptive conflict, based on a prior belief that one's hand movements should generate matching visual and proprioceptive sensations. In an agent holding the unrealistic belief that visual and proprioceptive postures were per default unrelated, no evidence for an influence of visuo-proprioceptive conflict on target tracking was observed.
Secondly, the self-report ratings of attentional allocation and the behaviour exhibited by human participants performing the same task, in a virtual reality environment, suggest an analogous mechanism: our participants reported shifting their attention to the respective instructed modality (vision or proprioception), and they were able to correspondingly align either vision or proprioception with an abstract target (oscillatory phase) under visuo-proprioceptive conflict. A noteworthy result of the behavioural experiment was a more pronounced shift of real hand movements in the 'real hand' condition (i.e., participants partly aligned the delayed virtual hand with the target's phase). This behaviour resembled that of our simulated agent under 'high distraction'; i.e., under increased precision of task-irrelevant visual hand cues. This suggests that, in the RH incong condition, participants may have been distracted by (attending to) vision. Interestingly, however, our participants reported attentional focus on their real hand and even found the 'real hand' task easier than the 'virtual hand' task under visuo-proprioceptive incongruence. This suggests that they did not notice their 'incorrect' behavioural adjustment. One interpretation of this seemingly 'automatic' visual bias is suggested by predictive coding formulations of shared body representation and self-other distinction; namely, the balance between visual and proprioceptive prediction errors to disambiguate between 'I am observing an action' and 'I am moving' 3,13. Generally, visual prediction errors have to be attenuated during action observation to prevent actually realising the observed movement (i.e., mirroring) 13. However, several studies have demonstrated 'automatic' imitative tendencies during action observation, reminiscent of 'echopraxia', which are extremely hard to inhibit. For example, seeing an incongruent finger or arm movement biases movement execution 46,47. In a predictive coding framework, this can be formalized as an 'automatic' update of multimodal beliefs driving action by precise (i.e., not sufficiently attenuated) visual body information 3,13,30. Such an interpretation would be in line with speculations that participants in visuo-motor conflict tasks attend to vision, rather than proprioception, if not instructed otherwise 48,51,53,54. Our simulation results suggest that altered precision expectations may mediate these effects. Our results offer new insights into the multisensory mechanisms of a body representation for action, complementing existing theoretical and empirical work.

Figure 5. In the congruent conditions, the virtual hand's and the real hand's movements were identical, whereas the virtual hand's movements were delayed by 500 ms in the incongruent conditions. Right: The bar plot shows the corresponding average deviation (lag in seconds) of the real hand (red) and the virtual hand (blue) from the target in each condition, with associated standard errors of the mean. Crucially, there was a significant interaction effect between task and congruence; participants aligned the virtual hand's movements better with the target's oscillation in the VH incong > RH incong condition (and correspondingly, the real hand's movements in the RH incong > VH incong condition), in the absence of a significant difference between the congruent conditions. Bonferroni-corrected significance: **p < 0.01, ***p < 0.001.
Generally, our results support the notion that an endogenous 'attentional set' [48][49][50] can influence the precision afforded to vision or proprioception during action, and thus prioritize either modality for the current behavioural context. Several studies have shown that visuo-proprioceptive recalibration is context dependent, in that either vision or proprioception may be the 'dominant' modality, with corresponding recalibration of the 'non-dominant' modality 9,10,48,51-53,55-57. Thus, our results lend tentative support to arguments that visuo-proprioceptive (or visuo-motor) adaptation and recalibration can be enhanced by increasing the precision of visual information (attending to vision 48,51). Notably, our results also suggest that the reverse can be true; namely, visuo-proprioceptive recalibration can be counteracted by attending to proprioception. In other words, our results suggest that updating the predictions of a 'body model' affects goal-directed action. Crucially, the qualitative similarity of simulated and empirical behaviour provides a mechanistic explanation for these processes, which is compatible with a neurobiological implementation (i.e., predictive coding). Previous work on causal inference suggests that Bayes-optimal cue integration can explain a variety of multisensory phenomena under intersensory conflict, including the recalibration of the less precise modality onto the more precise one 7,14,15,[58][59][60][61][62][63]. However, in the context of multisensory integration for body (upper limb) representation, the focus of previous models was on perceptual (causal) inference 59,62,64 or on adaptation or learning 2,12. Our work advances on these findings by showing that adjusting the precision of two conflicting sources of bodily information (i.e., seen or felt hand posture, which were expected to be congruent based on fixed prior model beliefs in a common cause) enhances the accuracy of goal-directed action (i.e., target tracking) with the respective 'attended' modality. By allowing our agent to move and optimising sensory precision, this work goes beyond modelling perceptual (causal) inference to consider active inference, where the consequences of action affect perceptual inference and vice versa. Specifically, we showed that action (in our case, hand-target tracking) was influenced by attentional allocation (to visual or proprioceptive cues about hand position), via augmentation of the impact of sensory prediction errors on model belief updating from the 'attended' modality relative to the 'unattended' one. In other words, we showed an interaction between sensory attention and (instructed) behavioural goals in a design that allowed the agent (or participant) to actively change sensory stimuli. In short, we were able to model the optimisation of precision that underwrites multisensory integration, and relate this to sensory attention and attenuation during action. These results generalise previous formulations of sensorimotor control 12,64 to address attentional effects on action. The relevance of our model, and of the simulation results, also stems from the fact that it is based on a 'first principles' approach that, in contrast to most work in this area, commits to a neurobiologically plausible implementation scheme; i.e., predictive coding [35][36][37].
The model can therefore be thought of as describing recurrent message passing between hierarchical (cortical) levels to suppress prediction error. Model beliefs, prediction errors, and their precision can thus be associated with the activity of specific cell populations (deep and superficial pyramidal cells; see Methods) 6,40,45. This means that, unlike most normative models, the current model can, in principle, be validated in relation to evoked neuronal responses, as has been demonstrated in simulations of oculomotor control using the same implementation of active inference 65,66. There are some limitations of the present study that should be addressed by future work. Firstly, our task design focuses on prototypical movements and average phase matching. Our results should therefore be validated by designs using more complicated movements. The main aim of our simulations was to provide a proof-of-concept that attentional effects during visuo-motor conflict tasks were emergent properties of the active inference formulation. Therefore, our simulations are simplified approximations to a complex movement paradigm with a limited range of movements and deviations. This simplified setup allowed us to sidestep a detailed consideration of forward or generative models for the motor plant, and to focus on precision or gain control. In the future, we plan to fit generative models to raw data from more complicated movements. The aim here is to develop increasingly realistic generative models, whose inversion will be consistent with known anatomy and physiology. Lastly, our interpretation of the empirical results, in terms of evidence for top-down attention, needs to be applied with some caution, as we can only infer any attentional effects from the participants' self-reports; and we can only assume that participants monitored their behaviour continuously. Future work could therefore use explicit measures of attention, perhaps supplemented by forms of external supervision and feedback, to validate behavioural effects. Beyond this, our results open up a number of interesting questions for future research. It could be established whether the effects observed in our study can have a long-lasting impact on the (generalizable) learning of motor control 67. Another important question for future research is the potential attentional compensation of experimentally added sensory noise (e.g., via jittering or blurring the visual hand or via tendon vibration; although these manipulations may in themselves be 'attention-grabbing' 68). Finally, an interesting question is whether the observed effects could perhaps be reduced by actively ignoring or 'dis-attending' away from vision 69,70. An analogous mechanism has been tentatively suggested by observed benefits of proprioceptive attenuation, thereby increasing the relative impact of visual information, during visuo-motor adaptation and visuo-proprioceptive recalibration 10,24,26,31,[71][72][73]. These questions should best be addressed by combined behavioural and brain imaging experiments, to illuminate the neuronal correlates of the (supposedly attentional) precision weighting in the light of recently proposed implementations of predictive coding in the brain 37,40,74. To conclude, our results suggest a tight link between attention (precision control), multisensory integration, and action, allowing the brain to choose how much to rely on specific sensory cues to represent its body for action in a given context.

Methods
Task design. We used the same task design in the simulations and the behavioural experiment (see Fig. 1). For consistency, we will describe the task as performed by our human participants, but the same principles apply to the simulated agent. We designed our task as a non-spatial modification of a previously used hand-target tracking task 29,30. The participant (or simulated agent) had to perform repetitive grasping movements paced by sinusoidal fluctuations in the size of a central fixation dot (oscillating at 0.5 Hz). Thus, this task was effectively a phase matching task, which we hoped would be less biased towards the visual modality due to a more abstract target quantity (oscillatory size change vs spatially moving target, as in previous studies). The fixation dot was chosen as the target to ensure that participants had to fixate the centre of the screen (and therefore look at the virtual hand) in all conditions. Participants (or the simulated agent) controlled a virtual hand model via a data glove worn on their unseen right hand (details below). In this way, vision (seen hand position via the virtual hand) could be decoupled from proprioception (felt hand position). In half of the movement trials, a temporal delay of 500 ms between visual and proprioceptive hand information was introduced by delaying vision (i.e., the seen hand movements) with respect to proprioception (i.e., the unseen hand movements performed by the participant or agent). In other words, the seen and felt hand positions were always incongruent (mismatching; i.e., phase-shifted) in these conditions. Crucially, the participant (agent) had to perform the hand-target phase matching task with one of two goals in mind: to match the target's oscillatory phase with the seen virtual hand movements (vision) or with the unseen real hand movements (proprioception). This resulted in a 2 × 2 factorial design with the factors 'visuo-proprioceptive congruence' (congruent, incongruent) and 'instructed modality' (vision, proprioception).

Predictive coding and active inference. We based our simulations on predictive coding formulations of active inference as situated within a free energy principle of brain function, which has been used in many previous publications to simulate perception and action 6,13,38,45,65,66. Here, we briefly review the basic assumptions of this scheme (please see the above literature for details). Readers familiar with this topic should skip to the next section. Hierarchical predictive coding rests on a probabilistic mapping of hidden causes to sensory consequences, as described by a hierarchical generative model, where each level of the model encodes conditional expectations ('beliefs'; here, referring to subpersonal or non-propositional Bayesian beliefs in the sense of Bayesian belief updating and belief propagation; i.e., posterior probability densities) about states of the world that best explain states of affairs encoded at lower levels or, at the lowest level, sensory input. This hierarchy provides a deep model of how current sensory input is generated from causes in the environment; where increasingly higher-level beliefs represent increasingly abstract (i.e., hidden or latent) states of the environment. The generative model therefore maps from unobservable causes (hidden states) to observable consequences (sensory states). Model inversion corresponds to inferring the causes of sensations; i.e., mapping from consequences to causes.
This inversion rests upon the minimisation of free energy or 'surprise' approximated in the form of prediction error. Model beliefs or expectations are thus updated to accommodate or 'explain away' ascending prediction error. This corresponds to Bayesian filtering or predictive coding 35-37, which, under linear assumptions, is formally identical to linear quadratic control in motor control theory 75. Importantly, predictive coding can be implemented in a neurobiologically plausible fashion 6,[35][36][37],40. In such architectures, predictions may be encoded by the population activity of deep and superficial pyramidal cells, whereby descending connections convey predictions, suppressing activity in the hierarchical level below, and ascending connections return prediction error (i.e., sensory data not explained by descending predictions) 37,40. Crucially, the ascending prediction errors are precision-weighted (where precision corresponds to the inverse variance), so that a prediction error that is afforded a greater precision has more impact on belief updating. This can be thought of as increasing the gain of superficial pyramidal cells 38,43. Active inference extends hierarchical predictive coding from the sensory to the motor domain; i.e., by equipping standard Bayesian filtering schemes (a.k.a. predictive coding) with classical reflex arcs that enable action (e.g., a hand movement) to fulfil predictions about hidden states of the world. In brief, desired movements are specified in terms of prior beliefs about state transitions (policies), which are then realised by action; i.e., by sampling or generating sensory data that provide evidence for those beliefs 38. Thus, action is also driven by optimisation of the model via suppression of prediction error: movement occurs because high-level multi- or amodal prior beliefs about behaviour predict proprioceptive and exteroceptive (visual) states that would ensue if the movement was performed (e.g., a particular limb trajectory). Prediction error is then suppressed throughout a motor hierarchy; ranging from intentions and goals over kinematics to muscle activity 3,39. At the lowest level of the hierarchy, spinal reflex arcs suppress proprioceptive prediction error by enacting the predicted movement, which also implicitly minimises exteroceptive prediction error; e.g. the predicted visual consequences of the action 5,33,40. Thus, via embodied interaction with its environment, an agent can reduce its model's free energy ('surprise' or, under specific assumptions, prediction error) and maximise Bayesian model evidence 76. Following the above notion of active inference, one can describe action and perception as the solution to coupled differential equations describing the dynamics of the real world (boldface) and the behaviour of an agent (italics) 6,38:

$$
\begin{aligned}
\tilde{\mathbf{s}} &= \mathbf{g}(\tilde{\mathbf{x}},\tilde{\boldsymbol{\nu}}) + \tilde{\boldsymbol{\omega}}_{\nu}\\
\dot{\tilde{\mathbf{x}}} &= \mathbf{f}(\tilde{\mathbf{x}},\tilde{\boldsymbol{\nu}},\mathbf{a}) + \tilde{\boldsymbol{\omega}}_{x}\\
\dot{a} &= -\,\partial_{a} F(\tilde{s},\tilde{\mu})\\
\dot{\tilde{\mu}} &= D\tilde{\mu} - \partial_{\tilde{\mu}} F(\tilde{s},\tilde{\mu})
\end{aligned}
\tag{1}
$$

The first pair of coupled stochastic (i.e., subject to random fluctuations $\omega_x$, $\omega_\nu$) differential equations describes the dynamics of hidden states and causes in the world and how they generate sensory states. Here, $(s, x, \nu, a)$ denote sensory input, hidden states, hidden causes and action in the real world, respectively. The second pair of equations corresponds to action and perception, respectively; they constitute a (generalised) gradient descent on variational free energy $F$, known as an evidence bound in machine learning 77. The differential equation describing perception corresponds to generalised filtering or predictive coding.
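To convey the gist of these coupled equations, here is a deliberately minimal toy in which one belief and one action jointly suppress prediction error; the one-dimensional state, the transparent sensory mapping (s = x), and the reflex-like action rule are simplifying assumptions, not the generalised scheme of Eq. 1.

```python
# Toy active inference loop: perception and action both work to reduce
# precision-weighted prediction error (noise and generalised coordinates omitted).
def simulate(steps=2000, dt=0.01, pi_s=1.0, pi_prior=1.0, goal=1.0):
    x, mu = 0.0, 0.0                    # world state and the agent's belief
    for _ in range(steps):
        s = x                           # sensation = world state (transparent mapping)
        eps_s = s - mu                  # sensory prediction error
        eps_p = mu - goal               # error against the prior belief (the 'goal')
        mu += dt * (pi_s * eps_s - pi_prior * eps_p)  # perception: explain errors away
        a = -pi_s * eps_s               # action: suppress sensory prediction error
        x += dt * a                     # the world responds to action
    return x, mu

print(simulate())  # both approach 1.0: the prior belief is fulfilled by acting
```

The point of the toy is the circularity: the prior pulls the belief towards the goal, the resulting sensory prediction error is then 'quashed' by action rather than by revising the belief, which is exactly how movement arises in this scheme.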
The first term is a prediction based upon a differential operator D that returns the generalised motion of conditional (i.e., posterior) expectations about states of the world, including the motor plant (vector of velocity, acceleration, jerk, etc.). Here, the variables $(\tilde{s}, \tilde{\mu}, a)$ correspond to generalised sensory input, conditional expectations and action, respectively. Generalised coordinates of motion, denoted by the ~ notation, correspond to a vector representing the different orders of motion (position, velocity, acceleration, etc.) of a variable. The differential equations above are coupled because sensory states depend upon action through hidden states and causes $(x, \nu)$, while action $\mathbf{a}(t) = a(t)$ (the same quantity in the world and in the agent's scheme) depends upon sensory states through internal states $\tilde{\mu}$. Neurobiologically, these equations can be considered to be implemented in terms of predictive coding; i.e., using prediction errors on the motion of hidden states, such as visual or proprioceptive cues about hand position, to update beliefs or expectations about the state of the lived world and embodied kinematics. By explicitly separating hidden real-world states from the agent's expectations as above, one can separate the generative process from the updating scheme that minimises free energy. To perform simulations using this scheme, one solves Eq. 1 to simulate (neuronal) dynamics that encode conditional expectations and ensuing action. The generative model thereby specifies a probability density function over sensory inputs and hidden states and causes, which is needed to define the free energy of sensory inputs:

$$
\begin{aligned}
s &= g^{(1)}\!\big(x^{(1)},\nu^{(1)}\big) + \omega_{\nu}^{(1)}\\
\dot{x}^{(1)} &= f^{(1)}\!\big(x^{(1)},\nu^{(1)}\big) + \omega_{x}^{(1)}\\
&\;\;\vdots\\
\nu^{(i-1)} &= g^{(i)}\!\big(x^{(i)},\nu^{(i)}\big) + \omega_{\nu}^{(i)}\\
\dot{x}^{(i)} &= f^{(i)}\!\big(x^{(i)},\nu^{(i)}\big) + \omega_{x}^{(i)}
\end{aligned}
\tag{2}
$$

Here, the (nonlinear) mappings $g^{(i)}$ and $f^{(i)}$ at each hierarchical level are subject to random fluctuations $(\omega_{x}^{(i)}, \omega_{\nu}^{(i)})$ on the motion of hidden states and causes. These play the role of sensory noise or uncertainty about states. The precisions of these fluctuations are quantified by $(\Pi_{x}^{(i)}, \Pi_{\nu}^{(i)})$, which are the inverse of the respective covariance matrices. Given the above form of the generative model (Eq. 2), we can now write down the differential equations (Eq. 1) describing neuronal dynamics in terms of (precision-weighted) prediction errors $\xi$ on the hidden causes and states as follows:

$$
\begin{aligned}
\dot{\tilde{\mu}}_{x}^{(i)} &= D\tilde{\mu}_{x}^{(i)} + \frac{\partial \tilde{g}^{(i)}}{\partial \tilde{\mu}_{x}^{(i)}}\cdot\xi_{\nu}^{(i)} + \frac{\partial \tilde{f}^{(i)}}{\partial \tilde{\mu}_{x}^{(i)}}\cdot\xi_{x}^{(i)} - D^{T}\xi_{x}^{(i)}\\
\dot{\tilde{\mu}}_{\nu}^{(i)} &= D\tilde{\mu}_{\nu}^{(i)} + \frac{\partial \tilde{g}^{(i)}}{\partial \tilde{\mu}_{\nu}^{(i)}}\cdot\xi_{\nu}^{(i)} + \frac{\partial \tilde{f}^{(i)}}{\partial \tilde{\mu}_{\nu}^{(i)}}\cdot\xi_{x}^{(i)} - \xi_{\nu}^{(i+1)}\\
\xi_{x}^{(i)} &= \Pi_{x}^{(i)}\big(D\tilde{\mu}_{x}^{(i)} - \tilde{f}^{(i)}(\tilde{\mu}_{x}^{(i)},\tilde{\mu}_{\nu}^{(i)})\big)\\
\xi_{\nu}^{(i)} &= \Pi_{\nu}^{(i)}\big(\tilde{\mu}_{\nu}^{(i-1)} - \tilde{g}^{(i)}(\tilde{\mu}_{x}^{(i)},\tilde{\mu}_{\nu}^{(i)})\big)
\end{aligned}
\tag{3}
$$

The above equation (Eq. 3) describes recurrent message passing between hierarchical levels to suppress free energy or prediction error (i.e., predictive coding 36,37). Specifically, error units receive predictions from the same hierarchical level and the level above. Conversely, conditional expectations ('beliefs', encoded by the activity of state units) are driven by prediction errors from the same level and the level below. These constitute bottom-up and lateral messages that drive conditional expectations towards a better prediction to reduce the prediction error in the level below; this is the sort of belief updating described in the introduction. Now we can add action as the specific sampling of predicted sensory inputs. As noted above, under active inference, high-level beliefs (conditional expectations) elicit action by sending predictions down the motor (proprioceptive) hierarchy to be unpacked into proprioceptive predictions at the level of (pontine) cranial nerve nuclei and spinal cord, which are then 'quashed' by enacting the predicted movements:

$$
\dot{a} = -\,\partial_{a}F = -\,\frac{\partial \tilde{s}}{\partial a}\cdot\xi_{\nu}^{(1)}
$$
Simulations of hand-target phase matching. In our case, the generative process and model used for simulating the target tracking task are straightforward (using just a single level) and can be expressed as follows:

$$
\begin{aligned}
\mathbf{s} = \begin{pmatrix}\mathbf{s}_{t}\\ \mathbf{s}_{p}\\ \mathbf{s}_{v}\end{pmatrix} &= \begin{pmatrix}\sin(\mathbf{x}_{t})\\ \sin(\mathbf{x}_{h})\\ \sin(\mathbf{x}_{h}-\mathbf{v})\end{pmatrix} + \tilde{\boldsymbol{\omega}}_{\nu},
&\qquad
\begin{pmatrix}\dot{\mathbf{x}}_{t}\\ \dot{\mathbf{x}}_{h}\end{pmatrix} &= \begin{pmatrix}\boldsymbol{\nu}_{t}\\ \mathbf{a}\end{pmatrix} + \tilde{\boldsymbol{\omega}}_{x}\\[6pt]
s = \begin{pmatrix}s_{t}\\ s_{p}\\ s_{v}\end{pmatrix} &= \begin{pmatrix}\sin(x_{t})\\ \sin(x_{h})\\ \sin(x_{h}-v)\end{pmatrix} + \tilde{\omega}_{\nu},
&\qquad
\begin{pmatrix}\dot{x}_{t}\\ \dot{x}_{h}\end{pmatrix} &= \begin{pmatrix}\nu_{t}\\ x_{t}-x_{h}\end{pmatrix} + \tilde{\omega}_{x}
\end{aligned}
\tag{4}
$$

The first pair of equations (top row) describes the generative process; i.e., a noisy sensory mapping from hidden states and the equations of motion for states in the real world. In our case, the real-world variables comprised two hidden states $x_t$ (the state of the target) and $x_h$ (the state of the hand), which generate sensory inputs; i.e., proprioceptive $s_p$ and visual $s_v$ cues about hand posture, and visual cues about the target's size $s_t$. Note that to simulate sinusoidal movements, as used in the experimental task, sensory cues pertaining to the target and hand are mapped via sine functions of the respective hidden states (plus random fluctuations). Both target and hand states change linearly over time, and become sinusoidal movements via the respective sensory mapping from causes to sensory data. We chose this solution in our particular case for a straightforward implementation of phase shifts (visuo-proprioceptive incongruence) via subtraction of a constant term from the respective sensory mapping ($v$, see below). Thus, the target state $x_t$ is perturbed by hidden causes at a constant rate ($\nu_t = 1/40$), i.e., it linearly increases over time. This results in one oscillation of a sinusoidal trajectory via the sensory mapping $\sin(x_t)$, corresponding to one growing-and-shrinking of the fixation dot, as in the behavioural experiment, during 2 seconds (the simulations proceeded in time bins of 1/120 seconds, see Fig. 2). The hand state is driven by action $a$ with a time constant of $\tau_a = 16.67$ ms, which induced a slight 'sluggishness' of movement mimicking delays in motor execution. Action thus describes the rate of change of hand posture along a linear trajectory, at a rate of 0.05 per time bin, which again becomes an oscillatory postural change (i.e., a grasping movement) via the sinusoidal sensory mapping. The hidden cause $v$ modelled the displacement of proprioceptive and visual hand posture information by virtue of being subtracted within the sinusoidal sensory mapping from the hidden hand state to visual sensory information, $\sin(x_h - v)$. In other words, $v = 0$ when the virtual hand movements were congruent, and $v = 0.35$ (corresponding to about 111 ms delay) when the virtual hand's movements were delayed with respect to the real hand. Note that random fluctuations in the process generating sensory input were suppressed by using high precisions on the errors of the sensory states and motion in the generative process (exp(16) = 8886110). This can be thought of as simulating the average response over multiple realizations of random inputs; i.e., the single movement we simulated in each condition stands in for the average over participant-specific realizations, in which the effects of random fluctuations are averaged out 38,65,66. This ensured that our simulations reflect systematic differences depending on the parameter values chosen to reflect alterations of sensory attention via changing parameters of the agent's model (as described below). The parameter values for the precision estimates are somewhat arbitrary and were adopted from previous studies using the same predictive coding formulation to simulate similar (oculomotor) tasks 38,65,66.
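The generative process just described can be sketched in a few lines; the loop below uses the stated time bins, rates, and displacement, with a fixed action value standing in for the action inferred by the full scheme (an assumption made purely for illustration).

```python
import numpy as np

# Sketch of the single-level generative process in Eq. 4 (noise suppressed).
rate_t = 1.0 / 40.0            # hidden cause driving the target state, per bin
v = 0.35                       # visuo-proprioceptive displacement (incongruent)
a = 0.05                       # stand-in action: rate of change of posture per bin
x_t, x_h = 0.0, 0.0            # hidden target and hand states

samples = []
for _ in range(240):           # 2 s at 120 bins/s = roughly one full oscillation
    s_t = np.sin(x_t)          # visual target input (dot size)
    s_p = np.sin(x_h)          # proprioceptive hand input
    s_v = np.sin(x_h - v)      # visual hand input, phase-shifted by v
    samples.append((s_t, s_p, s_v))
    x_t += rate_t              # hidden states increase linearly ...
    x_h += a                   # ... and become oscillations via the sine mapping
```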
The key point is that changing these parameters (i.e., the precision estimates for visual and proprioceptive cues) resulted in significant changes in simulated behaviour. The second pair of equations (bottom row of Eq. 4) describes the agent's generative model of how sensations are generated, using the form of Eq. 2. These define the free energy in Eq. 1 and specify behaviour (under active inference). The generative model has the same form as the generative process, with the important exceptions that there is no action and the state of the hand is driven by the displacement between the hand and the target, $x_t - x_h$. In other words, the agent believes that its grasping movements will follow the target's oscillatory size change, which is itself driven by some unknown force at a constant rate (thus producing an oscillatory trajectory as in the generative process). This effectively models compliance with the task instruction, under the assumption that the agent already knows about the oscillatory phase of the target; i.e., it is 'familiar with the task'. Importantly, this formulation models the 'real hand' instruction; under the 'virtual hand' instruction, the state of the hand was driven by $x_t - (x_h - v)$, reflecting the fact that any perceived visual delay (i.e., the inferred displacement of vision from proprioception, $v$) should now also be compensated to keep the virtual hand aligned with the target's oscillatory phase under incongruence. The initial value for $v$ was set to represent the respective information about visuo-proprioceptive congruence; i.e., 0 for congruent movement conditions and 0.35 for incongruent movement conditions. We defined the agent's model to entertain a prior belief that visual and proprioceptive cues are normally congruent (or, for comparison, incongruent). This was implemented by setting the prior expectation of the cause $v$ to 0 (indicating congruence of visual and proprioceptive hand posture information), with a log precision of 3 (corresponding to about 20.1). In other words, the hidden cause could vary, a priori, with a standard deviation of about exp(−3/2) = 0.22. This mimicked the strong association between seen and felt hand positions (under a minimal degree of flexibility), which is presumably formed over a lifetime, is very hard to overcome, and underwrites phenomena like the 'rubber hand illusion' 16 (see Introduction). Crucially, the agent's model included a precision-weighting of the sensory signals, as determined by the active deployment of attention under predictive coding accounts of active inference. This allowed us to manipulate the precision assigned to proprioceptive or visual prediction errors $(\Pi_p, \Pi_v)$ that, per default, were given log precisions of 3 and 4, respectively (corresponding to 20.1 and 54.6, respectively). This reflects the fact that, in hand position estimation, vision is usually afforded a higher precision than proprioception 7,51. To implement increases in task-related (selective) attention, we increased the log precision of prediction errors from the instructed modality (vision or proprioception) by 1 in each case (i.e., by a factor of about 2.7); in an alternative scenario, we tested for the effects of 'incorrect' allocation of attention to the non-instructed or 'distractor' modality by increasing the precision of the corresponding prediction errors. We did not simulate increases in both sensory precisions, because our study design was tailored to investigate selective attention as opposed to divided attention.
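The precision manipulation amounts to simple arithmetic on log precisions; a small sketch using the values from the text (the function and dictionary names are hypothetical):

```python
import numpy as np

# Default log precisions from the text; exp() gives the weights applied to
# the respective prediction errors (~20.1 for proprioception, ~54.6 for vision).
log_pi_default = {"proprioception": 3.0, "vision": 4.0}

def attend(log_pi, modality, delta=1.0):
    """Raise the log precision of one modality by delta (a factor of ~2.7)."""
    adjusted = dict(log_pi)
    adjusted[modality] += delta
    return adjusted

# VH task: high attention (HA) boosts vision; high distraction (HD) boosts
# the task-irrelevant modality (proprioception) instead.
ha = attend(log_pi_default, "vision")
hd = attend(log_pi_default, "proprioception")
pi_ha = {k: np.exp(val) for k, val in ha.items()}  # precisions used as weights
```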
Note that in the task employed, divided attention was precluded, since attentional set was induced via instructed task-relevance; i.e., attempted target phase-matching. In other words, under incongruence, only one modality could be matched to the target. The ensuing generative process and model are, of course, gross simplifications of a natural movement paradigm. However, this formulation is sufficient to solve the active inference scheme in Eq. 1 and examine the agent's behaviour under the different task instructions and, more importantly, under varying degrees of selectively enhanced sensory precision afforded by an attentional set.

Behavioural experiment. Twenty-six healthy volunteers (age range = 19-37, all with normal or corrected-to-normal vision) participated in the experiment, after providing written informed consent. Two participants were unable to follow the task instructions during training and were excluded from the main experiment, resulting in a final sample size of 24. The experiment was approved by the local research ethics committee (University College London) and conducted in accordance with the relevant guidelines. During the experiment, participants sat at a table wearing an MR-compatible data glove (5DT Data Glove MRI, 1 sensor per finger, 8 bit flexure resolution per sensor, 60 Hz sampling rate) on their right hand, which was placed on their lap under the table. The data glove measured the participant's finger flexion via sewn-in optical fibre cables; i.e., each sensor returned a value from 0 to 1 corresponding to minimum and maximum flexion of the respective finger. These raw data were fed to a photorealistic virtual right hand model 29,30, whose fingers were thus moveable with one degree of freedom (i.e., flexion-extension) by the participant, in real-time. The virtual reality task environment was instantiated in the open-source 3D computer graphics software Blender (http://www.blender.org) using a Python programming interface, and presented on a computer screen at about 60 cm distance (1280 × 1024 pixels resolution). The participants' task was to perform repetitive right-hand grasping movements paced by the oscillatory size change of the central fixation dot, which continually decreased-and-increased in size sinusoidally (12% size change) at a frequency of 0.5 Hz; i.e., this was effectively a phase matching task (Fig. 1). The participants had to follow the dot's size changes with right-hand grasping movements; i.e., to close the hand when the dot shrank and to open the hand when the dot grew. In half of the movement trials, an incongruence between visual and proprioceptive hand information was introduced by delaying the virtual hand's movements by 500 ms with respect to the movements performed by the participant. The virtual hand and the real hand were persistently in mismatching postures in these conditions. The delay was clearly perceived by all participants. Participants performed the task in trials of 32 seconds (16 movement cycles; the last movement was signalled by a brief disappearance of the fixation dot), separated by 6-second fixation-only periods. The task instructions ('VIRTUAL'/'REAL') were presented before each movement trial for 2 seconds. Additionally, participants were informed whether in the upcoming trial the virtual hand's movements would be synchronous ('synch.') or delayed ('delay').
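A hypothetical sketch of the glove preprocessing implied here and in the analysis described below (which four fingers enter the average, and the per-trial normalization, are assumptions):

```python
import numpy as np

# Each glove sample is one flexion value in [0, 1] per finger at 60 Hz; a grip
# trajectory is obtained by averaging over four fingers and normalizing per trial.
def grip_trajectory(raw):             # raw: (n_samples, n_fingers) array
    grip = raw[:, :4].mean(axis=1)    # which four fingers are used is an assumption
    lo, hi = grip.min(), grip.max()
    return (grip - lo) / (hi - lo + 1e-9)   # rescale to [0, 1] for comparison
```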
The instructions and the fixation dot in each task were coloured (pink or turquoise, counterbalanced across participants) to help participants remember the current task instruction during each movement trial. Participants practised the task until they felt confident, and then completed two runs of 8 min length. Each of the four conditions 'virtual hand task under congruence' (VH cong), 'virtual hand task under incongruence' (VH incong), 'real hand task under congruence' (RH cong), and 'real hand task under incongruence' (RH incong) was presented 3 times per run, in randomized order. To analyse the behavioural change in terms of deviation from the target (i.e., phase shift from the oscillatory size change), we averaged and normalized the movement trajectories in each condition for each participant (raw data were averaged over the four fingers; no further pre-processing was applied). We then calculated the phase shift as the average angular difference between the raw averaged movements of the virtual or real hand and the target's oscillatory pulsation phase in each condition, using a continuous wavelet transform. The resulting phase shifts for each participant and condition were then entered into a 2 × 2 repeated measures ANOVA with the factors task (virtual hand, real hand) and congruence (congruent, incongruent) to test for statistically significant group-level differences. Post-hoc t-tests (two-tailed, with Bonferroni-corrected alpha levels to account for multiple comparisons) were used to compare experimental conditions. After the experiment, participants were asked to indicate, for each of the four conditions separately, their answers to the following two questions: "How difficult did you find the task to perform in the following conditions?" (Q1, answered on a 7-point visual analogue scale from "very easy" to "very difficult") and "On which hand did you focus your attention while performing the task?" (Q2, answered on a 7-point visual analogue scale from "I focused on my real hand" to "I focused on the virtual hand"). The questionnaire ratings were evaluated for statistically significant differences using a nonparametric Friedman's test and Wilcoxon's signed-rank test (with Bonferroni-corrected alpha levels to account for multiple comparisons) due to non-normal distribution of the residuals.
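As a sketch of this analysis step: the study used a continuous wavelet transform, but the mean angular difference at the movement frequency can be approximated with the analytic signal (Hilbert transform), as below; signal parameters follow the task description, the estimator itself is a simplified stand-in.

```python
import numpy as np
from scipy.signal import hilbert

def mean_phase_shift(hand, target, f=0.5):
    """Mean phase difference between two oscillatory signals, in seconds of lag."""
    phi_h = np.angle(hilbert(hand - hand.mean()))
    phi_t = np.angle(hilbert(target - target.mean()))
    dphi = np.angle(np.exp(1j * (phi_h - phi_t)))   # wrap differences to [-pi, pi]
    return dphi.mean() / (2 * np.pi * f)            # radians -> seconds

t = np.arange(0.0, 32.0, 1.0 / 60.0)        # one 32 s trial at the glove's 60 Hz
target = np.sin(2 * np.pi * 0.5 * t)
hand = np.sin(2 * np.pi * 0.5 * (t - 0.5))  # a hand signal lagging by 500 ms
print(mean_phase_shift(hand, target))       # ~ -0.5 (seconds of lag)
```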
Unsupervised Video Interpolation Using Cycle Consistency

Learning to synthesize high frame rate videos via interpolation requires large quantities of high frame rate training videos, which, however, are scarce, especially at high resolutions. Here, we propose unsupervised techniques to synthesize high frame rate videos directly from low frame rate videos using cycle consistency. For a triplet of consecutive frames, we optimize models to minimize the discrepancy between the center frame and its cycle reconstruction, obtained by interpolating back from interpolated intermediate frames. This simple unsupervised constraint alone achieves results comparable with supervision using the ground truth intermediate frames. We further introduce a pseudo supervised loss term that enforces the interpolated frames to be consistent with predictions of a pre-trained interpolation model. The pseudo supervised loss term, used together with cycle consistency, can effectively adapt a pre-trained model to a new target domain. With no additional data and in a completely unsupervised fashion, our techniques significantly improve pre-trained models on new target domains, increasing PSNR values from 32.84 dB to 33.05 dB on the Slowflow and from 31.82 dB to 32.53 dB on the Sintel evaluation datasets.

Introduction

With the advancement of modern technology, consumer-grade smartphones and digital cameras can now record videos at high frame rates (e.g. 240 frames-per-second). However, achieving this comes at the cost of high power consumption, larger storage requirements, and reduced video resolution. Given these limitations, regular events are not typically recorded at high frame rates. Yet, important life events happen unexpectedly and hence tend to be recorded at standard frame rates. It is thus greatly desirable to have the ability to produce arbitrarily high FPS videos from low FPS videos. Video frame interpolation addresses this need by generating one or more intermediate frames from two consecutive frames. Increasing the number of frames in videos essentially allows one to visualize events in slow motion and appreciate content better. Often, video interpolation techniques are employed to increase the frame rate of already recorded videos, or in streaming applications to provide a high refresh rate or a smooth viewing experience.

Figure 1 (panels): Ground Truth Intermediate; Baseline Supervised (SuperSloMo); Proposed Unsupervised Fine-tuning (SuperSloMo).

Video interpolation is a classical vision and graphics problem [3,21,27] and has recently received renewed research attention [10,16,11,14]. Particularly, supervised learning with convolutional neural networks (CNNs) has been widely employed to learn video interpolation from paired input and ground truth frames, often collected from raw video data. For instance, recent CNN-based approaches such as [6] and [16], trained with large quantities of public high FPS videos, obtain high quality interpolation results when the test videos are similar to the training ones. However, these methods may fail if the training data differ from the target domain. For instance, the target domain might be to slow down videos of fish taken underwater, but available training data only contain regular outdoor scenes, thus leading to a content gap. Additionally, there might be more subtle domain gaps due to differences such as camera parameters, encoding codecs, and lighting, leading to the well-known covariate shift problem [18].
It is impractical to address the issue by collecting high FPS videos covering all possible scenarios, because it is expensive to capture and store very high FPS videos, e.g., videos with more than 1000-fps at high spatial resolutions. In this work, we propose a set of unsupervised learning techniques to alleviate the need for high FPS training videos and to shrink domain gaps in video interpolation. Specifically, we propose to learn video frame interpolation, without paired training data, by enforcing models to satisfy a cycle consistency constraint [2] in time. That is, for a given triplet of consecutive frames, if we generate the two intermediate frames between the two consecutive frame pairs, and then generate back their intermediate frame, the resulting frame must match the original input middle frame (shown schematically in Fig. 3). We show that such a simple constraint alone is effective to learn video interpolation, and achieves results that compete with supervised approaches. In domain adaptation applications, where we have access to models pre-trained on out-of-domain datasets, but lack ground truth frames in target domains, we also propose unsupervised fine-tuning techniques that leverage such pre-trained models (see Fig. 2). We fine-tune models on target videos, with no additional data, by optimizing to jointly satisfy cycle consistency and minimize the discrepancy between generated intermediate frames and corresponding predictions from the known pre-trained model. We demonstrate that our joint optimization strategy leads to significantly superior accuracy in upscaling the frame rate of target videos than fine-tuning with cycle consistency alone or directly applying the pre-trained models on target videos (see Fig. 1). Cycle consistency has been utilized for image matching [25], establishing dense 3D correspondence over object instances [23], or in learning unpaired image-to-image translation in conjunction with Generative Adversarial Networks (GANs) [26]. To the best of our knowledge, this is the first attempt to use a cycle consistency constraint to learn video interpolation in a completely unsupervised way. To summarize, the contributions of our work include: Given the recent rise in popularity of deep learning methods, several end-to-end trainable methods have been proposed for video interpolation. Specifically, these methods can be trained to interpolate frames using just input and target frames and no additional supervision. Liu et al. [9] and Jiang et al. [6] both indirectly learn to predict optical flow using frame interpolation. Works such as [15,16] are similarly end-to-end trainable, but instead of learning optical flow vectors to warp pixels, they predict adaptive convolutional kernels to apply at each location of the two input frames. Our work presents unsupervised techniques to train or fine-tune any video interpolation model, for instance the Super SloMo [6], which predicts multiple intermediate frames, or the Deep Voxel Flow [9], which predicts one intermediate frame. Cycle Consistency: One of the key elements of our proposed method is the use of a cycle consistency constraint. This constraint encourages the transformations predicted by a model to be invertible, and is often used to regularize the model behavior when direct supervision is unavailable.
Cycle consistency has been used in a variety of applications, including determining the quality of language translations [2], semi-supervised training for image-description generation [13], dense image correspondences [24], identifying false visual relations in structure from motion [22], and image-to-image translation [26], to name a few. A cycle consistency constraint, in the context of video interpolation, means that we should be able to reconstruct the original input frames by interpolating between predicted intermediate frames at the appropriate time stamps. Most related to our work is [8], which uses such a constraint to regularize a fully supervised video interpolation model. Our work differs in several critical aspects. First, our method is based on the Super SloMo [6] architecture, and is thus capable of predicting intermediate frames at arbitrary timestamps, whereas [8] is specifically trained to predict the middle timestamp. Next, and most critically, our proposed method is fully unsupervised. This means that the target intermediate frame is never used for supervision, and that it can learn to produce high frame rate interpolated sequences from any lower frame rate sequence. Method In this work, we propose to learn to interpolate arbitrarily many intermediate frames from a pair of input frames, in an unsupervised fashion, with no paired intermediate ground truth frames. Specifically, given a pair of input frames $I_0$ and $I_1$, we generate an intermediate frame $\hat{I}_t$ as
$$\hat{I}_t = \mathcal{M}(I_0, I_1, t), \qquad (1)$$
where $t \in (0, 1)$ is time, and $\mathcal{M}$ is a video frame interpolation model we want to learn without supervision. We realize $\mathcal{M}$ using deep convolutional neural networks (CNN). We chose CNNs as they are able to model highly non-linear mappings, are easy to implement, and have been proven to be robust for various vision tasks, including image classification, segmentation, and video interpolation. Inspired by the recent success in learning unpaired image-to-image translation using Generative Adversarial Networks (GAN) [26], we propose to optimize $\mathcal{M}$ to maintain cycle consistency in time. Let $I_0$, $I_1$ and $I_2$ be a triplet of consecutive input frames. We define the time-domain cycle consistency constraint such that for generated intermediate frames at time $t$ between $(I_0, I_1)$ and between $(I_1, I_2)$, a subsequently generated intermediate frame at time $(1-t)$ between the interpolated results $(\hat{I}_t, \hat{I}_{t+1})$ must match the original middle input frame $I_1$. Mathematically, a cycle reconstructed frame using $\mathcal{M}$ is given by
$$\hat{I}_1 = \mathcal{M}(\hat{I}_t, \hat{I}_{t+1}, 1-t), \quad \text{with} \quad \hat{I}_t = \mathcal{M}(I_0, I_1, t), \;\; \hat{I}_{t+1} = \mathcal{M}(I_1, I_2, t). \qquad (2)$$
We then optimize $\mathcal{M}$ to minimize the reconstruction error between $\hat{I}_1$ and $I_1$, as
$$\arg\min_{\theta(\mathcal{M})} \|\hat{I}_1 - I_1\|_1. \qquad (3)$$
Figure 3 schematically presents our cycle consistency based approach. A degenerate solution to optimizing equation 3 might be to copy the input frames as the intermediate predictions (i.e. outputs). However, in practice this does not occur. In order for $\mathcal{M}$ to learn to copy frames in this way, it would have to learn to identify the input's time information within a single forward operation (eq. 2), as $I_1$ is a $t=1$ input in the first pass, and $I_1$ is a $t=0$ input in the second pass. This is difficult, since the same weights of $\mathcal{M}$ are used in both passes. We support this claim in all of our experiments, where we compared our learned approach using equation 3 with the trivial case of using inputs as the intermediate prediction.
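As a concrete illustration, the following is a minimal PyTorch sketch of the time-domain cycle consistency loss of equations 1-3; `model` stands in for the interpolation network $\mathcal{M}$, and both its `model(frame_a, frame_b, t)` signature and the use of an L1 distance are illustrative assumptions, not the authors' exact implementation.

```python
import torch.nn.functional as F

def cycle_consistency_loss(model, i0, i1, i2, t):
    """Cycle-reconstruct the middle frame I1 and compare it to the input.

    model: interpolation network M taking (frame_a, frame_b, t) -> frame.
    i0, i1, i2: consecutive input frames, each of shape (N, C, H, W).
    t: interpolation time in (0, 1).
    """
    i_t = model(i0, i1, t)            # intermediate frame between I0 and I1 (eq. 1)
    i_t1 = model(i1, i2, t)           # intermediate frame between I1 and I2
    i1_hat = model(i_t, i_t1, 1 - t)  # interpolate back to the middle frame (eq. 2)
    return F.l1_loss(i1_hat, i1)      # reconstruction error (eq. 3)
```

Note that no ground-truth intermediate frame appears anywhere in this loss; only the raw triplet of consecutive frames is used.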
It is true that triplets of input frames could be exploited directly. For example, the reconstruction error between $\mathcal{M}(I_0, I_2, t{=}0.5)$ and $I_1$ could be used without cycle consistency. However, our experiments in Section 4.4.2 suggest that larger time-steps lead to significantly worse accuracy if used without cycle consistency. Optimizing $\mathcal{M}$ to satisfy the cycle consistency (CC) constraint in time, as we will show in our experiments in Sections 4.2 and 4.3, is effective and is able to generate arbitrarily many intermediate frames that are realistic and temporally smooth. It also produces results that are competitive with supervised approaches, including the same model $\mathcal{M}$, but trained with supervision. In this work, we also propose techniques that can make unsupervised fine-tuning processes robust. It is quite common to have access to out-of-domain training videos in abundance or access to already pre-trained interpolation models. On the other hand, target domain videos are often limited in quantity, and most critically, lack ground truth intermediate frames. We aim to optimize $\mathcal{M}$ on target videos to jointly satisfy cycle consistency as defined in equation 3 and also learn to approximate a known pre-trained interpolation model, denoted as $\mathcal{F}$. Mathematically, our modified objective is given as
$$\arg\min_{\theta(\mathcal{M})} \Big[ \|\hat{I}_1 - I_1\|_1 + \|\hat{I}_t - \mathcal{F}(I_0, I_1, t)\|_1 + \|\hat{I}_{t+1} - \mathcal{F}(I_1, I_2, t)\|_1 \Big], \qquad (4)$$
where $\hat{I}_1$ is the cycle reconstructed frame given by equation 2, $\hat{I}_t$ and $\hat{I}_{t+1}$ are given by equation 1, and $\theta(\mathcal{M})$ are the parameters of $\mathcal{M}$ that our optimization processes update. The added objective to approximate $\mathcal{F}$ helps regularize $\mathcal{M}$ to generate realistic hidden intermediate frames $\hat{I}_t$ and $\hat{I}_{t+1}$ by constraining them to resemble predictions of a known frame interpolation model, $\mathcal{F}$. As optimization progresses and $\mathcal{M}$ learns to pick up interpolation concepts, one can limit the contribution of the regularizing "pseudo" supervised (PS) loss and let optimizations be guided more by the cycle consistency. Such a surrogate loss term, derived from estimated intermediate frames, can make our training processes converge faster or make our optimization processes robust by exposing them to many variations of $\mathcal{F}$. In this work, for the sake of simplicity, we chose $\mathcal{F}$ to be the same as our $\mathcal{M}$, but pre-trained with supervision on a disjoint dataset that has ground-truth high frame rate video, and denote it as $\mathcal{M}_{pre}$. Our final objective can be given by
$$\arg\min_{\theta(\mathcal{M})} \Big[ \lambda_{rc} \|\hat{I}_1 - I_1\|_1 + \lambda_{rp} \big( \|\hat{I}_t - \mathcal{M}_{pre}(I_0, I_1, t)\|_1 + \|\hat{I}_{t+1} - \mathcal{M}_{pre}(I_1, I_2, t)\|_1 \big) \Big], \qquad (5)$$
where $\lambda_{rc}$ and $\lambda_{rp}$ are the weights of the CC and PS losses. As we will show in the experiments, optimizing equation 5 by relying only on the PS loss, without cycle consistency, will teach $\mathcal{M}$ to perform at best as well as $\mathcal{M}_{pre}$, i.e., the model used in the PS loss. However, as we show in Section 4.4.1, by weighting the cycle consistency and PS losses appropriately, we achieve frame interpolation results that are superior to those obtained by learning with either the CC or PS loss alone. Finally, we implement our $\mathcal{M}$ using the Super SloMo video interpolation model [6]. Super SloMo is a state-of-the-art flow-based CNN for video interpolation, capable of synthesizing an arbitrary number of high quality and temporally stable intermediate frames. Our technique is not limited to this particular interpolation model, but could be adopted with others as well. In the subsequent subsections we provide a short summary of the Super SloMo model, our loss functions, and techniques we employed to make our unsupervised training processes stable.
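The joint objective of equation 5 can be sketched the same way; here `pretrained` plays the role of the frozen $\mathcal{M}_{pre}$, and again the L1 distances and the model signature are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def cc_ps_loss(model, pretrained, i0, i1, i2, t, lam_rc=0.8, lam_rp=0.8):
    """Joint cycle consistency (CC) + pseudo supervised (PS) loss (eq. 5)."""
    i_t = model(i0, i1, t)
    i_t1 = model(i1, i2, t)
    i1_hat = model(i_t, i_t1, 1 - t)
    l_rc = F.l1_loss(i1_hat, i1)          # cycle consistency term
    with torch.no_grad():                  # M_pre is frozen; only M is updated
        p_t = pretrained(i0, i1, t)
        p_t1 = pretrained(i1, i2, t)
    l_rp = F.l1_loss(i_t, p_t) + F.l1_loss(i_t1, p_t1)  # pseudo supervised term
    return lam_rc * l_rc + lam_rp * l_rp
```

Setting `lam_rp` to 0 recovers pure CC training, while very large values make the student simply mimic $\mathcal{M}_{pre}$, matching the ablation trend discussed in Section 4.4.1.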
Video Interpolation Model To generate one or more intermediate frames $\hat{I}_t$ from a pair of input frames $(I_0, I_1)$, the Super SloMo model first estimates approximate bi-directional optical flow from any arbitrary time $t$ to 0, $F_{t\to 0}$, and from $t$ to 1, $F_{t\to 1}$. Then, it generates a frame by linearly blending the input frames after they are warped by the respective estimated optical flows, as
$$\hat{I}_t = \alpha \odot \mathcal{T}(I_0, F_{t\to 0}) + (1-\alpha) \odot \mathcal{T}(I_1, F_{t\to 1}), \qquad (6)$$
where $\mathcal{T}$ is an operation that bilinearly samples the input frames using the optical flows, and $\alpha$ weighs the contribution of each term. The blending weight $\alpha$ models both the global property of temporal consistency as well as local, or pixel-wise, occlusion and dis-occlusion reasoning. For instance, to maintain temporal consistency, $I_0$ must contribute more to $\hat{I}_t$ when $t$ is close to 0. Similarly, $I_1$ contributes more to $\hat{I}_t$ when $t$ is close to 1. To cleanly blend the two images, an important property of video frame interpolation is exploited, i.e. not all pixels at time $t$ are visible in both input frames. Equation 6 can thus be refined by decomposing $\alpha$ to model both temporal consistency and occlusion or de-occlusion, as
$$\hat{I}_t = \frac{1}{Z} \Big[ (1-t)\, V_{t\leftarrow 0} \odot \mathcal{T}(I_0, F_{t\to 0}) + t\, V_{t\leftarrow 1} \odot \mathcal{T}(I_1, F_{t\to 1}) \Big], \qquad (7)$$
where $V_{t\leftarrow 0}$ and $V_{t\leftarrow 1}$ are visibility maps, and $Z = (1-t)V_{t\leftarrow 0} + tV_{t\leftarrow 1}$ is a normalisation factor. For a pixel $p$, $V_{t\leftarrow 0}(p) \in [0, 1]$ denotes the visibility of $p$ at time $t$ (0 means $p$ is fully occluded or invisible at $t$). The remaining challenge is estimating the intermediate bi-directional optical flows $(F_{t\to 0}, F_{t\to 1})$ and the corresponding visibility maps $(V_{t\leftarrow 0}, V_{t\leftarrow 1})$. For more information, we refer the reader to [6]. Training and Loss Functions We train $\mathcal{M}$ to generate arbitrarily many intermediate frames $\{\hat{I}_{t_i}\}_{i=1}^{N}$ without using the corresponding ground-truth intermediate frames $\{I_{t_i}\}_{i=1}^{N}$, with $N$ and $t_i \in (0, 1)$ being frame count and time, respectively. Specifically, as described in Section 3, we optimize $\mathcal{M}$ to (a) minimize the errors between the cycle reconstructed frame $\hat{I}_1$ and $I_1$ and (b) minimize the errors between the intermediately predicted frames $\hat{I}_t$ and $\hat{I}_{t+1}$ and the corresponding estimated or pseudo ground-truth frames $\mathcal{M}_{pre}(I_0, I_1, t)$ and $\mathcal{M}_{pre}(I_1, I_2, t)$. Note that, during optimization, a cycle reconstructed frame $\hat{I}_1$ can be obtained via arbitrarily many intermediately generated frames $\{\hat{I}_{t_i}, \hat{I}_{t_i+1}\}_{i=1}^{N}$. Thus, many reconstruction errors can be computed from a single triplet of training frames $\{I_0, I_1, I_2\}$. However, we found doing so makes optimizations unstable and often unable to converge to acceptable solutions. Instead, we found that establishing very few reconstruction errors per triplet makes our training stable and generates realistic intermediate frames. In our experiments, we calculate one reconstruction error per triplet, at a random time $t_i \in (0, 1)$. Our training loss function is given by
$$\mathcal{L} = \lambda_{rc}\mathcal{L}_{rc} + \lambda_{rp}\mathcal{L}_{rp} + \lambda_{p}\mathcal{L}_{p} + \lambda_{w}\mathcal{L}_{w} + \lambda_{s}\mathcal{L}_{s}, \qquad (8)$$
where $\mathcal{L}_{rc}$, defined as
$$\mathcal{L}_{rc} = \|\hat{I}_1 - I_1\|_1, \qquad (9)$$
models how good the cycle reconstructed frame is, and $\mathcal{L}_{rp}$, defined as
$$\mathcal{L}_{rp} = \|\hat{I}_t - \mathcal{M}_{pre}(I_0, I_1, t)\|_1 + \|\hat{I}_{t+1} - \mathcal{M}_{pre}(I_1, I_2, t)\|_1, \qquad (10)$$
models how close the hidden intermediate frames are to our pseudo intermediate frames. $\mathcal{L}_p$ models a perceptual loss, defined as the $L_2$ norm on the high-level features of the VGG-16 model, pre-trained on ImageNet, and is given as
$$\mathcal{L}_p = \|\Psi(\hat{I}_1) - \Psi(I_1)\|_2, \qquad (11)$$
with $\Psi$ representing the conv4_3 feature of the VGG-16 model. Our third loss, $\mathcal{L}_w$, is a warping loss that makes the optical flow predictions realistic, and is given by
$$\mathcal{L}_w = \|\hat{I}_t - \mathcal{T}(\hat{I}_{t+1}, F_{t\to t+1})\|_1 + \|\hat{I}_{t+1} - \mathcal{T}(\hat{I}_t, F_{t+1\to t})\|_1. \qquad (12)$$
In a similar way as the Super SloMo framework, we also enforce a smoothness constraint to encourage neighbouring pixels to have similar optical flow values, given as
$$\mathcal{L}_s = \|\nabla F_{t\to t+1}\|_1 + \|\nabla F_{t+1\to t}\|_1, \qquad (13)$$
where $F_{t\to t+1}$ and $F_{t+1\to t}$ are the forward and backward optical flows between the intermediately predicted $\hat{I}_t$ and $\hat{I}_{t+1}$ frames. Finally, we linearly combine our losses using experimentally selected weights: $\lambda_{rc} = 0.8$, $\lambda_{rp} = 0.8$, $\lambda_p = 0.05$, $\lambda_w = 0.4$, and $\lambda_s = 1$; see Section 4.4.1 for details on weight selection.
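To make the blending step of equation 7 tangible, here is a hedged PyTorch sketch of the warping operator $\mathcal{T}$ and the visibility-weighted blending; the (dx, dy) flow channel layout and the small epsilon in the normalisation are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Bilinearly sample `img` with optical flow (the operator T).

    img: (N, C, H, W); flow: (N, 2, H, W) with (dx, dy) channels (assumed).
    """
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().to(img.device)   # (2, H, W), x then y
    coords = base.unsqueeze(0) + flow                     # displaced sample points
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0               # normalise to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                  # (N, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def blend(i0, i1, f_t0, f_t1, v_t0, v_t1, t, eps=1e-8):
    """Visibility-weighted blending of the warped inputs (eq. 7)."""
    w0 = (1.0 - t) * v_t0
    w1 = t * v_t1
    z = w0 + w1 + eps                                     # normalisation factor Z
    return (w0 * backward_warp(i0, f_t0) + w1 * backward_warp(i1, f_t1)) / z
```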
Implementation Details We use the Adam solver [7] for optimization with $\beta_1 = 0.9$, $\beta_2 = 0.999$, and no weight decay. We train our models for a total of 500 epochs, with a total batch size of 32 on 16 V100 GPUs, using distributed training over two nodes. The initial learning rate is set to $10^{-4}$, and is then scaled down by 10 after 250 epochs, and again after 450 epochs. Datasets and Metrics We used Adobe-240fps (76.7K frames) and YouTube-240fps [6] (296.4K frames) for supervised training to establish baselines. For unsupervised training, we considered low FPS Battlefield1-30fps videos [17] (320K frames), and Adobe-30fps (9.5K frames), obtained by temporally sub-sampling the Adobe-240fps videos, keeping only every 8th frame. We chose game frames because they contain a large range of motion that could make learning processes robust. We used the UCF101 [19] dataset for evaluation. To study our unsupervised fine-tuning techniques in bridging domain gaps, we considered two particularly distinct, high FPS and high resolution, target video datasets: Slowflow-240fps and Sintel-1008fps [5]. Slowflow is captured from real life using professional high speed cameras, whereas Sintel is rendered animation content. We split the Slowflow dataset into disjoint low FPS train (3.4K frames) and high FPS test (414 frames) subsets, see Table 1. We create the test set by selecting nine frames in each of the 46 clips. We then create our low FPS train subset by temporally sub-sampling the remaining frames from 240-fps to 30-fps. During evaluation, our models take as input the first and ninth frame in each test clip and interpolate the seven intermediate frames. We follow a similar procedure for Sintel-1008fps [5], but interpolate 41 intermediate frames, i.e., a conversion of frame rate from 24- to 1008-fps. To quantitatively evaluate interpolations we considered the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Metric (SSIM), and the Interpolation Error (IE) [1], which is calculated as the root mean-squared error between generated and ground truth frames. High PSNR and SSIM scores indicate better quality, whereas for the IE score it is the opposite.
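For reference, the evaluation metrics described above are straightforward to compute; a minimal sketch, assuming 8-bit frames held as numpy arrays (SSIM, which needs windowed statistics, is left to a library such as scikit-image):

```python
import numpy as np

def psnr(pred, gt, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def interpolation_error(pred, gt):
    """IE: root mean-squared error between generated and ground-truth frames."""
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))
```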
Large-Scale Unsupervised Training In this section, we consider the scenario where we do not have any high frame rate videos to train a base model, but we have abundant low frame rate videos. We test our models on the UCF101 dataset; for every triplet of frames, the first and third ones are used as input to predict the second frame. Results are presented in Table 2. Our unsupervised technique trained on Adobe-30fps performs competitively with results obtained with supervision on Adobe-240fps, achieving PSNR of 34.47 and 34.63, respectively. Compared to the supervised training, our unsupervised training uses 1/8th of the frame count, and performs comparably to techniques trained with supervision. This shows the effectiveness of the cycle consistency constraint alone in training models, from random initialization, for video frame interpolation. We further study the impact of frame count in unsupervised training. For this study, we used the low FPS Battlefield-1 sequences. The higher the frame count of low FPS frames, the better our unsupervised model performs when evaluated on UCF101. Using Battlefield-30fps, at a frame count four times larger than Adobe-240fps, we achieve results on par with supervised techniques, achieving IE of 5.38 and 5.48, respectively. Table 2 also presents results of trivial copy, which is the simple case of using inputs as predictions. Compared to cycle consistency, trivial prediction leads to significantly worse interpolation, further indicating that our approach does in fact allow us to synthesize intermediate frames from unpaired raw video frames. Unsupervised Fine-tuning for Domain Transfer One particularly common situation in video frame interpolation is that we have access to pre-trained models or access to high FPS out-of-domain videos in abundance, but lack ground truth frames in target videos, which are also commonly limited in quantity. Our unsupervised techniques allow us to fine-tune pre-trained models directly on target videos without additional data, and demonstrate a significant gain in accuracy in upscaling frame rates of target videos. First, we consider the scenario where train and test videos are collected with different camera set-ups. We assume we have access to high FPS videos collected by hand-held cameras, i.e., the Adobe-240fps, YouTube-240fps and UCF101 datasets, and consider the Slowflow dataset as our target, a particularly high resolution and high FPS dataset captured by high speed professional cameras in real life. Our baseline is a frame interpolation model trained with supervision. Specifically, we consider Super SloMo and Deep Voxel Flow (DVF) [9]. DVF is another widely-used single-frame interpolation method. We apply our unsupervised fine-tuning directly on the low FPS train split of Slowflow, and evaluate on its test split. (Table 4. Multi-frame interpolation results on Sintel for frame rate conversion from 24 to 1008 FPS, and domain transfer experiments using baselines obtained by pre-training with supervision on Adobe- or Adobe+YouTube-240fps.) Adobe→Slowflow: Our unsupervised training with cycle consistency alone performs quite closely to the baseline (Super SloMo pre-trained with supervision), achieving PSNR of 32.35 and 32.84, respectively. While a total of 76.7K Adobe-240fps frames are used in supervised pre-training, our unsupervised training is performed with only 3K frames of Slowflow, which indicates the efficiency and robustness of our proposed unsupervised training technique. Furthermore, fine-tuning the pre-trained model by jointly optimizing to satisfy cycle consistency and to minimize our proposed pseudo supervised loss (CC + PS), we outperform the pre-trained baseline by a large margin, with PSNR of 33.05 vs. 32.84. The PS loss relies on the same pre-trained baseline model, as discussed in Section 3, and regularizes our training process. If used alone, i.e., without cycle consistency, it performs at best as well as the baseline pre-trained model; see Section 4.4.1 for more details. (Figure 4. Visual results of a sample from the Slowflow dataset. The baseline supervised model is trained with the Adobe dataset and the proposed unsupervised model is fine-tuned with Slowflow. The person's back is squeezed in the supervised result (middle) but preserved in the unsupervised one (right). On the bottom row, although both techniques blur the regions surrounding the helmet, the shape of the helmet is preserved by our proposed technique.) Adobe+YouTube→Slowflow: Here, our baseline model is pre-trained on a larger dataset, Adobe+YouTube, with a total of 372.7K frames, and achieves better accuracy than pre-training on Adobe alone, achieving PSNR of 33.13 vs. 32.84, when directly applied on Slowflow test videos. Even with this improved pre-trained baseline, we observe a consistent benefit with our proposed unsupervised fine-tuning, improving PSNR from 33.13 to 33.20.
Another interesting observation from this study is that it takes an extra 296.4K frames from YouTube-240fps to improve PSNR from 32.84 to 33.13 on Slowflow via pre-training with supervision. We achieve a comparable improvement of PSNR from 32.84 to 33.05 by simply fine-tuning on the target low FPS frames in a completely unsupervised way. Sample interpolation results from this study can be found in Figure 1, where improvements on the bicycle tire and the shoe are highlighted, and in Figure 4, where improvements on the person's back and helmet regions are highlighted. (Table 5. Comparison of supervised training at quarter resolution (baseline) and unsupervised fine-tuning at full resolution (proposed) for frame rate upscaling from 30 to 240 FPS (Slowflow) and 24 to 1008 FPS (Sintel).) UCF101→Slowflow: Table 6 and Figure 5 present results of fine-tuning DVF on Slowflow. We use an off-the-shelf implementation of DVF, pre-trained on UCF101. Our unsupervised techniques improve the PSNR from 24.64dB to 28.38dB, demonstrating that our method generalizes well to different interpolation techniques, and is not limited to Super SloMo. (Table 6. UCF101→Slowflow using DVF [9].) In our second domain transfer setting, we consider the scenario where train and test datasets share similarities in content and style but are in different resolutions. This is a very practical scenario given the scarcity of high-frame-rate, high-resolution videos. Therefore, it is highly desirable to learn from low resolution videos and be able to interpolate at higher resolutions. We establish a low resolution baseline by training with supervision on the 240-fps Slowflow-train dataset, after down-sampling its frames by 4 in each dimension. Our test video is the Slowflow-test split at its original resolution. We repeat a similar setting for Sintel. (Figure 6. Visual results of a sample from the Slowflow dataset. The baseline supervised model is trained with the Slowflow dataset at quarter resolution and the proposed unsupervised model is fine-tuned with full resolution Slowflow. The tree in the background is deformed in the supervised result (middle), while it is predicted well in the proposed one (right). On the bottom row, the supervised result shows blurriness in the grass, while it is crisper in the proposed one.) For Sintel, our unsupervised fine-tuning at full resolution improves PSNR from 29.14 to 29.71. Visual samples from this experiment can be found in Figure 6. Ablation Studies We conduct ablation studies to analyze various design choices of the proposed method. Figure 7 presents interpolation results in PSNR for our models trained with a range of PS weight ($\lambda_{rp}$) values. We fix CC's weight $\lambda_{rc}$ to 0.8, and vary $\lambda_{rp}$ in small increments from 0 to 64. When $\lambda_{rp} = 0$, optimization is guided entirely by CC, and it achieves PSNR of 32.35 for unsupervised Adobe+YouTube→Slowflow domain transfer. Interpolation accuracy gradually increases, and plateaus approximately after 0.8. Based on this, we select $\lambda_{rp} = 0.8$, and fix its value for all our experiments. At large values of $\lambda_{rp}$, the optimization is mostly guided by the PS loss, and as such, trained models perform very similarly to the pre-trained model that the PS loss depends on. Figure 7 shows this trend. Optimizing with optimally combined CC and PS losses, on the other hand, leads to results that are superior to those obtained using either loss alone. (Figure 7. Interpolation accuracy in PSNR versus $\lambda_{rp}$ (PS weight) used in our proposed joint CC and PS optimization technique, when applied for Adobe+YouTube→Slowflow unsupervised domain transfer.)
Large Time-step Supervision We study the effect of using loss terms, such as $\|\mathcal{M}(I_0, I_2, t{=}0.5) - I_1\|$ or its variations, defined over a longer time step. Table 7 presents Adobe+YouTube→Slowflow fine-tuning with cycle consistency, with the loss derived from two-step interpolation alone, or with both together. Optimizing using losses derived from long-step interpolation results in worse accuracy than optimizing with cycle consistency. When used with cycle consistency, we also did not find it to lead to notable improvement. We attribute this to the model's capacity being spent on solving the harder problem of interpolating large steps, providing little benefit to the task of synthesizing intermediate frames between consecutive frames. (Table 7. Comparison of cycle consistency with objectives derived from longer time-step interpolation.) Conclusions We have presented unsupervised learning techniques to synthesize high frame rate videos directly from low frame rate videos by teaching models to satisfy cycle consistency in time. Models trained with our unsupervised techniques are able to synthesize arbitrarily many, high quality and temporally smooth intermediate frames that compete with supervised approaches. We further apply our techniques to reduce domain gaps in video interpolation by fine-tuning pre-trained models on target videos using a pseudo supervised loss term and demonstrate a significant gain in accuracy. Our work shows the potential of learning to interpolate high frame rate videos using only low frame rate videos and opens new avenues to leverage large amounts of low frame rate videos in unsupervised training. For all models, interpolation accuracy decreases as time-points move away from t = 0 or t = 1. Compared to the baseline, our CC-based fine-tuning performs better at the end points (close to t = 0 or t = 1), and worse at midway points. On the other hand, our CC+PS-based unsupervised fine-tuning achieves the best of both CC and Baseline, performing better than both at all time points. (Table 8. Mean and standard deviation of PSNR, SSIM, and IE for domain adaptation of upscaling frame rate from 30- to 240-fps for Adobe→ or Adobe+YouTube→Slowflow. CC refers to cycle consistency, and PS to the pseudo supervised loss.)
2019-06-13T21:04:10.000Z
2019-06-13T00:00:00.000
{ "year": 2019, "sha1": "d0cacb90967827cb7bf1876dc49e6bd8881e4d81", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1906.05928", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d0cacb90967827cb7bf1876dc49e6bd8881e4d81", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
9835763
pes2o/s2orc
v3-fos-license
Modified Balloon-Occluded Retrograde Transvenous Obliteration of Gastric Varices Transjugular intrahepatic portosystemic shunt (TIPS) has been the main alternative for treating bleeding related to esophageal or gastric varices in the context of portal hypertension. Recently, as a less-invasive alternative, balloon-occluded retrograde transvenous obliteration (BRTO) of gastric varices has been introduced to treat bleeding gastric varices, which are less amenable to endoscopic sclerotherapy and banding. We describe a successful case of modified BRTO in the acute setting of gastric variceal bleeding. CASE PRESENTATION A 61-year-old woman presented to the emergency department at Adena Regional Medical Center in Chillicothe, Ohio, with upper gastrointestinal bleeding. She had established liver cirrhosis with portal hypertension. A CT scan obtained earlier showed gastric varices and a well-developed splenorenal shunt (Figures 1 and 2). Endoscopy was performed, identifying mostly gastric varices with active bleeding. An attempt was made to place a clip across a bleeding varix, which achieved only temporary reprieve. The patient was determined to be a good candidate for BRTO, which was performed in the angiographic suite via a femoral vein approach. After establishing access into the femoral vein with a 5-F (1.67-mm) Cobra C2 catheter, the left renal vein was selectively catheterized, and renal venography was performed. A 7-F (2.33-mm) vascular sheath was introduced and, using a 5-F (1.67-mm) Berenstein catheter and a stiff hydrophilic guidewire, the splenorenal shunt was catheterized, and venography was performed (Figure 3). The catheter was advanced further into the portion of the shunt closest to the varices, and a Berenstein 8.5/11.5-mm occlusion balloon was introduced over an Amplatz wire and inflated with a 0.55-mL mixture of saline and contrast at 50% strength. A 10-mL mixture of contrast material, gelfoam and 1% sodium tetradecyl sulfate was processed as a slurry in a 10-mL syringe, injected through the lumen of the occlusion balloon and left in place for 15 minutes (Figure 4). Subsequently, coil embolization was performed through the same lumen while the balloon was still inflated. Two 8-mm × 40-cm (400-mm) and three 8-mm × 20-cm (200-mm) Interlock™-35 Coils were deployed (Figures 5 and 6) into the varices and splenorenal shunt to trap the sclerosing agent and prevent the possibility of migration into the systemic circulation and potentially the pulmonary arteries. The patient remained stable, and the access was removed safely. The patient was discharged from the hospital a day later.
DISCUSSION This case exemplifies the value of a minimally invasive procedure such as BRTO, which takes advantage of the patient's anatomy in order to access and sclerose bleeding gastric varices without having to perform a TIPS, with all its potential risks and complications.
Figure 1. A CT scan of the abdomen with contrast showed gastric varices but no esophageal varices (white arrow).
Figure 3. Catheter venography showed the splenorenal shunt (white arrow) with flow from the gastric varices to the renal vein.
Figure 2. A CT scan of the abdomen with contrast showed the spontaneous splenorenal shunt (yellow arrow).
Figure 5. Following injection of the sclerosing agent, coils were deployed just distal to the balloon (white arrow).
Figure 6. Final image showed the Interlock™ Coils (white arrows) blocking the outflow from the gastric varices after sclerotherapy. Contrast was still noted in the varices, indicating flow stasis.
Figure 4. Catheter venography showed the inflated occlusion balloon (straight white arrow) and the stasis in the gastric varices distal to the balloon. Note the metallic clip placed during endoscopy (curved arrow).
2018-04-03T00:11:01.691Z
2016-04-28T00:00:00.000
{ "year": 2016, "sha1": "eb2036936d8ef781aa3761db5d5a42dcddc95646", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4329/wjr.v8.i4.390", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "336c0eb7080cfba35b5091e515db7a90cff4f4a8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7228270
pes2o/s2orc
v3-fos-license
Cultured bovine granulosa cells rapidly lose important features of their identity and functionality but partially recover under long-term culture conditions Cell culture models are essential for the detailed study of molecular processes. We analyze the dynamics of changes in a culture model of bovine granulosa cells. The cells were cultured for up to 8 days and analyzed for steroid production and gene expression. According to the expression of the marker genes CDH1, CDH2 and VIM, the cells maintained their mesenchymal character throughout the time of culture. In contrast, the levels of functionally important transcripts and of estradiol and progesterone production were rapidly down-regulated but showed a substantial up-regulation from day 4. FOXL2, a marker for granulosa cell identity, was also rapidly down-regulated after plating but completely recovered towards the end of culture. In contrast, expression of the Sertoli cell marker SOX9 and the lesion/inflammation marker PTGS2 increased during the first 2 days after plating but gradually decreased later on. We conclude that only long-term culture conditions (>4 days) allow the cells to recover from plating stress and to re-acquire characteristic granulosa cell features. Electronic supplementary material The online version of this article (doi:10.1007/s00441-017-2571-6) contains supplementary material, which is available to authorized users. Introduction Granulosa cells are essentially involved in the production of various endocrine- and paracrine-acting hormones such as inhibin, follistatin and estradiol (E2). In order to analyze molecular processes within granulosa cells under diverse pathophysiological conditions, the establishment of appropriate culture models is required. During the last two decades, several different culture systems have been described and used for detailed molecular studies. In the bovine, Gutierrez et al. (1997) established a serum-free follicle-stimulating hormone (FSH)-responsive E2-producing granulosa cell culture system. Moreover, later studies clearly showed that considerable E2 secretion and the expression of corresponding key transcripts by these cells could only be observed under serum-free culture conditions plus FSH and insulin-like growth factor-1 (IGF-1) stimulation (Hamel et al. 2005; Silva and Price 2000). In another study, serum supplementation was demonstrated to induce proliferation but to reduce or abolish steroid production and the expression of functionally important genes (Baufeld and Vanselow 2013). To validate a culture model, the functionality and identity of the cells need to be assessed based on physiological and molecular characteristics. Granulosa cells are of mesenchymal origin and express CDH2 (cadherin 2) and VIM (vimentin), whereas oocytes express the epithelial cell marker CDH1 (cadherin 1; Mora et al. 2012). During follicular growth and differentiation, the proper regulation of genes such as CYP19A1 encoding cytochrome P450, family 19, subfamily A, polypeptide 1, CCND2 encoding cyclin-D2, FSHR encoding follicle-stimulating hormone receptor and LHCGR encoding luteinizing hormone/choriogonadotropin receptor, and of other genes that are important for granulosa cell function (Gonzalez-Robayna et al. 2000; Park et al. 2005; Law et al. 2013), is essential. Expression of CYP19A1, the key gene of estradiol production,
is regulated by various factors such as FOXL2 encoding forkhead box protein L2 and NR5A2 encoding nuclear receptor subfamily 5, group A, member 2, together with FSH signaling (Sahmi et al. 2014). FOXL2 has emerged as a key factor of ovarian biology. Granulosa cells maintain their identity by expressing FOXL2 and repressing the Sertoli cell marker SOX9 encoding SRY-box 9 (Georges et al. 2014; Uhlenhaut et al. 2009). Knockdown of FOXL2 leads to the loss of granulosa cell identity and the gain of Sertoli cell properties (Uhlenhaut et al. 2009; Ottolenghi et al. 2007). Moreover, FOXL2 regulates the expression of other functionally important genes such as ESR2 encoding estrogen receptor 2 and FST encoding follistatin. Until now, our knowledge about the dynamics of the changes that are induced by the dissociation, plating and culture of granulosa cells has been limited. This knowledge is, however, an important prerequisite for appropriately designing in vitro experiments with cultured granulosa cells. Therefore, during the present study, we analyzed the progressive changes in the physiological and molecular characteristics of an estrogen-active granulosa cell culture model. The production of the steroids E2 and P4 (progesterone) and the expression of marker genes for granulosa cell functionality and identity were analyzed over 8 days and compared with those of freshly isolated cells. Culture of granulosa cells The chemicals and antibodies used are shown in the Electronic Supplementary Material (Materials and Methods S1). RNA isolation, cDNA synthesis and real-time reverse transcription polymerase chain reaction For RNA isolation, attached cells were washed once with phosphate-buffered saline (PBS) before lysis. Total RNA was isolated with the NucleoSpin RNA II Kit (Macherey-Nagel, Düren, Germany) and quantified with a NanoDrop1000 Spectrophotometer (Thermo Scientific, Bonn, Germany). The cDNA was prepared from 200 ng RNA using the SensiFAST cDNA Synthesis Kit (Bioline, Luckenwalde, Germany). Transcript abundance levels were measured by real-time reverse transcription polymerase chain reaction (qPCR) and calculated relative to TBP (TATA-binding protein) housekeeping transcripts (Baddela et al. 2014) for normalization. qPCR was performed with SensiFAST SYBR No-ROX (Bioline) and gene-specific primers (see Electronic Supplementary Material, Table S1) in a LightCycler 96 instrument (Roche, Mannheim, Germany) as described previously (Baddela et al. 2014; Yenuganti et al. 2016). Normalized qPCR values were then expressed as fold changes relative to the respective transcript abundance found in freshly isolated cells.
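The fold-change calculation just described can be sketched as follows; the paper reports normalization to TBP and fold changes relative to freshly isolated cells, and the classic 2^-ΔΔCt method is assumed here purely for illustration, since the exact efficiency correction used is not stated.

```python
def fold_change(ct_target, ct_tbp, ct_target_ref, ct_tbp_ref):
    """Relative qPCR quantification (2^-ddCt method, assumed).

    ct_target / ct_tbp: Ct values of the gene of interest and of the TBP
    housekeeping gene in a cultured sample; *_ref: the same values in the
    freshly isolated (reference) cells.
    """
    d_ct_sample = ct_target - ct_tbp         # normalise to housekeeping gene
    d_ct_ref = ct_target_ref - ct_tbp_ref    # reference: freshly isolated cells
    return 2.0 ** -(d_ct_sample - d_ct_ref)  # fold change vs. reference
```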
Western blotting After being washed twice with 500 μl PBS, the cells were scraped off from the culture wells in 500 μl PBS, subsequently centrifuged at 135 relative centrifugal force (rcf) for 2 min, washed with PBS, collected in 500 μl PBS and centrifuged again at 135 rcf for 2 min. Cell pellets were re-suspended in 50 μl RIPA buffer and sonicated (LABSONIC M, Sartorius, Göttingen, Germany) at 0.5 cycles and an amplitude of 30% for 2 × 20 pulses with a break of a few seconds. The suspension was centrifuged at 18,400 rcf for 2 min and the protein concentration of the supernatants was measured with a Micro BCA Protein Assay Kit. Proteins were separated on 12.5% polyacrylamide gels (0.75 mm) by electrophoresis at 20 mA (stacking gel) and 30 mA (separating gel). The gels were electro-transferred to an Immobilon-P membrane for 60 min at 1 mA/cm² in a Pierce fast semi-dry blotter apparatus (Dreieich, Germany). The membranes were then washed with TBST (Tris-buffered saline with Tween 20) containing 0.1% Tween, blocked in a SNAP protein detection system (Millipore) with 30 ml blocking solution (0.5% milk powder in TBST), washed three times with 30 ml TBST and incubated with FOXL2 (1.5 μg/ml) and SOX9 (1:1000 dilution) antibodies in TBST with 5% BSA at 4 °C overnight. The membranes were then washed three times with 30 ml TBST, each for 10 min, incubated with the secondary antibody in blocking solution (1:3000) for 90 min at room temperature and washed with 30 ml TBST (three times, each for 10 min). To detect the proteins, 5 ml Amersham ECL Prime Western blotting detection reagent was added to the membranes, which were then incubated for 5 min and exposed to X-ray films. The films were developed for 1 min, subsequently washed for 30 s in water and fixed for 2 min. After a drying step, the images were captured with a Raytest system (Isotopenmeßgeräte, Staubenhardt, Germany). Quantification of E2 and P4 For the determination of steroids, conditioned media were removed from individual wells. Concentrations of E2 and P4 in conditioned culture media were quantified as described previously (Schneider and Brüssow 2006; Baufeld and Vanselow 2013; Yenuganti et al. 2016). Details of the assays are shown in the Electronic Supplementary Material (Materials and Methods S1). E2 and P4 values were expressed relative to the amounts of total RNA isolated from individual samples as a surrogate for the respective cell numbers. Statistical analysis All experiments were carried out three times independently and all data were analyzed by one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test using GraphPad Prism 5.0 software.
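The statistical analysis described above was run in GraphPad Prism; the following is an equivalent open-source sketch with scipy and statsmodels, where `values` and `days` are hypothetical arrays of measurements and their culture-day labels, not data from the study.

```python
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def analyze(values, days, alpha=0.05):
    """One-way ANOVA across culture days, followed by Tukey's HSD test."""
    groups = [[v for v, d in zip(values, days) if d == day]
              for day in sorted(set(days))]
    f_stat, p_value = stats.f_oneway(*groups)             # overall group effect
    tukey = pairwise_tukeyhsd(values, days, alpha=alpha)  # pairwise comparisons
    return f_stat, p_value, tukey
```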
However, in contrast to FSHR, the abundance of LHCGR transcripts reached and even exceeded the level of that in freshly isolated cells after 6 days (Fig. 1d, e). As a paradigm for steroid hormone production, we analyzed the expression of steroidogenic acute regulatory protein (STAR) and of CYP19A1 (Fig. 1f, g). In addition, we analysed the expression of the nuclear receptor NR5A2, which regulates several functionally important genes in granulosa cells, such as CCND2, which is involved in the regulation of granulosa cell proliferation (Robker and Richards 1998), FST, which is involved in feedback regulation between brain and ovary and PTGS2, which is essential for ovulation. CYP19A1, NR5A2, CCND2 and FST were rapidly down-regulated, with CYP19A1 decreasing to nearly undetectable levels after cell plating (Fig. 1g-j). Subsequently, a continuous up-regulation of these transcripts was observed from days 2 or 4. However, in particular, CYP19A1 transcripts but also NR5A2 and FST transcripts, did not reach the original levels found in freshly isolated granulosa cells. In contrast, STAR and PTGS2 transcripts (Fig. 1f, k) showed higher expression in cultured granulosa cells. However, whereas the abundance of STAR transcripts continuously increased from day 4, the expression of PTGS2 was rapidly up-regulated after plating but subsequently declined. Estrogens and androgens exert their action on granulosa cells via various receptors, namely estrogen receptor 1 (ESR1), estrogen receptor 2 (ESR2) and androgen receptor (AR). The analysis of the respective transcripts showed an initial decrease and a continuous increase after plating (Fig. 1l-n). Expression of marker transcripts and proteins of granulosa and sertoli cell identity Because of the very low initial CYP19A1 expression and E2 production in cultured cells as compared with those in freshly isolated samples, we analyzed the mRNA and protein expression of markers of granulosa and sertoli cell identity, namely FOXL2 and SOX9, respectively. Interestingly, FOXL2 expression was reduced but that of SOX9 was increased after plating. However, whereas the expression of FOXL2 rapidly decreased and slowly recovered, thus reaching similar levels as those observed in freshly isolated granulosa cells, the levels of SOX9 were low in freshly isolated granulosa cells but increased after plating and decreased again towards the end of culture (Fig. 2). Discussion To our knowledge, the present study describes, for the first time, the dynamics of changes that are induced by the dissociation, plating and culture of granulosa cells based not only on physiological characteristics but also on numerous marker genes that are important for the identity, structure and functionality of granulosa cell. In contrast to previous studies, our presented set of data also allows a direct comparison of the in vitro data with the data derived from original in vivo material, i.e., freshly isolated cells. The present results showed that some plated granulosa cells strongly changed their expression levels of some marker genes. However, the decreased expression of the epithelial marker CDH1 but the constant or even increased expression of the mesenchymal markers VIM and CDH2, respectively, clearly suggest that aspirated granulosa cells maintain their mesenchymal character under the present culture conditions and do not shift towards an epithelial phenotype. 
In contrast, our data clearly indicate that plating severely and rapidly compromises granulosa cell functionality, in particular during the first few days in culture. The levels of gonadotropin receptor transcripts, namely FSHR and LHCGR, and in particular those of CYP19A1, immediately and strongly decreased, with those of CYP19A1 being reduced to nearly negligible levels during the first 2 days in culture. Later on, the cells continuously started to re-express these genes, but only in the case of LHCGR did the transcript levels reach and even exceed those found in freshly isolated granulosa cells. The dynamics of CYP19A1 expression are also clearly in agreement with E2 production, showing an increase from very low levels directly after plating to its highest levels at the end of culture. Moreover, the levels of P4 gradually increased with time in culture. However, the final levels were much higher than the corresponding E2 levels. This is in concordance with the remarkably increased expression of STAR, which is involved in the first step of steroid hormone biosynthesis. On the other hand, this also suggests that the cells display some properties of luteinized granulosa cells, which are characterized by a huge increase in P4 production. However, the simultaneous ability of the cells to produce E2 at physiological levels clearly contradicts the notion that the cells have passed the folliculo-luteal transition phase but instead suggests that they might be arrested at an intermediate stage even after long-term culture. In bovine granulosa cells, the activation of CYP19A1 expression has been shown to require FSH stimulation and the presence of the transcription factors FOXL2 and NR5A2 (Sahmi et al. 2014). Interestingly, the expression of FSHR, NR5A2 and FOXL2 was also rapidly down-regulated after plating, thus suggesting that this might in turn cause the down-regulation of CYP19A1 transcripts and of E2 production. In addition, the levels of CCND2 and LHCGR, which are also regulated by FSH signaling, were transiently down-regulated directly after plating. Previous reports have shown that FOXL2 is important for maintaining granulosa cell identity, whereas SOX9 is described as a Sertoli cell marker (Georges et al. 2014; Uhlenhaut et al. 2009). FOXL2 expression was low in our culture system directly after plating, with the lowest levels at day 2, and gradually increased from day 4. At the same time, the expression of SOX9 was stimulated after plating, thus reaching a maximum level at day 2, before it was subsequently down-regulated. This suggests that reduced levels of FOXL2 expression after plating allow an increased expression of SOX9 and thus might induce a transient loss of granulosa cell identity and a gain of Sertoli cell characteristics. In contrast, genes whose expression is usually stimulated by FOXL2, such as FST (Georges et al. 2014), were down-regulated. Estradiol exerts its actions via the nuclear receptors ESR1 and ESR2. In the bovine, ESR2 mRNA and protein are present in granulosa cells (Rosenfeld et al. 1999). Our data showed that ESR2 was initially down-regulated after cell plating, with a minimum at day 2, before it was continuously up-regulated towards the end of culture. ESR1 expression rapidly decreased in plated granulosa cells but, in contrast to ESR2, never reached its original levels as observed in freshly isolated cells. The temporal expression pattern of ESR2, but not that of ESR1, paralleled the expression of FOXL2. Georges et al.
(2014) showed that ESR2 is essential for estradiol-mediated signaling in granulosa cells. These data suggest that the plating of bovine granulosa cells will, at least during the first few days in culture, reduce the E2 responsiveness of these cells. Moreover, the levels of AR were rapidly down-regulated after plating but subsequently recovered up to day 8. Sen and Hammes (2010) demonstrated that mice with granulosa-cell-specific AR knockout exhibit severe ovarian defects. This adds to our conclusion that granulosa cell functionality is severely compromised during the first few days after plating. Follistatin is a glycoprotein that binds to activin and inhibits its activity (Shimasaki et al. 1989), thus acting as an inhibitor of FSH secretion from the pituitary (Robertson et al. 1987; Ueno et al. 1987). In addition to these endocrine actions, follistatin is known to be able to act as a paracrine or autocrine antagonist of activin within the bovine follicle. FST transcripts are highly up-regulated in large estrogen-active follicles (Glister et al. 2011). The immediate down-regulation of FST transcript abundance after plating and its slow recovery under long-term culture conditions again suggest that bovine granulosa cells rapidly lose important molecular properties under culture conditions but can recover these properties several days later. Interestingly, our results showed a rapid rise and subsequent decrease of PTGS2 expression, with levels near to those found in freshly isolated cells. In vivo, PTGS2 expression is considered a reliable marker of approaching ovulation (Sirois 1994). PTGS2 encodes COX-2, which is the key enzyme in prostaglandin biosynthesis. In addition to its role in ovulation, the expression of this enzyme is induced after lesions or inflammatory reactions. Accordingly, the rapid and transient up-regulation of PTGS2 after plating can be interpreted as the result of a cellular stress reaction attributable to the dissociation and plating procedure. Its down-regulation several days later might therefore, in turn, indicate that the cells have largely recovered from the initial stress. From our results, we conclude that, after being plated, granulosa cells rapidly, but transiently, lose important features of their identity and functionality, as indicated by the down-regulation of functionally important genes. Moreover, the transient up-regulation of the stress marker PTGS2 and of the Sertoli cell marker SOX9 supports this view. However, after long-term culture, those markers that are important for granulosa cell functionality gradually increase, whereas those that compromise granulosa cell function, such as the Sertoli cell and stress/inflammation markers, gradually decrease towards day 8. Accordingly, under the present culture conditions, the cells need at least 4 days to recover from the plating stress, thus suggesting that granulosa cells cultured for only 4 days or less might not be an appropriate in vitro model to analyze granulosa cell function. (Fig. 2. Transcript and protein abundance of FOXL2 and SOX9 in freshly isolated and cultured granulosa cells. Transcript and protein abundance of FOXL2 (a, b) and of the Sertoli cell marker SOX9 (c, d) as determined by RT-qPCR (a, c) and immunoblotting (b, d; representative blots). Different letters indicate significant differences of fold changes (mean fold change ± standard error; P < 0.05; one-way ANOVA from three independent experiments) relative to freshly isolated cells.)
2017-08-02T20:17:21.718Z
2017-02-02T00:00:00.000
{ "year": 2017, "sha1": "27cd91f73cf110cedb610ff55c990aa6cfe5d4da", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00441-017-2571-6.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "84464ed62756b17750244c6caa62eb086651ac50", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
6704304
pes2o/s2orc
v3-fos-license
Evaluation of Sympathetic Skin Response in Old-Polio Patients Background Many polio patients develop problems such as cold intolerance in the affected limbs, which seems to be due to sympathetic nervous system dysfunction. This study aimed to investigate whether there is a sympathetic system dysfunction in old-polio patients by means of the sympathetic skin response test. Method Forty old-polio patients and 20 healthy subjects were included in the study. Disease duration was 31.5 years (19-49 years) in the patient group. Sympathetic skin responses were obtained in all the subjects' limbs. Thirteen patients had right lower limb paresis/paralysis, 14 had left lower limb paresis/paralysis and 13 had paresis/paralysis of both lower limbs. The upper limbs were unaffected in all the patients. Results Although there was no significant difference between the sympathetic skin response latencies of the case and control groups, the amplitude values of the sympathetic skin response in the patients' lower extremities were significantly lower than those in the control group. Conclusion There was a sympathetic nervous system dysfunction in some old-polio patients. This finding might be useful in the evaluation and treatment of old-polio patients developing new problems. Introduction Poliomyelitis is a viral disease caused by polioviruses, which in acute stages involves both the motor neurons of the spinal cord and the autonomic nervous system. Involvement of this system results in different degrees of increase in blood pressure and increase or decrease of sweating in the affected segments. 1 There is little information regarding the persistence of autonomic nervous system dysfunction in old-polio patients, and researchers are not in agreement in this field. 2,3 Many patients report symptoms such as cold intolerance, which could be attributed to a sympathetic system disorder. 3 The sympathetic skin response (SSR) test is a simple, quick and non-invasive method which is often used in neurophysiology laboratories to examine the sympathetic system. This response results from the synchronous activation of eccrine sweat glands due to sympathetic efferent fiber activity. 4,5 SSR can be obtained with internal stimulation such as coughing, sneezing and deep breathing, as well as with peripheral nerve stimulation. 6,7 This research was done in order to examine the sympathetic system in old-polio patients by means of the sympathetic skin response test. Materials and Methods Forty old-polio patients (12 males and 28 females) with a mean age of 33.7 years (20-28 years) and 20 healthy subjects (10 males and 10 females) with a mean age of 34.9 years (21-47 years) were included in the study. Disease duration was 31.5 years (19-49 years) in the patient group. Patients with diabetes, peripheral neuropathy, peripheral nerve damage or any other disease that could affect the sympathetic nervous system and the SSR test, as well as all individuals who used medicines affecting the sympathetic nervous system and the SSR test, were excluded from this study. All the candidates signed the informed consent form for participation in the research, and all the information was kept confidential. Manual muscle testing was used to examine muscle power and determine the involved limbs. The muscle groups examined were the flexors and extensors of the fingers and toes, wrist, ankle, knee, elbow, shoulder and hip, the supinators and pronators of the elbow, and the abductors of the shoulder and hip.
The SSR test was recorded with an electromyograph device (Tonnies Multilinear, version 2.0 electromyograph) with disk surface electrodes, using a sweep speed of 500 ms/div, sensitivity of 500-1000 µV/div, filtering of 0.5 Hz-2 kHz and a stimulation frequency of 1 stimulus per 20-30 seconds. The study was conducted under standard conditions, in a semi-dark and quiet room at a temperature of 22 to 24 degrees centigrade. The skin temperature was maintained above 32 degrees centigrade. For recording the median nerve SSR, the active electrode was placed at the base of the second finger on the palmar surface of the hand, the reference electrode on the dorsum of the hand, the ground electrode on the wrist and the stimulating electrode on the wrist between the palmaris longus and flexor carpi radialis. To record the SSR from the tibial nerve, the active electrode was placed on the sole of the foot at a 3 cm distance from the first and second toes, the reference electrode was placed on the dorsum of the foot and the ground electrode at the base of the fifth metatarsal on the sole of the foot. The stimulating electrode was placed over the tibial nerve behind the medial malleolus. Stimulation was begun at an intensity of 5 milliamperes and increased up to 30 milliamperes, if necessary. In order to avoid habituation, a 20-30 second pause between successive stimuli was used. Wave latency was measured from the stimulus artifact to the beginning of the wave, and the wave amplitude was calculated from the negative peak to the positive peak. For statistical analysis of the data, the t-test was used and p<0.05 was considered significant. Results According to the Manual Muscle Test (MMT), none of the patients had problems in their upper extremities. Of the 80 lower limbs examined in the patient group, 53 limbs had problems (muscular weakness and atrophy) and 27 did not. Except for one affected foot with no SSR response, responses were recorded from all the other limbs. There was no significant difference between the SSR latencies recorded from the upper and lower extremities of the patients when compared to the control group. The mean SSR amplitude recorded from the patients' hands was lower than that of the control group, but the difference was not statistically significant. The mean amplitude of the SSR recorded from the patients' feet was lower than that of the control group, and the difference was statistically significant (p=0.001; Table 1 and Table 2). Discussion To the best of our knowledge, no study has so far examined the SSR electrodiagnostic test in old poliomyelitis patients; therefore, no study was available for direct comparison with the present one. Little information is available in medical texts regarding autonomic system dysfunction in old poliomyelitis patients. Examining cardiovascular autonomic responses in 20 old-polio patients, Borg and colleagues concluded that there was no obvious autonomic nervous system (sympathetic and parasympathetic) dysfunction. 2 On the other hand, Bruno et al. believed that these patients had sympathetic system dysfunction which resulted in a decline of sympathetic output to skin veins, dilation of the surface veins and consequent blood stasis, leading to heat loss and coldness of the limbs, as commonly seen in these patients. 3 In this study, the mean amplitude of the SSR recorded from the patients' feet (affected and non-affected) was lower than that of the control group. The decrease in amplitude was greater in the affected feet than in the non-affected ones.
SSR amplitude decline could indicate sympathetic system dysfunction in the examined patients, because in some studies SSR amplitude decline was regarded as a sign of sympathetic system dysfunction. 8 It is hard to judge the site and origin of the injury that caused the decline in SSR amplitude, but given that the amplitude decline was recorded from the weak feet, injury to the distal part of the SSR efferent pathway, especially in the lateral horn of the spinal cord, seems to be the reason. In conclusion, the results of this study suggest that there is sympathetic nervous system dysfunction in some old-polio patients. This finding might be useful in the evaluation and treatment of old-polio patients developing new problems. Nevertheless, more studies with larger sample sizes and patients with affected upper limbs are required so that more accurate conclusions regarding the sympathetic skin response in these patients can be reached.
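As a rough illustration of the measurement definitions used above (latency from the stimulus artifact to response onset, amplitude from the negative to the positive peak), the following Python sketch extracts both quantities from a synthetic sweep; the sampling rate, noise threshold and waveform are invented for demonstration and are not taken from the paper.

```python
import numpy as np

def ssr_measurements(sweep, fs, stim_idx, noise_k=5.0):
    """Latency (ms) and peak-to-peak amplitude from one SSR sweep.

    sweep    : 1-D array of the recorded signal (arbitrary units)
    fs       : sampling rate in Hz
    stim_idx : sample index of the stimulus artifact
    noise_k  : onset threshold as a multiple of pre-stimulus noise SD
    """
    baseline = sweep[:stim_idx]
    thresh = noise_k * baseline.std()
    post = sweep[stim_idx:]
    # Onset = first post-stimulus sample exceeding the noise threshold.
    above = np.flatnonzero(np.abs(post - baseline.mean()) > thresh)
    if above.size == 0:
        return None, 0.0          # absent response, as in one foot here
    latency_ms = above[0] / fs * 1000.0
    amplitude = post.max() - post.min()   # negative-to-positive peak
    return latency_ms, amplitude

# Synthetic example: 10 s sweep at 1 kHz, stimulus at t = 1 s.
fs = 1000
t = np.arange(0, 10, 1 / fs)
sweep = 0.02 * np.random.randn(t.size)
sweep += 1.0 * np.exp(-((t - 3.0) ** 2) / 0.5) - 0.4 * np.exp(-((t - 4.5) ** 2) / 0.8)
print(ssr_measurements(sweep, fs, stim_idx=1000))
```

Comparing such amplitudes between patients and controls then reduces to an unpaired t-test, e.g. scipy.stats.ttest_ind(patient_amplitudes, control_amplitudes).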
A randomized, controlled trial comparing the clinical outcomes of 3D versus 2D laparoscopic hysterectomy Introduction There have been a few clinical studies on the use of three-dimensional (3D) laparoscopy, with differing results. Aim To compare the surgical outcomes of 3D versus two-dimensional (2D) laparoscopic hysterectomy for benign or premalignant gynecologic diseases. Material and methods In this double-blind trial, 68 patients were randomly assigned to either the 3D or 2D groups at a 1 : 1 ratio. The only difference between the two groups was the laparoscopic vision system used. The primary outcome was operative blood loss and operative time. The other surgical outcomes including failure of the intended surgery, length of hospital stay, and operative complications were also assessed. Results The baseline characteristics did not statistically significantly differ between the groups. The mean operative blood loss was not significantly different between the 3D group (74.4 ±51.6 ml) and the 2D group (79.2 ±55.4 ml) (p = 0.743). The operative time was similar in both groups (84.5 ±20.5 min vs. 87.8 ±24.4 min, p = 0.452). Moreover, no differences were observed between the groups in other surgical outcomes. Conclusions The 3D imaging system had no surgical advantage in laparoscopic hysterectomy for benign or premalignant gynecologic diseases. However, 3D laparoscopy did not have any negative effects on surgical outcomes and did not increase the surgical risk. Introduction Laparoscopic hysterectomy is currently considered the gold standard for the treatment of benign uterine or premalignant diseases [1]. This surgery was performed for the first time by Harry Reich in 1988 [2], and in the past years many studies have shown the advantages of the laparoscopic approach, with decreased postoperative complication rates, less operative bleeding, less postoperative pain, and shorter postoperative hospital stays compared to abdominal hysterectomy [3]. However, laparoscopic surgery is more difficult to learn and requires different psychomotor skills than open surgery. In fact, surgeons have to work in a three-dimensional (3D) space but are guided by two-dimensional (2D) images [4,5]. Due to the lack of depth perception and spatial orientation in the traditional 2D imaging system, 3D laparoscopy was developed as an alternative to conventional 2D laparoscopy [6]. Although 3D technology was introduced in the early 1990s, the equipment is still not standard in hospitals because of initial reports of side effects when using 3D imaging systems, poor image resolution, and higher cost [6]. There are only a few clinical studies on the use of 3D laparoscopy, with differing results [7,8]. Aim Therefore, the present study aimed to compare the surgical outcomes of 3D versus 2D laparoscopic hysterectomy. Material and methods Patients A randomized, controlled trial that evaluated the effect of a 3D scope in laparoscopic hysterectomy was prospectively conducted between September 2019 and September 2020 at the authors' institution. The protocol was approved by the Institutional Review Board and registered with ClinicalTrials.gov (identifier: NCT04070872; date of trial registration: August 23, 2019, https://clinicaltrials.gov/ct2/show/NCT04070872). Women with indications for laparoscopic hysterectomy for benign or premalignant gynecologic conditions were invited to participate in the study.
The inclusion criteria were as follows: age between 18 and 80 years, American Society of Anesthesiologists Physical Status classification I−II, and the absence of pregnancy or lactation at the time of surgery. The exclusion criterion was any suspicious finding of malignant gynecologic disease. The patients were randomly assigned to the 3D group and the 2D group at a 1 : 1 ratio using a random permuted-block randomization algorithm via an interactive web-based response system (http://www.randomization.com). A study coordinator who was unaware of the personal and medical information of the patients in an office distant from the hospital prepared sequentially numbered, opaque, sealed envelopes containing the assigned intervention to ensure that the sequence was concealed before the study began. The study investigators called the study coordinator on the day of surgery for randomization. The study was performed in accordance with the protocol, and all patients provided written informed consent before participation. Laparoscopic devices All surgical procedures were performed by one surgeon, who had performed more than 1500 laparoscopic hysterectomy procedures, to control the variability in surgical skill. The laparoscopic port (or trocar) placement was determined according to the patient's condition and needs. For the laparoscopic camera system, a 10-mm ENDOEYE FLEX 3D Deflectable Videoscope LTF-190-10-3D (Olympus Corp., Hamburg, Germany) and a 10-mm 30º IDEAL EYES Laparoscope (Stryker, Kalamazoo, MI, USA) camera were used in the 3D group and the 2D group, respectively. The laparoscopic equipment and the operative procedural steps were the same in every patient to ensure a standardized approach. The only difference between the two groups was the laparoscopic camera used. Surgical techniques The operative technique used for laparoscopic hysterectomy was previously described in detail [9]. In brief, general anesthesia with endotracheal intubation was achieved and the patients were placed in the deep Trendelenburg position. After uterine sounding and cervical dilation, a RUMI uterine manipulator with a Koh Colpotomizer and Colpo-Pneumo Occluder (Cooper Surgical, Inc., Trumbull, CT, USA) was fixed onto the cervix to effectively construct a surgical field. Using an open Hasson approach, a 1.5-2-cm vertical incision was made within the umbilicus. Next, after pneumoperitoneum was created following insufflation with carbon dioxide to a pressure of 11 mm Hg, a laparoscope was inserted through the umbilical port. While the uterine body was retracted medially using either laparoscopic forceps or a myoma screw, the adnexal pedicle, round ligament, and broad ligament were transected with LigaSure (Valleylab, Boulder, CO, USA). Thereafter, the vesicouterine peritoneal fold was identified, and the bladder was mobilized by blunt and sharp dissection using LigaSure until the anterior vagina was identified. The uterine vessels were skeletonized, sealed, and transected using LigaSure. The cardinal and uterosacral ligaments were then transected. This procedure was then repeated on the opposite side. A circumferential colpotomy was performed with a monopolar electrical device over the Colpotomizer cup. The specimen was removed via the vagina, and uterine morcellation was performed with a knife, if necessary. Vaginal cuff closure was achieved laparoscopically. After carefully examining the bleeding and washing the pelvic cavity, the procedure was completed.
The peritoneum and fascia were then approximated and closed using 1-0 Vicryl sutures (Ethicon, Somerville, NJ, USA). A liquid topical skin adhesive (Dermabond, Ethicon) was applied to close the incision. The patients were discharged from the hospital after the restoration of bowel activity, in the absence of postoperative fever, when they no longer needed narcotic analgesics, and could successfully ambulate. All patients were scheduled for check-up examinations at 1 week and 1 month after surgery. Outcome measures The primary outcome was operative blood loss and operative time. Operative blood loss was measured by the anesthesiologists and was defined as the difference between the total amounts of suction and irrigation plus the difference in total gauze weight before and after surgery. Operative time was defined as the time from incision to closure of the skin. Surgery was considered to have failed if the surgeon was required to use one or more additional ports or convert to laparotomy. The length of hospitalization (defined as the number of days from the operation day to the day of discharge), intraoperative complications (defined as major vessel injury, bowel injury, urinary tract injury, or any other severe unplanned events), and postoperative complications (defined as grade III or higher complications occurring within 30 days of surgery according to the Clavien-Dindo classification [10]) were assessed. Statistical analysis The present study was designed as a non-equality test, the hypothesis of which was to establish that 3D laparoscopic hysterectomy is not equal to conventional 2D laparoscopy in terms of surgical outcome. Therefore, the sample size was calculated according to the difference in operative blood loss, which was collected retrospectively from 30 consecutive patients who underwent conventional 2D laparoscopic hysterectomy before this study, showing an operative blood loss of 83.5 ±57.8 ml (authors' unpublished data). We estimated that 34 patients would be required per group to yield a type I error of 0.05, a power of 80%, and a predicted dropout rate of 10% to detect a difference of 41.8 ml (a 50% difference in operative blood loss), which was considered clinically relevant, between the groups. No interim analysis was planned or performed. SPSS software 23.0 was used for the statistical analyses. All analyses were performed according to the intention-to-treat principle. For continuous variables, the data are presented as the mean ± standard deviation (SD) or median (interquartile range (IQR)) after verifying the normal distribution of the data. For categorical variables, data are presented as frequency (percent). The baseline characteristics and primary and secondary outcomes were compared between the two groups using Student's t-test or the Mann-Whitney U test for continuous variables, and the χ2 test or Fisher's exact test for categorical variables, as appropriate. A p-value of < 0.05 was considered statistically significant. Results Enrollment took place between September 2019 and September 2020. Follow-up visits concluded in December 2020. Of the 77 candidates who were asked to participate in this trial, 9 were excluded because of a suspicious premalignant ovarian tumor (n = 1), planned concomitant surgery (i.e., urinary incontinence surgery or cholecystectomy) (n = 2), a change in treatment from hysterectomy to myomectomy according to the patient's request (n = 3), and refusal to participate (n = 3).
Therefore, 68 patients were randomly assigned to the 3D group or the 2D group (Figure 1). After randomization or surgery, none of the patients changed their assigned groups or stopped participating in the trial. The baseline characteristics of both groups are shown in Table I. The mean age and body mass index of the patients were 45.5 ±5.4 years and 23.7 ±3.6 kg/m², respectively, with no significant differences between the groups. The other baseline characteristics, including history of abdominal surgery, parity, uterine size, preoperative hemoglobin level, laparoscopic approach, mode of hysterectomy, indication for hysterectomy, and procedure performed, were also not different between the groups (all, p > 0.05). Table II shows the primary and other surgical outcomes. The mean operative blood loss was not statistically different between the 3D group (74.4 ±51.6 ml) and the 2D group (79.2 ±55.4 ml) (p = 0.743). The operative time was similar in both groups (84.5 ±20.5 min vs. 87.8 ±24.4 min, p = 0.452). Moreover, no differences were observed between the groups in other surgical outcomes including changes in serum hemoglobin (defined as the difference between preoperative hemoglobin levels and hemoglobin levels on the first postoperative day), transfusion, the weight of the extracted uterus, adhesiolysis at the time of surgery, failure of the intended surgery, postoperative pain score (measured on a visual analog scale (0-10 scale), ranging from "no pain" to "pain as bad as it could be"), length of hospitalization, intraoperative complications, and postoperative complications. One patient in the 2D group experienced a postoperative wound complication requiring resuturing 2 weeks after surgery. Discussion We conducted this randomized controlled trial to test the hypothesis that a 3D imaging system could show favorable surgical outcomes in laparoscopic hysterectomy. The main finding of this trial was that 3D laparoscopy did not affect operative blood loss or operative time in patients who underwent laparoscopic hysterectomy for benign gynecologic diseases. We also found that other surgical outcomes were not influenced by the laparoscopic vision system (3D vs. 2D). We assert that this study is valuable to laparoscopists who are interested in 3D laparoscopy or minimally invasive surgery. This study demonstrated that 3D laparoscopy did not improve the surgical outcomes of laparoscopic hysterectomy for benign or premalignant gynecologic diseases. In the field of gynecologic surgery, three previous studies have compared 3D versus 2D laparoscopy [8,11,12]. Yazawa et al. reported the surgical outcomes of 3D versus 2D laparoscopic hysterectomy for benign gynecologic diseases [8]. They retrospectively compared 47 earlier laparoscopic hysterectomies using 2D laparoscopy (performed between July 2013 and October 2014) with 47 later laparoscopic hysterectomies using 3D laparoscopy (performed between November 2014 and December 2015). The 3D group had operative blood loss statistically similar to the 2D group (192 ±174 vs. 161 ±147 ml, p = 0.345) [8]. No differences in other perioperative outcomes or postoperative complications were observed between the two groups. The surgeons did not report any symptoms attributable to the 3D imaging system such as dizziness, eyestrain, nausea, or headache. Fanfani et al. performed a randomized controlled trial comparing the outcomes of 3D versus 2D laparoscopic hysterectomy with lymphadenectomy in 90 patients with endometrial or cervical cancer [11].
The surgical outcomes such as operative time (110 min, range: 25-393 vs. 108 min, range: 30-345, p = 0.593), operative blood loss (126 ml, range: 0-500 vs. 142 ml, range: 0-650, p = 0.982), and postoperative complications were similar between the 2D and 3D groups [11]. Lui et al. conducted a randomized controlled trial in 75 patients undergoing laparoscopic ovarian cystectomy and evaluated whether 3D laparoscopy had any advantage over 2D laparoscopy [12]. There were no significant differences between the 2D and 3D groups regarding operative time [12]. Taken together, we believe that 3D vision in the field of laparoscopic hysterectomy does not have a positive impact on the surgeon's performance. In the present study, no differences were observed between the 3D and 2D groups in operative blood loss and operative time. These findings can be attributed to the following three reasons. First, laparoscopic hysterectomy is generally a complex procedure, but it is not accompanied by very difficult techniques (i.e., retroperitoneal space dissection, reanastomosis, or intensive suturing). Because 3D laparoscopy mainly improves depth perception, leading to better visibility [7,13], the 3D vision system may be more valuable for more complex gynecologic procedures including myomectomy or gynecologic cancer surgery. Second, in this trial the 3D images did not cause side effects for the surgeon. 3D vision systems can cause side effects for surgeons, such as eye strain, headaches, dizziness, and visual discomfort [14][15][16]. Third, all operations were performed by one fully experienced surgeon who had performed more than 1500 laparoscopic hysterectomy procedures prior to this study. Previous studies have shown that 3D laparoscopy was beneficial for less experienced surgeons [11,17,18]. There were several limitations to this study. First, the main limitation of this study is that the procedures were performed by a single experienced surgeon. Thus, our results may not be applicable to other surgeons. Second, we did not assess the side effects of stereoacuity in this study. Han et al. reported that 67% of surgeons experienced visually induced motion sickness (VIMS) during their first 3D laparoscopy case [19]. However, the incidence and severity of VIMS dramatically decreased from the second case onward. Finally, this study was not blinded to the surgeon (study investigators) because this would have been impossible given the nature of the study. However, patients and outcome assessors were unaware of the allocation information. Conclusions The 3D imaging system had no surgical advantage in laparoscopic hysterectomy for benign gynecologic diseases. However, 3D laparoscopy did not have any negative effects on surgical outcomes and did not increase the surgical risk. Considering that more complex procedures such as suturing and adhesiolysis might be easier to perform with 3D laparoscopy than with 2D laparoscopy, additional studies in various laparoscopic settings (more difficult hysterectomies, surgery with severe adhesions, cancer surgery, and multiple surgeons with different surgical skills) are warranted to validate the current results.
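The sample-size computation described in the Methods (two-sided α = 0.05, power 80%, SD 57.8 ml, detectable difference 41.8 ml, 10% dropout) can be reproduced with the standard two-sample normal-approximation formula. A short Python sketch, not taken from the paper:

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80, dropout=0.10):
    """Two-sample sample size (normal approximation), inflated for dropout."""
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for two-sided alpha = 0.05
    z_b = norm.ppf(power)           # 0.8416 for 80% power
    n = 2 * ((z_a + z_b) * sd / delta) ** 2
    return math.ceil(n / (1 - dropout))

print(n_per_group(delta=41.8, sd=57.8))   # -> 34, matching the paper
```

The group comparisons themselves then reduce to scipy.stats.ttest_ind or scipy.stats.mannwhitneyu, as appropriate for the distribution of each outcome.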
Seed and Soil: Consensus Molecular Subgroups (CMS) and Tumor Microenvironment Features Between Primary Lesions and Metastases of Different Organ Sites in Colorectal Cancer Purpose Consensus molecular subtypes (CMS) are mainly used for biological interpretability and clinical stratification of colorectal cancer (CRC) in primary tumors (PT) but rarely in metastases. The heterogeneity of CMS distribution in metastases and the concordance of CMS between PT and metastases still lack sufficient study. We used CMS to classify CRC metastases and combined it with histopathological analysis to explore differences between PT and distant metastases. Patients and Methods We obtained gene expression profiles for 942 PT samples from the TCGA database (n=376) and the GEO database (n=566), as well as 442 metastasis samples from the GEO database. Among these, 765 PT samples and 442 metastasis samples were confidently identified with CMS using the "CMS classifier" and enrolled for analysis. Clinicopathological manifestations and CMS classification of CRC metastases were assessed with data from GEO, TCGA, and cBioPortal. Overall, 105 PT-metastasis pairs were extracted from 10 GEO datasets to assess CMS concordance. Tumor microenvironment (TME) features between PT and metastases were analyzed by immune-stromal infiltration with the ESTIMATE and xCell algorithms. Finally, TME features were validated with multiplex immunohistochemistry in 27 PT-metastasis pairs we retrospectively collected. Results Up to 64% of CRC metastases exhibited CMS groups concordant with their matched PT, and the TME of metastases was similar to that of PT. For the most common distant metastases, liver metastases were predominantly CMS2 and lung and peritoneal metastases were mainly CMS4, highlighting that the "seed" of tumor cells of different CMS groups had a preference for metastasis to the "soil" of specific organs. Compared with PT, cancer-associated fibroblasts (CAF) were reduced in liver metastases, CD4+ T cells and M2-like macrophages were increased in lung metastases, and M2-like macrophages and CAF were increased in peritoneal metastases. Conclusion Our findings underscore the importance of CMS-guided organ-specific monitoring and treatment after primary tumor surgery. Differences in immune-stromal infiltration among different metastases provide targeted therapeutic opportunities for metastatic CRC. Introduction The high heterogeneity of CRC in clinical and biological features culminates in significant differences in disease progression and treatment response, 5-7 thereby leading to the limited benefit of available treatment options for a considerable number of CRC patients in the clinic. As reported, approximately 33% of CRC patients develop metastases at presentation or during follow-up. 8-10 Accordingly, researchers have been engaged in analyzing the similarities and differences between primary lesions and metastases in CRC to further comprehend the characteristics of metastatic tumors and identify therapeutic targets. For instance, Bhullar et al demonstrated that multiple biomarkers were highly consistent between primary and metastatic CRC. 11 Wang et al observed diverse origins of metastatic tumors between lymph nodes and liver. 12 Likewise, Eynde et al found that immune infiltration and mutation were heterogeneous between primary lesions and synchronous and metachronous metastases. 13 Conversely, Vakiani et al revealed a highly consistent mutation state of paired primary and metastatic CRC. 14
Thus, it is warranted to more comprehensively and systematically investigate the differences between primary and metastatic CRCs. In 2015, Guinney et al proposed the CRC Subtyping Consortium, which identifies four consensus molecular subtypes (CMS; CMS1-4) for CRC. 15 Specifically, CMS1, an immunogenic subtype, is characterized by the enrichment of microsatellite instability-high (MSI-H) and B-type Raf (BRAF) mutations. CMS2 presents with epithelial characteristics, marked WNT and MYC pathway activation, and high chromosomal instability (CIN). CMS3 also exhibits epithelial features and obvious metabolic dysregulation with lower CIN, and is enriched for Kirsten rat sarcoma (KRAS) mutations. CMS4, a mesenchymal subtype, shows prominent upregulation of epithelial-to-mesenchymal transition (EMT)-related genes and transforming growth factor β (TGF-β), stromal invasion, angiogenesis, and inflammatory and immunosuppressive phenotypes. 15,16 Unlike American Joint Committee on Cancer (AJCC) TNM staging, the CMS taxonomy was principally founded on tumor biological differences rather than clinical outcomes, and can therefore capture the inherent biomolecular heterogeneity of CRC. CMS classification is an unsupervised system that can robustly stratify CRC by integrating the characteristics of genomics, epigenetics, transcriptome pathways, stromal and immune microenvironments, mutated genes, and clinical manifestations, with independent predictive value. 15,17,18 Importantly, CMS has been confirmed to be significantly associated with the invasion and metastasis of CRC cells, 19,20 the efficacy of and resistance to chemo/radiotherapy, 21 the tumor microenvironment (TME), 22 the infiltration of CD8+ cytotoxic lymphocytes and cancer-associated fibroblasts (CAFs), and patient prognosis. 23 Although CMS has robust functions, it has mainly been utilized to stratify primary tumors (PT), with only a few studies exploring the attributes of CMS in the metastatic setting and finding CMS heterogeneity in CRC metastases. Specifically, most metastases were classified as CMS2, followed by CMS4, and a highly similar proportion of CMS2 and CMS4 in metastases and PT was observed. 24,25 A strong depletion of CMS3 in metastases was found. 24,26 The classification of CMS1 in metastases has been reported with conflicting results. Eide et al and Kamal et al reported a significant decrease of CMS1 in metastases, 24,26 while Piskol et al observed a slight enrichment of CMS1 in metastases compared with PT. 25 However, studies exploring the subtype distribution of CMS in CRC metastases from different organ sites and assessing the concordance of CMS between PT and matched metastases are still insufficient. In our study, we use CMS combined with histopathological analysis to investigate the features of CRC metastases and compare the CMS classification of PT and paired metastases, thereby further ascertaining the process of metastasis and confirming and supplementing the "seed and soil" theory. 27 Our discoveries provide some new insights to guide subtype-targeted therapy for metastatic CRC.
Collection and Processing of Gene Expression Data Gene Expression Omnibus (GEO; https://www.ncbi.nlm.nih.gov/geo/) and The Cancer Genome Atlas (TCGA; https://portal.gdc.cancer.gov/) were employed for downloading the gene expression profiles and clinical annotations of a total of 238 normal colon tissues, 2241 primary CRC tissues, and 592 CRC metastasis tissues (Supplementary Table 1). In these data, metastatic sites of CRC included the liver, lungs, distant lymph nodes, and peritoneum. The related clinical information was collected, including progression-free survival (PFS), TNM stage, overall survival (OS), and survival status. In addition, GEO was utilized for downloading the transcriptome sequencing data of 21 normal organ tissues (10 liver tissues, 7 lung tissues, and 4 peritoneum tissues), followed by analysis of their microenvironmental cell composition. The series accession and sample number of each database are summarized in Supplementary Table 1. Thereafter, 1134 cases of clinicopathological data were collected from Metastatic Colorectal Cancer (Cancer Cell 2018) on cBioPortal (http://www.cbioportal.org/), including 975 cases with identified first distant organ metastasis, differentiation degree, survival status, and time (Supplementary Table 2). A flowchart is also provided displaying the multiple public databases used in our study (Supplementary Figure 1). The batch effect of multiple datasets was eliminated with the ComBat method, and raw data were normalized with the robust multi-array average algorithm (affy package) in R-Studio before analysis. Patients and Tissue Samples Due to the timely intervention of neoadjuvant therapy or the unresectability of some advanced tumors, we retrospectively collected precious paraffin-embedded primary and paired metastatic samples of 27 CRC patients who received no treatment before surgery at the Guangzhou First People's Hospital (the Second Affiliated Hospital of South China University of Technology) between 2010 and 2019. These samples included 10 liver metastasis tissues, 9 lung metastasis tissues, 8 peritoneal metastasis tissues, and their paired primary tumor tissues. They were used for multiplex immunohistochemistry validation to explore the TME features between PT and metastases in different organ sites. Our study complied with the Declaration of Helsinki and was approved by the Ethics Committee of the Guangzhou First People's Hospital (Approval no. K-2019-070-01). CMS Classification CRC samples of PT and paired metastases from the public databases were subjected to CMS classification with the single sample prediction (SSP) algorithm from the "CMS Classifier" (https://github.com/Sage-Bionetworks/crcsc). After samples that could not be classified with this algorithm were excluded, 765 primary CRC samples (457 from GSE39582 and 308 from TCGA) (Supplementary Tables 3 and 4) and 442 metastasis samples (all from GEO) (Supplementary Table 5) were retained for subsequent analysis (Supplementary Figure 1). Furthermore, 105 pairs of CRC primary and metastatic samples (Supplementary Tables 1 and 6) were classified (a total of 209 samples, since one patient simultaneously presented with two paired metastases). The matching degree of CMS groups of these samples was evaluated with a Sankey diagram and visualized with the "ggalluvial" R package. TME Evaluation Tumor purity, immune scores, and stroma scores were calculated with the Estimation of Stromal and Immune Cells in Malignant Tumor Tissues using Expression Data (ESTIMATE) algorithm. 28
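Both ESTIMATE and the xCell method described next ultimately score marker-gene sets per sample. As a toy illustration of this kind of signature scoring, not the published ESTIMATE or xCell implementations, and with invented gene sets and expression values, one might write:

```python
import numpy as np
import pandas as pd

# expr: genes x samples matrix of normalized expression (toy values).
rng = np.random.default_rng(0)
genes = [f"G{i}" for i in range(100)]
expr = pd.DataFrame(rng.normal(size=(100, 6)), index=genes,
                    columns=[f"S{j}" for j in range(6)])

# Hypothetical marker sets standing in for immune/stromal signatures.
signatures = {"immune": genes[:10], "stromal": genes[10:20]}

# Z-score each gene across samples, then average within each signature.
z = expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)
scores = pd.DataFrame({name: z.loc[members].mean()
                       for name, members in signatures.items()})
print(scores)  # one immune and one stromal score per sample
```

The real ESTIMATE scores are based on single-sample gene set enrichment rather than a mean z-score, but the per-sample gene-set logic is the same.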
The xCell algorithm (https://xcell.ucsf.edu/) can quantify the infiltration of 64 kinds of immune and stromal cells, such as B cells, T cells, macrophages, hematopoietic stem cells (HSCs), mesenchymal stem cells (MSCs), and fibroblasts. 29 Therefore, this algorithm was utilized to identify the enrichment of cell subtypes. Additionally, these two algorithms were adopted to evaluate and compare the cell composition of the TME of primary and metastatic tumor samples. H&E Staining Paraffin sections of 27 pairs of primary and metastatic tumor tissues were subjected to H&E staining to delineate tumor regions and assess necrosis and infiltration. Imaging and HALO Analysis For H&E staining and mIHC, entire sections were subjected to panoramic scanning and digital imaging with the Aperio CS2 Digital Pathology Scanner (Leica) and the Vectra Polaris Automated Pathological Imaging System version 1.0 (Akoya Biosciences), respectively. Sections were all scanned at 20× objective magnification. Two consecutive sections underwent H&E staining and mIHC, respectively, to approximate the same tumor tissue in both stains. The tumor region was observed and annotated on H&E images of the successive sections, and the same annotation was then applied to the mIHC section. The scanned images were quantitatively assessed with the HighPlex-FL function of the HALO software version 3.3.25 (Indica Labs), followed by tissue classification and area quantification of H&E images with the classifier function. Multispectral mIHC images were analyzed with the Multiplex-IHC function. Statistics All statistics were conducted with GraphPad Prism (version 8.0.2), IBM SPSS Statistics (version 25.0), and R Studio (version 4.0.0), as appropriate. All results are summarized as mean ± standard error of the mean. The paired t-test (two-tailed) or Mann-Whitney U test (two-tailed) was utilized for comparing two groups, and the Kruskal-Wallis test with multiple post hoc comparisons was performed for comparing three or more groups. Associations between categorical variables were evaluated with Fisher's exact test. Survival analyses were performed with the Kaplan-Meier method and log-rank test using the "survival" package in R-Studio. Differences were considered statistically significant at bilateral p ≤ 0.05, and exact p values are listed in the figures. CMS Classification and Survival Characteristics of Primary CRC The CMS classification has rarely been applied in research on CRC metastases. Accordingly, to probe the association of the distribution of CMS groups between PT and paired metastases, we first summarized the clinical characteristics of the four CMS groups of PT. 31,32 Next, the prevalence of metastases was calculated in each CMS group, which showed that CMS4 was the most prone to metastasis, with a metastasis rate of nearly 25% (Figure 1B, Supplementary Figure 1B, Supplementary Figure 2A). Additionally, the survival analysis showed that RFS and OS were shortest in the CMS4 group, followed by the CMS2 group (GEO: RFS, p = 0.00018; OS, p = 0.049, Figure 1C and D; TCGA: RFS, p = 0.071; OS, p = 0.037, Supplementary Figure 1C and D, Supplementary Figure 2C and D).
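The Kaplan-Meier and log-rank comparisons above were done with the R survival package; an equivalent minimal sketch in Python with the lifelines library, on toy data rather than the study's, could be:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Toy survival table: one row per patient (months, event flag, CMS group).
df = pd.DataFrame({
    "time":  [12, 34, 50, 8, 60, 22, 45, 15, 40, 28],
    "event": [1, 0, 0, 1, 0, 1, 0, 1, 1, 0],
    "cms":   ["CMS1", "CMS2", "CMS2", "CMS4", "CMS3",
              "CMS4", "CMS1", "CMS4", "CMS2", "CMS3"],
})

kmf = KaplanMeierFitter()
for group, sub in df.groupby("cms"):
    kmf.fit(sub["time"], event_observed=sub["event"], label=group)
    kmf.plot_survival_function()        # one curve per CMS group

# Log-rank test across all four groups, as in Figure 1C/D.
res = multivariate_logrank_test(df["time"], df["cms"], df["event"])
print(res.p_value)
```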
Furthermore, the clinical baselines of the four CMS groups were compared. As shown in Supplementary Tables 8 and 9, the four CMS groups differed significantly in terms of age, AJCC TNM stage, tumor location, mismatch repair/CpG island methylator phenotype/CIN status, and TP53/KRAS/BRAF mutations. Notably, the clinicopathologic features of CMS4 tumors were consistent with those of advanced CRC. Taken together, CMS4 patients had the greatest risk of metastasis and the worst prognosis. As previously reported, the cause and molecular mechanism of metastasis may be related to the striking upregulation of EMT-related genes, activation of TGF-β, angiogenesis, and interstitial infiltration in CMS4 tumors. 30,33 Clinicopathological Features of CRC Metastases To clarify the clinical features of CRC metastases, a dataset including 975 metastatic CRC patients from cBioPortal was obtained, and their epidemiological and clinicopathologic data were analyzed. 35-37 In parallel, the degree of tumor differentiation was compared among the four dominant metastasis sites (Supplementary Table 10). Liver and lung metastasis samples were relatively well differentiated, with a proportion of moderate-poor and poor differentiation of less than 20% (Figure 2B, Supplementary Table 10). Further, we compared the OS of patients with liver (319 cases), lung (63 cases), distant lymph node (27 cases), and peritoneal metastases (43 cases) whose survival status and survival time were clearly recorded in the database. The results showed that patients with peritoneal metastasis had the shortest OS, consistent with previous reports 38,39 (Figure 2C). Interestingly, patients with lung metastasis had the best survival before 200 months, but the survival rate decreased linearly after that (log-rank χ², p = 0.00042, Figure 2C). CMS Groups of CRC Metastases Mostly Matched Those of Their Primary Tumors To delve into the distribution of CMS groups in CRC metastases of different organs and tissues, we analyzed the transcriptomic data of 442 confidently classified metastatic samples from GEO, including metastases of the liver (N = 326), lung (N = 88), distant lymph nodes (N = 9), and peritoneum (N = 19) (Figure 3A, Supplementary Tables 1 and 11). The results demonstrated that metastases of different organ sites had distinct CMS patterns. Specifically, the most common group for liver metastases was CMS2 (50.6%), followed by CMS4 (41.4%), consistent with previous studies that applied CMS in the liver metastasis setting. 24,40 A result similar to previous studies was also observed in the peritoneal metastases, 41 which were dominated by the CMS4 group (78.9%). No systematic analysis is currently available of the CMS heterogeneity of metastases of the distant lymph nodes and lungs. In our study, we found that CMS3 (66.7%) and CMS4 (61.4%) were the most common groups for metastases of the distant lymph nodes and lungs, respectively (Figure 3B, Supplementary Table 11). To further explore the differences of CMS groups between primary CRC and paired metastases, we compared the CMS groups of 105 pairs of primary and metastatic tumors extracted from 10 GEO datasets, including PT paired with liver (N = 94), lymph node (N = 8), and peritoneal (N = 3) metastases (Supplementary Table 11). The results showed that up to 64% (67 of 105 pairs) of primary and metastatic lesions shared the same CMS group (Figure 3C).
Specifically, primary CRC samples of CMS2, CMS3, and CMS4 mostly metastasized to the liver, distant lymph nodes, and peritoneum, respectively, and then progressed into CMS2, CMS3, and CMS4 metastatic tumors, respectively (Figure 3D). Even when the CMS groups were inconsistent between primary and metastatic tumors (36% of pairs), liver and peritoneal metastases were still dominated by the CMS2 and CMS4 groups, respectively (Figure 3D). This correspondence of CMS group distribution indicated that primary tumor cells of different CMS groups might prefer to colonize specific organs during metastasis and retain the unique molecular characteristics of their CMS groups in the target tissues. Therefore, the new metastatic organs and tissues should not be ignored as the growth environment of metastatic tumors. TME of CRC Metastases Was Similar to That of Primary Lesions In our study, several methods were employed to determine whether the correspondence of CMS group distribution represented a similarity of TME between paired metastases and primary lesions. Firstly, principal component analysis (PCA) depicted a high degree of separation between normal colon epithelial tissues and primary CRC tumors and a low degree of separation among tumors of the four CMS groups (Figure 4A). Moreover, further results revealed statistically higher tumor purity in the CMS2 and CMS3 groups, dramatically higher immune scores in the CMS1 and CMS4 groups, and substantially higher stromal scores in the CMS4 group (Figure 4B). To further characterize the cellular heterogeneity landscape of the different CMS groups, the xCell algorithm was adopted to evaluate the infiltration of 64 kinds of immune and stromal cells in the TME. Compared to normal colon and tumors of the other CMS groups, CMS1 tumors had higher lymphoid cluster infiltration, CMS2 and CMS3 tumors showed "desert" infiltration of almost all cell clusters, and CMS4 tumors had markedly higher infiltration of myeloid, stem-like, and stromal clusters, such as M2-like macrophages, HSCs, MSCs, and fibroblasts (Figure 4C). Likewise, xCell was utilized to assess the three most prevalent distant metastases, namely liver, lung, and peritoneal metastases. As expected, the enrichment scores of all cell clusters were substantially lower in liver metastases, whereas the enrichment scores of myeloid, stem-like, and stromal cells were prominently higher in metastases of the lung and peritoneum (Figure 4D). In other words, tumor cells metastasizing to the liver retained the molecular signatures of CMS2 PT, including high tumor purity and low immune and stromal infiltration, while tumor cells colonizing the lungs and peritoneum exhibited a phenotype of high myeloid infiltration and stromal and mesenchymal enrichment identical to CMS4 PT.
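The 64% concordance reported in the previous subsection is, in effect, the diagonal share of a primary-by-metastasis contingency table. A minimal pandas sketch with invented pair labels, not the study data:

```python
import pandas as pd

# Hypothetical CMS calls for matched primary/metastasis pairs.
pairs = pd.DataFrame({
    "primary":    ["CMS2", "CMS2", "CMS4", "CMS3", "CMS2", "CMS4"],
    "metastasis": ["CMS2", "CMS4", "CMS4", "CMS3", "CMS2", "CMS2"],
})

table = pd.crosstab(pairs["primary"], pairs["metastasis"])
print(table)  # rows: primary CMS, columns: metastasis CMS

concordance = (pairs["primary"] == pairs["metastasis"]).mean()
print(f"concordant pairs: {concordance:.0%}")  # 64% in the actual cohort
```

The same table is what a Sankey diagram such as Figure 3D visualizes, with off-diagonal cells corresponding to CMS switches between primary lesion and metastasis.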
Microenvironmental Differences in Normal Liver, Lung, and Peritoneum Tissues Considering the effects of distant organs on the CMS classification of CRC metastases, we further analyzed the distribution patterns of infiltrating cell subsets in the microenvironment of normal liver (10 cases), lung (7 cases), and peritoneal tissues (4 cases) to deepen the understanding of the microenvironmental characteristics of these three tissues and their possible support of or influence on metastatic cancer cells. PCA results revealed marked differences in gene expression among normal liver, lung, and peritoneal tissues (Figure 5A). According to the results of the ESTIMATE algorithm, the immune score was significantly higher in normal lung and peritoneal tissues than in liver tissues, and the stroma score was also strikingly higher in lung tissues than in liver tissues. However, the stromal score of peritoneal tissues was not significantly increased compared to that of liver tissues, possibly due to the small sample size (Figure 5B and C). Cluster stratification maps were plotted to compare the expression of 11,761 genes among normal liver, lung, and peritoneal tissues, which indicated that there were indeed differences among the three tissues (Figure 5D). Subsequently, we analyzed the microenvironmental cell landscape of normal liver, lung, and peritoneal tissues. It was observed that abundant stromal and immune cells infiltrated lung and peritoneal tissues, while cells in liver tissues were distributed in a relatively "barren" state (Figure 5E). These data were consistent with our previous finding that liver tissues favor CMS2 metastases, whilst lung and peritoneal tissues favor CMS4 metastases. In summary, the characteristics of normal liver, lung, and peritoneal tissues are also implicated in the formation and development of metastatic CRC. mIHC Validation of Differences in Cell Infiltration of Primary CRC and Paired Metastases To validate the aforementioned results, we retrospectively collected paraffin-embedded tissues of liver (N = 10), lung (N = 9), and peritoneal (N = 8) metastases and paired primary lesions for mIHC to assess immune-stromal infiltration within the TME. M2-like macrophages, CAFs, and regulatory T cells (Treg) in tumor areas were measured with CD206+, α-SMA+, and CD4+ FoxP3+ staining, respectively. The multispectral image analyses showed that the infiltration of CD4+ T cells and M2-like macrophages was about 15% in both primary and metastatic tumors, with about 40% CAFs and less than 10% CD8+ T and Treg cells, suggesting the critical involvement of immunosuppressive cells in the development and metastasis of CRC (Figures 6-9A and B). Additionally, the infiltration of these five cell subsets was poorer in liver metastases and paired primary lesions than in lung and peritoneal metastases and their primary lesions (Figures 6-9A and B), consistent with the low immune and stromal scores in CMS2 CRC. In contrast, significantly high M2-like macrophage infiltration and extremely high CAF infiltration were observed in both PT and metastases of the lung and peritoneum (Figures 6-9A and B), corresponding to the high immune and stromal scores of CMS4 CRC.
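The per-population comparisons between matched primary and metastatic lesions described next are paired two-group tests; a small sketch with made-up infiltration percentages (scipy's paired t-test, or the Wilcoxon signed-rank test when normality is doubtful):

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical CAF percentages in 8 matched primary/metastasis pairs.
primary    = np.array([42.0, 38.5, 45.1, 40.2, 36.8, 44.0, 39.9, 41.3])
metastasis = np.array([30.1, 28.7, 35.4, 29.9, 31.2, 33.0, 27.5, 30.8])

print(ttest_rel(primary, metastasis))   # paired t-test (two-tailed)
print(wilcoxon(primary, metastasis))    # non-parametric alternative
```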
To determine the influence of the organ-specific microenvironment on immune-stromal infiltration, infiltrated cells were compared between metastatic and matched primary lesions. Intriguingly, compared with PT, the data revealed a significantly lower CAF proportion in liver metastases (Figures 6 and 9C), a markedly higher proportion of CD4+ T cells and M2-like macrophages in lung metastases (Figures 7 and 9D), and a substantially higher proportion of M2-like macrophages and CAFs in peritoneal metastases (Figures 8 and 9E). These results illustrated that the disseminated "seed" of cancer cells not only retained their original hereditary characteristics and salient molecular features after metastasizing from primary tumor sites into distant organs but also enhanced or modified certain features according to the new TME to adapt to the new growth "soil" and form metastatic lesions. Discussion It is well established that CRC harbors tumor heterogeneity driven by genetic and epigenetic changes. 5,30 This heterogeneity represents distinct gene expression profiles of patients and is strongly associated with diverse molecular characteristics and microenvironmental signatures of tumors, 16,33,42 thus complicating prognosis estimation, therapeutic regimen selection, and optimal timing for individual CRC patients. 32,43,44 Moreover, CRC is an age-related malignancy, with 70% of new CRC diagnoses in those over 65 years old, 45 but the age stratification of CRC patients in clinical decisions is not justified. For example, in surgery, although older patients are more prone to severe postoperative complications, there is no significant difference in cancer-specific survival between younger and older patients, as the prognosis of the elderly may be confounded by differences in stage at presentation, tumor site, preexisting comorbidities and type of treatment received. 46 In chemotherapy and targeted therapy, elderly patients can benefit from systemic therapies similarly to younger patients in tolerance and survival outcomes. 47,48 Given this, considering precision treatment for CRC patients based on biomolecular heterogeneity could be a more feasible approach. However, due to tumor heterogeneity, the currently limited biomarkers cannot perfectly identify and stratify patients responding to specific therapies. 55 In immunotherapy, MSI-H/dMMR has been guideline-approved as a criterion for advanced or metastatic CRC using immune checkpoint inhibitors (PD-1/PD-L1), as MSI-H tumor cells express significantly higher levels of PD-L1 than those with microsatellite stability (MSS)/MMR-proficient (pMMR) status. 56 However, there is still heterogeneity in testing MSI status. As reported, a minority of MSI patients confirmed by polymerase chain reaction (PCR) were diagnosed as pMMR by the initial MMR protein immunohistochemistry (IHC), 57 probably due to somatic mutations of tumors. 58 Therefore, there remains an urgent need for a thorough understanding of tumor heterogeneity to stratify patients with advanced metastatic CRC for more precise therapy.
Of note, the heterogeneity of CRC is re-summarized and interpreted by CMS classification, which is the most comprehensive and systematic molecular typing method to date, enabling a multi-perspective, multi-omics overview of CRC heterogeneity. In this study, we classified CRC distant metastases with CMS to investigate the subtype distribution of CMS in different metastatic organs and evaluated the concordance of CMS between PT and distant metastases, thereby furthering understanding of the etiology and processes of CRC distant metastasis and revealing certain patterns and mechanisms. Our data clarified that metastases of the liver and distant lymph nodes were dominated by the CMS2 and CMS3 groups, respectively, and CMS4 was the most frequent group for lung and peritoneal metastases. Up to 64% of PT-metastasis pairs had concordant CMS, indicating that PT of different CMS groups had a preference for metastasis to specific organs. Specifically, CMS2, CMS3, and CMS4 PT commonly gave rise to CMS2 liver metastases, CMS3 distant lymph node metastases, and CMS4 lung or peritoneal metastases, respectively. Due to the low proportion of the CMS1 subtype in CRC metastases and the small sample size of CMS1 PT-metastasis pairs in our study, the metastatic preference of CMS1 tumors remained undetermined. In addition, the TME of PT and paired metastases was similar, with similar distribution and infiltration of stromal and immune cells. Consistently, mIHC results validated that liver metastases had low immune and stromal infiltration, while lung and peritoneal metastases presented with normal immune infiltration and high stromal infiltration. Given that CMS integrates multiple molecular features of CRC, including the pattern of immune and stromal infiltration, TME signatures are highly correlated with CMS. 16,17 CMS was not a reason or "driving factor" for the metastatic preference of PT in different CMS groups, but an interpretation of the molecular associations, including the immune and stromal signatures, between PT and distant metastases. In addition, among the 36% of metastases with CMS groups inconsistent with their paired PT, liver metastases were still mainly CMS2 and lung and peritoneal metastases were mainly CMS4. The xCell analysis further confirmed the presence of "desert" immune and stromal infiltration in normal liver tissues but high immune and stromal cell infiltration in normal lung and peritoneal tissues, highlighting that the formation of CRC metastases not only inherits the genetic mutations of the disseminated "seeds" but is also shaped and influenced by the tissue microenvironment of the distant organs. Generally, the spreading "seeds" colonize those tissues and organs whose microenvironment is similar to that of their PT, resulting in the formation of metastatic tumors that resemble the PT. For the three most prevalent CRC metastases, re-visiting the "Seed and Soil" theory from a CMS perspective, we propose that CMS2 PT may prefer the "immune tolerant" liver to develop an immunosuppressive and relatively low immune-stromal infiltrating TME similar to that of the primary lesion. 37,59 However, CMS4 PT prefers the "immunogenically enriched" lung, 60 as myeloid cells could remodel the pre-metastatic lung into an inflamed but immune-suppressive environment, thereby inducing EMT and developing lung metastasis. 61
Moreover, the peritoneum exhibits a high abundance of fibroblasts, which can be stimulated by TGF-β signaling, transdifferentiate into myofibroblasts and interact with tumor cells to establish peritoneal metastasis, 62 which may explain why CMS4 PT prefers the peritoneum as a metastatic organ. Even with an "incorrect" selection of "seeds", the combined effects of genetics and environment can allow the formation of metastases with CMS groups different from those of the primary lesions. Our results further explain and complement the "Seed and Soil" theory. 27,59,63 The CMS of a CRC metastasis is collectively determined by the genetic characteristics of the primary cancer cells and the microenvironmental characteristics of the metastasized organ tissues. After metastasizing into distant organs and tissues via circulating blood or lymph flow, cancer cell "seeds" not only retain their robust genetic characteristics but also interact with surrounding cells in the new microenvironment, finally developing into metastatic tumors. Intriguingly, mIHC results demonstrated that the proportion of Treg cells, M2-like macrophages, and CAFs was markedly higher in lung and peritoneal metastases than in liver metastases. Compared with primary lesions, liver metastases presented with significantly decreased CAFs, lung metastases had prominently elevated CD4+ T cells and M2-like macrophages, and peritoneal metastases showed substantially enhanced M2-like macrophages and CAFs. Based on these results, it can be concluded that different treatment measures should be selected for metastatic CRC in different organs and tissues to accurately and personally curb its formation and development. In addition, immune checkpoint inhibitors would probably benefit patients with liver metastases less than those with lung or peritoneal metastases, while therapies targeting CD4+ T cells, M2-like macrophages, and CAFs may be considered for patients with lung and peritoneal metastases. Conclusively, we used CMS to classify CRC metastases and summarized the patterns and characteristics of CRC metastases. Our findings emphasized that the CMS groups of CRC metastases are determined by both the genetic characteristics of disseminated primary tumor cells and the microenvironment of the metastasized organ tissues, which is a supplement to the "Seed and Soil" theory. Meanwhile, our study unveiled the metastasis preference of PT of different CMS groups, indicating that potentially metastatic organs should be monitored following primary tumor surgery, and subtype-based interventions could be considered for timely benefit. Additionally, differences in immune-stromal infiltration among CRC metastases of different organs provide more precise therapeutic targets for the treatment of patients with distant metastases and could provide a reference for clinical decision-making. Figure 1 The distribution and survival characteristics of primary colorectal cancer (CRC) with four CMS groups in GEO. (A) The proportion of primary CRC of four CMS groups in GEO. (B) The proportion of primary CRC of four CMS groups that progressed into distant metastases (M0 or M1). (C) Relapse-free survival (RFS) of primary CRC with four CMS groups analyzed with the Kaplan-Meier analysis and log-rank test. (D) Overall survival (OS) of primary CRC with four CMS groups analyzed with the Kaplan-Meier analysis and log-rank test. Specific p values are presented in the figure.
Figure 2 Clinicopathologic features of metastatic CRC. (A) Pie chart showing the distribution rate of the first distant metastasis organs among 975 CRC patients from cBioPortal. (B) The degree of tumor differentiation of liver, lung, distant lymph node, and peritoneal metastases. (C) Kaplan-Meier analysis and log-rank test of the OS of patients with liver, lung, distant lymph node, and peritoneal metastases. Figure 3 Comparisons of CMS groups between primary and metastatic CRC. (A) Sample size histogram of liver, lung, distant lymph node, and peritoneal metastases from GEO. (B) The distribution of the four CMS groups of liver, lung, distant lymph node, and peritoneal metastases. (C) Pie chart displaying the proportion of the matched CMS group of 105 pairs of primary CRC and metastases. (D) Sankey chart exhibiting the relationship of the CMS classification between liver, lung, and peritoneal metastases and primary lesions. Figure 4 Tumor microenvironment (TME) analysis of primary and metastatic CRC. (A) Principal component analysis (PCA) of gene expression in normal colon tissues and primary CRC of four CMS groups. (B) The "tumor purity", "immune score", and "stroma score" of normal colon tissues and primary CRC of four CMS groups evaluated with the ESTIMATE algorithm. (C) The abundance of 64 types of immune and stromal cells infiltrated in normal colon tissues and primary CRC tissues of four CMS groups calculated with the xCell R package. (D) The abundance of 64 types of immune and stromal cells infiltrated in liver, lung, and peritoneal metastases of CRC calculated with the xCell algorithm. Figure 6 mIHC validation of cell infiltration in paired primary CRC and liver metastases. (A-B) H&E staining images and mIHC multispectral fluorescence images of paired primary CRC and liver metastases marked with the following six markers: α-SMA (sky blue), CD206 (green), CD8 (yellow), CD4 (red), FoxP3 (white), and DAPI (dark blue). The magnification of the tissue panorama is × 10, with a scale of 5 mm. The magnification of the enlarged local image is × 100, with a scale of 200 μm. The magnification of the enlarged image of Treg cells in the upper right corner is × 800, with a scale of 200 μm. Figure 7 mIHC validation of cell infiltration in paired primary CRC and lung metastases. (A-B) H&E staining images and mIHC multispectral fluorescence images of paired primary CRC and lung metastases marked with the following six markers: α-SMA (sky blue), CD206 (green), CD8 (yellow), CD4 (red), FoxP3 (white), and DAPI (dark blue). The magnification of the tissue panorama is × 10, with a scale of 5 mm. The magnification of the enlarged local image is × 100, with a scale of 200 μm. The magnification of the enlarged image of Treg cells in the upper right corner is × 800, with a scale of 200 μm. Figure 8 mIHC validation of cell infiltration in paired primary CRC and peritoneal metastases. (A-B) H&E staining images and mIHC multispectral fluorescence images of paired primary CRC and peritoneal metastases marked with the following six markers: α-SMA (sky blue), CD206 (green), CD8 (yellow), CD4 (red), FoxP3 (white), and DAPI (dark blue). The magnification of the tissue panorama is × 10, with a scale of 5 mm. The magnification of the enlarged local image is × 100, with a scale of 200 μm. The magnification of the enlarged image of Treg cells in the upper right corner is × 800, with a scale of 200 μm.
Figure 9 The statistical analysis of cell infiltration in paired primary CRC lesions and liver, lung, and peritoneal metastases. (A) The percentages of CD4+ T cells, CD8+ T cells, Treg cells, M2-like macrophages, and CAFs of primary CRC and liver, lung, and peritoneal metastases. (B) The percentages of CD4+ T cells, CD8+ T cells, Treg cells, M2-like macrophages, and CAFs of liver, lung, and peritoneal metastases. (C) Comparisons of the percentages of CD4+ T cells, CD8+ T cells, Treg cells, M2-like macrophages, and CAFs of paired primary lesions and liver metastases. (D) Comparisons of the percentages of the five cell populations in paired primary lesions and lung metastases. (E) The percentages of the above five cell populations in paired primary lesions and peritoneal metastases. Data were expressed as mean ± standard deviation, and specific p-values are marked in the figure.
2024-03-22T15:17:35.518Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "ae8912da0415882b0b5cb6c340b1f2791241ec71", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "b59d2b7a5172b31f66a07828f17b5f67c114a421", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119697156
pes2o/s2orc
v3-fos-license
Random walks in non homogeneous Poissonian environment We consider the moving particle process in $\mathbb{R}^d$ which is defined in the following way. There are two independent sequences $(T_k)$ and $(d_k)$ of random variables. The variables $T_k$ are non negative and form an increasing sequence, while the variables $d_k$ form an i.i.d. sequence with common distribution concentrated on the unit sphere. The values $d_k$ are interpreted as the directions, and $T_k$ as the moments of change of directions. A particle starts from zero and moves in the direction $d_1$ up to the moment $T_1$. It then changes direction to $d_2$ and moves on within the time interval of length $T_2 - T_1$, etc. The speed is constant throughout. The position of the particle at time $t$ is denoted by $X(t)$. We suppose that the points $(T_k)$ form a non homogeneous Poisson point process and we are interested in the global behavior of the process $(X(t))$; namely, we are looking for conditions under which the processes $\{Y_T, T \ge 0\}$, $Y_T(t) = X(tT)/B(T)$, $t \in [0,1]$, weakly converge in $C[0,1]$ to some process $Y$ when $T$ tends to infinity. In the second part of the paper the process $X(t)$ is considered as a Markov chain. We construct diffusion approximations for this process and investigate their accuracy. The main tool in this part is the parametrix method. Introduction We consider the moving particle process in $\mathbb{R}^d$ which is defined in the following way. There are two independent sequences $(T_k)$ and $(\varepsilon_k)$ of random variables. The variables $T_k$ are non negative and $T_k \le T_{k+1}$ for all $k$, while the variables $(\varepsilon_k)$ form an i.i.d. sequence with common distribution concentrated on the unit sphere $S^{d-1}$. The values $\varepsilon_k$ are interpreted as the directions, and $T_k$ as the moments of change of directions. A particle starts from zero and moves in the direction $\varepsilon_1$ up to the moment $T_1$. It then changes direction to $\varepsilon_2$ and moves on within the time interval of length $T_2 - T_1$, etc. The speed is constant throughout. The position of the particle at time $t$ is denoted by $X(t)$. The study of processes of this type has a long history. The first work probably dates back to Pearson, continued by Kluyver (1906) and Rayleigh (1919). Mandelbrot (1982) considered the case where the increments $T_n - T_{n-1}$ form an i.i.d. sequence with a common law having a heavy tail. He also introduced the term "Lévy flights", later changed to "random flights". To date, a large number of works have accumulated that are devoted to the study of such processes; we mention here only the articles by Kolesnik (2009), De Gregorio (2012, 2015) and Orsingher and Garra (2014), which contain an extensive bibliography and where, for different assumptions on $(T_k)$ and $(\varepsilon_k)$, exact formulas for the distribution of $X(t)$ were derived. Our goals are different. Firstly, we are interested in the global behavior of the process $X = \{X(t), t \in \mathbb{R}_+\}$; namely, we are looking for conditions under which the processes $\{Y_T, T > 0\}$, $Y_T(t) = X(tT)/B(T)$, $t \in [0,1]$, weakly converge in $C[0,1]$ to some process $Y$ as $T \to \infty$. From now on we suppose that the points $(T_k)$, $T_k \le T_{k+1}$, form a Poisson point process in $\mathbb{R}_+$ denoted by $\mathbb{T}$. It is clear that in the homogeneous case the process $X(t)$ is a conventional random walk because the spacings $T_{k+1} - T_k$ are independent, and then the limit process is Brownian motion. In the non homogeneous case the situation is more complicated as these spacings are not independent. Nevertheless it was possible to distinguish three modes that determine different types of limiting processes.
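To make the model concrete, here is a minimal simulation sketch of the process just described, with change times $T_k = f(\Gamma_k)$ and directions drawn uniformly on the unit sphere; the function names, the choice $f(x) = x^2$, and the use of NumPy are illustrative assumptions, not part of the paper.

```python
import numpy as np

def random_flight(f, n_steps, d=2, rng=None):
    """Simulate the random-flight path X(t) with direction changes at T_k = f(Gamma_k).

    Gamma_k are the points of a standard rate-1 Poisson process (cumulative sums of
    i.i.d. exponentials); directions are uniform on the unit sphere S^{d-1}.
    Returns the change times T_k and the particle positions at those times.
    """
    rng = np.random.default_rng(rng)
    gammas = np.cumsum(rng.exponential(1.0, n_steps))   # Gamma_1, ..., Gamma_n
    T = f(gammas)                                       # T_k = f(Gamma_k)
    eps = rng.normal(size=(n_steps, d))
    eps /= np.linalg.norm(eps, axis=1, keepdims=True)   # uniform directions on S^{d-1}
    durations = np.diff(np.concatenate(([0.0], T)))     # T_k - T_{k-1}, speed = 1
    X = np.cumsum(durations[:, None] * eps, axis=0)     # X(T_k) = sum of segment displacements
    return T, X

# Example: power growth f(x) = x**2 (alpha = 2)
T, X = random_flight(lambda x: x**2, n_steps=1000, rng=0)
```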
For a more precise description of the results it is convenient to assume that $T_k = f(\Gamma_k)$, where $\Pi = (\Gamma_k)$ is a standard homogeneous Poisson point process on $\mathbb{R}_+$ with intensity 1. In this case $\Gamma_k = \gamma_1 + \cdots + \gamma_k$, where $(\gamma_k)$ are i.i.d. standard exponential random variables. If the function $f$ has power growth, the behavior of the process is analogous to the uniform case, and in the limit we obtain a Gaussian process which is a linearly transformed Brownian motion, where $W$ is a process of Brownian motion for which the covariance matrix of $W(1)$ coincides with the covariance matrix of $\varepsilon_1$, and $K_\alpha(s)$ is a nonrandom kernel whose exact expression is given below. In the case of exponential growth, the limiting process is piecewise linear with an infinite number of units, but for every $\varepsilon > 0$ the number of units in the interval $[\varepsilon, 1]$ will be a.s. finite. Finally, with super exponential growth of $f$, the process degenerates: its trajectories are linear functions. In the second part of the paper the process $X(t)$ is considered as a Markov chain. We construct diffusion approximations for this process and investigate their accuracy. To prove the weak convergence we use the approach of Stroock and Varadhan (1979). Under our assumptions the diffusion coefficients $a$ and $b$ have the property that for each $x \in \mathbb{R}^d$ the martingale problem for $a$ and $b$ has exactly one solution $P_x$ starting from $x$ (that is, it is well posed). It remains to check the conditions from Stroock and Varadhan (1979) which imply the weak convergence of our sequence of Markov chains to this unique solution $P_x$. We also consider a more general model which may be called a "random walk over ellipsoids in $\mathbb{R}^d$". For this model we establish the convergence of transition densities and obtain an Edgeworth type expansion up to the order $n^{-3/2}$, where $n$ is the number of switchings. The main tool in this part is the parametrix method (Konakov (2012), Konakov and Mammen (2009)). Random flights in Poissonian environment The reader is reminded that we suppose $T_k = f(\Gamma_k)$, where $(\Gamma_k)$ is a standard homogeneous Poisson point process on $\mathbb{R}_+$. Assume also that $E\varepsilon_1 = 0$. It is more convenient to consider at first the behavior of the processes $Z_n(t) = Y_{T_n}(t)$, as for $T = T_n$ the paths of $Z_n$ have an integer number of full segments on the interval $[0,1]$. The typical path of $\{Z_n(t), t \in [0,1]\}$ is a continuous broken line with vertices at the points $t_{n,k} = T_k/T_n$. Theorem 1. Under the previous assumptions: 1) If the function $f$ has power growth, then $Z_n \Longrightarrow Y$, where $Y$ is the Gaussian process described above and $W$ is a process of Brownian motion for which the covariance matrix of $W(1)$ coincides with the covariance matrix of $\varepsilon_1$. 2) In the exponential case, $Z_n \Longrightarrow Y$, where $Y$ is a continuous piecewise linear process with vertices at the points described in Remark 2 below. 3) In the super exponential case, suppose that $f$ is increasing, absolutely continuous and satisfies a super exponential growth condition. We take $B(T) = T$. Then $T_n/T_{n+1} \to 0$ in probability, and $Z_n \Longrightarrow Y$, where the limiting process $Y$ degenerates: its trajectories are linear. Remark 1. In the case of power growth the limiting process admits a representation in terms of the kernel $K_\alpha$ and the Brownian motion $W$, where, as before, $W$ is a Brownian motion for which the covariance matrix of $W(1)$ coincides with the covariance matrix of $\varepsilon_1$. It is clear that we can also express $Y$ in another way, in terms of a standard Brownian motion $w$ and the covariance matrix $K$ of $\varepsilon_1$. Remark 2. In the case of exponential growth it is possible to describe the limiting process $Y$ in the following way. We take a p.p.p.
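The super exponential statement $T_n/T_{n+1} \to 0$ in probability in Theorem 1 is easy to probe numerically. The sketch below (an illustration, not the paper's method) estimates $E[T_n/T_{n+1}]$ for a power-growth $f$ and a super exponential $f$, working through $\log f$ to avoid floating-point overflow.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_ratio(log_f, n, reps=2000):
    """Monte Carlo estimate of E[T_n / T_{n+1}] for T_k = f(Gamma_k),
    computed through log f so that f(x) = exp(x**2) does not overflow."""
    g = np.cumsum(rng.exponential(1.0, (reps, n + 1)), axis=1)  # Gamma_1..Gamma_{n+1}
    return np.mean(np.exp(log_f(g[:, n - 1]) - log_f(g[:, n])))

for n in (10, 50, 200):
    power = mean_ratio(lambda x: 2 * np.log(x), n)   # f(x) = x^2: ratio tends to 1
    superexp = mean_ratio(lambda x: x**2, n)         # f(x) = exp(x^2): ratio tends to 0
    print(n, round(power, 3), round(superexp, 5))
```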
$\mathbb{T} = (t_k)$, $t_k = e^{-\beta\Gamma_{k-1}}$, defined on $(0,1]$, and define a step process $\{Z(t), t \in (0,1]\}$ on these points; the limiting process $Y$ is then constructed from $Z$.
Diffusion approximation In this section we first consider a model of random flight which is equivalent to the study of random broken lines $\{X_n(t), t \in [0,1]\}$ with the vertices $(\frac{k}{n}, X_n(\frac{k}{n}))$ and such that (with $h = \frac{1}{n}$) $$X_n((k+1)h) = X_n(kh) + h\,b(X_n(kh)) + \sqrt{h}\,\xi_k(X_n(kh)),$$ where $\{\varepsilon_k\}$ and $\{\rho_k\}$ are two independent sequences. Suppose that $b$ and $\sigma$ are continuous functions satisfying a Lipschitz condition; moreover, it is supposed that $b(x)$ and $1/\det(\sigma(x))$ are bounded. Then the corresponding weak convergence holds. Our next result is about the approximation of the transition density. We now consider more general models given by a triplet $(b(x), \sigma(x), f(r;\theta))$, where $f(r;\theta)$ is a radial density depending on a parameter $\theta$ controlling the frequency of changes of directions; namely, the frequency increases when $\theta$ decreases. Suppose $X(0) = x_0$. The vector $b(x_0)$ acts by shifting the particle from $x_0$. Several examples of such functions $\Delta(\theta)$ for different models will be given below. The initial direction is defined by a random variable $\xi_0$; the law of $\xi_0$ is a pushforward of the spherical measure on $S^d_{x_0}(1)$ under an affine change of variables. Then the particle moves along the ray $l_{x_0}$ corresponding to the directional unit vector and changes direction in $(r, r+dr)$ with the probability given by the radial density. Let $\rho_0$ be a random variable independent of $\xi_0$ and distributed on $l_{x_0}$ with the radial density (2). We consider the point $x_1$ obtained in this way. Starting from $x_1$ we repeat the previous construction to obtain $x_2$. After $n$ switchings we get a point $x_n$. To obtain the one-step characteristic function $\Psi_1(t)$ we make use of formula (6) from Yadrenko (1980), where $J_\nu(z)$ is the Bessel function and $d\Phi_E(r)$ is the $F$-measure of the layer between $E_{x_0}(r)$ and $E_{x_0}(r+dr)$, $F$ being the law of $\rho_0\varepsilon_0$. Now we make our main assumption (A1) about the radial density. Denote by $p_E(n, x, y)$ the transition density after $n$ switchings in the RF-model described above. To obtain the one step transition density $p_E(1, x, y)$ (we write $(x, y)$ instead of $(x_0, x_1)$) we use the inverse Fourier transform, (3) and (A1). After easy calculations we get the expression (4). Consider two examples. Example 1. We put $\Delta(\theta) = (d+1)^2\theta^2$. Using (3), formula 6.623(2) on page 726 from Gradshtein and Ryzhik (1963), and the doubling formula for the Gamma function, we obtain the corresponding density; it is easy to check that it integrates to one. Example 2. Using (3) and formula 6.631(4) on page 731 of Gradshtein and Ryzhik (1963) we obtain another density. It is easy to see that the transition density (4) corresponds to the one step transition density in the following Markov chain model: taking $\theta = \theta_n$ with $\Delta(\theta_n) = \frac{1}{n}$, we obtain a sequence of Markov chains defined on an equidistant grid. Note that the triplet $(b(x), \sigma(x), f(r;\theta))$, $x \in \mathbb{R}^d$, $r \ge 0$, $\theta \in \mathbb{R}_+$, of Example 2 corresponds to the classical Euler scheme for the $d$-dimensional SDE $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$. Let $p(1, x, y)$ be the transition density from 0 to 1 in the model (7). We make the following assumptions. (A2) The function $a(x) = \sigma\sigma^T(x)$ is uniformly elliptic. (A3) The functions $b(x)$ and $\sigma(x)$ and their derivatives up to the order six are continuous and bounded uniformly in $x$; the 6-th derivative is globally Lipschitz. The operator $L_*$ in (8) is the same operator as in (9) but with coefficients "frozen" at $x$. Clearly, $L = L_*$ at the freezing point but, in general, $L^2 \neq L_*^2$. The convolution type binary operation $\otimes$ is defined for functions $f$ and $g$ in the standard way of the parametrix method. At the same time, the process $X_n$ converges weakly to a limiting process which is in some sense degenerate. Hence this case is not very interesting.
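Since the triplet of Example 2 corresponds to the classical Euler scheme, a minimal sketch of that scheme may help; the concrete coefficients $b(x) = -x$ and $\sigma(x) = I$ below are toy assumptions chosen only for illustration.

```python
import numpy as np

def euler_scheme(b, sigma, x0, n, rng=None):
    """Classical Euler scheme X_{k+1} = X_k + h b(X_k) + sqrt(h) sigma(X_k) xi_k,
    h = 1/n, xi_k standard normal; a weak approximation of dX = b dt + sigma dW."""
    rng = np.random.default_rng(rng)
    h, d = 1.0 / n, len(x0)
    X = np.empty((n + 1, d))
    X[0] = x0
    for k in range(n):
        xi = rng.standard_normal(d)
        X[k + 1] = X[k] + h * b(X[k]) + np.sqrt(h) * sigma(X[k]) @ xi
    return X

# toy coefficients (assumed for illustration): mean-reverting drift, identity diffusion
b = lambda x: -x
sigma = lambda x: np.eye(2)
path = euler_scheme(b, sigma, np.zeros(2), n=1000, rng=0)
```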
Remark 3. In the case where $\beta \neq 1$ it is simply necessary to replace $e^{-\Gamma_k}$ by $e^{-\beta\Gamma_k}$. Remark 4. It seems that the last result could be extended by considering more general sequences $(\varepsilon_k)$. Interpretation: $\varepsilon_k/|\varepsilon_k|$ defines the direction, $|\varepsilon_k|$ defines the velocity of displacement in this direction on the step $S_k$. Asymptotic behaviour in the case of power growth In this case $T_k = \Gamma_k^\alpha$, $\alpha > 1/2$, and $t_{n,k} = T_k/T_n = (\Gamma_k/\Gamma_n)^\alpha$. Let $x \in \mathbb{R}^d$ be such that $|x| = 1$. We will show below that the normalized variance converges, where $C(x) = \frac{2\alpha^2}{2\alpha-1}\, E\langle\varepsilon_1, x\rangle^2$. Therefore it is natural to take $B_n^2 = n^{2\alpha-1}$. We proceed in 5 steps. Step 1: Lemmas. Step 2: We compare $X_n(\cdot)$ with $Z_n(\cdot)$, where $Z_n$ is defined at the points $t_{n,k}$. Step 3: We compare $Z_n(\cdot)$ with $W_n(\cdot)$, where $W_n$ is defined at the points $t_{n,k}$. Step 4: We show that the process $U_n(\cdot)$ converges weakly to the limiting process $Y$; here $W(\cdot)$ is a process of Brownian motion for which the covariance matrix of $W(1)$ coincides with the covariance matrix of $\varepsilon_1$. Step 5: We show that the convergence $W_n \Rightarrow Y$ follows from the convergence $U_n \Rightarrow Y$. Finally: we get the convergence $X_n \Rightarrow Y$. 4.3.1 Step 1. Lemma 1. Let $\alpha > 0$ and $m \ge 1$. Then the stated bound holds for all $x > 0$, $h > 0$. Proof. By the Taylor-Lagrange formula we have (12) with the corresponding remainder term. ✷ Lemma 2. Proof. It follows from the inequalities stated above. ✷ Lemma 3. Let $\Gamma$ be the Gamma function. Then the stated asymptotic holds as $k \to \infty$. Proof. It follows from Lemma 2 and a well known asymptotic. ✷ Lemma 4. Proof. The result follows from the well known fact mentioned above and Lemma 3. ✷ Lemma 5. Let $\alpha \ge 0$. The following relations take place as $k \to \infty$, where $|\rho_k| = O(k^{\alpha-2})$ in probability. We deduce immediately from (16) the following relation. Step 3. We show now that $\|Z_n - W_n\|_\infty \xrightarrow{P} 0$. Similar to the previous case, $(\beta_k)$ under condition M is a sequence of sums of independent random variables with mean zero. The corresponding terms are estimated by independence of $\gamma_j$ and $\Gamma_{j-1}$. We have finally $P\{\max_{k \le n} |\beta_k| \ge t\} \to 0$ as $n \to \infty$, which gives the convergence $\|W_n - Z_n\| \xrightarrow{P} 0$. Step 4. Let $U_n$ be the process defined at the points $\frac{k}{n}$ and by linear interpolation on the intervals $[\frac{k}{n}, \frac{k+1}{n}]$, $k = 0, \ldots, n-1$. We now state the weak convergence of the processes $U_n$ to the process $Y$; $W$ is a Brownian motion for which the covariance matrix of $W(1)$ coincides with the covariance matrix of $\varepsilon_1$. The proof is standard because $U_n(\cdot)$ represents a (more or less) usual broken line constructed from the consecutive sums of independent (non-identically distributed) random variables. One could apply Prokhorov's theorem (see Gikhman and Skorohod (1996), ch. IX, sec. 3, Th. 1). Only one thing must be checked: the convergence of the corresponding increments for any $0 < s < t \le 1$ and for any $x \in \mathbb{R}^d$, $|x| = 1$. It is clear that by the Lindeberg-Feller theorem it is sufficient to state the convergence of variances. We compute both variances, and they are the same. Due to Steps 2 and 3 it is sufficient to show that $W_n \Rightarrow Y$. Let $f_n: [0,1] \to [0,1]$ be a piecewise linear continuous function such that $f_n(t_{n,k}) = (k/n)^\alpha$; $t_{n,k} = (\Gamma_k/\Gamma_n)^\alpha$; $k = 0, 1, \ldots, n$. By definition of $W_n$ and $U_n$ we have $W_n(t) = U_n(f_n(t))$. By the corollary to Lemma 6 (see below) the function $f_n$ converges in probability uniformly to $f$, $f(t) = t$, and by the previous step $U_n \Rightarrow Y$. It means that we can apply Lemma 7, which gives the necessary convergence. Proof of Lemma 6. We use the representation in which $(\xi_{n,k})_{k=1,\ldots,n}$ are the order statistics from the $[0,1]$-uniform distribution. Let $\delta_n := \max_{k \le n} |\xi_{n,k} - \frac{k}{n}|$. Evidently, $\delta_n \le \sup_{[0,1]} |F_n^*(x) - x|$, where $F_n^*$ is the uniform empirical distribution function. By the Glivenko-Cantelli theorem, $\sup_{[0,1]} |F_n^*(x) - x| \to 0$ a.s., which gives the convergence $M_n \to 0$ in probability.
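The Glivenko-Cantelli step in the proof of Lemma 6 can be checked empirically: the sketch below (illustrative only) computes $\delta_n = \max_{k \le n} |\xi_{n,k} - k/n|$ for uniform order statistics and shows it shrinking with $n$.

```python
import numpy as np

rng = np.random.default_rng(2)

# delta_n = max_k |xi_{n,k} - k/n| for uniform order statistics xi_{n,k};
# it is dominated by sup |F_n*(x) - x| and tends to 0 by Glivenko-Cantelli
for n in (10**2, 10**3, 10**4, 10**5):
    xi = np.sort(rng.uniform(size=n))
    delta_n = np.max(np.abs(xi - np.arange(1, n + 1) / n))
    print(n, round(delta_n, 4))
```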
✷ The proof of the corollary follows directly from Lemma 6 due to the uniform continuity of the function involved. Lemma 7. Let $\{U_n\}$ be a sequence of continuous processes on $[0,1]$ weakly convergent to some limit process $U$. Let $\{f_n\}$ be a sequence of random continuous bijections of $[0,1]$ onto $[0,1]$ which converges in probability uniformly to the identity function $f(t) \equiv t$. Then the process $W_n$, $W_n(t) = U_n(f_n(t))$, $t \in [0,1]$, will converge weakly to $U$. As the last convergence evidently implies the a.s. uniform convergence of $\tilde{U}_n(f_n(t))$ to $\tilde{U}(f(t))$, we get the convergence in distribution of $U_n(f_n(\cdot))$ to $U(f(\cdot)) = U(\cdot)$. If we set $$D(\omega, \omega') = \sum_{n=1}^{\infty} \frac{1}{2^n}\,\frac{\sup_{0 \le t \le n} |x(t,\omega) - x(t,\omega')|}{1 + \sup_{0 \le t \le n} |x(t,\omega) - x(t,\omega')|},$$ then it is well known that $D$ is a metric on $\Omega$ and $(\Omega, D)$ is a Polish space. The convergence induced by $D$ is uniform convergence on bounded $t$-intervals. For simplicity, we will omit $\omega$ in the future and we will be assuming that all our processes are homogeneous in time. Analogous results for time-inhomogeneous processes may be obtained by simply considering the time-space processes. We will use $\mathcal{M}$ to denote the Borel $\sigma$-field of subsets of $(\Omega, D)$, $\mathcal{M} = \sigma[x(t) : t \ge 0]$. We also will consider an increasing family of $\sigma$-algebras $\mathcal{M}_t = \sigma[x(s) : 0 \le s \le t]$. The classical approach to the construction of diffusion processes corresponding to given coefficients $a$ and $b$ involves a transition probability function $P(s, x; t, \cdot)$ which allows one to construct, for each $x \in \mathbb{R}^d$, a probability measure $P_x$ on $\Omega = C([0,\infty); \mathbb{R}^d)$ with the properties that for all $0 \le t_1 < t_2$ and $\Gamma \in \mathcal{B}_{\mathbb{R}^d}$ (the Borel $\sigma$-algebra in $\mathbb{R}^d$) the corresponding Markov property holds. It appears that this measure is a martingale measure for a special martingale related to the second order differential operator $L$: for suitable test functions $f$, the process $f(x(t)) - \int_0^t Lf(x(s))\,ds$ is a martingale. We will say that the martingale problem for $a$ and $b$ is well-posed if, for each $x$, there is exactly one solution to that martingale problem starting from $x$. We will be working with the following set up. For each $h > 0$ let $\Pi_h(x, \cdot)$ be a transition function on $\mathbb{R}^d$. Given $x \in \mathbb{R}^d$, let $P_x^h$ be the probability measure on $\Omega$ characterized by the property that for all $k \ge 0$ the chain moves according to $\Pi_h$. Define the corresponding truncated drift and covariance characteristics, where $B(x, \varepsilon)$ is the open ball with center $x$ and radius $\varepsilon$. What we are going to assume is that for all $R > 0$ the limits (29)-(32) hold. Assume that in addition to (29)-(32) the coefficients $a$ and $b$ are continuous and have the property that for each $x \in \mathbb{R}^d$ the martingale problem for $a$ and $b$ has exactly one solution $P_x$ starting from $x$ (that is, it is well posed). Then $P_x^h$ converges weakly to $P_x$ uniformly in $x$ on compact subsets of $\mathbb{R}^d$. Sufficient conditions for the well-posedness are given by the following theorem. Let $S_d$ be the set of symmetric non-negative definite $d \times d$ real matrices. Theorem B (Stroock and Varadhan (1979), page 152, Theorem 6.3.4). Let $a : \mathbb{R}^d \to S_d$ and $b : \mathbb{R}^d \to \mathbb{R}^d$ be bounded measurable functions and suppose that $\sigma : \mathbb{R}^d \to \mathbb{R}^d \times \mathbb{R}^d$ is a bounded measurable function such that $a = \sigma\sigma^*$. Assume that there is an $A$ such that $$\|\sigma(x) - \sigma(y)\| + |b(x) - b(y)| \le A|x - y| \quad (33)$$ for all $x, y \in \mathbb{R}^d$. Then the martingale problem for $a$ and $b$ is well-posed and the corresponding family of solutions $\{P_x : x \in \mathbb{R}^d\}$ is Feller continuous (that is, $P_{x_n} \to P_x$ weakly if $x_n \to x$). Note that (33) and the uniform ellipticity of $a(x)$ imply the existence of the transition density $p(s, x; t, y)$ (Stroock and Varadhan (1979), Theorem 3.2.1, page 71).
2016-09-26T18:21:16.000Z
2016-09-22T00:00:00.000
{ "year": 2016, "sha1": "b7a0d4df2984817cca5021cb127b86c56fbbfc23", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b7a0d4df2984817cca5021cb127b86c56fbbfc23", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
14721863
pes2o/s2orc
v3-fos-license
Digestion of Yeasts and Beta-1,3-Glucanases in Mosquito Larvae: Physiological and Biochemical Considerations Aedes aegypti larvae ingest several kinds of microorganisms. In spite of studies regarding mosquito digestion, little is known about the nutritional utilization of ingested cells by larvae. We investigated the effects of using yeasts as the sole nutrient source for A. aegypti larvae. We also assessed the role of beta-1,3-glucanases in digestion of live yeast cells. Beta-1,3-glucanases are enzymes which hydrolyze the cell wall beta-1,3-glucan polysaccharide. Larvae were fed with cat food (controls) or live or autoclaved Saccharomyces cerevisiae cells, and larval weight, time for pupation and adult emergence, and larval and pupal mortality were measured. The presence of S. cerevisiae cells inside the larval gut was demonstrated by light microscopy. Beta-1,3-glucanase was measured in dissected larval samples. Viability assays were performed with live yeast cells and larval gut homogenates, with or without addition of competing beta-1,3-glucan. A. aegypti larvae fed with yeast cells were heavier at the 4th instar and showed complete development with normal mortality rates. Yeast cells were efficiently ingested by larvae and quickly killed (10% death in 2 h, 100% in 48 h). Larvae showed beta-1,3-glucanase in the head, gut and rest of body. Gut beta-1,3-glucanase was not derived from ingested yeast cells. Gut and rest of body activity was not affected by the yeast diet, but head homogenates showed a lower activity in animals fed with autoclaved S. cerevisiae cells. The enzymatic lysis of live S. cerevisiae cells was demonstrated using gut homogenates, and this activity was abolished when excess beta-1,3-glucan was added to assays. These results show that live yeast cells are efficiently ingested and hydrolyzed by A. aegypti larvae, which are able to fully-develop on a diet based exclusively on these organisms. Beta-1,3-glucanase seems to be essential for the yeast lytic activity of A. aegypti larvae, which possess significant amounts of this enzyme in all parts investigated. Introduction Aedes aegypti, among other species of the genus Aedes, is the main vector of several pathogens like Dengue, Urban Yellow Fever, Chikungunya, West Nile and Zika viruses, whose endemic areas include 40% of human populations worldwide (2.5 billion people) [1,2]. In spite of being considered diseases restricted to tropical countries, recent global warming has increased concerns about their spread to regions with temperate climate [3], including reports of West Nile virus in Europe, Asia, North America and Australia [4]. Current main strategies for fighting these diseases rely on vector control, as there are no vaccines commercially available. Historically, the control of mosquitoes has been done with chemical insecticides, which are losing their potential effectiveness due to the appearance of resistant populations [5]. New strategies for the control of vector populations, such as transgenic mosquitoes, transfection of insects with Wolbachia or paratransgenesis, have been proposed and are currently under evaluation [6][7][8]. Interestingly, some of these strategies depend on rearing massive amounts of insects and, therefore, mosquito nutrition has become a strategic point of investigation. The haematophagic behaviour of adult female A. aegypti and the fact that the initial site for development and transmission of pathogens by this insect is the intestine have led to several studies of its digestive physiology [9][10][11][12].
Understandably, those studies have focused on the physiology of female adults, and larval digestion is known to a lesser extent [9,[13][14]. Interestingly, the burden of A. aegypti-transmitted diseases is primarily determined by the occurrence of larval breeding sites [15]. Thus, knowledge of larval physiology and biochemistry can result in new insights for vector control. A. aegypti larvae are considered detritivores, ingesting solid particles from liquid media and scraping solid material from surfaces. Among the particles ingested by mosquito larvae, several microorganisms, such as bacteria, fungi, protozoa and rotifers, have been found [16][17][18][19][20][21], but the mechanisms used by larvae for breakdown of these nutritional sources remain largely unknown. Recent understanding of the importance of gut microbiota in several aspects of insect physiology [22] resulted in more detailed investigations of the role of bacteria in the development and vectorial capacity of mosquitoes. For example, the dependence of Aedes aegypti, Anopheles gambiae and Georgecraigius atropalus larvae on gut bacteria for full development was demonstrated [23]. In spite of that, the exact mechanisms of interaction between these organisms were not fully investigated, as beneficial effects of ingested bacteria might be of nutritional, immunological or even endocrinological nature. In this respect, a deep understanding of interactions between specific microorganisms and mosquito larvae is still lacking. The main objective of this work was to investigate the physiological consequences of yeast ingestion by A. aegypti larvae, using Saccharomyces cerevisiae as a model nutrient source. Yeasts are a more defined food source, antibiotic free and less likely to transmit pathogens to the insects than the standard cat or animal food which is used to raise larvae in regular mosquito colonies [24][25][26]. We discovered that A. aegypti larvae could nourish exclusively from live S. cerevisiae cells, revealing that this insect bears mechanisms for yeast cell wall breakdown and full acquisition of nutrients from this microorganism. Accordingly, we showed in vitro that larval gut homogenates have lytic activity against live S. cerevisiae cells. Beta-1,3-glucanases hydrolyse glycosidic bonds in beta-1,3-glucans, which are the major polysaccharide component of the yeast cell wall. We investigated the effects of an S. cerevisiae-exclusive diet on larval beta-1,3-glucanase activity, and competition experiments revealed that this enzyme is crucial for the larval lytic activity against this microorganism. These findings, besides unravelling new basic physiological aspects of culicid larvae, could help in the establishment of better defined, pathogen free artificial diets for large-scale mosquito larvae rearing in the future. Insects rearing and maintenance Aedes aegypti eggs from the Rockefeller strain were obtained from the colony of the Laboratory of Physiology and Control of Arthropod Vectors (LAFICAVE/IOC-FIOCRUZ; Dr Denise Valle and Dr José Bento Pereira Lima). Insects were reared until the adult stage in the Laboratory of Insect Biochemistry and Physiology (LABFISI, IOC/FIOCRUZ) at 27±2°C and 70±10% relative humidity with a 12-h light/12-h dark cycle. To obtain synchronized developing larvae, hatching was induced by adding 100 mL of distilled water into 200 mL plastic cups containing eggs and then incubating at 28°C for 30 minutes.
After incubation, first instar larvae (n = 80) were transferred together to plastic bowls containing 100 mL of dechlorinated water and 0.1 g of cat food (Whiskas®, Purina, Brazil) and kept at 26±1°C until the adult stage. The food was added only once at the beginning of each experiment. Larvae which received cat food are considered the control group. Saccharomyces cerevisiae S14 was kindly donated by Professor Pedro Soares de Araújo (Chemistry Institute, University of São Paulo, Brazil). For feeding experiments with live S. cerevisiae, a single colony was transferred into 3-5 mL of liquid Sabouraud medium [27] and incubated overnight at 30°C under shaking at 100 rpm. After overnight incubation, 100 μL of culture were subpassaged into 50 mL of Sabouraud medium and incubated overnight at 30°C under shaking at 100 rpm. 50 mL of cultures were then centrifuged (7,500 x g, 30 min, 4°C) and the supernatant was discarded. All cells were then suspended in water and released into larval cups. A similar experiment was performed autoclaving the cells (120°C, 20 min, 1.5 atm) before larval feeding. Biological parameters Initially, we investigated if a yeast diet could have an impact on the development of fourth instar larvae of Aedes aegypti. With this objective, recently molted 4th instar larvae were fed on live Saccharomyces cerevisiae cells until the prepupae phase. To investigate if A. aegypti could fully-develop when feeding exclusively on cells of this yeast species, recently hatched first instar larvae were transferred to a bowl containing yeasts as the sole food source. Larval and pupal mortality, pupation and emergence were monitored and recorded daily. Fourth instar larvae, pupae, and male and female adults were weighed individually or in pools of 10 individuals each. Pupation and emergence data were plotted and compared by the Log-rank (Mantel-Cox) Test. Mortality and weights were expressed as means ± SEM and non-transformed data were compared by ANOVA or pairwise t-tests. Preparation of samples for enzymatic assays Larvae were immobilized by placing them on ice, after which they were dissected in cold 0.9% (w/v) NaCl. Parts dissected in each larva were the head, gut and rest of body. Heads and rest of bodies were homogenized in MilliQ water with the aid of a micro tube pestle (Model Z 35, 997-1, Sigma, USA), using a ratio of 100 μL of water per 10 insects. Guts were homogenized in cold MilliQ water containing 20 mM phenylmethylsulfonyl fluoride (PMSF), 20 μM Pepstatin A and 20 μM trans-epoxysuccinyl-L-leucylamido(4-guanidino)butane (E-64). All samples were centrifuged for 10 min at 14,000 x g at 4°C. Both pellets and soluble fractions were stored at -20°C until used as the enzyme source for enzymatic assays. Yeast viability assays To test if larval gut contents have some influence on live yeast cells, we performed assays incubating these two materials together and followed yeast viability. Gut soluble fraction was prepared as above and filtered through a 0.45 μm PVDF syringe filter (Millipore Code. JBR6 103 14 Lot. B2MN40511) and then incubated at 30°C with 10 colony-forming units (CFUs)/μL of live S. cerevisiae cells in 10 mM sodium citrate buffer pH 7.0. After different time points, assays were sampled and aliquots were plated onto solid Sabouraud medium (1% w/v yeast extract, 1% w/v peptone, 1% w/v dextrose, 2% w/v Agar). After overnight incubation at 30°C, colonies were counted. Cell stability under assay conditions was confirmed by using controls without enzyme.
Enzymatic assays and protein quantitation β-1,3-glucanase activity in Aedes aegypti larvae was determined by measuring the release of reducing groups from 0.25% (w/v) laminarin from Laminaria digitata (SIGMA Cat. no. L9634) in a thermocycler with a modified bicinchoninic acid reagent according to ref. [28]. All assays were performed at 30°C under conditions such that activity was proportional to protein concentration and time. Controls without enzyme or without substrate were included. One unit of enzyme (U) is defined as the amount that hydrolyses 1 μmol of substrate (or bonds)/min. Protein concentration was determined according to [29] using ovalbumin as a standard. To test if feeding with yeasts could change beta-1,3-glucanase expression, we compared activities in all parts of A. aegypti 4th instar larvae reared on live or autoclaved S. cerevisiae cells with the levels found in larvae fed on cat food. Comparisons between means of two independent groups were done with a pairwise t-test. Results are expressed as the group mean ± SEM. Yeast cell counts To confirm that A. aegypti larvae are actively ingesting live S. cerevisiae cells, and not merely filtering released molecules from broken or dead cells, we decided to check the integrity and viability of the yeasts under our experimental conditions. During the preparation of the experimental diets, the yeasts, after growing on liquid Sabouraud media, are centrifuged and resuspended in water. We decided to count viable cells using Trypan Blue staining and light microscopy after these treatments to check viability as below. S. cerevisiae cultures (45 mL) were prepared in Sabouraud liquid media as described previously and then centrifuged (7,500 x g, 40 min, 4°C). The supernatant was discarded and cells were resuspended in the same volume of Sabouraud liquid media or water. Ten microliter aliquots of each suspension were withdrawn and then combined with 90 μL of PBS. These samples were mixed with 100 μL of a 0.4% (w/v) Trypan Blue solution in PBS and then 15 μL were loaded on a Neubauer chamber (hemocytometer), where dead and live cells were counted in a light microscope (400 x magnification). In one experiment, yeast cells resuspended in water were kept at 26°C for 24 hours before staining and counting. For counting yeast cells ingested by A. aegypti larvae, insects were raised on cat food as described previously until they reached the fourth larval instar. Then larvae were transferred to a bowl with S. cerevisiae cells as the food source as described, and after different time points larvae were withdrawn from the pots and dissected. Entire guts were homogenized in 100 μL PBS, combined with Trypan Blue, and then live and dead yeast cells were counted as above. Food protein and sugar contents For quantitations in S. cerevisiae, cells were grown in Sabouraud liquid media as described and 45 mL of culture were centrifuged (7,500 x g, 30 min, 4°C). The supernatant was discarded and cells were resuspended in 5 mL of water. Ten microliter aliquots were withdrawn for protein and sugar measurements. For quantitations in cat food samples, 0.1 g of cat food was homogenized in 1 mL water and 20 μL aliquots were withdrawn for measurements. Proteins were determined with the bicinchoninic acid method [30] and total sugars were measured with the phenol-sulfuric acid method [31]. Due to the presence of insoluble material, cat food samples submitted to reaction with BCA were centrifuged (quick spin) before absorbance readings.
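For readers reproducing the quantitations above, a small sketch of the underlying arithmetic may help. The Neubauer-chamber factor (mean count per large square × dilution × 10^4 = cells/mL) is the standard hemocytometer formula and is an assumption here, since the text does not spell it out; the 1:1 mixing with Trypan Blue described in the protocol gives the dilution factor of 2.

```python
# Minimal arithmetic sketch for the cell counts and enzyme units above.

def cells_per_ml(mean_count_per_square, dilution_factor=2):
    # standard Neubauer chamber formula (assumed): each large square holds 0.1 uL,
    # so counts per square x dilution x 1e4 gives cells per mL
    return mean_count_per_square * dilution_factor * 1e4

def specific_activity(umol_per_min, mg_protein):
    # one unit (U) hydrolyses 1 umol of substrate per minute;
    # specific activity is expressed per mg of protein
    return umol_per_min / mg_protein  # U/mg

print(cells_per_ml(120))             # hypothetical count: 2.4e6 cells/mL
print(specific_activity(5e-6, 0.8))  # hypothetical assay: 6.25e-6 U/mg = 6.25 uU/mg
```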
Statistical analysis Linear regressions were performed using Microsoft Excel (Microsoft). Statistical comparisons were made using GraphPad Prism software (version 5.0, GraphPad Software Inc.). Significance was considered when p<0.05. Results A. aegypti 4th instar larvae fed on live Saccharomyces cerevisiae cells reached the end of the larval stage with significantly higher weights when compared to controls (p < 0.05, unpaired t-test, n = 3, Table 1). A. aegypti raised from eggs on live S. cerevisiae cells resulted in larvae heavier than controls (p < 0.05, unpaired t-test, n = 6, Table 1). However, pupae and female adults derived from these larvae had similar weights when compared to controls (p > 0.05, unpaired t-test, n = 6, Table 1). Yeast-fed male adults had weights significantly higher than controls (p < 0.01, unpaired t-test, n = 6, Table 1). We observed a small but significant delay in both pupation and adult emergence (p < 0.05, Log-rank (Mantel-Cox) Test, n = 320, Fig 1), but no significant changes in larval or pupal mortality (p > 0.05, unpaired t-test, n = 6, Table 1). Viable cell counts revealed that resuspension in water does not affect the number or viability of yeast cells (p > 0.05, unpaired t-test, n = 9, Fig 2A). Yeast cells remain viable even after being incubated in water for 24 hours (p > 0.05, unpaired t-test, n = 9, Fig 2A), which suggests that larvae have been exposed to live cells throughout our experiments. Counting of yeast cells inside the gut of 4th instar larvae which were exposed to S. cerevisiae diets revealed that the insects had ingested a significant amount of cells already at the first time point analysed (2 hours; Fig 2B). During 48 hours of exposure of larvae to the yeast diet, the total number of ingested cells does not dramatically change. However, a significant decrease in viable cells occurred after 24 hours, with an increase of dead cells (Fig 2B and 2C). Under the same conditions, control insects maintained on cat food showed no yeasts inside the gut (Fig 2B). Taken together, these results clearly show that, in spite of some changes in development, A. aegypti can nourish and fully-develop from live S. cerevisiae cells. To have a better understanding of possible reasons for the observed changes in development when A. aegypti larvae are raised on live yeast cells, we compared the protein and sugar amounts in the yeast diet to the amounts present in the regular cat food which was given to controls. The yeast diet contains respectively 11.6 and 3.2 times more protein and sugar than the cat food, when we compare the amounts which were given to each group (Table 2). Since A. aegypti larvae were able to develop solely on a live S. cerevisiae diet, we hypothesized that larvae were able to break down the macromolecules from this nutrient source. Because one of the main constituents of the yeast cell wall is beta-1,3-glucan [32], we decided to investigate if A. aegypti larvae produced beta-1,3-glucanase. Beta-1,3-glucanase activity was present in all parts of 4th instar larvae, with a prevalence in the rest of body and minor activities in the head and gut (Fig 3A). Surprisingly, specific activity (measured as μU/mg protein) in the head was ten times higher than in the gut or rest of body (Fig 3B). Activity present in the suspension from containers used to raise the larvae was negligible (Fig 3A), suggesting that the activity present in the gut is secreted by this organ and not acquired from food.
After finding significant beta-1,3-glucanase activities in all parts of A. aegypti 4th instar larvae, we verified whether these activities could be modified (elicited or inhibited) by a diet with live S. cerevisiae cells. Rearing of A. aegypti exclusively on live S. cerevisiae did not result in any significant changes in beta-1,3-glucanase levels in the soluble fraction of all samples tested when compared to controls fed with cat food (p > 0.05, unpaired t-test, n = 4, Fig 4A). We also measured the activity associated with the insoluble fraction of samples, which in the case of gut putatively contains undigested S. cerevisiae cells and cell walls. The activities in the insoluble fraction of guts and heads were also not changed (p > 0.05, unpaired t-test, n = 4, Fig 4B), as well as the total activity in each tissue (p > 0.05, unpaired t-test, n = 4, soluble + insoluble fractions; Fig 4C). Surface exposure of structural components of the cell wall could be an important factor in possible changes in beta-1,3-glucanase activity in larvae during development when feeding on yeast cells. Nevertheless, activities from insects fed on autoclaved S. cerevisiae did not differ from controls in all tissues, neither in the soluble fraction (p > 0.05, unpaired t-test, n = 4, Fig 5A), the insoluble fraction (p > 0.05, unpaired t-test, n = 4, Fig 5B), nor in total amount (p > 0.05, unpaired t-test, n = 4, Fig 5C). The only remarkable exception to this pattern was the activity in the head, which was significantly lower in larvae fed on autoclaved yeasts compared with controls fed on cat food. This was observed in total as well as in both soluble and insoluble fractions (p < 0.05, unpaired t-test, n = 4, Fig 5A-5C). The presence of a constitutive beta-1,3-glucanase activity in the gut of A. aegypti larvae raised the question about the real importance of this enzyme in the breakdown of ingested yeast cell walls. These cells were stable under assay conditions (see controls, Fig 6), and incubation of these cells with gut soluble fraction from A. aegypti larvae resulted in rapid loss of viability (p < 0.05, unpaired t-test vs controls, n = 9, Fig 6). Addition of laminarin, a beta-1,3-glucan from Laminaria digitata and a commercial substrate for beta-1,3-glucanases, to the assay mixture prevented the effect of A. aegypti larval gut soluble fraction on live S. cerevisiae cells (p > 0.05, unpaired t-test vs controls, n = 9, Fig 6). Discussion Mosquito larvae feed on particulate material, which can include plant debris, algae, protists, and fungal and bacterial cells. In fact, several works supported by microscopic evidence showed the active ingestion of microorganisms by mosquito larvae [16][17][18][19][20][21]. Sometimes the identification of intact cells inside the larval gut is difficult due to their quick disruption, which seems to be the case for protists. Considering the observed speed of the effect of A. aegypti larval gut homogenates on S. cerevisiae cells (20% viability loss in 15 minutes), this could partially explain the poor record of yeast cells inside the gut of culicid larvae. Culicidae larvae present different modes of feeding. Although classified as filter feeders, sometimes it is hard to distinguish passive ingestion from active selection of food components. In spite of that, yeast cells have already been described as part of the mosquito diet, but there is no evidence about the nutritional importance of these microorganisms in wild larvae.
Table 2. Nutritional parameters of the different diets tested for Aedes aegypti larvae. Cat food was used to raise insects in control conditions. Yeast cells (Saccharomyces cerevisiae) were grown in liquid Sabouraud media and offered to larvae as described. Figures correspond to protein and sugar contents of each diet.
In fact, mosquito larvae seem to be strongly generalist, coping with extreme variations of microbial composition in nursing sites [9]. The data presented in this work showed that A. aegypti larvae were able to ingest yeast cells, but more experiments should be performed to assess the preference of larvae for S. cerevisiae over other dietary microbes. Since live S. cerevisiae cells were the only source of carbon and nitrogen for A. aegypti larvae in our experiments, it was expected that they produced enzymes capable of digesting the main polysaccharides and proteins of this yeast. Digestive chitinases and trypsin-like proteases were already described in mosquito larvae [14,[33][34], but digestion of beta-1,3-glucan, the major yeast cell wall polysaccharide [32], was never studied in Culicidae. Beta-1,3-glucanase activities were described in cockroaches, termites, grasshoppers, beetles, moth larvae and, recently, in sandfly larvae [33][34][35][36][37][38][39][40][41]. These are digestive enzymes involved in the breakdown of plant hemicellulose or in fungal cell wall disruption. Some insect gut beta-1,3-glucanases have high lytic capacity against yeast cells, being, in those cases, endo-beta-1,3-glucanases (E.C. 3.2.1.39) [37]. Insect beta-1,3-glucanases are proteins belonging to glycoside hydrolase family 16 [36,38]. In some insects, such as termites and moths, GHF16 proteins were implicated in pathogen recognition [42][43]. This dual physiological role is evident in the distribution of beta-1,3-glucanase activity in A. aegypti larvae. Gut activities seem to be constitutive, as would be expected for a digestive enzyme in holometabolan larvae [44][45]. Head beta-1,3-glucanase seems to be involved in sensing of microbes in ingested food, as autoclaved food resulted in ablation of this enzyme. A similar pattern of expression was observed for lysozymes in Drosophila larvae [46]. Activity in the rest of body is putatively involved in defense against pathogens, since digestion does not occur in these tissues and beta-1,3-glucans were never described as intermediate metabolites in animals. In this respect, A. aegypti larval beta-1,3-glucanases could be homologous to the beta-1,3-glucanases already described in other insects, such as beetles (gut, [37]) and moths (rest of body, [43]). Notably, this is the first description of beta-1,3-glucanases in larvae of Culicidae. Beta-1,3-glucanase activity in sand fly Lutzomyia longipalpis larvae is putatively related with the active ingestion of fungal cells by this insect [40][41]. The presence of beta-1,3-glucanases in guts of A. aegypti larvae suggests that fungal and plant hemicelluloses could be regular components in their diet, as this enzyme has these structures as substrates in other insects. In this respect, A. aegypti larval gut beta-1,3-glucanase could be complementing the chitinase activity already described [33], which is putatively involved in digestion of fungi and other chitin-containing particles.
It is possible that chitinase and beta-1,3-glucanase have complementary roles in fungal cell disruption by mosquito larvae, but the observation that the presence of laminarin (commercial beta-1,3-glucan) in excess prevented lysis of live S. cerevisiae cells by gut homogenates suggests that beta-1,3-glucanase is essential for disruption of yeast cell walls. This evidence coincides with the predominance of beta-1,3-glucans in fungal cell walls and their structural role [32,47]. In this respect, beta-1,3-glucanase might be an important enzyme for larval nutrition in A. aegypti larvae and an interesting target for inhibition, as mammals lack this enzyme (CAZY, www.cazy.org). Additionally, beta-1,3-glucanase might be an essential enzyme for mosquito larvae feeding on fungi, as mechanical disruption of cells in insect digestion is negligible, and chemical breakdown of cell wall polysaccharides is necessary to permit access to intracellular nutrient sources such as proteins, glycogen and nucleic acids [44][45]. Nevertheless, further molecular characterization of beta-1,3-glucanase activity in A. aegypti is required, because some insect beta-glycosidases also have activity against laminarin [48] and lytic activity against yeast cells was also reported for glycosidases [49]. However, it is unlikely that beta-glycosidase is mainly responsible for lysis in A. aegypti larvae, because insect glycosidases have low binding affinity for laminarin, and in this case this substrate would constitute a poor competitor in the lytic assay. Results shown here demonstrate that A. aegypti can complete development on a diet exclusively of S. cerevisiae cells. In this respect, these cells must contain all macro- and micronutrients which are necessary for mosquito development. This is expected to a certain extent, as yeast extract (S. cerevisiae) had already been used as an exclusive food source to fully develop A. aegypti [49]. It is interesting to note that using live S. cerevisiae cells we obtained a similar delay in pupation when compared to controls (2 days), but higher percentages of adult emergence (80%) when compared to insects developed on yeast extract (58%) [50]. Our observation that the yeast-based diet has much higher protein and sugar contents than the regular cat food points to a possible deficiency in some essential micronutrient, but more studies are necessary to elucidate this issue. In a very recent report, it was shown that Culex pipiens larvae are able to nourish from yeast cells of several species, including S. cerevisiae [51]. This fact suggests that the nutritional relationship between mosquito larvae and yeasts is at least partly shared among Culicidae. It is likely that the lytic mechanism in C. pipiens involves the action of a β-1,3-glucanase as in Aedes, but this still needs to be confirmed. Interestingly, our data suggest that S. cerevisiae might be a potential probiotic for mosquito larvae, besides being a promising component for the development of diets for larvae based on microorganisms. This might result in cheaper, pathogen free, and more reproducible diets for these insects, with an important impact on the mass rearing which is necessary for the development of new vector control management strategies. Considering that Drosophila may also be reared on yeasts [52], there is an interesting nutritional parallel throughout the order Diptera.
Diets containing yeast cells may be a starting point for novel strategies for intervention in the metabolism or genetics of mosquitoes. These new approaches might include knockout or mutant yeasts, and yeasts producing recombinant proteins, GFP-tagged peptides or dsRNA. Conclusions Aedes aegypti larvae were able to ingest and break down live yeast cells (S. cerevisiae). Beta-1,3-glucanase activities were present in the head, gut and rest of body of these insects, being involved in yeast digestion (gut) and possibly in recognition of invading microorganisms (head and rest of body). Beta-1,3-glucanase activities in the gut and rest of body were not affected by yeast diets, but head activity is suppressed in insects fed on autoclaved cells, suggesting a role in sensing of food-borne microbes. A. aegypti larval gut beta-1,3-glucanase was essential for lysis of yeast cells, and might be a crucial enzyme when these insects feed solely on this nutrient source. Supporting Information S1 File. The file S1_File.xls includes raw data used for the calculations of biological parameters, enzyme activities, sugar and protein quantities and yeast cell counts, used in Figs 1-6 and Tables 1 and 2. (XLSX)
2018-04-03T02:35:12.270Z
2016-03-23T00:00:00.000
{ "year": 2016, "sha1": "640fc8241266967eaa8197de588beed27c06849c", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0151403&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "640fc8241266967eaa8197de588beed27c06849c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
55267763
pes2o/s2orc
v3-fos-license
A Secure Ciphertext Self-Destruction Scheme with Attribute-Based Encryption The secure destruction of expired data is one of the important topics in the research of cloud storage security. Applying the attribute-based encryption (ABE) and the distributed hash table (DHT) technology to the process of data destruction, we propose a secure ciphertext self-destruction scheme with attribute-based encryption called SCSD. In the SCSD scheme, the sensitive data is first encrypted under an access key and then the ciphertext shares are stored in the DHT network along with the attribute shares. Meanwhile, the rest of the sensitive data ciphertext and the shares of the access key ciphertext constitute the encapsulated self-destruction object (EDO), which is stored in the cloud. When the sensitive data is expired, the nodes in DHT networks can automatically discard the ciphertext shares and the attribute shares, which can make the ciphertext and the access key unrecoverable. Thus, we realize secure ciphertext self-destruction. Compared with the current schemes, our SCSD scheme not only can support efficient data encryption and fine-grained access control in lifetime and secure self-destruction after expiry, but also can resist the traditional cryptanalysis attack as well as the Sybil attack in the DHT network. Introduction Cloud storage has attracted much attention from both industry and academia for its low cost, flexible deployment, and strong extensibility in recent years. The cloud storage system is composed of massive storage resources on the Internet as well as the resource management and access control mechanisms that make resource access transparent to users [1]. With a friendly user interface and strong extensibility, the cloud storage system can provide users with unlimited storage space; thus, it can form a new delivery model called storage as a service [2]. Cloud storage brings new opportunities for efficiency gains, cost savings, and green computing in the area of information technology; however, it is also faced with some security challenges. In the service model of cloud storage, data is outsourced to the storage server, which acts as a third party. So, data is out of the control of the data owner and the security of the data highly depends on the server. Due to the dishonesty of the cloud storage server, the data owner will first encrypt the original sensitive data and then outsource the ciphertext to the cloud in order to keep the confidentiality of the data. The encryption key is kept by the data owner privately. However, even if the data is stored by the cloud in the form of ciphertext, there are some security risks. For example, in order to improve service reliability, the cloud may make several backups of the user's data and distribute them to different storage servers [3]. Under this condition, when the data has expired and the owner needs to delete the data from the storage servers, the cloud server may not destroy all the backups of the data. Once adversaries get the encryption key and the backups of the ciphertext from the cloud, the sensitive data can be recovered and confidentiality is destroyed. Therefore, the assured destruction of expired data, namely, the thorough deletion and permanent elimination of ciphertext, is one of the important topics in the research of cloud storage security [4].
In this paper, applying the attribute-based encryption and the distributed hash table (DHT) technology to the process of data destruction in the cloud storage environment, we propose a secure ciphertext self-destruction scheme with attribute-based encryption called SCSD. In the SCSD scheme, the sensitive data is first encrypted under an access key, and then the access key is encrypted using an attribute-based encryption method. The ciphertext of the sensitive data is extracted and transformed in order to get the ciphertext shares, which are stored in the DHT network along with the attribute shares. Meanwhile, the rest of the sensitive data ciphertext and the shares of the access key ciphertext constitute the encapsulated self-destruction object (EDO), which is stored in the cloud. When the sensitive data is expired, the nodes in DHT networks can automatically discard the ciphertext shares and the attribute shares, which can make the ciphertext of the sensitive data and the access key unrecoverable. Thus, we realize secure ciphertext self-destruction. Compared with the current schemes, our SCSD scheme can resist the traditional cryptanalysis attack as well as the Sybil attack in the DHT network. The rest of the paper is organized as follows. In Section 2, we introduce some related works on secure data destruction. Then, in Section 3, we review some preliminaries. Next, we introduce the system and security model and the detailed construction of our SCSD scheme in Section 4. In Section 5, we evaluate the scheme in terms of security analysis and scheme performance. Finally, concluding remarks and future work are given in Section 6. Related Works In a cloud storage system, some data is stored in the servers for a long time, which can be compromised by adversaries, because the data may be backed up by the cloud servers and these backups may still exist after the delete command of users. It is difficult to destroy all the backups in the cloud, and the following works are some attempts to achieve the secure destruction of data. Perlman was the first to focus on the secure deletion of documents [7]. Perlman designed an unrecoverable system for documents. The encryption key is deleted when it is expired; thus, the document encrypted under this key cannot be recovered. However, this system considers only the lifetime of the encryption key. Besides, this is a local-centered system and is unfit for the cloud environment. Then, following this idea, FADE [8], a secure overlay cloud storage system built on the existing cloud infrastructure, was developed. This system can assure the deletion of documents and can support different document access policies. Another feasible system is Ephemerizer [9], which needs a trusted server to store and manage the decryption key. In Ephemerizer, the data owner sets the expiration time for the decryption key. The trusted server deletes the decryption key once the key is expired. Thus, the ciphertext is unreadable. The above methods follow the idea of a centralized solution, which has some limitations as follows. (1) The key management depends too much on the server. (2) When there is an investigation from the government, the administrator needs to give up the right of key management. This condition makes the server no longer trusted. (3) There is a need for additional commands and operations to achieve the assured deletion of data. In order to solve the problems brought by centralized destruction schemes, Geambasu et al.
propose an interesting data self-destruction system called Vanish [5]. The private data is encrypted under a symmetric key, which is divided into several key shares using a threshold secret sharing scheme and then distributed to a large scale DHT P2P network. The nodes in the DHT network will automatically delete the key shares periodically, which will result in unreadable ciphertext. Thus, it realizes the self-destruction of data without the need for trusted servers or additional operations. Wang et al. improve the Vanish system by extracting and distributing parts of the ciphertext to the DHT network [6]. This improvement will resist the traditional cryptanalysis attack and brute-force attack more efficiently. However, [10] points out that there are Sybil attacks against the Vuze DHT network adopted by the Vanish system. Adversaries can get enough key shares to reconstruct the key before the ciphertext is expired. Thus, there are security problems in the schemes of [5,6]. Besides, these decentralized solutions adopt symmetric encryption algorithms, which will bring complex key management and distribution problems. To solve these problems, an improved system called SafeVanish is proposed [11]. The RSA algorithm is adopted to first encrypt the symmetric key in order to resist the Sybil attack. But this system cannot support a fine-grained access control mechanism. Applying an attribute-based encryption algorithm, Xiong et al. [12] firstly propose a secure self-destruction scheme, which can support fine-grained access control on documents. However, the direct adoption of the attribute-based encryption algorithm on documents is not efficient. Therefore, a secure sensitive data self-destruction scheme, which supports efficient data encryption and key management, fine-grained access control in lifetime and secure self-destruction after expiry, and resistance to traditional cryptanalysis attacks and Sybil attacks, is needed in the cloud storage environment. Distributed Hash Table. The distributed hash table (DHT) [13] supports a distributed database storage model, and a DHT network is comprised of large-scale distributed infrastructures in P2P networks which support the query, storage, retrieval, and management of data without servers. Every node in the DHT network is responsible for a small-scale routing and can store parts of the data. Thus, the whole DHT network realizes the addressing and storing of data. There are many DHT networks on the Internet, such as Vuze, Chord, OpenDHT, and Pastry. The index of every document stored in the DHT network can be expressed as a (key, value) pair. The key is the hash value of the name or other descriptive information of the document; the value can be the IP address or other descriptive information of the node that stores the document in the DHT network. All of the index items compose a large document index hash table. When the key is specified, the location of the document can be determined through this correspondence.
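A toy sketch of the (key, value) indexing just described may make it concrete; the hashing function, node-ID scheme, and closest-ID rule below are illustrative assumptions, as real DHT networks (Vuze, Chord, etc.) use their own routing protocols.

```python
import hashlib

# Toy illustration of the (key, value) index: the key is a hash of the document
# name and the value locates the storing node. Node IDs here are hypothetical.

def dht_key(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

def responsible_node(key: int, node_ids: list[int]) -> int:
    # simplistic placement rule: the node whose ID is closest to the key stores the item
    return min(node_ids, key=lambda nid: abs(nid - key))

nodes = [dht_key(f"node-{i}") for i in range(100)]
key = dht_key("report.pdf")
index = {key: responsible_node(key, nodes)}  # (key, value) entry of the index table
```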
Every DHT network has the following three important characteristics, which make it suitable for constructing a data self-destruction scheme in the cloud storage environment. (1) Data availability: the DHT network provides reliable distributed storage capacity, which assures the availability of the data stored in the nodes of the DHT network during its lifetime. This is the foundation for constructing a data self-destruction scheme. (2) Automatic data deletion in the nodes of the DHT network: nodes in the DHT network periodically and automatically remove old data in order to store new data. Thus, the data stored in the nodes is destroyed automatically after expiry, which provides a mechanism for ciphertext self-destruction. (3) Large-scale and global distribution: for example, there are more than one million active nodes in the Vuze network simultaneously, and these nodes are distributed over more than 190 countries all over the world. Such completely distributed nodes give a self-destruction scheme its attack-resistance capability.

Attribute-Based Encryption. Attribute-based encryption (ABE), a typical public key cryptography, was first proposed by Sahai and Waters in 2005 [14]. In an ABE scheme, the identifier of a user is a set of descriptive attributes rather than a string of characters as in identity-based encryption (IBE). Every attribute can be mapped to an element in $\mathbb{Z}_p^{*}$ using a hash function. The ciphertext and the user's key are both associated with attributes. ABE can support a threshold policy over attributes: a user with an attribute set $\omega$ can successfully decrypt a ciphertext encrypted under an attribute set $\omega^{*}$ if and only if the number of common attributes in $\omega$ and $\omega^{*}$ is greater than or equal to a certain threshold value $d$.

Specifically, an authority first defines a threshold value $d$ and generates the system public key, the length of which is related to the number of attributes in $\omega^{*}$. Then, the authority generates the private key for a user with attribute set $\omega$; the private key is associated with a random polynomial $q(\cdot)$ of degree $d-1$. In a decryption process, if $|\omega \cap \omega^{*}| \ge d$, the user chooses $d$ random attributes in the set $\omega \cap \omega^{*}$ and reconstructs the encryption key through Lagrange interpolation on the associated polynomial $q(\cdot)$. Thus, the user can decrypt the ciphertext and obtain the plaintext.

Threshold Secret Sharing. The threshold secret sharing scheme was first proposed by Shamir [15]. The main idea is to divide the secret data into $n$ shares and then distribute these shares to $n$ users. If $k$ or more of the shares are collected from these users, the secret data can be reconstructed; otherwise, it cannot. This method is called $(k, n)$ threshold secret sharing.
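A minimal sketch of $(k, n)$ threshold secret sharing over a prime field follows. The toy prime and the secret value are assumptions made for readability, not parameters of any scheme discussed here; reconstruction uses the same Lagrange interpolation that reappears in the KeyRecover algorithm below.

```python
import random

P = 2**31 - 1  # a toy prime modulus; real deployments use a much larger field

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse of den via Fermat's little theorem (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 shares -> 123456789
print(reconstruct(shares[1:4]))  # a different 3 shares work too
```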
SCSD Scheme Construction

In this section, we first describe the system model of the secure ciphertext self-destruction (SCSD) scheme. Then, the detailed algorithm descriptions and the outline of the scheme are introduced.

4.1. System Model. The SCSD system comprises six different entities: authority, cloud storage servers, DHT network, data owners, data consumers, and adversaries, as shown in Figure 1.

Authority. The authority provides the system with the security parameter setup and key generation processes. Besides, it also assigns attributes to each user.

Cloud Storage Servers. Cloud storage servers are responsible for storing the data sent by the users and assuring that only authenticated users can get access to the data.

DHT Network. Nodes in the DHT network are responsible for storing the ciphertext shares and the attribute shares and can automatically discard the stored data.

Data Owners. A data owner generates sensitive data and then encrypts it under a random access key. Ciphertext shares are sent by the data owner to the DHT network along with the attribute shares. Besides, the EDO is sent to the cloud by the data owner.

Data Consumers. A data consumer downloads the ciphertext shares and attribute shares from the DHT network and the EDO from the cloud. Then, he can decrypt the EDO if his attributes satisfy the ABE threshold policy.

Adversaries. Adversaries may try to capture the data in the cloud or in the DHT network.

This paper aims at preventing the leakage of sensitive data stored in the cloud after expiry. For example, sensitive information in a user's historic archive may leak out in the event of an investigation from government. We assume that the data owner and the other authenticated users trust each other. Thus, adversaries may try to compromise the EDO in the cloud after the lifetime of the EDO, or they may capture the ciphertext shares and the attribute shares stored in the DHT network within the lifetime of the EDO. So, in the security model of our scheme, we divide the behavior of adversaries into the following two kinds. (1) After the lifetime of the EDO, the adversary tries to crack the expired EDO copies stored in the cloud. (2) Within the lifetime of the EDO, the adversary tries to collect the ciphertext shares and attribute shares from the DHT network; the adversary then tries to decrypt the ciphertext and get the sensitive information according to the shares.

Algorithm Descriptions. The algorithms of our SCSD scheme are described as follows.

(3) Associpher$(C) \to (C')$: given a ciphertext $C$, the data owner first divides the ciphertext into blocks of $l$ bits. If the last block is shorter than $l$ bits, bits of "0" are appended to its end until its length is $l$ bits. Suppose the ciphertext is divided as $C_1 \| C_2 \| \cdots \| C_m$; the data owner associates the blocks so that every block becomes correlated with the preceding ones. The associated ciphertext is then $C' = C'_1 \| C'_2 \| \cdots \| C'_m$.

(4) DeAssocipher$(C') \to (C)$: this is the inverse algorithm of Associpher$(C) \to (C')$. Given an associated ciphertext $C' = C'_1 \| \cdots \| C'_i \| \cdots \| C'_m$, a data consumer inverts the association block by block and thus recovers the ciphertext $C$ from the association $C_1 \| C_2 \| \cdots \| C_m$.

(6) ShareDistribute$(\mathrm{CS}, \mathrm{AS}) \to (\mathrm{CI}, \mathrm{AJ})$: given the ciphertext shares CS and the attribute shares AS derived from the access key ciphertext, the data owner first chooses a random index CI for CS as a seed to a pseudorandom number generator. Then, the data owner runs the generator to generate indices $I_1, I_2, \ldots$, and each ciphertext share $\mathrm{CS}_i$ is stored in the node indexed by $I_i$ in the DHT network. Similarly, for the attribute shares, the data owner chooses a random index AJ as a seed to a pseudorandom number generator, runs the generator to generate indices $J_1, J_2, \ldots$, and stores each attribute share in the node indexed by the corresponding $J_i$ in the DHT network.

(7) EDOGen$(\mathrm{Attr}_o, \ldots, \mathrm{CI}, \mathrm{AJ}) \to (\mathrm{EDO})$: given the attribute set of the data owner $\mathrm{Attr}_o$, the remaining parts of the ciphertexts, and the seeds CI and AJ, the data owner generates the encapsulated self-destruction object $\mathrm{EDO} = (\mathrm{Attr}_o, \ldots, \mathrm{CI}, \mathrm{AJ})$ and then sends the EDO to the cloud.

(8) KeyRecover$(\mathrm{EDO}, \mathrm{USK}) \to (K)$: before the expiration timestamp of the EDO, a data consumer with a secret key USK and an attribute set $\mathrm{Attr}_u$ first gets the EDO from the cloud. Then, the data consumer runs the pseudorandom number generator under the seed AJ to regenerate the indices $J_1, J_2, \ldots$ of the attribute shares and retrieves as many attribute shares as possible from the DHT network according to these indices. In order to recover the access key $K$, the data consumer chooses a set of $d$ attribute shares $\mathrm{Att} \subseteq \mathrm{Attr}_u \cap \mathrm{Attr}_o$. Note that if there are fewer than $d$ attribute shares in $\mathrm{Attr}_u \cap \mathrm{Attr}_o$, the data consumer cannot recover the access key, since the ABE threshold policy is not satisfied. If such a set Att exists, the data consumer first computes the Lagrange coefficients $\Delta_{i,\mathrm{Att}}(x) = \prod_{j \in \mathrm{Att},\, j \neq i} (x - j)/(i - j)$ and then recovers the access key by Lagrange interpolation over the retrieved shares.

The outline of the SCSD scheme.
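Before turning to the analysis, here is a heavily simplified, self-contained Python sketch of the encapsulation flow: encrypt under a random access key, split the ciphertext into blocks, extract some blocks as DHT shares, and keep the remainder as a toy EDO. The hash-counter keystream, the every-third-block extraction rule, and the storage of a raw key inside the EDO are all illustrative assumptions; in SCSD itself the access key is ABE-encrypted, and only its shares and the extracted ciphertext shares leave the EDO.

```python
import hashlib
import os

BLOCK = 16  # block size in bytes; an illustrative choice

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream 'cipher' (hash-counter keystream). A stand-in for a real
    symmetric cipher under the access key; not for production use."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
    return bytes(out)

def encapsulate(sensitive: bytes, extract_every: int = 3):
    """Encrypt, split into blocks, extract every k-th block as a DHT share;
    the remainder plus the (here: raw) access key forms the toy EDO."""
    access_key = os.urandom(32)  # random access key
    ct = keystream_xor(access_key, sensitive)
    blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
    dht_shares = {i: b for i, b in enumerate(blocks) if i % extract_every == 0}
    remainder = {i: b for i, b in enumerate(blocks) if i % extract_every != 0}
    edo = {"remainder": remainder, "n_blocks": len(blocks), "key": access_key}
    return edo, dht_shares

def recover(edo, dht_shares) -> bytes:
    blocks = dict(edo["remainder"])
    blocks.update(dht_shares)  # fails once the DHT has discarded the shares
    ct = b"".join(blocks[i] for i in range(edo["n_blocks"]))
    return keystream_xor(edo["key"], ct)

edo, shares = encapsulate(b"toy sensitive record ...")
assert recover(edo, shares) == b"toy sensitive record ..."
```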
Analysis and Performance

In this section, we evaluate our SCSD scheme in two parts, namely, security analysis and scheme performance.

Security Analysis. In the applications of our scheme, because adversaries cannot specify the particular object of attack before the expiration timestamp, we assume that the copies of the EDO stored in the cloud are secure during this time. Besides, because the attribute shares and ciphertext shares stored in the DHT network are discarded after the expiry of the EDO, once the DHT network is updated periodically, the contents of the EDO copies become unreadable.

There are mainly two kinds of attack aimed at our scheme. The first is cracking the expired EDO copies stored in the cloud through the cryptanalysis attack and the brute-force attack: despite the fact that the attribute shares and ciphertext shares are discarded, there are still EDO copies stored in the cloud. The other kind of attack aims at collecting the attribute shares and ciphertext shares in the DHT network before the expiration timestamp of the EDO; these shares would then be used in a tracing attack against the EDO copies stored in the cloud.

Therefore, the security of our scheme is mainly affected by two aspects. One is the security of the encryption algorithm used to encrypt the sensitive data under the access key, which depends on the capability of resisting the cryptanalysis attack and the brute-force attack. The other is the security of the DHT network that stores the attribute shares and ciphertext shares, which depends on the capability of resisting the sniffing attack, the hopping attack, and other DHT Sybil attacks. So we make the security analysis of our scheme based on these two aspects as follows. Brief comparisons of the security properties of our SCSD scheme and the schemes of [5, 6] are summarized in Table 1.

The Security of the Encryption Algorithm. The brute-force attack is implemented by trying every possible decryption key on the ciphertext to recover the plaintext. This kind of attack relies on the integrity of the ciphertext, so adversaries should first get the integrated ciphertext before implementing the brute-force attack. In our scheme, however, the sensitive data is first encrypted under the random access key, and the ciphertext is then associated and partially extracted. Because every block of the associated ciphertext is correlated with the others, once some of the blocks are extracted, the remaining blocks are no longer integrated. Therefore, without the integrated ciphertext, adversaries cannot recover the sensitive data by the brute-force attack. Besides, implementing the traditional cryptanalysis attack is also based on an integrated ciphertext. Because the remaining ciphertext blocks stored in the cloud are incomplete, the traditional cryptanalysis attack has no effect on our scheme.
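The exact association transform used by Associpher was not preserved here, so as a generic illustration of the property exploited above (that removing some blocks leaves the rest non-integrated), the following sketch chains blocks with a running XOR. The chaining rule is an assumption for illustration only, not the paper's transform.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def associate(blocks):
    """Chain blocks: each output block mixes in the previous output,
    so every block becomes correlated with all earlier ones."""
    out, prev = [], bytes(len(blocks[0]))
    for blk in blocks:
        prev = xor_bytes(blk, prev)
        out.append(prev)
    return out

def deassociate(assoc):
    """Inverse transform; needs every associated block to succeed."""
    out, prev = [], bytes(len(assoc[0]))
    for cur in assoc:
        out.append(xor_bytes(cur, prev))
        prev = cur
    return out

blocks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
assoc = associate(blocks)
assert deassociate(assoc) == blocks
# Extract one share for the DHT: with assoc[1] missing, blocks 1..3
# can no longer be recovered from the remainder alone.
```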
The Security of the DHT Network. In the following, we discuss whether adversaries can crack the EDO copies by attacking the DHT network before the expiration timestamp of the EDO. Because adversaries cannot specify the particular object of attack before the expiration timestamp, they may instead try to get as many attribute shares and ciphertext shares as possible during this time, for example by keeping on attacking the DHT network until enough shares are collected. However, this kind of attack brings an expensive cost to the adversaries.

Due to the characteristics of the DHT network, attacking the DHT network to get the attribute shares and ciphertext shares is very difficult. Reference [5] has made a detailed analysis of various kinds of DHT attacks by performing simulations in the Vuze DHT network. The results show that it is practically impossible for adversaries to get enough shares from the DHT network by implementing the sniffing attack, the hopping attack, or other DHT attacks. Therefore, in the same way, the adversaries in our scheme also cannot get enough attribute shares or ciphertext shares by attacking the DHT network in order to crack the EDO copies stored in the cloud.

Performance and Optimization. In this section, we first make a performance evaluation of SCSD on the time cost in the data encapsulation phase and the data reconstruction phase, respectively. Then, we implement the parameter optimization by analyzing the tradeoff between security and availability of our scheme.

Performance Evaluation. In Phase I, the communication overhead is mainly caused by the distribution of the ciphertext shares and attribute shares to the DHT network. The computation overhead is mainly caused by the ABE algorithm on the access key, the symmetric encryption algorithm on the sensitive data, and the association and share generation algorithms on the ciphertext. In Phase II, the communication overhead is likewise mainly caused by the collection of the ciphertext shares and attribute shares from the DHT network. The computation overhead is mainly caused by the reconstruction of the access key and the ciphertext.

Based on the above analysis, we execute our SCSD scheme and measure the times spent in the two main phases. For the sake of simplicity, we set the same total share number $n$ and the same threshold $k$ for the ciphertext shares and the attribute shares, respectively. The evaluation uses an Intel G2130 3.2 GHz with 4 GB of RAM, Java 1.6, and a broadband network. The times of the two main phases are shown in Figure 3.

Figure 3 shows that the data collection and reconstruction phase is relatively fast. The time cost of data encapsulation and distribution, however, is quite large. Fortunately, a simple pretreatment, pregenerating the access key and prepushing shares into the DHT network, can be implemented. As shown in Figure 3, this pretreatment brings the time of the data encapsulation phase down to a fixed 1.6 s. Thus, the performance of the SCSD scheme is relatively effective and efficient.

Parameter Optimization. Next, we assume that the adversaries have compromised 5% of the nodes in a thousand-node DHT network. We show how the security and the availability of our scheme are affected by the number of shares $n$ and the threshold $k$. The probability that an adversary captures sufficient shares to reconstruct the ciphertext is shown in Figure 4. It is clear that increasing the number of shares decreases the adversary's success probability. Furthermore, the security is also enhanced as the threshold increases.

As shown in Figure 5, the availability is also affected by these parameters. The maximum timeout gets longer as the number of shares increases, and a longer timeout can also be supported by a smaller threshold, since the scheme can then tolerate more share loss. So the choice of threshold represents a tradeoff between security and availability: a high threshold provides more security, and a low threshold provides a longer lifetime. Therefore, by choosing a proper share number and threshold, we can get a tradeoff of high security and good availability.
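The success probability in the Figure 4 discussion can be reproduced with a simple binomial model: if an adversary controls a fraction q of the nodes and each share lands on an independently chosen node, the chance of holding at least k of n shares is a binomial tail. The independence assumption and the parameter values below are simplifications for illustration.

```python
from math import comb

def capture_probability(n: int, k: int, q: float = 0.05) -> float:
    """P(adversary holds >= k of n shares) when it controls a fraction q
    of the nodes and shares land on nodes independently (a simplification)."""
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

for n, k in [(10, 7), (50, 35), (100, 70)]:
    print(f"n={n:3d} k={k:3d} -> {capture_probability(n, k):.3e}")
```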
Besides the parameters, there are other kinds of optimization for our scheme. Because of the adoption of the ABE algorithm, our SCSD scheme can implement one-to-many authorization and access control flexibly. Moreover, the access key can be reused when a huge volume of data has to be processed in time and the security requirement is lower. And if the security requirement is higher, the ciphertext shares $\mathrm{CS} = (\mathrm{CS}_1, \mathrm{CS}_2, \ldots, \mathrm{CS}_n)$ and the attribute shares can be distributed to different DHT networks, respectively, one to Vuze and the other to OpenDHT [16], which improves the security of our scheme significantly.

Conclusion

In a cloud storage system, secure data destruction is one of the problems that need to be addressed in data security. Many data destruction schemes have been proposed in recent years; however, they still have some limitations. In this paper, we mainly focus on ciphertext destruction and propose a secure ciphertext self-destruction scheme with attribute-based encryption called SCSD, which applies attribute-based encryption and distributed hash table technology to the process of data destruction in the cloud storage environment.

Table 1: Comparisons of security properties.
2018-12-13T01:59:05.823Z
2015-10-13T00:00:00.000
{ "year": 2015, "sha1": "c0a97f6c16639402be4bf9af7f94b0f15623e0ce", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2015/329626.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c0a97f6c16639402be4bf9af7f94b0f15623e0ce", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
112899
pes2o/s2orc
v3-fos-license
A New Void Fraction Measurement Method for Gas-Liquid Two-Phase Flow in Small Channels Based on a laser diode, a 12 × 6 photodiode array sensor, and machine learning techniques, a new void fraction measurement method for gas-liquid two-phase flow in small channels is proposed. To overcome the influence of flow pattern on the void fraction measurement, the flow pattern of the two-phase flow is firstly identified by Fisher Discriminant Analysis (FDA). Then, according to the identification result, a relevant void fraction measurement model which is developed by Support Vector Machine (SVM) is selected to implement the void fraction measurement. A void fraction measurement system for the two-phase flow is developed and experiments are carried out in four different small channels. Four typical flow patterns (including bubble flow, slug flow, stratified flow and annular flow) are investigated. The experimental results show that the development of the measurement system is successful. The proposed void fraction measurement method is effective and the void fraction measurement accuracy is satisfactory. Compared with the conventional laser measurement systems using standard laser sources, the developed measurement system has the advantages of low cost and simple structure. Compared with the conventional void fraction measurement methods, the proposed method overcomes the influence of flow pattern on the void fraction measurement. This work also provides a good example of using low-cost laser diode as a competent replacement of the expensive standard laser source and hence implementing the parameter measurement of gas-liquid two-phase flow. The research results can be a useful reference for other researchers’ works. Introduction In the past decades, the studies and industrial applications of gas-liquid two-phase flow in small-channel systems, such as chemical reactors, heat exchangers, refrigeration processes and micro-evaporators etc., have become a significant area [1][2][3]. Void fraction is an important parameter of the two-phase flow. Its on-line measurement is of great importance for the heat and mass transfer efficiency, the estimation of pressure drop, the process control and the measurement of other parameters of the two-phase flow [4][5][6][7][8][9]. However, with the decrease of the channel dimension, the measurement of the void fraction becomes more and more difficult. The conventional measurement methods cannot fulfill the growing requirements of the industrial applications and the mechanism studies of the two-phase flow [1][2][3][4][5][6][7][8][9]. Due to its advantages of high spatial and temporal resolution, the optical measurement technique is a very attractive and useful approach to implement the parameter measurement of gas-liquid two-phase flow in small channels [10][11][12]. The conventional optical measurement methods can be mainly divided into three categories: optical probe method, visualization method, and laser-based method [10][11][12]. Because the optical probe is directly in contact with the detected fluid, the optical probe method will more or less disturb the practical flow of the fluid. In addition, the probes may be contaminated and unpredictable measurement error will arise [13][14][15]. The visualization method includes high-speed cameras, digital cameras, and optical tomography etc. [16][17][18]. 
Although the measurement performance of the visualization method is satisfactory, it has the disadvantages of high cost, complicated construction and higher requirements on the measurement environment [16][17][18]. Many laser-based methods have been proposed and widely studied, including laser Doppler velocimetry, laser-induced fluorescence and particle image velocimetry, etc. However, the conventional laser-based methods need an expensive laser source (e.g., an Nd:YAG laser source or a He-Ne laser source) and a complicated measurement system (including seeding particles, fluorescent particles and objective lenses, etc.) [19][20][21]. These methods also have the disadvantages of high cost and higher requirements on the measurement environment. Therefore, although significant technical achievements and progress have been obtained, due to the above-mentioned disadvantages, more research should be undertaken to seek a more effective optical method to implement the parameter measurement of gas-liquid two-phase flow in small channels with lower cost, simpler construction and better adaptability to the environment [10][11][12].

Currently, the techniques of information science and micro-electronics are developing rapidly. As a new photo-electric device, the photodiode sensor has significantly improved in performance: the dimension of a photodiode sensing element has greatly decreased, and a photodiode array sensor can now be developed at much lower cost [22,23]. As a new kind of laser source, the laser diode can in some cases be used as a low-cost alternative to the conventional expensive laser source [24,25]. These technical advances have laid a solid foundation for developing a low-cost optical measurement system. Meanwhile, as a newly emerging signal processing technology, machine learning, which aims to implement data mining, pattern recognition and modeling, etc., has been widely applied and studied in many research fields. Machine learning technology can provide useful approaches to make full use of the measurement information and hence to implement the parameter measurement successfully [26][27][28][29]. However, to date, our knowledge of and experience with applying the above new devices and machine learning technology to the parameter measurement of gas-liquid two-phase flow in small channels are very limited [4][5][6][7][8][9]. Based on a photodiode array sensor and a laser diode, this work aims to develop a low-cost void fraction measurement system and hence to propose a new optical measurement method for the void fraction measurement of gas-liquid two-phase flow in small channels by using machine learning technology.

Void Fraction Measurement System and Measurement Scheme

Figure 1a shows the structure of the new void fraction optical measurement system, including a laser diode, an extender lens, a slit, a photodiode array sensor, a data acquisition unit and a microcomputer. The laser diode, which is used to produce the laser beam, is a YuanDa laser L63510P5 with a wavelength of 635 nm (the wavelength of the laser diode is chosen according to our experience and previous experimental results) and an output power of 5 mW. The extender lens and the slit are used to change the laser into a parallel laser sheet (the extender lens is used to extend the beam diameter and decrease the laser's divergence; in this work, the resulting laser through the extender lens has a diameter of 40 mm).
The laser sheet passes through the transparent channel perpendicularly, and is absorbed, reflected or deflected by the gas-liquid two-phase flow inside the small channel. The exit laser, which contains the characteristic information of the two-phase flow, is recorded by the array sensor. The output signals of the array sensor are then transmitted to the microcomputer by the data acquisition unit.

Figure 1b illustrates the layout of the photodiode array sensor. According to the required sensing area, the cost and the size of the sensing element, the array sensor consists of 72 (12 × 6) sensing elements. The outputs of the 72 sensing elements are sent to the microcomputer simultaneously. The sensing element is a Vishay Telefunken PIN photodiode BPW34, which has a sensing area of 3.0 × 3.0 mm² (in this work, the signal-to-noise ratio of the sensing element is about 30 dB). Meanwhile, it is necessary to indicate that the number of sensing elements was determined by our previous experimental results. It is not an optimal number; looking for an optimal element number of the array sensor will be our further research.

Research works have verified that the flow patterns of gas-liquid two-phase flow have significant influences on the measurement of the void fraction [1][2][3][4][5][6]30]. If one measurement model is used for the void fraction measurement, the measurement accuracy will not be satisfactory. An effective approach to solve this problem is to develop different measurement models for different flow patterns.
Thus, the real-time flow pattern identification result is necessary and is introduced into the void fraction measurement. In this work, the real-time flow pattern identification result is introduced into the void fraction measurement process, and for each typical flow pattern a specific void fraction measurement model is developed. Figure 2 shows the scheme of the void fraction measurement. With the obtained measurement signals (a total of 72 groups of optical signals obtained by the array sensor), a feature vector is first extracted. According to the feature vector, the real-time flow pattern identification is then implemented. Finally, according to the flow pattern identification result, a relevant void fraction measurement model is selected to calculate the void fraction.

To obtain comprehensive information about the two-phase flow, the feature vector is extracted from the 72 measurement signals obtained by the photodiode array sensor (each measurement signal is obtained by one sensing element and contains a series of data points). The feature vector consists of two groups of statistical features which contain useful information about the gas-liquid two-phase flow: the mean values and the standard deviations of the 72 measurement signals.

The mean value represents the time-averaged characteristic of a measurement signal. The mean value of the measurement signal obtained by the $k$th sensing element, $m_k$, is

$$m_k = \frac{1}{L}\sum_{i=1}^{L} u_k(i),$$

where $L$ is the data length of the measurement signal, $u_k$ is the measurement signal obtained by the $k$th sensing element, and $k = 1, 2, 3, \ldots, 72$.

The standard deviation represents the dispersion of a measurement signal. The standard deviation of the measurement signal obtained by the $k$th sensing element, $d_k$, is

$$d_k = \sqrt{\frac{1}{L}\sum_{i=1}^{L}\bigl(u_k(i) - m_k\bigr)^2}.$$

Thus, combining the two groups of statistical features, the feature vector $x$ is obtained:

$$x = (m_1, \ldots, m_{72},\; d_1, \ldots, d_{72})^{\mathrm{T}}.$$

The flow pattern identification is a pattern recognition problem, and the development of a void fraction measurement model is a modeling problem. As mentioned in Section 1, machine learning technology can provide many useful approaches to solve pattern recognition or modeling problems: techniques such as k-nearest neighbor, linear discriminant analysis, and the Bayes classifier can implement the pattern recognition [26][27][28][29], while linear regression, artificial neural networks, Bayesian learning, etc. can implement the modeling [26][27][28][29].

Compared with the other pattern recognition techniques mentioned above, Fisher Discriminant Analysis (FDA) is a dimensionality reduction technique and provides a linear transformation that maximizes the between-class scatter and minimizes the within-class scatter. FDA has been widely used in the pattern identification field, and its effectiveness has been verified [26][27][28][29]31,32]. Thus, in this work, FDA is adopted to implement the flow pattern identification. Compared with the other modeling techniques mentioned above, the Support Vector Machine (SVM) is a valid machine-learning technique and is suitable for model development under small-sample conditions. SVM has good generalization ability and has been widely used in many fields for model development [26][27][28][29]33]. Therefore, in this work, SVM is selected to develop the void fraction measurement models.
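The feature extraction above maps directly to a few lines of numpy; the synthetic signal array below merely stands in for one second of 1 kHz data from the 72 elements.

```python
import numpy as np

def extract_features(signals: np.ndarray) -> np.ndarray:
    """signals: array of shape (72, L) -- one row per sensing element.
    Returns the 144-dimensional feature vector (72 means, 72 std devs)."""
    m = signals.mean(axis=1)  # m_k, time-averaged level of each signal
    d = signals.std(axis=1)   # d_k, dispersion of each signal
    return np.concatenate([m, d])

# Synthetic stand-in for one second of 1 kHz data from the 72 elements.
rng = np.random.default_rng(0)
fake = rng.normal(loc=2.5, scale=0.1, size=(72, 1000))
x = extract_features(fake)
print(x.shape)  # (144,)
```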
Flow Pattern Identification

Four typical flow patterns of gas-liquid two-phase flow in small channels are investigated in this work: bubble flow, slug flow, stratified flow, and annular flow. According to our experimental results, the measurement signals of the bubble flow and the slug flow have some similarities, while the measurement signals of the stratified flow and the annular flow have some similarities. Figure 3 shows typical groups of the measurement signals and the images of the four flow patterns. The measurement signals are obtained by one sensing element (the sixth BPW34 diode in the fourth column of the photodiode array sensor). The images of the flow patterns are obtained by a high-speed camera (Integrated Design Tools, Inc. (IDT) Redlake MotionXtra N-4).

As shown in Figure 3, in the bubble flow, when a gas bubble passes through the measurement cross-section, the measurement signal shows a clear voltage decrease and stays steady when the channel is full of water. In the slug flow, the measurement signal also shows a clear voltage decrease when a gas slug approaches and leaves, while at the central part of the gas slug the measurement signal remains invariable. The measurement signals of the bubble flow and the slug flow have some similarities, but the voltage decrease amplitudes of the signals are different. In the annular flow and the stratified flow, the measurement signals both display fluctuations; however, the amplitude of the measurement signal fluctuation in the annular flow is different from that in the stratified flow. Therefore, based on the above characteristics of the measurement signals in different flow patterns, in this work the bubble flow and the slug flow are initially classified as one group (Group 1), and the stratified flow and the annular flow are initially classified as the other group (Group 2). Figure 4 shows the flowchart of the flow pattern identification. The process of the flow pattern identification has two key steps. The first step is to determine whether the real-time flow pattern belongs to Group 1 or Group 2, using Classifier A. The second step is to determine the specific real-time flow pattern, using Classifier B or C; a sketch of this cascade is given after the FDA details below.

The three classifiers (Classifiers A, B, and C) are developed by FDA, and each classifier solves a binary classification problem. The decision function of a binary classifier can be expressed as

$$y = \operatorname{sgn}\bigl(\omega^{\mathrm{T}} x + \omega_0\bigr),$$

where $y$ is the class label ($y = -1$ or $1$), $x$ is the feature vector, $\omega$ is the Fisher vector and $\omega_0$ is the threshold. $\omega$ can be determined by solving the following problem:

$$\omega = \arg\max_{\omega} \frac{\omega^{\mathrm{T}} S_b\, \omega}{\omega^{\mathrm{T}} S_w\, \omega},$$

where $S_w$ is the within-class scatter matrix and $S_b$ is the between-class scatter matrix. Once a classifier is developed, by inputting the feature vector $x$ into the decision function, the identification result is obtained according to the class label $y$. A detailed description of FDA is available in [26][27][28][29].
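A minimal sketch of the two-step cascade follows, using scikit-learn's LinearDiscriminantAnalysis as a stand-in for the FDA classifiers; the label encoding and the synthetic training arrays are assumptions made for the example.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Labels: 0 bubble, 1 slug (Group 1); 2 stratified, 3 annular (Group 2).
def train_cascade(X: np.ndarray, y: np.ndarray):
    group = (y >= 2).astype(int)           # Group 1 vs Group 2
    clf_a = LDA().fit(X, group)            # Classifier A
    clf_b = LDA().fit(X[y < 2], y[y < 2])  # Classifier B: bubble vs slug
    clf_c = LDA().fit(X[y >= 2], y[y >= 2])  # Classifier C: stratified vs annular
    return clf_a, clf_b, clf_c

def identify(x: np.ndarray, clf_a, clf_b, clf_c) -> int:
    x = x.reshape(1, -1)
    if clf_a.predict(x)[0] == 0:
        return int(clf_b.predict(x)[0])
    return int(clf_c.predict(x)[0])

# Tiny synthetic demo: four shifted clusters of 144-dim feature vectors.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 144)) + np.repeat(np.arange(4), 20)[:, None]
y = np.repeat(np.arange(4), 20)
cascade = train_cascade(X, y)
print(identify(X[0], *cascade))  # -> 0 (bubble) for this synthetic sample
```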
The process of the flow pattern identification as two key steps. The first step is to determine that the real-time flow pattern belongs to Group 1 or roup 2 by Classifier A. The second step is to determine the specific real-time flow pattern by lassifier B or C. The three classifiers (Classifier A, B, and C) are developed by FDA and each classifier is aimed to solve a binary classification problem. The decision function of the binary classifier can be expressed as: where y is the class label (y =´1 or 1), x is the feature vector, ω is the Fisher vector and ω 0 is the threshold. ω can be determined by solving the following problem: where S w is the within-class-scatter matrix and S b is the between-class-scatter matrix. Once the classifier is developed, by inputting the feature vector x into the decision function, the identification result can be obtained according to the class label y. The detailed description concerning FDA is available in [26][27][28][29]. Figure 5 shows the flowchart of the void fraction measurement model development. For each typical flow pattern, a specific void fraction measurement model is developed. According to experimental results, a training data set is constructed. The training data set includes the sample feature vectors extracted from the experimental measurement signals and the reference void fraction values. With the training data set, the void fraction measurement models of the four flow patterns (totally four models) are developed by SVM. experimental results, a training data set is constructed. The training data set includes the sample feature vectors extracted from the experimental measurement signals and the reference void fraction values. With the training data set, the void fraction measurement models of the four flow patterns (totally four models) are developed by SVM. Void Fraction Measurement Model Development where α is the void fraction, x is the feature vector, and , is the training data set with p samples. K(x,xi) is kernel function. In this work, the radial basis function , exp |x | / is selected as the kernel function, and σ is its diameter. b is a parameter. βi and βi* are the Lagrange multipliers, which are determined by solving the following optimization problem: According to the principle of SVM, the void fraction measurement model can be expressed as: where α is the void fraction, x is the feature vector, and x i, α i ( p i"1 is the training data set with p samples. K(x,x i ) is kernel function. In this work, the radial basis function K px, x i q " expp´|x´x i | 2 q{σ 2 is selected as the kernel function, and σ is its diameter. b is a parameter. β i and β i * are the Lagrange multipliers, which are determined by solving the following optimization problem: where ε is the slack variable and C is the penalty factor. The detailed description concerning the SVM is available in [27][28][29]. Practical Process of the Void Fraction Measurement The practical process of the void fraction measurement is illustrated in Figure 6. Firstly, the 72 measurement signals of the gas-liquid two-phase flow are obtained by the array sensor, and the feature vector x of the signals is extracted. Secondly, the real-time flow pattern is identified. Then, according to the identification result, a relevant void fraction measurement model is selected. Finally, with the selected measurement model and the obtained feature vector x, the void fraction measurement α is obtained. feature vector x of the signals is extracted. 
Experimental Setup

Figure 7 shows the experimental setup of the void fraction measurement system for gas-liquid two-phase flow in small channels. The gas and the liquid phase are driven into the small channel by syringe pumps or a nitrogen tank (if either the gas flowrate or the liquid flowrate is less than 3.6 L/h, the corresponding syringe pump is used; otherwise, the nitrogen tank is used). Nitrogen is used as the gas phase and its flowrate ranges from 0 to 1300 L/h. Tap water is used as the liquid phase, and its flowrate ranges from 0 to 20 L/h. The two phases mix at the mixer, and then the two-phase flow flows through a horizontal channel with a length of 0.95 m. The distance between the channel inlet and the photodiode array sensor is 0.25 m. The optical measurement signals of the two-phase flow are obtained by the photodiode array sensor and then sent to the microcomputer by the data acquisition unit. Meanwhile, an IDT Redlake MotionXtra N-4 high-speed camera (maximum frame rate at full resolution: 3000 fps @ 1024 × 1024) is used to obtain the images of the flow patterns.

Four small channels with inner diameters (i.d.) of 4.22, 3.04, 2.16 and 1.08 mm, respectively, are used in the experiments. Four typical flow patterns, including bubble flow, slug flow, stratified flow and annular flow, are investigated. The sampling frequency of the photodiode array sensor is set to 1 kHz. A National Instruments cDAQ-9172 is selected as the data acquisition unit. The reference data of the void fraction is determined by the quick-closing valve method [4][5][6].
Experimental Results

Figures 8-11 show the experimental results of the void fraction measurement in the four small channels. Compared with the reference void fractions obtained by the quick-closing valve method, the maximum absolute errors of the void fraction measurement in the four small channels are all less than 7%. The experimental results indicate that the development of the void fraction measurement system is successful and the proposed void fraction measurement method is effective.

Discussion

In this work, a photodiode array sensor is used to obtain the signals of the exit laser. With 72 sensing elements, the array sensor has enough sensing area to obtain sufficient signals of the exit laser. Then, comprehensive information of the two-phase flow can be acquired from the obtained signals, and the void fraction measurement can be implemented. Meanwhile, in the proposed measurement method, a low-cost laser diode is used as the laser source. At the current technique level, the performance indexes (such as output power, laser coherence and divergence, etc.) of the conventional standard laser sources (e.g., Nd:YAG laser source, He-Ne laser source, etc.) are comprehensively better than those of the laser diode. The only comparable performance index of the laser diode is the power stability (power stability is the maximum drift with respect to mean power over eight hours; in this work, the laser diode has a power stability of 2%, while standard laser sources such as the THORLABS HNL050L He-Ne laser source have a power stability of 2.5% [34]). The proposed measurement method is mainly based on the power changes and distribution of the exit laser, so the power stability is the key performance index of the laser source. Thus, in this work, the advantage of the laser diode in power stability is fully utilized.
Furthermore, with the support of suitable machine learning techniques and the developed photodiode array sensor, the low-cost laser diode successfully acts as a competent replacement of the expensive standard laser source. Sufficient information concerning the characteristics of the two-phase flow is obtained and processed, and finally the void fraction measurement is implemented.

To implement the void fraction measurement with a satisfactory accuracy, the influence of the flow pattern should be considered. Compared with normal-scale channels, the flow characteristics of the two-phase flow in small channels show significant differences. As the dimension of the channel decreases, some flow patterns become common and others become difficult to observe [2][3][4][5]. According to the experimental results, bubble flow is observed in the 4.22-mm and 3.04-mm i.d. channels but not in the 2.16-mm and 1.08-mm i.d. channels, while the slug flow, stratified flow and annular flow are all observed and investigated in the four small channels. These experimental results may provide a useful reference for others' research work.

To overcome the influence of the flow pattern on the void fraction measurement, a void fraction measurement method is proposed in which a specific void fraction measurement model is developed for each typical flow pattern. In practical measurement, the parameters of the models vary with the flow patterns, which means that the flow pattern indeed has a significant influence on the void fraction measurement. To overcome this influence, the flow pattern of the two-phase flow is identified first, and then, according to the identification result, a relevant void fraction measurement model is selected to implement the void fraction measurement. The experimental results show that, with the introduction of the real-time flow pattern identification result, the influence of the flow pattern on the void fraction measurement is significantly reduced.

Conclusions

In this work, based on a photodiode array sensor and a laser diode, a low-cost and simple-structure void fraction measurement system for gas-liquid two-phase flow in small channels is developed. A low-cost laser diode is adopted as the laser source and a 12 × 6 photodiode array sensor is used to obtain the information concerning the two-phase flow. Meanwhile, a new void fraction measurement method is proposed. The machine learning techniques (FDA and SVM) are adopted to implement the flow pattern identification and the development of the void fraction models. To overcome the influence of flow pattern on the void fraction measurement, the identification result is introduced into the void fraction measurement; according to the identification result, a relevant void fraction measurement model is selected to implement the void fraction measurement. Experiments are carried out in four small channels with inner diameters of 4.22, 3.04, 2.16 and 1.08 mm, respectively. Four typical flow patterns, including bubble flow, slug flow, stratified flow and annular flow, are investigated. The maximum absolute error of the void fraction measurement is less than 7%. The experimental results show that the proposed void fraction measurement method is effective and the development of the measurement system is successful. The experimental results also show that the introduction of the flow pattern information can overcome the influence of the flow pattern on the void fraction measurement.
The research results also verify that the low-cost laser diode can act as a competent replacement of the expensive standard laser source if suitable signal processing and information acquisition techniques are used, which can significantly reduce the cost of a laser-based measurement system. This research work can provide a good reference for other researchers' work.
2016-03-14T22:51:50.573Z
2016-01-27T00:00:00.000
{ "year": 2016, "sha1": "6f020f80b3851b2c92d42afccaf73b255d8bfc2c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/16/2/159/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6f020f80b3851b2c92d42afccaf73b255d8bfc2c", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science", "Computer Science", "Medicine" ] }
122420880
pes2o/s2orc
v3-fos-license
Coherent effects in a thin film of metamaterial

The refraction of ultimately short pulses at the interface of two dielectrics that contains a thin film of nonlinear metamaterial is considered theoretically. For a model of metamaterial composed of nanoparticles and magnetic nanocircuits (split-ring resonators), equations are obtained that are suitable for describing the coherent responses of such a film. Numerical simulation demonstrates the emergence of an oscillatory echo in an inhomogeneous system of meta-atoms. It is supposed that the reported methods are applicable to the investigation of thin metamaterial films.

INTRODUCTION

Quite a long time ago the idea was stated [1,2] of media where the directions of the phase velocity and the Poynting vector are opposite, and the unusual properties of waves propagating in such materials were predicted. These materials were called "left-handed materials" (LHM). LHMs possess a negative refraction index when the real parts of the permittivity and permeability are both negative in some frequency range; this is the reason for another name for such media, the negative refractive index (NRI) media. The existence of LHM was first demonstrated experimentally in the microwave band [9][10][11][12]. Experimental evidence of LHM in optics was reported in [13][14][15][16][17]. The results of investigations and the analysis of prospective applications of NRI materials are reviewed in [18,19].

The recent theoretical studies [1][2][3][4][5][6][7] have caused great interest in the production of NRI media. The development of nanotechnology has led to the production of composite materials containing inclusions of nanoparticles, nanowires, carbon nanotubes, nanomagnets and photonic crystals with metallic structure elements. To underline the artificial nature of such media, they are called metamaterials [8]. Not all metamaterials possess a negative refraction index, and the refraction index is not negative at all frequencies; however, this term is convenient when it is necessary to point to an object rather than to a physical property. The concrete constructions of metamaterials depend both on the elementary structural units (meta-atoms) and on the spatial layout of those atoms. Thus different sorts of metamaterials may have rather diverse electrodynamic features. In this connection it is expedient to adopt the representation of a homogeneous effective medium with certain optical characteristics (refractive index, extinction, permittivity, etc.) while studying the macroscopic properties of such media.

In most cases, metamaterials are produced in film geometry. Films of metamaterial often demonstrate rather substantial absorption of incident radiation. For the experimental study of both the linear and nonlinear optical features of such media, the methods of coherent spectroscopy [20][21][22][23][24][25][26] and of the nonlinear optics of thin films can be successfully applied [27]. For example, measuring the bistability thresholds of a nonlinear layer covering a thin film of metamaterial allows one to determine the permittivity and permeability of the film [28,29].

In the present work, the refraction of an ultimately short pulse of electromagnetic radiation at the nonlinear interface of dielectric media is considered. The excitation pulse is mainly a pulson, i.e., a signal of the electromagnetic field containing only several periods of the carrier field, and hence very short in time and in spatial extension. The nonlinearity is due to a thin film of nonlinear material.
If the thickness of the film is less than the length of a pulse, the pulse envelopes of the reflected and refracted waves are coupled to the incident wave by jump conditions. The evolution of the polarization and magnetization is governed by differential equations, which can be formulated on the basis of a metamaterial model. To describe the propagation of electromagnetic waves in an artificial metamedium, the phenomenological model of oscillatory polarization and magnetization was used [7,30,31]. This model matches well the metamaterial made of periodically arranged nanowires and nanocircuits (split-ring resonators (SRRs)) [32][33][34][35][36]. A generalization of this model, accounting for the nonlinear properties of the metamaterial, was proposed in [37,38]. In this report, we use the analogous model of [39], where the nonlinearity of plasma vibrations in nanoparticles immersed in a dielectric matrix provides the nonlinear response of the film, while the magnetic properties of the film are described by linear SRR oscillators. The dispersion in the parameters of the nanoparticles and SRRs leads to inhomogeneous broadening of the resonance absorption lines. Thus, a pair of ultimately short pulses incident on such a film produces coherent responses in the form of a photon echo. To our knowledge, this is the first report of the observation of an echo response in a metamaterial excited by extremely short pulses. By variation of the carrier wavelength it is possible to shift from the frequency band where the metamaterial is PRI to the band where it is NRI. That is one of the reasons to presume that the photon echo effect could be a diagnostic tool for drawing conclusions on the optical properties of metamaterials.

Both the magnetization and the polarization in these formulae contain a homogeneous part, associated with the dielectric media, and a singular part (proportional to a delta function), associated with the thin film. For the sake of definiteness we focus on the case of the TE wave. Integrating the second and third equations of (2) over x and letting the film thickness δ tend to zero yields the jump conditions (3.1) and (3.2). Expressions (3.1) and (3.2) show that, due to the permeability and permittivity of the thin film, the strengths of the electric and magnetic fields differ on the opposite sides of the film. These conclusions are valid both for continuous waves and for ultrashort or ultimately short pulses of electromagnetic radiation. The relationships (3) are the generalization of the corresponding formulae in [20][21][22][23][24][25][26][27].

For the TE wave, the jump conditions (3.1) and (3.2) can be utilized to derive the relationships between the strengths of the electromagnetic fields in the incident, reflected and refracted pulses. It is assumed for simplicity that the dielectric media on both sides of the film are dispersionless; the solutions of the Maxwell equations can then be written as superpositions of the incident, reflected and refracted waves. Due to the absence of dispersion in these media, the group velocities are the parameters which characterize the media. The jump condition (3.1) provides relationship (5). A further relation follows from the Maxwell equations for planar symmetry; with the account of this relation, the jump condition (3.2) yields equation (6). Making use of (2) and introducing the retarded time, equation (6) can then be rewritten as (7). Equation (7) can be integrated over time, providing relationship (8). Combining (5) and (8), one can write down the relationship generalizing the Fresnel formula to the case of anharmonic waves.

In order to obtain the polarization and magnetization it is necessary to define the field inside the film. It was demonstrated in [40] by means of the Green-function technique that these values can be redefined in a symmetric way (10). If the film is a non-magnetic one, a known result for the TE wave [21,22] follows from (10.1). The field strength inside the film follows from (10), where it was used that the media beyond the film are non-dispersive and the phase velocity coincides with the group velocity. As an example, the inner field for the TE wave can be obtained from (9). If the media on both sides of the film are identical, these expressions simplify.
The jump condition (3.1) provides the relationship: It follows from Maxwell equations for planar symmetry that With the account of this expression equation (3.2) yields: Making use of (2) and bearing in mind that the introduction of the retarded time equation (6) can be then re-written as ( ) Equation (7) can be integrated over time, thus providing the relationship ( ) (5) and (8) in order to write down the relationship generalizing the Fresnel formula to the case of anharmonic waves. ( ) ( ) In order to obtain polarization and magnetization it is necessary to define the field inside the film. It was demonstrated in 40 by means of Green-function technique that these values can be redefined in a symmetric way: If the film is a non-magnetic one a known result comes out from (10.1) for the case of TE wave 21,22 : The field strength inside the film follows from (10): where it was used that the media beyond the film are non-dispersive and phase velocity coincides with the group one. As an example the inner field for TE wave can be obtained from (9): If the media by both sides of the film are identical, these expressions simplify A thin film material To obtain the polarization and magnetization of the layer material it is necessary to select the model describing this material. Oftenly 41,42 in the model of metamaterial the dielectric properties are accounted by means of Lorenz oscillators for plasma vibrations, but magnetic features are described by the ensemble of circuits -SRRs [32][33][34][35][36] . As nanoparticles and nanocircuits are the elementary objects of artificial medium, they call them the metaatoms for brevity. The simplest generalization of this model can be achieved by an accounting the anharmonicity of the electrical oscillations in nanoparticles 39 or the insertion of nonlinear capacity in a circuit 37,38 . Let us use the following equations 39 respectively. The losses due to the ohmic resistance in circuits and the damping of plasmonic oscillations are accounted by parameters m Γ and e Γ . p Ω ≈2π·27,5 GHz for Cu. In the case when anharmonicity constatnt κ is equal to zero, the system of equations (12) coincide with one used in 30,31 to investigate propagation and refraction of electromagnetic waves in NRI media. If the parameters of meta-atoms differ, the polarization and magnetization (12) correspond to an individual meta-atom from the ensemble. The total polarization (and magnetization) in (9) is the result of averaging over this ensemble. The incident pulses In the frames of current consideration, the incident field constitutes the ultimately short pulses (USP). The USP's called uniplolar pulses or videopulses do not possess carrier at all. The electromagnetic field of such pulses can be presented by the Gaussian shape: The spectrum of such pulse { } The other sort of USP is a pulse of quasiharmonic radiation, squeezed by special optical means down to the duration of about a period of electromagnetic field vibration in initial signal. This shape can be presented as The spectrum of pulson is expressed by the formula: Its FWHM is the same as for videopulse, but the whole spectrum is shifted on 0 Ω . By varying 0 Ω one can drive the spectrum of pulson from a frequency band, where metamaterial is characterized by a positive refraction index, to the area of negative refraction. In particular, the transmission F and reflection R coefficients can be now considered as the function of 0 0 p ω = Ω Ω . 
The effective permittivity and effective permeability of a bulk sample, which meet the model of metamaterial (12) in the linear approximation, are given by expressions (15) and (16), and from them the square of the refraction index follows. NUMERICAL RESULTS The system of equations (13) was solved numerically by an iteration procedure of prediction and correction with a desired accuracy. System (13) contains a certain complexity, as the polarization and magnetization for an individual meta-atom (13.3, 13.4) are determined by the fields, which in their turn depend on the polarization and magnetization averaged over the inhomogeneous ensemble of meta-atoms. That makes the problem self-consistent. This obstacle can be overcome with an additional iteration process involving the transmitted and reflected fields. The starting approximation for the fields can be their classical Fresnel values for an empty interface. The iteration process over the fields ends on reaching the assigned accuracy. It should be emphasized that the nature of the observed response is the oscillatory echo 44 , as it forms on the ensemble of nonlinear oscillators represented by equations (13.3) for plasmonic vibrations. In fact, due to the self-consistent form of (13), the magneto-dipole subsystem also takes part in the echo formation. The oscillatory echo appears in the form of an equidistant train of signals following the double excitation of the nonlinear medium by short pulses (Fig. 1, left panel). The plots in Figure 1 demonstrate that the oscillatory echo effect completely vanishes if the anharmonicity constant κ is zero (Fig. 1, right panel). The Gaussian form factor of the inhomogeneous ensemble of meta-atoms and the dependence of the dimensional-quantization frequency ω_d on the dimensional parameter x are depicted in the inset. The excitation of the metamedium with a pulson provides the opportunity to shift the spectral band of the pulson over the frequency domain by varying the pulsation frequency ω_0. But, in order to operate with a distinct spectral band of the pulson, one needs to satisfy the condition Ω_0 t_p ≈ 2·2π rather than simply Ω_0 t_p ~ 1. That means the shift of the frequency must be accompanied by a variation of the total pulse duration. Our preliminary calculations show that the oscillatory echo effect in the ensemble of meta-atoms is stronger when the pulsation frequency falls into the NRI band. This case is depicted in Figure 2, where the echo is generated by a multiperiod pulson. In the inset there is a frequency layout of the metamaterial parameters, where dots point to the zeroes and poles of ε, µ and the film index n (Eqs. (15), (16)). It is seen that in this particular case the pulsation frequency gets into the NRI area. CONCLUSION In conclusion, we considered the refraction of an ultimately short electromagnetic pulse in the form of a pulson at a thin layer of metamaterial placed at the interface between two dielectric media. Under such excitation and for such a geometry of the sample, the conventional permittivity and permeability cannot be introduced. Still, effective permittivity and permeability can be considered, and these parameters can change sign depending on the frequency of the shape modulation in the pulson. Our special concern is the model where there is a diversity in the optical properties of the nanodipoles and nanocircuits. The reason for the inhomogeneity might then be the dispersion of the geometrical scales of the meta-atoms, which converts into a spectrally inhomogeneous resonance line, not necessarily symmetric.
The dephasing under the action of the first pulse, and the subsequent rephasing caused by the second pulse, result in the echo effect. This effect was reliably detected in our numerical simulations. It is established that the observed response is an oscillatory echo, the characteristic effect for an ensemble of nonlinear oscillators repeatedly excited by ultrashort pulses. Our preliminary calculations show that the oscillatory echo effect in the ensemble of meta-atoms is stronger when the pulsation frequency falls into the NRI band. The transmission and reflection of a single USP is accompanied by a well-pronounced coherent effect of field nutation at later moments of time. That clearly takes place when the pulson modulation frequency is close to the resonance frequency of the model. We believe that further studies in this field will open opportunities to utilize the oscillatory echo and other coherent effects as a diagnostic tool to explore metamaterials.
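To make the echo mechanism concrete, here is a minimal numerical sketch in the spirit of the model: an inhomogeneously broadened ensemble of weakly anharmonic oscillators is kicked twice, and the ensemble-averaged response partially rephases near twice the pulse separation. All parameter values are illustrative only, and the simple kicked-oscillator equations below stand in for the full self-consistent system (13):

```python
import numpy as np

rng = np.random.default_rng(0)
n_osc = 400
omega = 1.0 + 0.05 * rng.standard_normal(n_osc)  # inhomogeneous frequencies
gamma, kappa = 1e-3, 0.05                        # damping, anharmonicity
tau = 120.0                                      # delay between the two kicks
dt, t_max = 0.02, 400.0
steps = int(t_max / dt)

x = np.zeros(n_osc)
v = np.zeros(n_osc)
record = np.zeros(steps)
for i in range(steps):
    t = i * dt
    # two short excitation pulses modeled as instantaneous velocity kicks
    if abs(t - 1.0) < dt / 2 or abs(t - tau) < dt / 2:
        v += 0.5
    a = -(omega ** 2) * x - kappa * x ** 3 - gamma * v
    v += a * dt          # semi-implicit Euler step
    x += v * dt
    record[i] = x.mean() # macroscopic (ensemble-averaged) polarization

# the free-induction signal after each kick dephases; the anharmonicity
# (kappa != 0) causes partial rephasing near t ~ 2*tau, the oscillatory echo
t_axis = np.arange(steps) * dt
window = np.abs(t_axis - 2 * tau) < 5.0
print("response envelope near t = 2*tau:", np.abs(record[window]).max())
```

Setting kappa to zero in this toy model removes the rephased response, mirroring the behavior reported for Figure 1: the echo is a genuinely nonlinear effect of the oscillator ensemble.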
Comparison of Adherence to Mediterranean Diet between Spanish and German School-Children and Influence of Gender, Overweight, and Physical Activity Background: Poor dietary habits and low levels of physical activity (PA) have a strong tendency to track from childhood into adulthood. The Mediterranean Diet (MD) is known to be extremely healthy and is associated with lower BMI and a lower risk of obesity in children and adolescents. Therefore, adherence to the MD was compared between Spanish (n = 182) and German (n = 152) children aged 10 to 13 years, to examine a possibly more "westernized" diet in Spain against a non-Mediterranean country that traditionally prefers a "Western diet", and to determine the association between adherence to the MD and gender, body composition, and PA levels. Methods: In the German observational longitudinal cohort study and the Spanish cohort study, body composition and questionnaires (KIDMED, Diet Quality (IASE)) were obtained, and accelerometers (ActiGraph) were applied to detect PA. Results: Girls had higher BMI-standard deviation scores (SDS) than boys, and Spanish girls were less active than Spanish boys. Differences were detected in MD habits, such as favorable fruit, vegetable, fish, and dairy intakes in Spanish children and unfavorable consumption of fast food, processed bakery goods, candies, and sweet beverages in German children. Independently of country, girls, children with lower BMI-SDS, and children with higher PA levels were associated with better diet quality. Conclusion: Spanish children showed higher adherence to the MD and diet quality (IASE) compared to German children, but there was a trend toward a more "westernized" diet. Gender, body composition, and PA influenced nutrition regardless of country. Introduction Overweight and obesity are a global burden and a tremendous problem for children and adolescents, with one in three children living with overweight or obesity [1]. Being overweight or obese seems to be the result of several factors, and the causes are still not known entirely [2]. However, one reason is an unhealthy lifestyle behavior, such as poor dietary habits, high sedentary time, and low levels of physical activity (PA), which has a strong tendency to track from childhood into adulthood [3,4]. Bull et al. concluded that PA has benefits for various health outcomes, such as physical fitness, cardio-metabolic health, bone health, cognitive outcomes, mental health, and reduced adiposity [5]. To achieve these benefits, the World Health Organization (WHO) recommends that children and adolescents (aged 5-17 years) accumulate at least an average of 60 min/day of moderate- to vigorous-intensity PA. Sedentary time, especially screen time, is recommended to be reduced. In addition to PA, nutrition plays a key role in health, and the two are closely related. A healthy diet is defined by the WHO as health-promoting and disease-preventing [6]. Methods Baseline data from two separate studies in Spain and Germany were analyzed. In the Spanish cohort study, 182 children participated from six public schools in Madrid's central area, with high variability in sociocultural status and ethnicity. The district and schools were selected by the funder, since the foundation (Fundación Maratón) was looking for schools in which to promote physical activity and health in a critical environment. To collect data on nutrition and physical activity, the foundation contacted health professionals at the Technical and Autonomous Universities of Madrid. Data were obtained between October and December 2015.
Participating children and their parents gave written consent before initiating the study, which was approved by the ethics committee of the Universidad Politécnica de Madrid (UPM, PIA 12009-11, Date: 5 March 2015) and based on the guidelines of the Helsinki Declaration. One hundred fifty-two children from two secondary schools in northern Germany (Lower Saxony and North Rhine-Westphalia) participated in the German observational longitudinal cohort study "Rebirth active school", which also showed high variability in sociocultural status but took place in cities smaller than Madrid. As "Rebirth active school" was a pilot study, secondary schools were asked to participate. The overall goal was to integrate a physical activity program into the school day, achieving WHO recommendations on PA. Further details are published by Memaran et al. [23]. Data were obtained between April and June 2017 as the baseline of an overall 3-year study period. Participating children and their parents gave written consent, and the study was approved by the Hannover Medical School ethics committee (Number 1790, Date: 9 January 2017), based on the guidelines of the Helsinki Declaration. The inclusion criterion in both studies was school affiliation. Exclusion criteria included being at home on the day of data collection. Anthropometrics In both studies, height and weight were measured using a portable stadiometer (Spain: SECA 213, SECA, Azcapotzalco, Mexico; Germany: SECA 764, Seca GmbH & Co. KG, Hamburg, Germany) in light clothes without shoes. Hip circumference was measured with a standard centimeter tape. All anthropometric measurements were recorded according to the International Society for the Advancement of Kinanthropometry principles [24]. Each measurement was carried out at least two times and, if the percentage of measurement error was insufficient (>1%), a third measurement was performed. For two measurements, the mean was taken as the outcome variable, and for three measurements, the median [24]. According to the German population, height, weight, BMI, and hip circumference were transformed to standard deviation scores (SDS) to compare the two countries, normalized by age and gender [25]. "Overweight" was defined as BMI-SDS over 1.282 and "obese" as BMI-SDS over 1.881 [26]. Therefore, three groups, "normal weight", "overweight", and "obese", were used in the nutrition analysis. For the PA analysis, "overweight" and "obese" were combined into one group, so that two groups, "normal weight" and "overweight", were used. In both studies, all measurements were performed by previously trained medical personnel or sports scientists in a one-to-one situation. Questionnaires To assess the adherence to the Mediterranean diet in children, the Mediterranean Diet Quality Index, or KIDMED, was used [27]. KIDMED was the first index to assess adherence to Mediterranean dietary patterns in children and youth [27]. The psychometric properties of this index have also been investigated in other countries not classified as Mediterranean geographies, such as Portugal [28]. It consists of 16 questions that are answered "yes" or "no". Thirteen questions have a positive aspect (score +1) and three a negative aspect (score −1). The KIDMED is the sum of these scores, ranging from −3 to 13. It is categorized as "very low or deficient quality" (KIDMED ≤ 3), "need to improve the eating pattern to adjust it to the Mediterranean model" (KIDMED from 4 to 7), and "optimal Mediterranean diet" (KIDMED ≥ 8).
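The scoring rules just described are mechanical, so a small sketch can make them explicit. The following Python functions implement the KIDMED arithmetic and the BMI-SDS weight grouping exactly as quoted above; the example answers are invented for illustration:

```python
def kidmed_score(answers_positive, answers_negative):
    """KIDMED: 13 'yes' items score +1, 3 'yes' items score -1 (range -3..13)."""
    assert len(answers_positive) == 13 and len(answers_negative) == 3
    return sum(answers_positive) - sum(answers_negative)

def kidmed_category(score):
    """Categories quoted in the text: <=3, 4-7, >=8."""
    if score <= 3:
        return "very low or deficient quality"
    if score <= 7:
        return "need to improve the eating pattern"
    return "optimal Mediterranean diet"

def weight_group(bmi_sds):
    """BMI-SDS cut-offs used in the study: >1.282 overweight, >1.881 obese."""
    if bmi_sds > 1.881:
        return "obese"
    if bmi_sds > 1.282:
        return "overweight"
    return "normal weight"

# example: a child answering 'yes' to 9 favorable items and 1 unfavorable item
print(kidmed_category(kidmed_score([1] * 9 + [0] * 4, [1, 0, 0])))  # score 8
print(weight_group(1.5))  # -> overweight
```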
To compare the questionnaires of both projects, three questions with a positive aspect had to be skipped (nut intake, rice and starchy food, and the usage of olive oil at home). They were deleted for the German study since, for example, most German children did not know which oil was being used for cooking at home. All other questions were part of both studies. Therefore, the KIDMED ranged from −3 up to 10. The Healthy Eating Index for the Spanish population (IASE) is a rapid and affordable method to estimate the quality of the diet in the Spanish population [29]; it uses secondary data from the Spanish National Health and Nutrition Survey and the feeding guidelines. The IASE identifies the frequency of intake of food groups using 9 of the 12 food groups described in the Spanish National Health and Nutrition Survey (ENS-06). It includes 10 variables: 9 questions and a variable "diet variety" calculated from the answers to the 9 questions. The first five questions represent the consumption of the main food groups (grains and derivatives, fruits, vegetables, dairy products, and meat), and the remaining four represent the achievement of reference nutritional objectives (total fat, saturated fat, cholesterol, and sodium), asking about cold processed meats, sweets, and beverages. Every variable gets a score from 0 to 10, with higher scores representing better diet quality. The sum of the scores makes it possible to calculate the IASE, with a maximum value of 100, and to classify the diet into three categories: "healthy feeding" (score > 80), "need-for-change" (score between 50 and 80), and "little healthy" (score < 50), following the recommendations of the Spanish Feeding Guidelines [29,30]. Both questionnaires were translated into German from the scientifically published English version by native speakers and back-translated to avoid errors. The original Spanish version was used in the Spanish study. Questionnaires were completed at the beginning of each test day, and each child was interviewed in a one-to-one situation by previously trained sports scientists. KIDMED and IASE were used complementarily for an in-depth analysis of dietary quality, since KIDMED was designed for use in children and adolescents, and IASE supplements additional information to the KIDMED results. Accelerometers The accelerometer models used in the Spanish study were the ActiGraph GT1M, GT3X, and GT3X+ (ActiGraph, LLC, Fort Walton Beach, FL, USA), worn at the hip, and in the German study the ActiGraph GT9X Link (ActiGraph, LLC, Fort Walton Beach, FL, USA), worn at the non-dominant wrist. The GT1M device is a uniaxial accelerometer, while all other devices are triaxial accelerometers. The generations GT1M and GT3X+ showed strong agreement in children under free-living conditions and thus were used in the Spanish study [31,32]. Instructions on how to wear the accelerometer were explained to participants on the day of device delivery. In addition, this information was provided to parents/guardians and school administrators in written form. The Spanish children wore the device for 7 consecutive days, attached to an elastic band on their lower back, whereas the German children wore it for at least 3 consecutive weekdays and 1 weekend day on their non-dominant wrist. All participants were asked to take off their accelerometer for water activities, such as swimming and taking a shower, or when going to sleep.
Data were downloaded and analyzed using ActiLife software (version 6.13.4 in the German study, version 6.62 in the Spanish study; ActiGraph, Pensacola, FL, USA). In the Spanish study, a daily recording duration of at least 10 h per day was required for inclusion in the analysis, and for the classification of physical activity into light, moderate, and vigorous intensity, the cut-off points proposed in the HELENA study were used [33]. In the German study, the cut-off points from Chandler et al. were used, and at least 1152 min per day had to be recorded [34]. The mean time per day spent in light, moderate, and vigorous intensity was calculated. Since the two studies used different devices, different wearing positions, and different minimum requirements for valid days, the mean time per day spent in moderate and vigorous intensity during weekdays, weekend days, and all valid days was calculated separately for each country. Then, the percentage of weekend days compared to weekdays was determined, and the PA levels were set as "low" for being in the first tertile of all valid days, "medium" for being in the middle tertile, and "high" for being in the third tertile. Statistics All data are given as mean ± standard deviation. Normal distribution was tested using the Kolmogorov-Smirnov test. The distribution of the data was tested with a Chi-squared test. Differences between two groups were tested with an unpaired t-test for parametric data and a Mann-Whitney U test for non-parametric data, with Hedges' g as the effect size. The interaction of country and gender was computed with an ANOVA, with eta squared (η²) as the effect size. All post-hoc tests were corrected by Bonferroni's method. Odds ratios were calculated with a forward stepwise binary logistic regression. A backwards multiple linear regression was used to determine the relationship between KIDMED and IASE as dependent variables, and country, gender, BMI group, and PA level as independent variables. The significance level was set at 0.05. All calculations were carried out with SPSS (version 27; IBM, Armonk, NY, USA). Results A total of 334 boys and girls were included in the analysis, 152 from Germany and 182 from Spain. There were no significant differences between the two countries (see Table 1). The girls showed higher weight, BMI, and waist circumference than the boys. The interaction between gender and country was significant for age. The analysis of the SDS scores showed no significant differences for the interaction of gender and country (see Figure 1). Height-SDS differed significantly between the two countries, with higher values for Germany (η² = 0.01), and the girls had significantly higher BMI-SDS than the boys (η² = 0.01). The BMI-SDS and hip circumference-SDS of German and Spanish girls were above zero. The PA level distribution showed a significant difference for gender and BMI group in Spain: Spanish girls were less active than the boys, and the overweight group was less active than the normal weight group (see Table 2). The activity on weekend days was lower than on weekdays, with a greater decrease in Spain than in Germany (Germany 80.4 ± 36.5%; Spain 70.3 ± 32.2%; p = 0.014; g = 0.30). Table 3 shows the results of each KIDMED question by country, with 10 out of 13 questions showing significant differences between the two countries.
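Returning to the PA-level construction described in the Methods above, the country-wise tertile split is a one-liner in practice. The sketch below uses invented minutes-per-day values purely to show the mechanics:

```python
import numpy as np

def pa_levels(minutes_mvpa):
    """Classify each child's mean daily moderate+vigorous PA into tertiles,
    computed within one country, as described in the text:
    first tertile -> 'low', middle -> 'medium', third -> 'high'."""
    minutes_mvpa = np.asarray(minutes_mvpa, dtype=float)
    t1, t2 = np.quantile(minutes_mvpa, [1 / 3, 2 / 3])
    return np.where(minutes_mvpa <= t1, "low",
                    np.where(minutes_mvpa <= t2, "medium", "high"))

# illustrative values only (min/day of MVPA for ten children of one country)
print(pa_levels([25, 60, 41, 33, 75, 50, 28, 66, 39, 58]))
```

Computing the tertiles separately per country is the design choice that sidesteps the device and wear-position differences noted in the Limitations: only within-country rank information is carried into the pooled analysis.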
The KIDMED score was 3.45 ± 1.99 for Germany and 6.15 ± 1.95 for Spain, with no significant differences between genders (ANOVA, p = 0.078) or for the interaction of gender and country (ANOVA, p = 0.486), but it differed significantly between the two countries (ANOVA, p < 0.001; η² = 0.32). Odds ratios for all KIDMED questions are displayed in Figure 2. The Spanish children showed a higher probability of answering "yes" for six out of nine KIDMED questions with a positive aspect and a lower probability of answering "yes" for all questions with a negative aspect. Children with a higher PA level simultaneously had a higher probability of eating a second serving of fruits per day and a lower probability of eating commercially baked goods or pastries for breakfast. The multiple linear regression showed positive relationships of KIDMED with Spain, girls, and higher PA level (see Figure 2). The IASE and its variables are shown in Table 4. The IASE and nine out of ten variables were significantly higher in the Spanish children than in the German ones. The classification into the three categories was significantly different (p < 0.001): "little healthy" (Germany 65.3%, Spain 0%), "need-for-change" (Germany 34.0%, Spain 52.2%), and "healthy feeding" (Germany 0.7%, Spain 47.8%). Figure 3 displays the results of the multiple linear regression. The IASE and nine of its variables showed a positive relationship with Spain. The girls had a positive relationship with the IASE, and the BMI group had a negative relationship with fresh or cooked vegetables per day, indicating that the higher the BMI-SDS, the unhealthier the answer. Discussion The aim of the present study was to compare the adherence to the MD between Spain and a non-Mediterranean country in school children aged 10-13 and to determine the association between adherence to the MD and gender, body composition, and PA levels. Results showed that Spanish children had a higher adherence to the MD and diet quality (IASE) compared to German children, but there was a trend toward a more "westernized" diet among the Spanish children studied. Gender, body composition, and PA influenced nutrition regardless of country. When analyzing both countries together, females reached higher results in both nutritional scores (KIDMED and IASE) compared to the males. Summarizing the results from the KIDMED and IASE, the Spanish children showed higher adherence to the MD compared to German children; although the MD is accepted and promoted in Germany, the German children scored well below their Spanish counterparts.
Therefore, the probability of answering "yes" to the KIDMED questions with a positive aspect was higher only for Spanish children, and to the questions with a negative aspect only for German children. The IASE questions showed similar results, with only Spanish children having a positive relationship with healthier answers. Since three questions with a positive aspect from the original KIDMED were not asked, the results could not be categorized. However, the results showed that neither country reached very low adherence to the MD, as the German children scored at least 3.45 ± 1.99. The Spanish children (6.15 ± 1.95) were closer to achieving good adherence to the MD, considering that the three missing items are typical of the MD (olive oil, nuts, and grain foods). It is therefore rather unlikely that the German group would achieve higher scores, and they would "need to improve their eating pattern to adjust it to the MD" according to the reference values of KIDMED. Data from the second wave of the German KIGGS study on over 1353 adolescents showed inadequate consumption of fruits, vegetables, starchy foods, and milk/dairy products, and excessive intake of meat and sugar-sweetened beverages [4]. These results are in line with those of this study when compared to the Spanish children. It should be noted, however, that this study applied food-frequency questionnaires, which did not include questions regarding quantities (e.g., g/day) of food, such as in KIGGS. A review by Garcia-Cabrera et al. analyzed 18 studies (17 southern European studies and one from Chile) regarding adherence to the MD in children and young adults aged 2-25 years [13]. Only 10% of all 24,067 participants had a high adherence to the MD applying the KIDMED, while 21% showed low adherence. A study by Farajian et al., with 4786 Greek children comparable in age, showed that 45% had a low adherence to the MD and only 5% showed high adherence [35]. Lopez-Gil et al. found optimal adherence to the MD in 26% of their study population, applying the KIDMED questionnaire [36]. All these results show not only the relevance of the geographical location, but also a generally moderate adherence to the MD, which underscores the need to improve the MD in these countries [37]. Similarly, the IASE data reflected the "need-for-change" in the quality of the diet of the German children. Recommendations for a healthy diet suggest at least 5 portions of fruit and vegetable intake per day. Fruit intake was significantly higher in Spain (91% vs. 70%). Similarly, 75% of the Spanish group, compared to 57% of the German group, ate vegetables once per day. Children with higher PA levels more often consumed a second serving of fruits per day, regardless of country. The largest difference appeared in the consumption of fish per week (Spain 81%, Germany 35%). However, in a study by Mariscal et al. [38] of another comparable Spanish population, it appears that various intakes have dropped (e.g., fruits, vegetables, dairy products), whereas, on the contrary, the consumption of fish increased from 42% up to 81%. Spain has a vast coast where fish and other sea-product intakes are usual, and their availability and fair prices could be determinants of higher consumption. It is quite common to purchase sea products from many little shops or fisheries located in areas of town other than the supermarkets.
Furthermore, in Spain, which is traditionally associated with Catholic beliefs, a meat-free diet with fish and sea products is practiced during the Fridays of Lent. Even in a city such as Madrid, which has no coastline, fish availability is great. Furthermore, typical Spanish recipes use sea products for cooking. These changes, some positive but also some negative, may be explained by interventions and the promotion of healthy foods in schools. Typically unhealthy intakes associated with the "Western diet" were found in German children, who had significantly higher intakes of fast food and commercially baked goods, ate sweets and candies several times per day, and skipped their breakfast more often. Biblioni et al. found the "Western" dietary pattern often among boys, while the "Mediterranean" dietary pattern was mainly followed by girls, which is in line with this study [39]. These "needs for change" for the German children are important, as meal patterns such as skipped breakfast have been suggested as a marker of an inappropriate dietary intake among adolescents [40]. Similarly, eating frequency was identified as a risk factor for obesity in both boys and girls [41]. Hoyland et al. found that breakfast has beneficial effects on cognitive function in school children compared to those who attend school fasting [42]. Forkert et al. found an increased waist circumference, an indicator of being overweight or obese, in European girls with low physical activity levels [43]; for those skipping breakfast, they found a strong relation with total and abdominal obesity. Poor adherence to the MD was shown to be closely related to obesity in Greek school-children [44]. Flieh et al. found a strong relationship between free sugar intake and obesity in 843 European adolescents [45]. In Spain, it has been described that more than 80% of all ultra-processed foods have added sugar [46]. Specifically, children consuming higher amounts of sweet beverages demonstrated significantly increased amounts of LDL, triglycerides, and cholesterol and lower amounts of HDL [47]. More specifically, Hebestreit et al. mentioned the association between daily energy intake and age- and gender-specific BMI, which should be examined in future studies, since a relationship seems noticeable [48]. The results on obesity and overweight did not show significant differences between German and Spanish children. According to the KIGGS reference population, 20% of the German group and 11% of the Spanish group were overweight or obese, with a tendency toward more overweight or obese children in Germany [25]. The KIGGS study showed that 15% of German children and adolescents aged 3-17 years were overweight and 6% suffered from obesity [17]. Regardless of country, girls had a significantly higher BMI and hip circumference compared to boys. Compared to the reference population, the BMI-SDS and hip circumference-SDS of the girls were even above zero [25]. Additionally, the Spanish girls were less active than the Spanish boys, and the same was found for the overweight or obese Spanish pupils. In contrast, the German children did not show these differences. PA level was shown to have an influence in this study independently of country and is known to be beneficial for various health outcomes. In addition, it is most effective in adolescents in reducing cardio-metabolic risk as well as overweight and obesity [5,15]. Furthermore, being physically active is related to better quality of life in children and adolescents [49]. Moradell et al.
concluded, based on the findings of the international multicenter cross-sectional HELENA study, that adolescents who met the PA guidelines and screen-time recommendations had higher intakes of healthy foods (e.g., fruits, vegetables, and dairy products) [3]. Similarly, Lopez-Gil et al. found strong associations between PA level and MD scores [36]. PA level has been associated with food choice, and cereals, fruits, and vegetables appeared more likely in the diets of active adults and children [50]. Lazarou et al. (2010) found a healthy diet in children who maintained high levels of PA, which was also reported in [36,51,52]. Ottevaere et al. found higher fruit intake in the most active males compared to the lowest PA group [53]. The German KIGGS study (2nd wave) found associations between PA levels and the consumption of healthy food and beverages in German children and adolescents [52]. In addition to the lower PA level in the Spanish girls, the results of this study demonstrate a significant difference in PA between the week and the weekend, with higher PA on weekdays. This suggests that school hours may be an ideal setting for increasing PA levels and decreasing sedentary activity, since children spend most of their time in this environment [54]. Moreover, it could be used to advise them on how to be physically active during the weekend. The school rhythm may have influenced PA behavior [55]. In Spain, school days generally end at 5 pm, compared to German school days, which generally end at 3 pm. This difference in time might explain the PA results in this study. These results reflect the huge potential of using long school days for intermittent PA, since sedentary time increases the longer children are at school during the week [56]. Furthermore, the school setting may be an ideal setting for increasing PA levels. There might be a possibility of developing outdoor physical education programs. For example, these programs could focus on food sustainability from an early age to contribute to expanding responsible food-consumption habits while promoting physical activity in the natural environment. Limitations Since both studies were developed independently, recruitment and measurement protocols differed. With regard to nutrition, a full comparison of the KIDMED was not possible, since three questions were skipped; for example, more than half of the German children did not know which oil was being used at home, as sunflower oil and butter are also very common for cooking in Germany. Furthermore, due to the focus and time frame of the German study, no validation of the German translation of the questionnaires was performed. A direct comparison of physical activity between both countries was not possible, since the accelerometers were worn at different body parts (hip vs. wrist) and different types of accelerometers were used. Therefore, the PA values for each country were calculated separately by tertiles. Conclusions In summary, Spanish children had higher adherence and quality regarding the MD compared to German children. Huge differences were detected in typical MD habits, such as fruit, vegetable, fish, and dairy intakes. In contrast, German children had higher intakes of fast food, processed bakery goods, candies, and sweet beverages. Independently of country, girls, children with lower BMI-SDS, and children with higher PA levels were associated with better diet quality.
However, since the girls of both countries had higher BMI-SDS and the Spanish girls were less active than the boys, they are at risk of poorer diet quality. Knowing that the majority of the pupils in both countries spend most of their daytime in school, including their lunch, a possible way to influence and change habits regarding PA and nutrition seems apparent.
Angular asymmetries as a probe for anomalous contributions to HZZ vertex at the LHC In this article, the prospects for studying the tensor structure of the HZZ vertex with the LHC experiments are presented. The structure of tensor couplings in Higgs di-boson decays is investigated by measuring the asymmetries and by studying the shapes of the final-state angular distributions. The expected background contributions, detector resolution, and trigger and selection efficiencies are taken into account. The potential of the LHC experiments to discover sizeable non-Standard Model contributions to the HZZ vertex with $300\;{\rm fb}^{-1}$ and $3000\;{\rm fb}^{-1}$ is demonstrated. I. INTRODUCTION In the summer of 2012, the CMS and ATLAS Collaborations at the LHC reported the discovery of a new neutral resonance in searches for the Standard Model Higgs boson. This discovery was later confirmed by analyses of the full LHC Run-I dataset by both collaborations [1,2]. It was demonstrated that the new particle, with a mass around 125.5 GeV, is dominantly produced via the gluon-fusion process and decays into pairs of gauge bosons: γγ, ZZ and WW. The observed production and decay modes identified the discovered particle as a neutral boson. The subsequent measurement of its couplings to fermions and bosons demonstrated the compatibility of the discovered resonance with the expectations for the Standard Model Higgs boson within the available statistics [3][4][5]. In the Standard Model, electroweak symmetry breaking via the Higgs mechanism requires the presence of a single neutral Higgs boson with spin 0 and even CP-parity. Theories beyond the Standard Model often require an extended Higgs sector featuring several neutral Higgs bosons of both even and odd CP-parity. In such a case, mixing between Higgs boson CP-eigenstates is possible. The Higgs boson mass eigenstates observed in experiment may thus have mixed CP-parity. Such an extension of the Higgs sector is important because the effects of CP violation in the SM are too small and, in particular, cannot explain the baryon asymmetry of the Universe. Dedicated studies of the spin and parity of the Higgs candidate discovered by ATLAS and CMS showed that its dominant spin and parity are compatible with $J^{CP} = 0^{++}$ [4-6]. The dataset of about 25 fb⁻¹ currently collected by each of the major LHC experiments allows an upper limit to be set on the possible CP-odd contribution. The sensitivity is expected to improve with the larger datasets to be collected at the LHC. There have been many works on the direct measurement of CP violation in the Higgs sector. In this paper, the sensitivity of the LHC experiments to observe CP-mixing effects with 300 fb⁻¹ and 3000 fb⁻¹ is evaluated using the method of angular asymmetries. This paper is organised as follows. In Section II, observables sensitive to CP violation in the HZZ vertex are discussed. The spin-0 model, the Monte Carlo production of signal and background, and a Lagrangian parametrisation for CP-mixing measurements are discussed in Section III. In Section IV, the expected sensitivity of the LHC experiments to CP-violation effects based on angular asymmetries is presented, and constraints are set on the contribution of anomalous couplings to the HZZ vertex. Section V introduces the measurement technique based on the fit of observables, and exclusion regions for the mixing angle are presented. Section VI gives the overall summary of the obtained results.
II. OBSERVABLES In this paper we study the sensitivity of final-state observables to the CP-violating HZZ vertex in the process $H \to ZZ \to l_1\bar l_1 l_2\bar l_2$ (1). Following the notation introduced in [ ], the scattering amplitude describing interactions of a spin-zero boson with the gauge bosons is given by Eq. (2). Here $f^{(i)\,\mu\nu} = \epsilon_i^{\mu} q_i^{\nu} - \epsilon_i^{\nu} q_i^{\mu}$ is the field-strength tensor of a gauge boson with momentum $q_i$ and polarisation vector $\epsilon_i$; $\tilde f^{(i)}_{\mu\nu} = \tfrac{1}{2}\epsilon_{\mu\nu\alpha\beta} f^{(i)\,\alpha\beta}$ is the conjugate field-strength tensor. The symbols $v$ and $m_V$ denote the SM vacuum expectation value of the Higgs field and the mass of the gauge boson, respectively. In the Standard Model, the only non-vanishing coupling of the Higgs to ZZ or WW boson pairs at tree level is $g_1 = 2i$, while $g_2$ is generated through radiative corrections. For final states with at least one massless gauge boson, such as γγ, gg or Zγ, the SM interactions with the Higgs boson are loop-induced. These interactions are described by the coupling $g_2$. The coupling $g_4$ is associated with the interaction of a CP-odd Higgs boson with a pair of gauge bosons. The simultaneous presence of the CP-even terms $g_1$ and/or $g_2$ and the CP-odd term $g_4$ leads to CP violation. In general, the $g_i$ couplings can be complex and momentum dependent. However, the imaginary parts of these couplings are generated by absorptive parts of the corresponding diagrams and are expected to be small: approximately less than 1%. We further assume that the energy scale of new physics is around 1 TeV or higher, so that the momentum dependence of the couplings can be neglected. Thus, in the following we consider the $g_i$ couplings as real and momentum-independent. One possibility to study CP violation in the process of Eq. (1) is to analyse the shapes of the angular and mass distributions of the final state [33,34]. The common choice of angular observables for this type of analysis is shown in Fig. 1. A complementary approach is based on studies of angular-function asymmetries arising in the case of CP violation. There are six observable functions proposed in [35]. The first angular observable function, $O_1$, is built from the lepton momenta: here $p_i$, $i = 1, \dots, 4$ are the 3-momenta of the final-state leptons in the order $l_1\bar l_1 l_2\bar l_2$. The subscripts Z and H denote that the corresponding 3-vector is taken in the Z or in the Higgs boson rest frame. Using these definitions, the second observable function $O_2$ is constructed, and the third observable function $O_3$ is constructed using $O_1$; the remaining three observable functions are defined in a similar manner. These observables are related to the final-state angular variables defined in [33] and illustrated in Fig. 1. For instance, a trivial calculation yields $O_1 = \cos\theta_1$. Note that the total cross section is CP-even (there is no interference between CP-even and CP-odd terms) and cannot be used to detect the presence of CP-violating terms in the HZZ vertex. III. SPIN-0 MODEL AND MONTE CARLO PRODUCTION The dominant Higgs boson production mechanism at the LHC is gluon fusion. To simulate the production of a Higgs-like boson and its consequent decay into ZZ and 4l, the MadGraph5 Monte Carlo generator [36] was used. This generator implements the Higgs Characterisation model [37]. The corresponding effective Lagrangian, Eq. (3), describes the interaction of the spin-0 Higgs-like boson with vector bosons, where Λ is the new-physics energy scale, $V^{\mu\nu}$ are the field-strength tensors, and $\tilde V^{\mu\nu}$ is the dual tensor. The mixing angle α allows the production and decay of CP-mixed states and implies CP violation when α ≠ 0 and α ≠ π/2.
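For orientation, the tensor structure referred to as Eq. (2) is commonly written in the HZZ literature in the following form; overall normalisation and factor conventions vary between references, so this LaTeX block should be read as a representative sketch rather than the paper's exact expression:

```latex
% A common parametrisation of the spin-0 -> VV decay amplitude with
% couplings g_1 (SM-like, CP-even), g_2 (loop-induced, CP-even) and
% g_4 (CP-odd). Normalisation conventions differ between papers.
A(H \to VV) \sim \frac{1}{v}\Big[
    g_1\, m_V^2\, \epsilon_1^{*}\!\cdot\epsilon_2^{*}
  + g_2\, f^{*(1)}_{\mu\nu} f^{*(2)\,\mu\nu}
  + g_4\, f^{*(1)}_{\mu\nu} \tilde f^{*(2)\,\mu\nu}
\Big]
```

In this form the statements in the text are immediate: the $g_1$ and $g_2$ terms are CP-even, the $g_4$ term is CP-odd, and their interference is what generates the CP-violating observables studied below.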
The definitions of the effective tensor couplings $g_{XVV}$ are shown in Table I. The relations between the parameters of the Lagrangian of Eq. (3) and the tensor couplings of the effective amplitude of Eq. (2) can be derived from the Feynman rules. The corresponding conversion coefficients are shown in Table II. In this table the following definitions are used: $c_\alpha = \cos\alpha$ and $s_\alpha = \sin\alpha$. Here X denotes either H or A, and the index VV denotes the final-state gauge boson pair. The effective couplings $\tilde g_{XVV}$ are defined as follows: in the case of ZZ or WW interactions, $\tilde g_{XVV} = 1$; for γγ, Zγ and gg interactions, the couplings $\tilde g_{XVV}$ are equivalent to the couplings $g_{XVV}$ defined in Table I. The couplings $K_{H\partial V}$, where V = W, Z, γ, correspond to the so-called contact terms of the Higgs Characterisation Lagrangian of Eq. (3). These contact terms can be reproduced in the amplitude of Eq. (2) by re-parametrising the $g_1$ coupling [38]; that re-parametrisation represents the leading terms of the form-factor expansion. In the case of complex $k_{H\partial W}$, the momenta of the W bosons should be assigned as follows: $q_1$ for W⁻ and $q_2$ for W⁺. In the case of the HZγ interaction with a real photon, the term proportional to $k_{H\partial\gamma}$ vanishes. In the following we consider a model based on the Lagrangian of Eq. (3), in which the mixing is provided by the simultaneous presence of the Standard Model CP-even term and a non-Standard Model CP-odd term in the HZZ decay vertex. The signal Monte Carlo samples used in this analysis are produced using the Higgs Characterisation model parameters presented in Table III ($k_{SM} = 1$, $k_{HZZ} = 0$, $k_{AZZ} = 28.6$, $k_{Hgg} = 1$, $k_{Agg} = 1$, $\Lambda = 10^3$). The coefficient $k_{AZZ}$ was chosen such that it provided equal cross sections for the decays of the CP-odd and CP-even Higgs states: $\sigma(c_\alpha = 0) = \sigma(c_\alpha = 1)$. The tensor couplings for the decay vertex corresponding to the amplitude of Eq. (2) can be restored using the following relations: $g_2 = 2i\,c_\alpha$ and $g_4 = 2i\,s_\alpha \tilde K_{AZZ}$, where $\tilde K_{AZZ} = 1.76$. It is noted that the factor 2i is not important in the study of asymmetries, because it only defines the overall cross-section normalisation. The signal samples were produced using the MadGraph5 Monte Carlo generator [36]. These samples were created in the range of mixing angles −1 ≤ cos α ≤ 1 in steps of 0.05. The dominant background processes qq → ZZ, Zγ were also simulated with MadGraph5. After the simulation of the signal and background events at √s = 14 TeV, the parton showering was performed using the PYTHIA6 Monte Carlo generator [39]. Generic detector effects were included by using the PGS package [36]. The main detector parameters used for this simulation are presented in Table IV; the expected acceptance, efficiencies, and resolutions of the ATLAS and CMS detectors of the LHC can be found in [40,41]. Finally, a kinematic selection was applied. It was required that candidates decay into two same-flavour, oppositely charged lepton pairs. If several such candidates could be reconstructed in an event, the lepton pairs with invariant masses closest to the on-shell Z mass were chosen. Each individual lepton had a pseudorapidity |η| < 2.5 and transverse momentum p_T > 7 GeV. The most energetic lepton had to satisfy p_T > 20 GeV, whereas the second (third) lepton similarly had p_T > 15 GeV (p_T > 10 GeV). The invariant mass of the on-shell Z boson had to be in the mass window (50, 106) GeV, while the off-shell Z boson satisfied m_Z* > 20 GeV. Only Higgs candidates in the signal region 115 GeV < m_H < 130 GeV were considered. The selection is a simplified version of the one presented in [2].
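The kinematic selection just described is straightforward to express in code. The following Python sketch applies the published thresholds to hypothetical per-event inputs; it makes no attempt to model detector effects or pairing ambiguities beyond what the text states:

```python
def passes_selection(leptons, m_z1, m_z2, m_4l):
    """leptons: list of (pt_GeV, eta) for the four leptons, any order.
    m_z1: on-shell Z candidate mass; m_z2: off-shell Z mass; m_4l: 4-lepton mass.
    Implements the simplified cuts quoted in the text."""
    if len(leptons) != 4:
        return False
    if any(abs(eta) >= 2.5 or pt <= 7.0 for pt, eta in leptons):
        return False
    pts = sorted((pt for pt, eta in leptons), reverse=True)
    if not (pts[0] > 20.0 and pts[1] > 15.0 and pts[2] > 10.0):
        return False
    if not (50.0 < m_z1 < 106.0 and m_z2 > 20.0):
        return False
    return 115.0 < m_4l < 130.0

# example candidate event (all values are illustrative only)
print(passes_selection([(45, 0.3), (28, -1.1), (17, 2.0), (9, 0.5)],
                       m_z1=91.0, m_z2=28.0, m_4l=124.5))  # True
```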
IV. ASYMMETRIES For each observable $O_i$ sensitive to CP violation, the corresponding asymmetry can be defined as $A_i = \frac{N(O_i < 0) - N(O_i > 0)}{N(O_i < 0) + N(O_i > 0)}$ (4), where N is the number of events with the observable less than or greater than zero. Integrating the corresponding decay probabilities, it can be shown that these asymmetries directly probe the tensor couplings defined in the amplitude of Eq. (2) [35]. The value of $A_1$ is proportional to Im($g_4$), while $A_2$, $A_3$, $A_4$, $A_5$ and $A_6$ probe the values of Re($g_4$) and Im($g_2$), respectively. An analysis of asymmetries sensitive to CP violation for the process of Eq. (1) was performed in [35]. In this section we extend this analysis by including the effects of parton showering, hadronization, generic detector effects, and contributions from the irreducible qq → ZZ/Zγ → 4l background. Lepton interference in the final state and the contribution of two off-shell Z bosons are also taken into account. The events were generated using the production and decay model defined in Table III. The contributions from the signal and the qq → ZZ → 4l background are normalised to their respective expectations at 300 fb⁻¹. It is noted that the presence of CP-mixing leads to distortions of the distributions of the selected observables. The distributions of $O_2$ through $O_5$ become asymmetric in the presence of a real component of $g_4$. This asymmetry is especially pronounced for $O_4$ and $O_5$. As suggested in [35], the background is CP-conserving and the corresponding distributions of the observables are symmetric. The shapes of the asymmetries $A_i$ for the model presented in Table III are shown for the CP-even and CP-odd cases, given by cos α = 1 and cos α = 0, respectively. Note that, according to the structure of the Lagrangian (Eq. (3)), the CP-violating contribution is defined by the parameter $p = \tilde K_{AZZ} \tan\alpha$. This parameter thus determines the corresponding asymmetries of the angular observables. Knowing the distribution of asymmetries for a given $\tilde K_{AZZ}$, it is possible to obtain the corresponding distributions for any $\tilde K_{AZZ}$ by using the condition p = const. It is noted that, for the physics model used in this study, the observables $O_1$ and $O_6$ do not generate asymmetries visible with the current Monte Carlo sample. The asymmetric behaviour is clearly visible for $O_2$ through $O_5$. The asymmetries for $O_4$ and $O_5$ calculated using Eq. (4) may exceed 10%. In Fig. 3 asymmetry plots are given for cos α in the range from 0 to 1. For negative cos α the asymmetries change sign but keep the same shape. This property allows the asymmetry approach to be used to measure the relative phase in the amplitude of Eq. (2). The significance of the expected asymmetry can be estimated as $S = \Delta N/\sqrt{N}$, where $N = N_S + N_B$ is the total number of signal and background events and ΔN is the difference in the number of events with $O_i < 0$ and $O_i > 0$. It is also noted that $\Delta N \approx \Delta N_S$, because the ZZ background does not contribute to the asymmetries at leading order. Following the results of the simulation presented in [42], the numbers of signal and background events at √s = 14 TeV can be estimated as $N_S = 1.32\,L$ and $N_B = 0.71\,L$, respectively. Here L represents the integrated luminosity in fb⁻¹. A dataset with an integrated luminosity of 300 fb⁻¹ is expected to be collected during Run III of the LHC. Using the above expressions, one can estimate an expected asymmetry of about 9.5% to be measured with this data sample; the corresponding significance will be around two standard deviations.
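This back-of-envelope estimate is easy to reproduce. The sketch below uses only the numbers quoted in the text (N_S = 1.32 L, N_B = 0.71 L) and the significance definition S = ΔN/√N, taking the quoted 9.5% as the asymmetry measured on all selected events (an assumption about the paper's convention); it illustrates the √L scaling, not the full simulation:

```python
import math

def asymmetry_significance(lumi_fb, measured_asym=0.095,
                           ns_per_fb=1.32, nb_per_fb=0.71):
    """S = dN / sqrt(N) with dN = A * N, where A is the asymmetry measured
    on all selected events (the ZZ background itself is symmetric)."""
    n_total = (ns_per_fb + nb_per_fb) * lumi_fb
    return measured_asym * math.sqrt(n_total)

for lumi in (300.0, 3000.0):
    print(f"L = {lumi:6.0f} fb^-1 -> S = {asymmetry_significance(lumi):.1f} sigma")
```

With these inputs the 300 fb⁻¹ dataset gives roughly 2.3 standard deviations, consistent with the "around two" quoted above, and the high-luminosity dataset gains the expected factor of √10.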
The region 0.340 < cos α < 0.789 will then be excluded at 95% CL. This exclusion range can be expressed in terms of the fraction $f_{g4}$ of events [4] arising from the anomalous coupling $g_4$, Eq. (5), where the $g_i$ are the couplings of the decay vertex, and $\sigma_i$ is the cross section of the process H → ZZ → 4l corresponding to $g_i = 1$, $g_{j \neq i} = 0$. Eq. (5) can be rewritten in terms of the mixing angle α, Eq. (6), where the ratio of cross sections $\sigma_4/\sigma_1 = 0.139$ is obtained from the Monte Carlo generator. The range of the fraction of events of Eq. (5) close to 1 has already been excluded by CMS [4]. Taking this into account, the exclusion limit obtained in the presented analysis becomes $f_{g4} < 0.207$ at 300 fb⁻¹ for the model described by the Lagrangian of Eq. (3) and the parameters given in Table III. For the high-luminosity LHC, assuming the same signal and background yields per fb⁻¹ as above, the following exclusion range can be established: 0.089 < cos α < 0.968 at 95% CL. This corresponds to an upper limit $f_{g4} = 0.028$ at 3000 fb⁻¹. In the same way as above, we performed estimates for four more values of the model parameter $\tilde K_{AZZ}$. Monte Carlo samples were generated for each point of the two-dimensional model space (cos α, $\tilde K_{AZZ}$). The number of signal events was calculated as $N_S = N_S^{SM}\,\sigma/\sigma_{SM}$, assuming constant K-factors. The results are presented in Table V. These limits on $f_{g4}$ are close to the ones expected in the ATLAS [42] and CMS [4] experiments. The region of $\tilde K_{AZZ}/1.76$ above 1.4 is not considered; in this region the cross sections exceed the SM cross section by more than a factor of two. V. OBSERVABLES FIT Asymmetries alone provide limited information about the anomalous contributions to the HZZ vertex. The optimal sensitivity to these contributions can be obtained by studying the shapes of the distributions of the observables $O_i$ and their correlations. The sensitivity of individual observables to the presence of anomalous contributions to the HZZ vertex is studied by fitting the shape of these observables as a function of the mixing angle. The likelihood function of the fit is defined such that, besides the parameter of interest cos α, two nuisance parameters are introduced: the best-fitting signal strength µ and a systematic normalization uncertainty θ. The likelihood function is a product over the different final states and the bins of the specific observable that is being fitted. In each bin, the observed number of events from pseudo-data, N, is compared to the expected number of events of the model, S + B, assuming a Poissonian distribution of entries P. By varying the mixing parameter cos α of the likelihood for a given dataset, we can construct the standard log-likelihood test statistic, where α̂ denotes the mixing angle that maximises the likelihood function over the scan. The other likelihood parameters are profiled at the corresponding cos α value. The 95% exclusion is reached when −2 ln Λ(cos α) > 3.84. The definitions of the 68% CL and 95% CL exclusion regions are demonstrated in Fig. 6. Results of the scan of the mixing angle α produced with the observable fit, corresponding to an integrated luminosity of 300 fb⁻¹, are presented in Fig. 7. The results are reported for the model with $\tilde K_{AZZ} = 1.76$ and the remaining parameters as defined in Table III. The values of the mixing angle cos α used to generate the input pseudo-data are marked on the x-axis. Every bin of the injected cos α represents the null-hypothesis likelihood curve, similar to Fig. 6. The y-axis shows the cos α̂ values reconstructed in the fit.
The blue and grey dashed areas represent the 68% CL and 95% CL limits, respectively. The white area in each bin of injected cos α is excluded at 95% CL. As expected, the sensitivity to the mixing angle varies between observables, resulting in significantly different exclusion regions. The weakest exclusion is reached with $O_2$, while the strongest is reached with $O_4$. The results corresponding to an integrated luminosity of 3000 fb⁻¹ are presented in Fig. 8. Compared to 300 fb⁻¹, the 95% CL exclusion regions around the fitted cos α̂ values are significantly reduced. Assuming a pure Standard Model signal, the following exclusion limits can be set using the $O_4$ observable alone: 0 < cos α̂ < 0.708 at the 95% CL for 300 fb⁻¹ and 0 < cos α̂ < 0.908 at the 95% CL for 3000 fb⁻¹. The exclusion limits obtained from the other observables, assuming the Standard Model signal, are reported in Table VI. The exclusion limits obtained for hypothetical BSM signals can be read from Figs. 7 and 8. It is noted that, by fitting the shape of the $O_4$ observable alone, exclusion limits similar to those reported in Section IV can be obtained. Further improvements can be obtained by combining several observables in the same fit. VI. CONCLUSION In this article, studies of the tensor structure of the HZZ vertex are presented. The investigation is performed using the pp → H → ZZ → 4l process, assuming gluon-fusion production of the spin-0 resonance. The background contributions, detector resolution, and trigger and selection efficiencies expected for the LHC are taken into account. Two different approaches to detecting CP-violation effects in the HZZ vertex were used. The first approach is based on a simple counting experiment for the angular asymmetries of CP-sensitive observables. It was shown that the presence of CP-violating terms may result in angular asymmetries exceeding 10%. The 95% CL exclusion ranges for the mixing angle at different parameters of the spin-0 Higgs boson model, including the Standard Model CP-even term and the anomalous CP-odd term $g_4$, are calculated. These results are also presented in terms of the effective cross-section fraction $f_{g4}$. The obtained limits are comparable with the ATLAS and CMS projections for Run III at the LHC and the High-Luminosity LHC presented in [4,42]. The sensitivity of individual observables to the presence of anomalous contributions to the HZZ vertex was studied by fitting the shape of these observables as a function of the mixing angle. It is demonstrated that, using the single most sensitive observable, this approach gives $f_{g4}$ limits comparable with the asymmetries method and with the ATLAS and CMS projections. Compared to the method of angular asymmetries, this approach has the advantage of using the complete shape information of the CP-odd observables. It is demonstrated that some of the observables that do not generate a significant angular asymmetry in the presence of significant CP-mixing can still provide restrictive $f_{g4}$ limits when their complete shape is analysed. Combining several CP-odd observables in the same fit, or combining several angular asymmetries, would likely further improve the sensitivity to the CP-violating coupling. It is noted that a careful experimental investigation of all observables, even the non-leading ones, is important, since they probe different terms of the HZZ vertex.
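The statistical machinery of the fit (a binned Poisson likelihood scanned in cos α, with the 95% CL boundary at −2 ln Λ = 3.84) can be illustrated compactly. The sketch below uses made-up bin templates and omits the nuisance parameters µ and θ; it demonstrates only the test-statistic construction:

```python
import math

def nll(observed, expected):
    """Binned Poisson negative log-likelihood (constant terms dropped)."""
    return sum(e - o * math.log(e) for o, e in zip(observed, expected))

def template(cos_alpha):
    """Toy signal+background expectation per bin as a function of cos(alpha);
    the shapes are invented purely for illustration."""
    s, b = 50.0, 30.0
    return [b + s * (cos_alpha * w + (1 - cos_alpha) * (1 - w))
            for w in (0.1, 0.3, 0.6, 0.9)]

observed = [round(x) for x in template(1.0)]        # pseudo-data: SM-like
scan = [c / 100 for c in range(0, 101)]
nlls = [nll(observed, template(c)) for c in scan]
nll_min = min(nlls)
excluded = [c for c, v in zip(scan, nlls) if 2 * (v - nll_min) > 3.84]
print("95% CL excluded cos(alpha) values start at:", excluded[:3])
```

Profiling the nuisance parameters, as done in the paper, would replace the fixed templates by a minimisation over µ and θ at each scanned cos α before forming −2 ln Λ.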
Trehalose Induces Autophagy Against Inflammation by Activating TFEB Signaling Pathway in Human Corneal Epithelial Cells Exposed to Hyperosmotic Stress Purpose Autophagy plays an important role in balancing the inflammatory response to restore homeostasis. The aim of this study was to explore the mechanism by which trehalose suppresses inflammatory cytokines via autophagy activation in primary human corneal epithelial cells (HCECs) exposed to hyperosmotic stress. Methods An in vitro dry eye model was used in which HCECs were cultured in hyperosmolar medium with the addition of sodium chloride (NaCl). Trehalose was applied in different concentrations. The levels of TNF-α, IL-1β, IL-6, and IL-8 were detected using RT-qPCR and ELISA. Cell viability assays, immunofluorescent staining of LC3B, and western blots of Beclin1, Atg5, Atg7, LC3B, and P62 were conducted. The key factors in the upstream signaling pathways of autophagy activation were measured: P-Akt, Akt, and transcription factor EB (TFEB). Results Trehalose reduced the proinflammatory mediators TNF-α, IL-1β, IL-6, and IL-8 in primary HCECs at 450 mOsM. This effect was osmolarity dependent, and a level of 1.0% trehalose showed the most suppression. Trehalose promoted autophagosome formation and autophagic flux, as evidenced by increased production of Beclin1, Atg5, and Atg7, as well as higher LC3B I protein turnover to LC3B II, with decreased protein levels of P62/SQSTM1. The addition of 3-methyladenine blocked autophagy activation and increased the release of proinflammatory cytokines. Trehalose further activated TFEB, with translocation from the cytoplasm to the nucleus, but diminished Akt activity. Conclusions Our findings demonstrate that trehalose, functioning as an autophagy enhancer, suppresses the inflammatory response by promoting autophagic flux via TFEB activation in primary HCECs exposed to hyperosmotic stress, a process that is beneficial to dry eye. Dry eye is a multifactorial disease of the ocular surface characterized by a loss of homeostasis of the tear film; it is accompanied by ocular symptoms in which tear film instability 1 and hyperosmolarity, ocular surface inflammation and damage, 2−4 and neurosensory abnormalities play etiological roles. 5−7 The prevalence of dry eye, involving symptoms with or without signs, has been estimated to be as high as 50%, 8 a public health burden that has had a serious impact on the lives, studies, and work of many people. 9 Due to deficient tear production 10 or tear overevaporation, 11 tear film hyperosmolarity has been shown to be an important factor in the initiation of ocular surface inflammation in dry eye patients, 12,13 as well as in mouse models. 14−16 Physiotherapy strategies, artificial tears as supplementation, anti-inflammatory eye-drop therapy (including nonsteroidal anti-inflammatory drugs and glucocorticoids), and even surgeries are current therapies for various dry eye patients to improve their symptoms. 17−19 However, many of these approaches are palliative rather than disease modifying, and they do not provide adequate symptom relief or prevent disease progression. 20 Studies on the pathophysiological role of hyperosmolarity have led to new preventive and therapeutic approaches for patients with dry eye syndrome. 12,21−23 Osmoprotectants are small organic molecules that are used by many types of cells to restore isotonic cell volume and stabilize protein function, thus allowing adaptation to hyperosmolarity.
Our previous studies showed that osmoprotectants such as pterostilbene, 26 L-carnitine, 27 erythritol, and betaine 28 can suppress the inflammatory response via their uptake, accompanied by a decreasing concentration of intracellular inorganic salts in human corneal epithelial cells (HCECs) exposed to hyperosmotic stress.

The main biological purpose of trehalose, a disaccharide composed of two glucose molecules, 29 is water regulation, 30 as it seems to form a gel phase during cellular dehydration that protects the organelles and then allows rapid rehydration when a proper environment is reintroduced. 31 It can also serve a hydration function in the eyes of dry eye patients. 32 Trehalose protects corneal epithelial cells in culture from death by desiccation, 33,34 and it has been used as an effective and safe eye drop for the treatment of moderate to severe dry eye syndrome since 2002. 35

In addition, trehalose also appears to increase autophagy in a manner independent of mechanistic target of rapamycin complex 1 (mTORC1) inhibition. 36−38 Autophagy is a well-conserved self-degradative pathway 39 and is one of the main intracellular quality control systems in almost all eukaryotic cell types. 40 The major physiological role of autophagy is ensuring the maintenance of cell and tissue homeostasis by recycling macromolecules, especially in response to many stressors, including starvation, and under several types of cell stresses such as hypoxia, infection, and inflammation. 41 Moreover, autophagy also suppresses inflammation, which causes collateral cell and tissue damage. 42 Conversely, a properly mounted, focused, and transient inflammation promotes cell and tissue repair and regeneration. 43 Previous studies 44,45 reported that autophagy ensures a well-balanced inflammatory response that is accompanied by restoration of homeostasis. The ability of autophagy to prevent excessive inflammation was first observed in mice rendered deficient in Atg16L1 in 2008. 46

Catalytic inhibition of mTORC1 in cells leads to basic helix-loop-helix transcription factor EB (TFEB) activation; however, because rapamycin is quite ineffective at activating TFEB, 47−49 it is important to explore an alternative approach to activating TFEB. Here, we suggest that the serine/threonine kinase Akt (protein kinase B) could serve as an actionable target that controls TFEB activity independently of mTORC1. We have found that trehalose, an mTOR-independent autophagy inducer, promotes cytoplasm-to-nucleus translocation of TFEB while inhibiting Akt and finally reducing inflammation to protect primary HCECs from hyperosmotic stress via activation of autophagy.

Primary Cultures of HCECs and In Vitro Hyperosmotic Stress Model

Fresh human corneoscleral tissues (<72 hours after death) not suitable for clinical use from donors 18 to 65 years of age were obtained from the Lions Eye Bank of Texas (Houston, TX). Based on our previous methods, 50 primary HCECs were cultured in 12-well plates using explants from corneal limbal rims in a supplemented hormonal epidermal medium (SHEM) containing 5% fetal bovine serum (FBS). Confluent cultures at 14 to 18 days were switched to an equal volume (0.5 mL/well) of serum-free medium (SHEM without FBS) for 24 hours and then treated for 4 or 24 hours with isosmolar medium (312 mOsM) or hyperosmolar medium (450 mOsM), which was achieved by adding 69-mM sodium chloride (NaCl), with or without 1-hour prior incubation with trehalose (provided by Allergan, Dublin, Ireland). In some experiments, an autophagy inhibitor, 5-mM 3-methyladenine (3-MA; Sigma-Aldrich, St. Louis, MO, USA), was included. The osmolarity of the culture medium was measured by a vapor pressure osmometer located in the Body Fluids Clinical Chemistry Laboratory of the Houston Methodist Hospital-Texas Medical Center (Houston, TX). 3 The cells treated for 4 hours were lysed in Buffer RLT Plus from the RNeasy Plus Mini Kit (Qiagen, Hilden, Germany) for total RNA extraction. The cells treated for 24 hours were used for immunostaining or were lysed in radioimmunoprecipitation assay (RIPA) buffer (Sigma-Aldrich) for western blot analysis. The supernatant of the conditioned medium was stored at -80°C before ELISA.
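The 69-mM NaCl figure can be sanity-checked with a simple osmolarity estimate (a back-of-the-envelope calculation, assuming ideal dissociation of NaCl into two osmotically active ions, i.e., an osmotic coefficient of 1; real solutions deviate slightly):

\[
\Delta\text{Osm} \approx n_{\text{ions}} \times c_{\text{NaCl}} = 2 \times 69\ \text{mM} = 138\ \text{mOsM},
\qquad
312\ \text{mOsM} + 138\ \text{mOsM} = 450\ \text{mOsM}.
\]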
RNA Extraction, Reverse Transcription, and Quantitative Real-Time PCR

Total RNA was extracted with the Qiagen RNeasy Plus Mini Kit according to the manufacturer's instructions, quantified with a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA), and stored at -80°C before use. Quantitative real-time PCR (RT-qPCR) was performed as previously described. 51 In brief, first-strand cDNA was synthesized by reverse transcription from 2.0 μg of total RNA using Ready-To-Go You-Prime First-Strand Beads (GE Healthcare, Piscataway, NJ, USA), and qPCR was performed in a StepOnePlus Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) with a 10.0-μL reaction volume containing 4.5 μL of cDNA, 0.5 μL of gene expression assay, and 5.0 μL of TaqMan Fast Universal PCR Master Mix (Applied Biosystems). TaqMan Gene Expression Assays (Applied Biosystems) used for this study included glyceraldehyde-3-phosphate dehydrogenase (GAPDH; Hs99999905_m1), TNF-α (Hs00174128_m1), IL-1β (Hs01555413_m1), IL-6 (Hs00174131_m1), and IL-8 (Hs00174103_m1). The thermocycler parameters were 50°C for 2 minutes and 95°C for 10 minutes, followed by 40 cycles of 95°C for 15 seconds and 60°C for 1 minute. A nontemplate control was included to evaluate DNA contamination. The results were analyzed by the comparative threshold cycle (Ct) method and normalized by GAPDH as an internal control.

Enzyme-Linked Immunosorbent Assay

Double-sandwich ELISA for human TNF-α, IL-1β, IL-6, and IL-8 (BioLegend, San Diego, CA, USA) was performed to determine the protein concentration of the pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) and chemokine (IL-8) in the conditioned medium from different treatments according to the manufacturer's protocol and our previous publication. 52 Absorbance was read at a wavelength of 450 nm with a reference wavelength of 570 nm using Infinite M200 PRO multimode microplate readers (Tecan US, Morrisville, NC, USA).

Cytoplasmic and Nuclear Protein Extraction

Cytosolic and nuclear protein was fractionated with NE-PER Nuclear and Cytoplasmic Extraction Reagents (Pierce Biotechnology) according to the manufacturer's instructions.

Immunofluorescent Staining

HCEC cultures on eight-chamber slides were fixed with methanol at -20°C for 5 minutes. Indirect immunofluorescent staining was performed using our previous methods. 53 Primary rabbit polyclonal antibody against human LC3B (Novus Biologicals) was used. Alexa Fluor 488-conjugated AffiniPure Goat anti-Rabbit IgG (H+L) secondary antibody (Jackson ImmunoResearch Laboratories, West Grove, PA) was applied, and 4′,6-diamidino-2-phenylindole (DAPI; Invitrogen, Carlsbad, CA, USA) was used for nuclear counterstaining. Secondary antibody was applied alone as a negative control and compared to isotype goat IgG. The staining was photographed with a Nikon A1 Confocal Laser Microscope System (Nikon Instruments, Melville, NY, USA) and processed with ImageJ software (National Institutes of Health, Bethesda, MD, USA).
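To make the comparative threshold cycle (Ct) calculation from the RT-qPCR section above concrete, a minimal Python sketch of the standard 2^-ΔΔCt arithmetic follows. The Ct values and the example comparison are hypothetical placeholders, not data from this study:

# Comparative threshold cycle (2^-delta-delta-Ct) method: normalize each
# sample's target-gene Ct to GAPDH, subtract the calibrator's normalized Ct,
# and exponentiate. All Ct values below are invented for illustration.

def fold_change(ct_target, ct_gapdh, ct_target_cal, ct_gapdh_cal):
    """Relative expression of a target gene versus a calibrator condition,
    normalized to GAPDH as the internal control."""
    delta_ct = ct_target - ct_gapdh              # treated sample, normalized
    delta_ct_cal = ct_target_cal - ct_gapdh_cal  # calibrator, normalized
    ddct = delta_ct - delta_ct_cal               # delta-delta Ct
    return 2 ** (-ddct)

# Hypothetical example: TNF-alpha in 450-mOsM medium relative to the
# 312-mOsM isosmolar calibrator.
print(fold_change(ct_target=24.1, ct_gapdh=17.9,
                  ct_target_cal=27.3, ct_gapdh_cal=18.0))  # about 8.6-fold up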
MTT Assay

HCEC cultures were seeded at a density of 2 × 10⁴ cells/mL onto 96-well plates with 200 μL of culture medium per well overnight and were treated for 24 hours with isosmolar medium (312 mOsM) or hyperosmolar medium (450 mOsM), with or without 1-hour prior incubation with trehalose. The proliferative activity of the cells was quantitatively determined by a Cell Growth Determination Kit, MTT based (Sigma-Aldrich). The optical density of absorbance at 570 nm was measured using Infinite M200 PRO multimode microplate readers (Tecan US). 54

Statistical Analysis

Student's t-test was used to compare differences between two groups. One-way ANOVA was used to make comparisons among three or more groups, followed by Dunnett's post hoc test. P < 0.05 was considered statistically significant.

Trehalose Increases Osmolarity in a Concentration-Dependent Manner

Higher concentrations (2.0%-5.0%) of trehalose appeared to lose the protective effects in a concentration-dependent manner; in fact, 5.0% trehalose may have stimulated higher expression and production of these inflammatory-related markers. Further, we found that osmolarity-dependent inhibition of inflammatory-related markers by trehalose was due to its aggravated effect on SHEM osmolarity. Trehalose was observed to increase osmolarity in deionized water, SHEM, and hyperosmolar medium in a concentration-dependent manner. Osmolarity in deionized water increased with higher concentrations of trehalose, with an especially great increase being observed at the 5% concentration (Fig. 2A). Similarly, osmolarity increased in SHEM and hyperosmolar medium when different concentrations of trehalose were added (Figs. 2B, 2C); 1.0% trehalose slightly increased osmolarity in SHEM (from 312 mOsM to 335 mOsM) and in hyperosmolar medium (from 450 mOsM to 473 mOsM).

[Displaced figure caption (fragment):] ...H) were significantly stimulated at mRNA and protein levels in primary HCECs exposed to 450-mOsM hyperosmotic medium. (A, C, E, G) RT-qPCR analyses. The relative fold differences in mRNA expression were determined using 312-mOsM isosmolar medium as an internal control. (B, D, F, H) ELISA analyses. These markers were largely suppressed by trehalose at 0.5% to 1.5%, but 1.0% showed the most inhibition. Data are mean ± SD of six independent samples. * P < 0.05, compared with 450-mOsM hyperosmotic medium without prior incubation with trehalose.

Trehalose Enhances Autophagosome Formation and Autophagic Flux in HCECs Exposed to Hyperosmotic Stress

In the baseline condition, the production of autophagy-related proteins, such as Beclin1, Atg5, and Atg7, was not stimulated by 1.0% trehalose in primary HCECs cultured in SHEM (312-mOsM isosmolar medium) (Fig. 3A). However, when HCECs were exposed to hypertonic medium (450 mOsM), 1% trehalose significantly promoted autophagosome formation, as evidenced by the increased production of Beclin1, Atg5, Atg7, and LC3B. In addition to the higher turnover of LC3B I protein to form LC3B II, p62 protein (also known as SQSTM1) was observed to decrease significantly, indicating that trehalose enhances autophagic flux (Fig. 3B). This pattern of autophagy activation, with the increased production and turnover of LC3B II as well as the decreased p62 levels, became more pronounced in primary HCECs exposed to hyperosmotic stress with the addition of 1.0% trehalose when compared to HCECs without trehalose (n = 6) (Figs. 3C-3H). The immunofluorescent staining showed that the percentage of cells containing LC3B puncta significantly increased with the 1.0% trehalose treatment in primary HCECs exposed to hyperosmotic stress. The number of LC3B puncta cells was greater compared to HCECs not receiving 1.0% trehalose and unexposed HCECs, which had the least: control versus hyperosmolarity, P < 0.05; control versus trehalose, P < 0.05; hyperosmolarity versus trehalose, P < 0.05 (n = 6) (Figs. 4A, 4B). Additionally, LC3B puncta were more abundant in primary HCECs with the 1.0% trehalose treatment compared to those without the 1.0% trehalose treatment and unexposed HCECs, which again had the least (n = 3) (Fig. 4C). As evaluated by the MTT cell survival assay, 1.0% trehalose effectively restored cell viability in conditions of hyperosmotic stress (hyperosmolarity vs. trehalose, P < 0.05) (Fig. 4D), which otherwise induced cell damage with lower survival rates.
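The group comparisons described under Statistical Analysis above can be scripted in a few lines. The sketch below uses synthetic placeholder measurements (not study data) and scipy.stats.dunnett, which requires SciPy 1.11 or later:

# Two-group comparison via Student's t-test; three or more groups via
# one-way ANOVA followed by Dunnett's post hoc test against a control.
# All values below are synthetic placeholders, not data from this study.
import numpy as np
from scipy import stats

iso   = np.array([1.00, 0.95, 1.08, 0.97, 1.02, 0.99])  # 312 mOsM
hyper = np.array([3.10, 2.80, 3.45, 2.95, 3.20, 3.05])  # 450 mOsM
treh  = np.array([1.60, 1.45, 1.80, 1.55, 1.70, 1.50])  # 450 mOsM + 1.0% trehalose

# Two groups: Student's t-test.
t, p = stats.ttest_ind(iso, hyper)
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

# Three or more groups: one-way ANOVA, then Dunnett's test vs. the control.
f, p_anova = stats.f_oneway(iso, hyper, treh)
dunnett_res = stats.dunnett(hyper, treh, control=iso)  # SciPy >= 1.11
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")
print(f"Dunnett p-values (hyper, treh vs. iso): {dunnett_res.pvalue}")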
Trehalose-Induced TFEB Nuclear Translocation and Activation of the Autophagy-Lysosome Pathway Is Mediated by Akt Inhibition

To investigate the mechanism by which trehalose modulates autophagy, we investigated whether TFEB, a known autophagy activator, plays a role. We found that trehalose was able to induce TFEB translocation from the cytoplasm to the nucleus, as indicated by the upregulated level of nuclear TFEB after trehalose treatment as evaluated by western blot (Fig. 5A). Trehalose also enhanced autophagic flux, as indicated by P62/SQSTM1 and LC3B II turnover: control versus hyperosmolarity, P < 0.05; control versus trehalose, P < 0.05; hyperosmolarity versus trehalose, P < 0.05 (n = 6) (Fig. 5C). Finally, we sought to investigate the mechanism by which trehalose signals for TFEB nuclear translocation. Interestingly, we found that trehalose inhibited Akt substrates and pathways, and the phosphorylation of Akt was reduced in primary HCECs exposed to hyperosmotic stress (hyperosmolarity vs. trehalose, P < 0.05; n = 6) (Figs. 5A, 5B), as evaluated by western blot. The total level of Akt did not differ across these groups, suggesting that Akt signaling is inhibited with trehalose. Together, these results suggest that trehalose may lead to increased TFEB by reducing Akt signaling and further activation of autophagy.

Trehalose Induces Autophagy Against Inflammation in HCECs Exposed to Hyperosmotic Stress

To establish a direct link between autophagy induction by trehalose and a decrease in inflammatory cytokine release, 5 mM of 3-MA was applied as an autophagy inhibitor by blocking autophagosome formation. As evaluated by western blot, the activation of autophagy induced by trehalose was inhibited by 3-MA. LC3B induction and the high turnover of LC3B II were significantly suppressed, whereas the decreased levels of P62/SQSTM1 were largely restored by 3-MA in primary HCECs exposed to hyperosmotic stress (Figs. 6A-6D).
As a consequence, the release of inflammatory cytokines suppressed by trehalose was also dramatically stimulated by 3-MA, as evidenced by much higher protein levels of TNF-α (103.07 ± 30.64 pg/mL), IL-1β (90.10 ± 16.06 pg/mL), IL-6 (6.98 ± 1.84 ng/mL), and IL-8 (9.13 ± 0.72 ng/mL) in the medium supernatants of HCECs treated with NaCl, trehalose, and 3-MA, compared to those without 3-MA (Figs. 6E-6H). The results further suggest that autophagy activation by trehalose suppresses the production of proinflammatory cytokines in HCECs under hyperosmotic stress.

DISCUSSION

Trehalose as an organic osmoprotectant has been proven to play protective roles against the production of proinflammatory mediators in primary HCECs exposed to hyperosmotic stress, as well as in dry eye patients and animal models. 33−35 Trehalose is a natural disaccharide used in food production that recently was found to be capable of enhancing autophagic activity in various mammalian cells, including epithelial cells. 37 However, the pathogenesis of trehalose-induced autophagy is not well understood in an in vitro dry eye model. Comprehensive results from the present study show that trehalose stimulates the cytoplasm-to-nucleus translocation of TFEB and the expression of genes regulating autophagy by inhibition of Akt. The activation of autophagy protects the HCECs from hyperosmolarity-induced cell damage, a potential mechanism underlying rises in pro-inflammatory cytokine and chemokine expression in dry eye disease and in an in vitro model of this condition.

[Figure 3 caption (fragment):] The production of Beclin1, Atg5, and Atg7 was not stimulated by 1% trehalose in primary HCECs cultured in SHEM (312-mOsM isosmolar medium), with β-actin as the internal control. (B) Western blot. It was found that 1.0% trehalose enhanced the activation of autophagy, as evidenced by the increased protein levels of autophagy-related genes (Beclin1, Atg5, Atg7, and LC3B), the higher turnover of LC3B II, and the decreased protein levels of P62/SQSTM1 in primary HCECs exposed to hyperosmotic stress, with β-actin as the internal control. (C-H) Quantitative analysis of relative expression levels of (C) Beclin1, (D) Atg5, (E) Atg7, (F) LC3B, (G) LC3B II/LC3B I, and (H) P62/SQSTM1. The data are presented as mean ± SD of six independent experiments. * P < 0.05, compared within groups.

[Figure 4 caption:] Trehalose induced autophagosome formation in HCECs exposed to hyperosmotic stress. (A) Representative immunofluorescence images of the punctate staining of LC3B in primary HCECs exposed to 450-mOsM hyperosmotic medium with or without prior incubation with 1.0% trehalose. (B) Percentage of autophagic cells. The fields were randomly selected, and at least 100 cells for each sample were analyzed (n = 3). (C) Average puncta per autophagic cell. Three cells were randomly chosen for analysis. (D) Cell viability analyzed by MTT assay. The cell viability was restored in primary HCECs exposed to 450-mOsM hyperosmotic medium with prior incubation with 1.0% trehalose. The data are presented as mean ± SD. * P < 0.05, compared within groups.

Many studies, including ours, have reported that hyperosmotic stress can elicit an inflammatory response through different proinflammatory cytokines and chemokines. 12,21−23 An increase in these molecules has been found in the HCEC culture model and the in vivo murine dry eye model, as well as in the tear fluid of dry eye patients. 52,55
This study confirms previous findings and further reveals that the addition of trehalose was found to reduce secreted cytokine levels in primary HCECs exposed to hyperosmotic stress, suggesting that trehalose may be acting on the intracellular signaling pathways, thereby reducing the levels of inflammatory markers. 34,56 However, higher concentrations (2.0%-5.0%) of trehalose appeared to lose the protective effects in a concentration-dependent manner, and 5.0% trehalose may stimulate higher expression of these inflammatory markers, as the accumulation of trehalose increased osmolarity in deionized water, SHEM, and the hyperosmolar medium. The optimum concentration, then, was 1.0% trehalose.

Autophagy activity is linked to inflammation, and this interaction may be both inductive and suppressive and associated with the pathogenesis of several inflammatory diseases. 43 It is established that Th1 cytokines, such as TNF-α, IL-1, and IL-6, exhibit effects of autophagy inducement, 57 which could be related to the significant increase of autophagy activation in HCECs under hyperosmotic stress. Our data using primary HCECs exposed to hyperosmotic stress as an in vitro model of dry eye demonstrate that trehalose promoted autophagosome formation and autophagic flux, as evidenced by enhanced production of the autophagy-related protein markers Beclin1, Atg5, Atg7, and LC3B, as well as by LC3B I protein turnover to form LC3B II; protein levels of P62 and, in turn, inflammatory cytokine secretion were reduced. However, the addition of 3-MA blocked autophagosome formation and autophagic flux and increased the release of proinflammatory cytokines in HCECs exposed to hyperosmotic stress (Fig. 6). These findings establish a direct link between autophagy induction by trehalose and decreased inflammatory cytokine release. Further, the increased average number of LC3B puncta per cell in primary HCECs and the greatly restored cell viability under hyperosmotic stress, both attributable to the addition of trehalose, indicate the protective role of trehalose. Compared to the 68.3 ± 11.1% cell survival rate in the hyperosmolarity condition, trehalose was able to effectively prevent cell death and restore cell viability to 89.7 ± 4.3%. The prevention of cell death by trehalose may be attributed to the enhanced autolysosome formation and autophagic flux. Autophagy is a pivotal intracellular process by which cellular macromolecules are degraded in response to various stimuli. A failure in the degradation of autophagic substrates such as impaired organelles and protein aggregates leads to their accumulation and ultimately the cell death characteristic of many stress conditions and degenerative diseases. 37,43 In addition, as discussed above, autophagy activation reduced the production of inflammatory cytokines, another link to cell survival.

This study suggests that Akt control of TFEB activity is an actionable target that has potential relevance for the treatment of dry eye disease. Akt is a member of the AGC serine/threonine kinase family and plays a critical role in cell survival and the inhibition of apoptosis. 58

FIGURE 6. Trehalose induced autophagy against inflammation in HCECs exposed to hyperosmotic stress. (A) Western blot. The activation of autophagy induced by 1% trehalose was inhibited by 5-mM 3-MA, as evidenced by a decrease of LC3B and an increase of P62/SQSTM1 in primary HCECs exposed to hyperosmotic stress, with β-actin as internal control.
(B-D) Quantitative analysis of relative expression levels of (B) LC3B, (C) LC3B II/LC3B I, and (D) P62/SQSTM1. (E-H) ELISA analyses. The decreased release of (E) TNF-α, (F) IL-1β, (G) IL-6, and (H) IL-8 by trehalose was blocked by 5-mM 3-MA in HCECs exposed to 450-mOsM hyperosmotic medium. The data are presented as mean ± SD of four independent experiments. * P < 0.05, compared within groups.

Abnormal activation of Akt may occur through Akt mutation or dysregulation of upstream signaling pathways, and it is an important driving force of pathogen invasion. 59 Interestingly, previous innovative studies have indicated that Akt regulates macroautophagy, 60 a process whereby cellular material is enclosed into autophagosomes that fuse with lysosomes, creating autolysosomes. Although it is unclear how trehalose modulates Akt activation, the discovery that Akt regulates lysosomal function through TFEB is crucial to characterizing the role of Akt in autophagy pathways and offers a new perspective for understanding the cellular processes that are affected by the clinical use of Akt inhibitors. 61−63 We found that trehalose induced inactivation of Akt and TFEB nuclear localization and increased activation of TFEB function in primary HCECs exposed to hyperosmotic stress in vitro. Subsequent induction of autophagy, perhaps to heal lysosomal damage, might lead to activation of the autophagy-lysosome pathway and further suppression of inflammation to protect HCECs.

Our findings are consistent with previous studies using primary HCECs but contrast somewhat with results by Panigrahi and colleagues, 56 who reported autophagy induction by trehalose via increased phosphorylation of Akt with higher protein levels of P62 in corneal epithelial cells exposed to desiccation stress. This discrepancy regarding the mechanism of trehalose may be due to the utilization of different cell types and different stress models. Panigrahi et al. 56 used an SV40 large T antigen immortalized HCE cell line (HCE-T), which may respond to trehalose differently than our primary HCECs. Their desiccation stress model was created by air-drying HCE-T cultures for 10 minutes at room temperature with 40% humidity after the medium was completely aspirated, which is a severely dry condition that could cause cell damage or even death. Our hyperosmotic stress model is relatively gentle, and the primary HCECs may respond to trehalose more physiologically.

In conclusion, our findings reveal that trehalose induces autophagy against inflammation by activation of TFEB via Akt inhibition in primary HCECs exposed to hyperosmotic stress. Follow-up studies are necessary to examine the clinical effect of trehalose application on the activation of autophagy or the consequences of its presence on the ocular surface.
Improving Sexual Assault and Sexual Harassment Prevention from the Bottom-up: a Pilot of Getting To Outcomes in the US Military

While the Department of Defense (DoD) has given increased attention and priority to preventing sexual assault and sexual harassment (SA/SH), it remains a problem. To build its prevention capacity, DoD piloted Getting To Outcomes® (GTO®) from 2019 to 2022 at 10 military installations. GTO is an evidence-based planning and implementation support that has been used in many civilian contexts but has only recently been adapted for military SA/SH. The purpose of this study was to describe GTO use, identify its benefits and challenges, and discuss lessons the GTO effort yielded for prevention more broadly, using a framework of organizational and program-level capacities needed for successful prevention in the military context called the Prevention Evaluation Framework (PEF). GTO was piloted with 10 military installations ("sites") representing all Military Services, plus the Coast Guard and National Guard. GTO comprises a written guide, training, and ongoing coaching. The pilot's goal was for each site to use GTO to implement a SA/SH prevention program twice. Participants from each site were interviewed, and data were collected on GTO steps completed, whether GTO spurred new evaluation activities and collaborations, and the degree of leadership support for GTO. Most sites completed all GTO steps at least once. Interviews showed that DoD participants believe GTO improved prevention understanding, planning, and evaluation capacity; strengthened confidence in chosen programs; and helped sites tailor programs to the military context. Barriers were the complexity of GTO, DoD personnel turnover, and the disruption that the COVID pandemic caused in sexual assault prevention program delivery. Many respondents were unsure if they would continue all of GTO after the coaching ended, but many believed they would continue at least some parts. According to the PEF, the GTO pilot revealed several additional prevention system gaps (e.g., need for leadership support) and changes needed to GTO (e.g., stronger leader and champion engagement) to support quality prevention. The military and other large organizations will need to focus on these issues to ensure prevention implementation and evaluation are conducted with quality.
Introduction

While the Department of Defense (DoD) has given increased attention and priority to preventing sexual assault and sexual harassment (SA/SH), it remains a problem for the US military. DoD's epidemiological estimates among active-duty Service Members (SMs) in 2021 show 8.4% of women (about 19,000) and 1.5% of men (about 17,000) experienced unwanted sexual contact in the past year. While a somewhat different metric for contact was used in 2021, which prevents comparison to 2018, numerically these rates appear higher in 2021. Estimated rates of sexual harassment in 2021 among active-duty women increased from 2018, from 24 to 29%. The 2021 rate for men was similar to 2018, about 7%. Female victims are at increased risk of PTSD and other mental health disorders, attempted suicide, demotion in rank, and premature attrition from service (Rosellini et al., 2017). PTSD and mental health issues are nearly entirely explained by increased exposure to SA/SH (Jaycox et al., 2022). Though less often reported, male victims also face similar negative outcomes (Matthews et al., 2018; Millegan et al., 2016), and both male and female SA/SH victims are more likely to voluntarily leave the military than SMs who do not experience these crimes (Morral et al., 2021). DoD recognizes that comprehensive primary prevention is needed to stop SA/SH before it occurs.

DoD Faces Multiple Challenges to Preventing SA/SH in the Military

DoD developed the Prevention Plan of Action (PPoA), first released in 2019 and updated in 2022. Drawing upon years of research on SA/SH prevention and implementation science, the PPoA outlines the requirements for a prevention system across several domains-e.g., infrastructure, leadership, and collaborations-for how each installation should conduct SA/SH prevention. In 2020, to create a method by which to measure the elements of the PPoA, we developed the Prevention Evaluation Framework, an assessment tool describing what prevention should look like at military installations and military service academies (Acosta et al., 2022). The tool, based on literature and a panel of experts, describes organizational and program-level capacities needed to support military SA/SH prevention efforts, operationalized into 36 items in eight domains (see Table 1). Assessments using the PEF found that the implementation of effective prevention in DoD has been challenged by the organizational complexity of the Department of Defense and lack of prevention capacity at all levels (Acosta et al., 2021). For example, until very recently, DoD has had few designated positions whose sole function was to implement and evaluate SA/SH prevention activities. Furthermore, DoD's infrastructure for prevention-e.g., leadership support, accountability, systematic evaluation, coordinated activities-has been underdeveloped, and most prevention activities focus on building awareness rather than skills-e.g., brief lecture-based presentations (Office of Force Resiliency, 2022). Finally, another challenge is that SA/SH have many risk factors-e.g., perceptions of what peers believe is acceptable behavior, willingness to intervene on behalf of a potential victim, and alcohol and drug use (Tharp et al., 2013)-that can vary across the Department. Thus, multiple organizational capacities are needed both at the program (e.g., matching programming to documented need, continuously evaluating) and system (e.g., operating with accountability up and down the chain of command, providing leadership support) levels (Acosta et al., 2022) to address these challenges.
Efforts to Build Capacity for Quality Prevention of SA/SH in the Military

DoD has taken multiple steps to build SA/SH prevention capacity, developing policies and guidance to support changes made at service headquarters (from the top-down), while simultaneously supporting capacity-building at individual installations (from the bottom-up).

Top-Down Efforts

In 2020, DoD issued the Policy on Integrated Primary Prevention of Self-Directed and Prohibited Abuse or Harm (DoDI 6400.09, 2020), which aims to establish a DoD-wide prevention system that makes data-informed decisions and implements research-based policies and interventions. In 2021, Secretary of Defense Austin directed multiple actions to build capacity of the prevention workforce, including establishing the Independent Review Commission on Sexual Assault in the Military (IRC), which conducted an independent, impartial assessment and made several recommendations about improving SA/SH prevention (Rosenthal, 2021). Specifically, the IRC recommended establishing a dedicated prevention workforce and equipping leaders with tools and competencies for prevention. In 2022, DoD started an effort to hire over 2000 new prevention personnel throughout the entire Department over the next 6 years. As the Department implements the Secretary …

Bottom-Up Efforts

The Department has also undertaken multiple efforts to build capacity from the ground up. This involved delivering webinars to the prevention workforce, developing a prevention workforce training, and developing and evaluating tools for the prevention workforce to use in prevention planning, implementation, and evaluation. The latter involved piloting Getting To Outcomes® (GTO®) from 2019 to 2022 at 10 military installations. GTO is an evidence-based planning and implementation support process that has been used in many civilian contexts but has only recently been adapted for military SA/SH (Chinman et al., 2021; Ebener et al., 2022). The GTO pilot-this article's focus-represents the first systematic effort by DoD to build SA/SH prevention capacity.

Purpose and Contributions

The purpose of this study was to (1) describe how GTO was used at these installations; (2) identify benefits and challenges from using GTO and to what extent GTO was able to overcome those challenges and build prevention capacity; and (3) discuss lessons the GTO effort yielded for prevention more broadly. The contributions to prevention science are the lessons learned from employing a bottom-up capacity-building intervention in a military context, which represents a very large, and traditionally top-down, organizational structure. To our knowledge, there has not been such an effort to build prevention capacity in the military using an implementation support like GTO before.
To date, GTO has generally been evaluated in organizationally flat, community-based, and low-resource settings implementing youth prevention programming (e.g., Boys & Girls Clubs). In previous trials comparing organizations randomized to implement a prevention evidence-based program (EBP) with youth on their own or to implement the EBP with GTO, organizations using GTO implemented the EBP with higher fidelity (Chinman et al., 2016, 2018a, b), demonstrated better outcomes among participating youth (Chinman et al., 2018a, b), and were more likely to sustain the EBP after the end of the GTO support (Acosta et al., 2020). GTO sites made these gains despite facing organizational barriers such as a poor implementation climate (Cannon et al., 2019). In contrast, this study advances our understanding of the barriers and facilitators of using an implementation support like GTO in a large system, one of the first to do so.

Pilot Design, Participants, and Setting

As part of DoD's effort to improve prevention capacity, we partnered with DoD's Sexual Assault Prevention and Response Office (SAPRO) to apply GTO to 10 military installations (referred to as "sites") representing all Services, the Coast Guard, and the National Guard (sites are not named for confidentiality). Because this was a pilot, the focus was on understanding the facilitators and barriers and not on capturing impacts on individual participants in the programs run by each site. The sites-installations with hundreds to thousands of active-duty SMs and DoD civilian employees-varied widely and represent varied paygrades (from junior enlisted to senior leaders) and populations (age 18 to 50+) of the military organization.

In 2018, SAPRO announced the availability of the opportunity for installations to participate, and sites volunteered. Each site prioritized a different aspect of SA/SH prevention, planned different prevention activities, and convened a small GTO team (4-8 SMs and DoD civilian employees). Each installation was assigned two GTO "coaches" from a pool of 10 masters- and doctoral-level prevention researchers trained in providing GTO coaching.

Sites participated in the pilot from 2019 to the middle of 2022 (two sites dropped out of the project after a few months). The pilot's goal was for each site to complete two "GTO cycles." Cycle 1 began with a needs assessment (GTO Step 1) and several planning, implementation, and evaluation activities. The interventions chosen were a mix of previously established evidence-based programs (e.g., Parent-Based Intervention; Kuntsche & Kuntsche, 2016), those that were locally created but based on well-documented models in SA/SH prevention (e.g., bystander interventions; Hoxmeier & Casey, 2022), and others that were locally created based on promising, but previously untested, approaches. See Table 2 for a list of the interventions by site. After the intervention was delivered to one cohort of SMs, the GTO coaches helped analyze the data and led the GTO site team through a quality improvement process (Step 9) to revise the intervention for the next cohort of SMs (Cycle 2). Sites were then asked to consider how to sustain the intervention (Step 10). Table 3 shows how each site performed the various practices in each GTO step to implement their chosen SA/SH program.
Getting To Outcomes for Sexual Assault and Sexual Harassment Prevention in the Military

GTO is an implementation support process comprising 10 steps that any organization should progress through (six for determining needs and planning; three for evaluation and improvement; one for sustainment); it then builds capacity with written tools, training, and ongoing coaching to help those organizations complete those steps with quality, as applied to their intervention. As defined by the Expert Recommendations for Implementing Change (ERIC; Powell et al., 2015), GTO combines strategies of training and educating stakeholders, providing facilitation, multiple evaluative and iterative strategies, adapting and tailoring to the context, and supporting clinicians/practitioners (Chinman et al., 2013; Chinman et al., 2008a, b). GTO's capacity-building is rooted in social cognitive theories of behavioral change (Ajzen & Fishbein, 1977; Bandura, 2004; Fishbein & Ajzen, 1974) in which practitioners are asked to be active learners-i.e., GTO establishes expectations and gives opportunities and guidance for practitioners to carry out for themselves the best practices that GTO specifies across the 10 steps. GTO employs multiple approaches advocated by Beidas et al. (2022) to accommodate site characteristics, namely empowering sites to choose interventions that fit their needs (GTO Step 3, Table 2), offering a menu of support options (assisting with data analysis, briefing senior leaders), and using facilitation for GTO's coaching model in order to be adaptive to site needs. GTO has been applied to several content domains including the prevention of teen pregnancy, underage drinking, youth drug use, and Veteran homelessness (Chinman et al., 2004; Ebener et al., 2017; Hannah G. et al., 2011; Imm et al., 2007; Mattox et al., 2013), and was tailored to the military context in multiple ways. We adapted the GTO manual (and the online, streamlined version)-i.e., defining SA/SH drivers, describing specific evidence-based SA/SH prevention programs, and presenting evaluation measures relevant for SA/SH prevention (e.g., intentions to intervene in a risky situation) (Chinman et al., 2021; Ebener et al., 2022). The manual was based on a literature review and input from experts in prevention of SA/SH and military SA/SH. The manual also presented data on SA/SH prevalence from DoD's bi-annual Workplace Gender Relations Assessment (see Table 3 for a list of GTO information applied to SA/SH in the military, by step).

[Displaced Table 3 entry:] Each site considered various sustainability factors such as securing adequate funding, staffing, and installation command buy-in, to make it more likely that their SA/SH program would be sustained.

Each site's GTO team received a full-day training to introduce them to the 10 steps, familiarize them with the GTO manual, and practice using GTO tools by working through a fictional example of a military installation trying to improve its prevention. After the training, there were bi-weekly coaching meetings at each site to complete the steps.
Data Collection

GTO Coach Self-Reported Progress Form

GTO coaches from each site were asked to complete a short, structured form asking for site name, prevention program name (and whether the program's start was facilitated by GTO or pre-dated GTO), GTO steps completed, whether the site used GTO to conduct new evaluation activities, and whether GTO spurred any new collaborations. In previous studies, GTO has demonstrated success in helping practitioners start new program evaluations and forge new partnerships (Chinman et al., 2013; Chinman et al., 2008a, b), and these are key domains in the Prevention Evaluation Framework described above. This self-report was checked by two researchers by comparing the self-reports with online document registries containing completed GTO tools and training materials for each site.

Site Interviews

We utilized a multiple case study approach (Merriam, 1998; Yin, 2009), purposively sampling and recruiting respondents most closely involved in GTO at each site-i.e., GTO Site Team members. Using a common interview guide, GTO team members were interviewed about the following: what did and did not work in utilizing GTO to plan, implement, and evaluate prevention activities; the likelihood of sustainability and compatibility of GTO with the military; leadership support, communication, and collaboration across functional areas or helping agencies; personnel support, capacity, and turnover; the role of the site champion (i.e., an SM or DoD civilian employee committed to shepherding the GTO prevention processes from start to finish); site culture; and recommendations for supporting sustainability of GTO. The interview protocol was developed by the authors, who have extensive expertise with GTO, in collaboration with SAPRO. The team also included DoD GTO coaches who reviewed questions for relevancy. Interviews were conducted by three researchers who were not involved in coaching. We conducted 21 semi-structured discussions with leaders and team members at nine sites. One site was not available. Two sites ended their GTO participation early and were interviewed soon after they ceased participation (mid 2020), and the remaining eight sites were interviewed in mid to late 2021 in eight group discussions with key site participants (n = 14 individuals across all group discussions), documented through detailed notetaking. Six months later, we revisited the eight sites, conducting an additional 13 individual discussions, which were audio recorded and transcribed. We removed all identifying information in documentation and quotations presented in this report to preserve the anonymity of the participants.
Data Analysis

GTO Coaches Self-Reported Progress Form

Forms were synthesized into Table 2 to systematically describe each site's progress in utilizing GTO-e.g., number of GTO cycles and steps completed. A GTO step is "completed" when the tools from the GTO manual are finished and of sufficient quality as deemed by the GTO coach. Cycle 1 was considered complete when a site proceeded through GTO Steps 1-6, conducted the intervention with a cohort of SMs, and then used evaluation data to complete Steps 7-9. Cycle 2 was considered complete when the site conducted the intervention, collected data, and engaged in quality improvement a second time, followed by GTO's sustainment step (Step 10). Each coach's responses about GTO's impact on the program, the process and outcome evaluation of that program, and new partnerships were coded by the lead author and double-coded by the second author, both experts in GTO. The coding scheme for all four items was as follows: GTO led to the creation of the program/evaluation/partnership; GTO was used to revise an existing program/evaluation/partnership; or the program/evaluation/partnership pre-dated the use of GTO. Also, for certain items at certain sites, there was no activity, meaning that the site did not progress through the relevant GTO steps related to programs, evaluations, or partnerships.

Site Interviews

All transcripts and notes were coded in Dedoose qualitative data analysis software. We identified a deductive coding scheme including descriptive (e.g., Services), thematic (e.g., leadership support), and analytic (e.g., level of sustainability) codes (Saldana, 2021). To assess inter-rater reliability across coders, two researchers coded 10% of interviews and compared the level of agreement in coding at 75% agreement or above across the different codes (O'Connor & Joffe, 2020). Any disagreements in coding were discussed and adjudicated to ensure intercoder reliability. These two researchers coded the remainder of the transcripts. Finally, we conducted a thematic analysis using analytic memoing to identify common patterns in the benefits and challenges of GTO and the likelihood of continued use of GTO (Braun & Clarke, 2006). We then summarized coded data at the site level utilizing a cross-case (i.e., site) meta-matrix to examine patterns between the benefits, challenges, and sustainability of GTO and various aspects of organizational structure and dynamics (Bush-Mecenas & Marsh, 2018; Miles et al., 2018). To strengthen the validity of our findings, we triangulated data across interview transcripts and notes as well as GTO documentation, where possible (Denzin, 1978; Patton, 1999). In our analysis process, we attempted to craft rival hypotheses (alternative theories, research bias, threats to validity) and real-world rival hypotheses (alternative theories, implementation issues) to test and validate our analyses (Yin, 2013).

Given the importance of leadership support, we specifically coded this domain, by site, as low (= 1), medium (= 2), or high (= 3) (see Table 2). High included senior leadership being aware of GTO and projects from the beginning-e.g., asking what step the GTO Team was on or tracking project outcomes. Medium was some general awareness, but less supportive-e.g., a leader expressing support for the prevention program but being unaware of the connection to GTO. Low was where respondents experienced limited to no buy-in or support from senior leaders. The leadership support data were also added to Table 2.
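As an illustration of the inter-rater agreement check described above, the short Python sketch below computes simple percent agreement per code against the 75% threshold. The coding decisions are invented placeholders; the study used Dedoose, and the exact computation beyond the threshold is not specified:

# Percent agreement between two coders across a shared set of excerpts,
# flagging any code that falls below the 75% threshold used in this study.
# All coding decisions below are invented for illustration.

coder_a = {"leadership support": [1, 1, 0, 1, 1, 0, 1, 1],
           "sustainability":     [1, 0, 0, 1, 1, 1, 0, 1]}
coder_b = {"leadership support": [1, 1, 0, 1, 0, 0, 1, 1],
           "sustainability":     [1, 0, 1, 1, 1, 1, 0, 1]}

THRESHOLD = 0.75
for code, ratings_a in coder_a.items():
    ratings_b = coder_b[code]
    agreement = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    status = "ok" if agreement >= THRESHOLD else "discuss and adjudicate"
    print(f"{code}: {agreement:.0%} agreement -> {status}")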
GTO Progress and Leadership Support

As shown in Table 2, GTO coaches reported that nine sites (90%) completed GTO Steps 1-6, six sites (60%) completed all 10 steps of GTO (completing a full cycle of GTO), and three sites (30%) completed all 10 steps twice-i.e., completing two full cycles of GTO. Seven sites (70%) used GTO to either implement a new prevention program or revise a previously chosen program. Eight sites (80%) used GTO to either begin or revise process and outcome evaluations of these programs. Leadership support varied, with five sites rated at the highest level, one site rated at the mid-level, and two sites at the lowest level. No site rated at the lowest level conducted a second cycle of GTO.

Qualitative Data

Benefits Sites Accrued from Using GTO

Improved Prevention Understanding

Respondents at six sites noted that GTO improved their understanding of prevention and evidence-based programs and strengthened their capacity to implement prevention activities. One respondent shared,

I think [the GTO] process has really expanded my knowledge … The process itself and being able to go through the tools to complete with our advisors, being able to ask questions about the process, getting advice ... It has really helped us to understand primary prevention and because of that we were able to develop a prevention strategy.

Also, respondents at half of the sites noted that because of using GTO, they had moved from a focus on compliance (e.g., enforcing participation in annual online training) toward a focus on understanding whether the prevention efforts being used were effective.

(Before GTO) there was nothing that really got off the ground in terms of prevention. With the GTO project, we actually could go step by step in determining whether or not something was actually effective.

Improved Planning

Respondents at seven sites reported improving their knowledge specifically about program design and planning. Prior to GTO, respondents indicated there was no strategic prevention planning process in place and that prevention was planned in a reactive or haphazard manner. One site described this process as their "throw spaghetti on the wall and see if it sticks" approach. GTO coaches provided a structure that respondents described as building their understanding of what constitutes quality prevention. Others commented on how GTO helped them to "expand the seats around (their) prevention table" and attend more readily to coordination between the personnel responsible for prevention and others at the installation who could provide useful insights, data, referrals, and other resources.

Improved Evaluation Capacity

Four sites noted that, prior to using GTO, they had engaged in very limited or no evaluation of prevention activities. These sites reported that gathering data about program effectiveness helped them adapt and improve their programs. For example, at one site, the GTO team was able to document the relationship between program dosage and prevention knowledge among program attendees (e.g., what constitutes sexual harassment), which strengthened leader support. Respondents at five sites reported that they were able to draw upon data and resources gathered through GTO to lessen the burden of compliance activities. For example, administering feedback surveys provided useful data and information to include in program reports or process evaluations.
Strengthened Confidence in Chosen Programs

At most sites, respondents noted that using the GTO process had helped increase confidence in the selected programs, and prevention activities in general, among the implementation team members as well as leadership. For example, respondents described the importance of being able to prove their prevention efforts are effective because these efforts compete with other duties. Sites indicated that GTO specifically helped by supporting the selection of evidence-based programs and the implementation of process and outcome evaluations. These efforts were described by one site as adding "richness and validity to the services (the installation) provides" and helping to motivate personnel by helping them to feel proud of their work quality.

Assistance Tailoring Programs to the Military Context

A third of sites noted that using the GTO process helped site participants to adapt and modify programs for the military context. One respondent stated that "…[GTO] allowed my team to make an established program more efficient and effective to …[our enlisted personnel]." Another site described how the GTO process helped them to better tailor their program to younger soldiers, who they found challenging to reach using prior interventions.

Champions Played an Important Role

The respondents stated that champions-SMs and DoD civilians in mid-level leadership roles-played a crucial role in maintaining a commitment to GTO and, in some sites, ensuring new site participants gained GTO training. At the site level, champions often were responsible for communicating the benefits of GTO and the efficacy of prevention programs up the chain of command to senior leaders. Champions also often served as consistent team members and advocates for the work, especially at sites where there was high personnel turnover on the GTO teams. In some sites, champions also took responsibility for ensuring that new personnel were trained in or guided through the GTO process. This was especially important where responsibilities for prevention programs were assigned to individuals not familiar with program planning and/or prevention practice. Like GTO coaches, champions also played an important role in maintaining enthusiasm, commitment, and accountability among team members.

Challenges to Using GTO

Complexity of GTO

At about half of the sites, respondents shared that GTO was complex and more academic than their typical procedures. It took time and effort to build the capacity (i.e., experience, knowledge, and skills) of site personnel to use GTO and to onboard new personnel following turnover. While respondents were enthusiastic about their own growth and learning through GTO training and use, they noted that the process might be less accessible to individuals with less experience.

Turnover and Time

Turnover occurred frequently at every site and was consistently described as a major challenge to implementing and sustaining GTO. Several sites noted that even after GTO helped build site prevention capacity, turnover necessitated starting over with new team members, which slowed the GTO work. At every site, respondents also noted that it was often a challenge to fully and consistently engage due to competing priorities and demands. This was especially true where personnel were assigned to participate in prevention activities as a collateral duty-which was the case at almost every site.
COVID

Challenges in GTO implementation were further exacerbated by the COVID-19 pandemic. Respondents across all sites described how the pandemic slowed their GTO and prevention work and, in some sites, hurt implementation. For example, three sites had to change to online program delivery or reduce the number of units receiving the prevention program. The elongated timeline for the GTO work caused by the pandemic also amplified the challenges around personnel turnover. One respondent described, "The biggest challenge we had is because of the COVID. Everything went to a standstill…. We lost some members [of our GTO team]… and some of the members got new job duties…" While scheduled turnover (i.e., movement to other installations) is characteristic of military contexts, the slowing of the GTO process during the pandemic meant that teams experienced greater turnover than expected.

Likelihood of Sustaining GTO to Support Quality Prevention

Many respondents were unsure whether their site would continue utilizing GTO after the coaching ended. Overall, sites where respondents reported greater benefits of GTO were more likely to report a higher likelihood of sustaining GTO. However, benefiting from GTO was not a guarantee for sustaining GTO. For example, personnel capacity and shortages could derail sustainability, as one respondent articulated:

I would say [we may not continue to use GTO]. They might use aspects of the project, of the model. They will probably, most definitely use the program that was crafted out of this model, but I don't know if they would access this model to look at something new and different, because there's nobody there onsite [due to personnel turnover].

In a few sites, respondents noted that they would keep using GTO, or elements of GTO, in their individual work planning processes. About half also noted that their sites might continue to use specific activities learned through GTO, such as fielding surveys and feedback forms to capture evidence of program effectiveness.

Discussion

This project was the first systematic effort to build capacity for SA/SH prevention in the US military using a bottom-up approach like GTO. Almost all participating sites were able to implement a prevention program, some with multiple cohorts of SMs, despite COVID-19 restrictions. Although mostly qualitative, the data are consistent with previous GTO randomized and quasi-experimental trials in civilian contexts in which those who were more engaged in the GTO process experienced greater improvements in their programming and had larger gains in capacity (Acosta et al., 2013; Chinman et al., 2013; Chinman et al., 2008a, b).
The challenges that GTO faced in the 10 sites reveal larger prevention system issues within DoD that have implications for prevention across the Department. These larger issues are not just relevant for the US military. They could as easily apply to public school systems, which have also been looked to for implementing various prevention programs but often have not been able to do so with any impact (Chinman et al., 2019). The eight domains of the Prevention Evaluation Framework (in italics below) are a useful guide for how to view these key systems issues of prevention-which of these a bottom-up approach like GTO can impact-and how the lessons learned from the GTO pilot can inform plans for the future of prevention in DoD and in large organizations in the civilian sector, as well as what changes are needed for implementation strategies like GTO to make those strategies more accommodating to sites.

While the top DoD leadership (e.g., Secretary of Defense Austin) strongly endorses a robust prevention system, 1 leaders lower in the chain of command (i.e., base commanders) often have a more direct impact. In the pilot, sites with more engaged, knowledgeable installation leaders used GTO more comprehensively, operated with greater accountability, and were more likely to endorse continuing to use GTO. Although many correctly point to "leadership" as being an important predictor of evidence-based practice uptake (Hannes et al., 2010; Vroom et al., 2021), this study highlights the need for all organizations, including DoD, to activate the "middle" leadership layer. Included in this layer are champions, those who support, market, and drive program implementation and help to overcome resistance to prevention efforts in an organization (Bonawitz et al., 2020). This study showed how champions were effective-e.g., by communicating the benefits of GTO and prevention up the chain of command and providing consistency in the face of GTO Team member turnover. As in this study, organizations that have both champions and supportive leadership appear better poised to conduct effective prevention.

Given the importance of leadership and champions, additions to the GTO implementation strategy could include securing preliminary agreements up front and making changes to the construction of the GTO implementation teams. In the current and past projects (Chinman, Acosta, et al., 2018; Chinman, Ebener, et al., 2018), the GTO implementation teams have been made up of naturally emerging champions and individuals who were directly responsible for implementation, who would then reach out to leadership for assistance. In these projects, the participating organizations were much flatter than DoD. Thus, in the current project, while certain champions did emerge and facilitate, it would likely improve implementation if GTO coaches engaged in a more intentional process of identifying champions a priori.
Including key opinion leaders across multiple levels as part of the team, strategically identifying them through a diffusion-of-innovation lens (i.e., early adopters), and matching their characteristics to contextual factors of the organization, as recommended by Bunce et al. (2020), could improve leadership support. Furthermore, studies have shown that multiple champions are often needed for successful implementation (Damschroder et al., 2009; Shaw et al., 2012; Soo et al., 2009), especially in a hierarchical organization like DoD with multiple levels of command. This approach could help ameliorate the fact that the senior leaders who volunteered the participation of their sites were not involved in GTO or the implementation of the chosen intervention at the site in their command, a circumstance that is common, especially in large organizations.

Another critical factor for GTO in these sites and across DoD is the availability of a dedicated, trained prevention workforce. While GTO was able to successfully train GTO teams at each site, turnover, busy schedules, and the lack of dedicated personnel were constraints over time. Whether asking middle school teachers to incorporate drug prevention into their health class or asking DoD sexual assault response personnel to add prevention to their portfolio, organizations attempting prevention will be less successful without qualified and dedicated prevention personnel, with or without implementation support approaches such as GTO. The time required and the complexity of GTO were drawbacks mentioned by sites; however, a qualified and dedicated workforce may be more effective in utilizing such support. The effort underway by DoD to hire new dedicated prevention personnel (~350 as of May 2023, ~2,000 planned; Department of Defense, 2022) is an opportunity, but it must be done with care, as new staff who are tasked to implement a new kind of service can be siloed or unwelcome. For example, the Department of Veterans Affairs (VA) hired and deployed 1,200 "Peer Specialists," individuals with mental illnesses and substance abuse disorders who are trained to use their experience to help other Veterans with similar problems (Chinman et al., 2008b). Using implementation science methods, many researchers have documented how traditional VA staff have been extremely hesitant about incorporating this new kind of provider, despite evidence showing they improve outcomes and are greatly valued by Veteran patients (e.g., Chinman et al., 2006). Implementation strategies, such as collaboratively planning the new service, have been helpful in mitigating these challenges (Chinman et al., 2010) and would likely be useful in deploying the prevention workforce in DoD.
After the foundational domains of leadership and the prevention workforce, three additional domains (collaborative relationships, data, and resources) must be considered. GTO was able to make some impact on all three: brokering collaborations across silos, ensuring program evaluation data were collected and analyzed to support data-driven decisions, and helping GTO site team members to request more resources. However, to truly have a functional prevention system, organizations like DoD must integrate previously siloed efforts, including having personnel tackling different related domains (e.g., sexual assault, alcohol), coordinating their programming, and sharing data. While adequate resources are needed to support personnel and programming (e.g., Chinman et al., 2012), this study shows that resources do not exist in a vacuum, but are tied to engaged and supportive leadership, which in turn often requires ongoing access to data showing the impact of prevention on outcomes.

Lastly, the final three domains (comprehensive approach to prevention, quality implementation, and continuous evaluation) all relate to the conduct of prevention activities on the ground. GTO's training, tools, and coaching were able to support better quality prevention than had previously been implemented. GTO guidance strongly encourages organizations to implement comprehensive prevention that is consistent with evidence; however, in the military, that was difficult. Most evidence-based prevention programs were developed outside the military and must be adapted (Acosta et al., 2021; Perkins et al., 2016), requiring a higher level of skill among those doing the adapting. Implementation supports like GTO can help, as shown in this study, but having a larger number of military-tested, evidence-based programs available would greatly enhance adoption. Implementing with quality and conducting continuous evaluation, key elements of any prevention effort, often require a culture that genuinely uses the results of these activities (i.e., evaluation data) and rewards them. As demonstrated, GTO was able to support these activities, but ultimately, meeting the demands of these three domains across the entire military will largely depend on the other domains of the Prevention Evaluation Framework, e.g., supportive leadership, an appropriate workforce, and resources.

Limitations and Future Research

Although this study was the first to evaluate implementation support (GTO) for prevention in the military, it used a small number of sites and did not assess SM outcomes, focusing instead on the impacts of GTO on sites' prevention capacity and performance. Future studies in the military, and in other large organizations, should include large, cluster-randomized trials in which sites tasked with prevention are randomized to receive GTO or not. Similar to GTO studies in community settings (Acosta et al., 2013; Chinman, Acosta, et al., 2018; Chinman, Ebener, et al., 2018), such trials should assess site- and implementer-level characteristics, implementation outcomes (e.g., fidelity, dose), and outcomes of individual participants, while adding social network analyses to assess the impacts of champions.
Conclusion

We piloted GTO at 10 military bases across DoD to support better SA/SH prevention. While there were certain challenges (time, complexity, COVID), GTO was generally successful at improving the quality of specific prevention activities. However, the use of GTO revealed that successful implementation of prevention in a military context (and likely any organizational context) also requires a prevention infrastructure highlighted by the first five elements of the PEF: leadership, prevention workforce, collaboration, data, and resources. Given that these elements were nascent during the GTO pilot, the military (or any organization) will need to focus on these issues to ensure prevention implementation and evaluation are conducted with quality, the final three elements of the PEF. Also, while GTO has features that accommodate setting characteristics recommended by Beidas et al. (2022), this study revealed changes needed within GTO to better accommodate large organizations like the military, including more intentional engagement of leadership and identification of champions.

Table 1. Prevention Evaluation Framework: organizational factors. (Note: as DoD carries out "Commencing DoD Actions and Implementation to Address Sexual Assault and Sexual Harassment in the Military," September 22, 2021, it will require tools to support the development of comprehensive prevention efforts that are tailored to each military community.)

Table 2. GTO progress, impact, and support from leadership across DoD sites.
Core-collapse supernova enrichment in the core of the Virgo Cluster

Using a deep (574 ks) Chandra observation of M87, the dominant galaxy of the nearby Virgo Cluster, we present the best measurements to date of the radial distribution of metals in the central intracluster medium (ICM). Our measurements, made in 36 independent annuli with ~250,000 counts each, extend out to a radius r ~ 40 kpc and show that the abundance profiles of Fe, Si, S, Ar, Ca, Ne, Mg, and Ni are all centrally peaked. Interestingly, the abundance profiles of Si and S, which are measured robustly and to high precision, are even more centrally peaked than Fe, while the Si/S ratio is relatively flat. These measurements challenge the standard picture of chemical enrichment in galaxy clusters, wherein type Ia supernovae (SN Ia) from an evolved stellar population are thought to dominate the central enrichment. The observed abundance patterns are most likely due to one or more of the following processes: continuing enrichment by winds of a stellar population pre-enriched by core-collapse supernova (SNCC) products; intermittent formation of massive stars in the central cooling core; early enrichment of the low entropy gas. We also discuss other processes that might have contributed to the observed radial profiles, such as a stellar initial mass function that changes with radius, changes in the pre-enrichment of core-collapse supernova progenitors, and a diversity in the elemental yields of SN Ia. Although systematic uncertainties prevent us from measuring the O abundance robustly, indications are that it is about 2 times lower than predicted by the enrichment models.

INTRODUCTION

The discovery of Fe-K line emission in galaxy clusters (Mitchell et al. 1976; Serlemitsos et al. 1977) showed that their X-ray emitting intracluster medium (ICM) contains significant amounts of processed elements created by supernovae. Different types of supernovae synthesize different ratios of elements. In particular, type Ia supernovae (SN Ia) produce large amounts of Fe and Ni. Meanwhile, core-collapse supernovae (SNCC) produce lighter elements such as O, Ne and Mg. Si-group elements (Si, S, Ar, and Ca) are produced by both supernova types. By measuring specific elemental abundances within the ICM, we can separate the relative contributions to the chemical enrichment by different types of supernovae (e.g. Werner et al. 2008, and references therein). Such measurements place important constraints on models of supernova explosions (e.g. Dupke & White 2000; de Plaa et al. 2007; Simionescu et al. 2009).

Previous results for clusters and groups pointed to a centrally peaked Fe abundance distribution, coupled with a relatively flat O abundance profile, in cool core systems. Based on such measurements, it was proposed that the relative contribution of SN Ia to the enrichment of clusters increases towards their central regions (Finoguenov et al. 2000; Böhringer et al. 2001; Tamura et al. 2001; Finoguenov et al. 2002; Matsushita et al. 2003). The central Fe abundance peaks in cool core clusters were expected to form primarily by SN Ia with long delay times, as well as stellar mass loss in the cD galaxies (Böhringer et al. 2004; De Grandi et al. 2004). SNCC products such as O, Ne, and Mg were, on the other hand, produced early in the formation history of clusters at z ~ 2-3 and were thought to be well mixed throughout the ICM.
Because SNCC also produce large amounts of Si and S, this scenario predicts shallower central gradients for the distributions of Si and S than for Fe, resulting in Si/Fe and S/Fe ratios that increase with radius. The predicted Si/Fe gradient has, however, not been seen, and the observed radial profiles of the Si/Fe ratio are in most clusters consistent with being flat, indicating that Si is just as peaked as Fe (e.g. Böhringer et al. 2001; Finoguenov et al. 2002; Tamura et al. 2004; Sanders et al. 2004; Durret et al. 2005; de Plaa et al. 2006; Werner et al. 2006a; Sato et al. 2007, 2008; Matsushita et al. 2007). Moreover, contrary to previous claims, Simionescu et al. (2009) show that in Hydra A and in a sample of other nearby clusters of galaxies the O abundance is also centrally peaked. The observed centrally peaked distribution of SNCC products suggests that the metallicity gradients form early in the history of clusters and persist for a long time, or that the enrichment by the winds of the evolved stellar population is significantly more efficient than previously thought (Simionescu et al. 2009). Centrally peaked SNCC products are also observed in the core of the Centaurus Cluster and have been interpreted as being either due to continuous or intermittent star formation over the past ~8 Gyr, or due to early enrichment during the formation of the central galaxy (see Sanders & Fabian 2006).

This is the third in a series of papers analysing a deep (574 ks) Chandra observation of M 87, the dominant galaxy of the Virgo Cluster. The first two papers (Million et al. 2010, hereafter Paper I; Werner et al. 2010, hereafter Paper II) focus on the effect of the AGN on the surrounding hot plasma. This paper focuses on the history of chemical enrichment of the central regions of the Virgo Cluster. Many previous studies of this subject utilize XMM-Newton or Suzaku because of their better spectral resolution. However, the far superior spatial resolution of Chandra, which allows us to better separate out obvious complications due to substructure (as well as the sheer number of photons already collected with Chandra), allows us to probe the detailed distribution of metals with greatly improved accuracy over previous work.

The structure of this paper is as follows. Sect. 2 describes the data reduction and spatially resolved spectroscopy. Sect. 3 presents our detailed abundance and abundance ratio profiles. Sect. 4 describes the implications of our measured radial abundance profiles for the history of chemical enrichment through supernovae. Sect. 5 summarizes the results and the conclusions. Throughout this paper, we assume that the cluster lies at a distance of 16.1 Mpc (Tonry et al. 2001), for which the linear scale is 0.078 kpc per arcsec.

Data reduction

The data reduction and analysis are described in detail in Paper I.

Spectral analysis

In order to minimize systematic uncertainties associated with the multi-temperature structure of the X-ray emitting gas, we conservatively exclude the X-ray bright arms and the innermost multiphase core (more details can be found in Paper I). As discussed in Paper I, beyond the innermost core and the X-ray bright arms, the gas can be well described by an isothermal model at each radius. We use partial annuli that vary in width from 10 to 30 arcsec, with ~250,000 net counts in each of a total of 36 regions. Background spectra were determined using the blank-sky fields available from the Chandra X-ray Center (see Paper I for details).
Due to the high X-ray brightness of the target, uncertainties in the background modeling have little impact on the determination of the quantities presented here.

The spectra have been analyzed using XSPEC (version 12.5; Arnaud 1996). Each annulus is fit with a photoelectrically absorbed (Balucinska-Church & McCammon 1992) single temperature APEC thermal plasma model (Smith et al. 2001; AtomDB v2.0.1 was used). We fix the Galactic absorption to 1.93 × 10^20 cm^-2 (determined from the Leiden/Argentine/Bonn radio H I survey; Kalberla et al. 2005). In order to investigate modeling uncertainties in the determination of chemical abundances, we also repeated the spectral fits using the MEKAL thermal plasma model (Kaastra & Mewe 1993; Liedahl et al. 1995). All spectral fits were carried out in the 0.6-7.0 keV energy band. The extended C-statistic available in XSPEC, which allows for background subtraction, was used for all spectral fitting. The temperature, the normalization, and the abundances of O, Ne, Mg, Si, S, Ar, Ca, Fe, and Ni are free parameters for every annulus. Fig. 1 shows an example spectrum containing ~250,000 net counts with the metal line emission labeled. The spectra allow us to determine the Fe abundance to within ≤ 3 per cent, the abundances of Si and S to within ≤ 5 per cent, the abundances of Ne, Mg, and Ni to within ≤ 10 per cent, and the abundances of Ar and Ca to within ≤ 20 per cent statistical precision.

Table 1. Summary of the results of the measured radial profiles of the abundance ratios in the ICM. The data were fit in the 4-40 kpc radial range to a linear model of the form Z = a + br, where Z is the abundance ratio and r is the radius in kpc. Columns note the specific abundance ratio, the best-fit parameters a (in Solar units) and b (in units of 10^-3 Solar kpc^-1), and the χ²/ν. In cases where the APEC and MEKAL plasma codes disagree, we have included the best-fit parameters for each code. The errors on parameter a are at the 68 per cent confidence level. The errors on the slope b are at the 68 per cent and (in parentheses) 95 per cent confidence levels. The 95 per cent confidence upper and lower limits on the slope b are overplotted on each abundance ratio profile (Figs. 2d-f, 3c-e, 4d-f). We note that only the Ni/Fe abundance ratio profile is not fit well by a simple linear model.

The errors on the abundance profiles were determined from a Markov Chain Monte Carlo (MCMC) analysis. Measurement errors were determined from the 68 per cent confidence posterior distribution of the MCMC analysis. Chain lengths were at least 10^4 samples after correcting for burn-in. Abundances in the paper are given with respect to the 'proto-Solar' values of Lodders (2003).

The AtomDB v2.0.1 atomic database

Our modelling makes use of the recently updated AtomDB v2.0.1 atomic database used by the APEC thermal plasma model implemented in XSPEC. This represents a major update from the previous AtomDB v1.3.2, with nearly all atomic data replaced (see www.atomdb.org). The update of the atomic database affects most significantly the Fe abundance, which is on average lower by ~20 per cent in v2.0.1 compared to v1.3.2. This change has a slight dependence upon the temperature of the plasma. The abundances of Si, S, Ar, Ca, Ne, Mg, and Ni are smaller by less than ~10 per cent in the updated version. The abundance ratios with respect to Fe are significantly higher as a result.
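For concreteness, a per-annulus fit of the kind described above can be scripted with PyXspec. The sketch below is a schematic reconstruction rather than the authors' actual pipeline: the spectrum file name is hypothetical, and we use the vapec variant of the APEC model so that the individual abundances listed above can be freed.

```python
from xspec import Spectrum, Model, Fit, Xset

Xset.abund = "lodd"            # proto-Solar abundance table of Lodders (2003)

s = Spectrum("annulus_01.pi")  # hypothetical file name; one spectrum per annulus
s.ignore("**-0.6 7.0-**")      # restrict the fit to the 0.6-7.0 keV band

# Photoelectrically absorbed thermal plasma; vapec allows individual abundances.
m = Model("phabs*vapec")
m.phabs.nH = 0.0193            # Galactic N_H in units of 10^22 cm^-2
m.phabs.nH.frozen = True

# Free the abundances fitted in the paper; the others stay frozen at Solar.
for elem in ["O", "Ne", "Mg", "Si", "S", "Ar", "Ca", "Fe", "Ni"]:
    getattr(m.vapec, elem).frozen = False

Fit.statMethod = "cstat"       # C-statistic, appropriate with background spectra
Fit.perform()
print(m.vapec.kT.values[0], m.vapec.Fe.values[0])
```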
We note that our main conclusions based on the centrally peaked abundances and abundance ratios are not sensitive to our choice of plasma code.

METALLICITY OF THE X-RAY GAS

We determine the abundance profiles for Fe, Si, S, Ar, Ca, Ne, Mg, and Ni and find that all elements have a centrally peaked distribution. We also measure the abundance ratios of the individual elements with respect to Fe. In the 4-40 kpc radial range, we fit a linear model of the form Z = a + br to these measurements, where Z is the abundance ratio, r is the radius in kpc, a is the normalization of the linear trend, and b is the slope. Table 1 summarizes the best-fit linear relations for each of the abundance ratio profiles. For the Ne/Fe and Ni/Fe ratios, which have significant systematic uncertainties, we present the best-fit parameters for profiles determined using both the APEC and MEKAL plasma codes.

The absolute abundances determined by the MEKAL plasma code are larger by 1-2 per cent for Si, by ~3 per cent for S, by ~15 per cent for Ar and Ca, and by ~20 per cent for Fe than the values obtained using APEC. The slopes of the abundance ratio profiles for these elements are, however, consistent with those determined by the APEC code. The abundances of Ne, Ni, and Mg are significantly different when determined by the MEKAL plasma code. A discussion of modeling biases for the abundances of these elements can be found in Sect. 3.3.1.

For gas with kT ≳ 2.0 keV, the O abundance is extremely difficult to determine with Chandra. At the O VIII line energy of 0.65 keV, the effective area of the ACIS detectors is significantly affected by the buildup of contamination. Additionally, in the lower surface brightness areas at larger radii, the O abundance measurements are sensitive to the assumed Galactic foreground model. Because our O abundance measurements have large systematic uncertainties and may be strongly biased, we do not report their best fit values.

We have examined the abundance ratios separately to the north and south of M 87. Although the overall abundances are larger to the south (see Paper I), the abundance ratios determined from the northern and southern sectors agree well with each other and with the azimuthally-averaged analysis presented here.

Fe, Si, and S abundances

The top row of Fig. 2 shows the Fe, Si, and S abundance profiles, respectively. The bottom row shows the abundance ratio profiles of Si/Fe, S/Fe, and Si/S. We emphasize that these three elements have the most robustly determined abundances, with statistical uncertainties of less than 5 per cent. The Fe abundance profile (Fig. 2a) peaks at Z_Fe > 1.2 Solar in the central regions. The Si and S abundance profiles (Fig. 2b-c) peak at a slightly larger central value of Z_Si;S ~ 1.5 Solar. These profiles then exhibit steady declines to ~0.6 Solar by r ~ 25 kpc. A significant enhancement, or 'bump', in the Fe and Si (and possibly S) abundances is seen at r ~ 30 kpc. This is approximately the radius at which the bright X-ray arms terminate. As discussed in Paper I, this bump may be due to the uplift of cool, metal-rich material in the wake of buoyantly rising radio bubbles.

Most importantly, we observe, for the first time, a radially decreasing trend in the Si/Fe and S/Fe abundance ratios (Fig. 2d-e). They both peak at ~1.3 Solar near the core and decline to ~1.0 Solar by r ~ 35 kpc. Both the Si/Fe and S/Fe profiles are well described by a linear decline as a function of radius.
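The linear modelling summarized in Table 1 is straightforward to reproduce once the per-annulus ratios are in hand. Below is a minimal sketch using scipy, with made-up illustrative values standing in for the measured Si/Fe points (the real numbers are those plotted in Fig. 2d); nothing here is specific to the authors' pipeline, which used MCMC for the measurement errors.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative stand-in data: radius (kpc), Si/Fe ratio (Solar), 1-sigma errors.
r = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
si_fe = np.array([1.30, 1.26, 1.21, 1.17, 1.12, 1.09, 1.04])
err = np.full_like(si_fe, 0.03)

def linear(r, a, b):
    # Z = a + b*r, with a in Solar units and b in Solar kpc^-1.
    return a + b * r

popt, pcov = curve_fit(linear, r, si_fe, sigma=err, absolute_sigma=True)
a_err, b_err = np.sqrt(np.diag(pcov))

# Goodness of fit, quoted as chi^2/nu in Table 1.
chi2 = np.sum(((si_fe - linear(r, *popt)) / err) ** 2)
ndof = len(r) - 2
print(f"a = {popt[0]:.3f} +/- {a_err:.3f} Solar, "
      f"b = {1e3 * popt[1]:.2f} +/- {1e3 * b_err:.2f} x 10^-3 Solar/kpc, "
      f"chi2/nu = {chi2:.1f}/{ndof}")
```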
As seen in Table 1, the Si/Fe and S/Fe profiles have slopes consistent with one another and are significantly steeper than a constant ratio (at the 5-11σ level). The Si/Fe and S/Fe ratios of ~0.9 Solar measured at radii between 50-200 kpc with Suzaku indicate that beyond r ~ 35 kpc these abundance ratio profiles flatten.

The Si/S abundance ratio profile (Fig. 2f) is an interesting cross-check upon the determination of these abundances. Because both elements are created in roughly equal quantities by both SNCC and SN Ia (see Iwamoto et al. 1999; Nomoto et al. 2006), the Si/S abundance ratio is insensitive to the relative fraction of supernovae that explode as SNCC and SN Ia. Any trend in the Si/S abundance ratio profile is instead primarily affected by a change in the average yields of SNCC and SN Ia as a function of radius. The Si/S abundance ratio is consistent with being flat as a function of radius and its observed mean value is Z_Si/Z_S = 1.002 ± 0.009 Solar.

The systematic uncertainties involved in the modelling of the Si and S lines are relatively small due to the favorable location of their lines in the 2-3 keV range, where no residual Fe-L emission is present. Possible biases due to multitemperature structure are minimized by our choice of extraction regions, which avoid clear density and temperature inhomogeneities and have an approximately isothermal azimuthal temperature structure (see Paper I). Nonetheless, we performed detailed spectral simulations of multitemperature plasmas to investigate possible biases due to residual structure and projection effects. We mixed APEC thermal models with slightly different temperatures and metallicities, where the cooler plasma is more enriched than the hotter one. The simulated gas mixtures span a grid of possible projection effects and unresolved temperature structure. We fitted these simulated spectra with a single temperature model. The potential bias in the Si/Fe, S/Fe and Si/S ratios was always less than 10 per cent. Furthermore, we note that systematic uncertainties related to the effective area calibration of the ACIS detectors around the Si edge would be likely to affect the Si and S abundance profiles differently. Therefore, the striking similarity of the Si/Fe and S/Fe profiles strongly indicates that the detection of these trends is robust.

Ar and Ca abundances

The top row of Fig. 3 shows the Ar (left) and Ca (right) abundance profiles. The bottom row shows the radial distributions of the Ar/Fe (left), Ca/Fe (middle), and Ar/Ca (right) abundance ratios. Because the Ar and Ca lines have small equivalent widths, their abundances are more difficult to measure precisely than Fe, Si, and S: our data enable us to measure the Ar and Ca abundances to within 20 per cent statistical uncertainty. Our spectral fits reveal that the Ar and Ca abundances are relatively insensitive to small changes in the temperature and Fe abundance. Therefore, systematic uncertainties on these measurements are likely small.

The Ar and Ca abundance profiles (Fig. 3a-b) are also centrally peaked. The Ar and Ca abundances peak at Z_Ar ~ 1.2 and Z_Ca ~ 1.4 Solar, respectively. They decrease to Z_Ar ~ 0.5 and Z_Ca ~ 0.6 Solar, respectively, by r ~ 35 kpc. As with Fe and Si, both profiles show a marginal, but plausible, increase in abundance at r ~ 30 kpc that may be due to the uplift of cool, metal-rich material, as described in Paper I. Both the Ar/Fe and Ca/Fe abundance ratio profiles (Fig. 3c-d) show only mild radial trends, consistent within their larger uncertainties with the declining Si/Fe and S/Fe profiles.
Similarly to the Si/S abundance ratio profile, the Ar/Ca abundance ratio profile (Fig. 3e) is also an interesting cross-check upon the determination of these abundances. Like Si and S, Ar and Ca are also created in similar quantities by SNCC and SN Ia. Therefore, the Ar/Ca abundance ratio profile is insensitive to the relative number of supernovae that explode as SNCC and SN Ia. Instead, the Ar/Ca abundance ratio is primarily sensitive to changes in the average yields of SN Ia. The Ar/Ca abundance ratio is consistent with being flat as a function of radius and its observed mean value is Z_Ar/Z_Ca = 0.71 ± 0.02 Solar.

Ne, Mg, and Ni abundances

The top row of Fig. 4 shows the abundance profiles of Ne (left), Mg (middle), and Ni (right), respectively. The bottom row shows the abundance ratio profiles of Ne/Fe (left), Mg/Fe (middle), and Ni/Fe (right). There are significant systematic uncertainties present in the determination of the Ne, Mg, and Ni abundances (see Sect. 3.3.1).

The inferred abundance profiles of Ne, Mg, and Ni are all centrally peaked at Z_Ne ~ 2.3 Solar, Z_Mg ~ 1.2 Solar, and Z_Ni ~ 2.8 Solar, respectively. The profiles decline to Z_Ne ~ 1.0 Solar, Z_Mg ~ 0.6 Solar, and Z_Ni ~ 1.0 Solar by r ~ 25 kpc. Although there are differences between the abundances measured with APEC and MEKAL, strong central peaks in the abundance profiles of Ne, Mg, and Ni are found with both codes. These profiles also show the enhancements at ~30 kpc.

The Ne/Fe abundance ratio determined by both plasma codes is centrally peaked. The Mg/Fe ratio is marginally consistent with being flat as a function of radius and its mean value is Z_Mg/Z_Fe = 0.988 ± 0.012 Solar. This is in agreement with previous results once the differences between plasma codes are taken into account (see e.g. Matsushita et al. 2003). The Ni/Fe results derived using both plasma codes are poorly fit by a linear model. The reduced χ² values are high (χ²/ν ~ 60/34). This is primarily due to increased scatter in the Ni measurements at large radii (r > 30 kpc). Within the radial range of 5 < r < 30 kpc and using the APEC code, the Ni/Fe abundance ratio is fit well by a single, constant value of Z_Ni/Z_Fe = 1.92 ± 0.02 Solar. However, a large drop in the Ni/Fe abundance ratio at radii r ≥ 30 kpc is reproduced by both plasma codes.

Modelling uncertainties in Ne, Mg, and Ni measurements

The spectral resolution of the CCD type detectors on Chandra does not allow us to resolve the individual Ne and Ni lines within the Fe-L complex. The Mg lines are also blended with Fe-L line emission. Therefore, the Ne, Mg, and Ni abundances depend critically on the modeling of the Fe-L complex. The Ne, Mg, and Ni abundances determined by the MEKAL and APEC plasma codes disagree, with the MEKAL Mg abundances being a factor of two lower than the values determined with APEC. Both plasma models remain incomplete and therefore the results must be treated with caution. Previous results from the Reflection Grating Spectrometer (RGS) aboard XMM-Newton, which can resolve separately the Ne lines within the Fe-L complex, reveal that the Ne/Fe abundance ratio is above the Solar value (see Werner et al. 2006b), consistent with our Ne abundance measurement.

Incorrect ionization balance for Ni in the plasma models can lead to temperature-dependent biases in its abundance determination. At kT ~ 2.1 keV, the Ni abundances determined with APEC are 20 per cent lower than those determined using MEKAL. This difference decreases linearly with temperature. At kT ~ 2.7 keV, the best fit Ni abundances determined using the two plasma codes are consistent.
Our Ni abundances are determined from their L-shell lines. Using additional spectral data between 7.0-8.0 keV to include the K-shell lines does not improve our determination of the Ni abundance.

IMPLICATIONS FOR SUPERNOVA ENRICHMENT MODELS

The elements observed in the X-ray emitting gas of clusters of galaxies reveal the integrated chemical enrichment history by supernovae. The standard picture of chemical enrichment in clusters of galaxies for at least the past 12 years has been an early enrichment by SNCC products that are now well mixed in the ICM, in combination with a later contribution by SN Ia, which have longer delay times and primarily enrich the region surrounding the cD galaxy. Our data clearly disagree with this picture. Both Si and S show central abundance peaks that are larger than that of Fe (see Figs. 2e-f, 4d). The abundances of other elements (Ar, Ca, Ne, and Mg) show central abundance peaks as well (see Figs. 3c-d, 4e). These results force us to rethink our models of the chemical enrichment in clusters of galaxies.

Here we discuss possible chemical enrichment scenarios that may explain the observed abundance ratio profiles. These include centrally peaked SNCC products due to stellar winds, intermittent star formation, and early enrichment of the low entropy gas. We also discuss how radial changes in the stellar initial mass function, pre-enrichment of SNCC progenitors, or a diversity in the population of SN Ia can affect the observed distributions of metals.

Centrally peaked SNCC products

The most straightforward interpretation of the centrally peaked Si/Fe and S/Fe abundance ratio profiles is that there is a dominant contribution of SNCC products to the enrichment of the lowest entropy gas in the central regions of the galaxy. The contribution of SN Ia products rises with increasing radius, out to r ~ 35 kpc. We calculate the fraction of all supernovae that explode as SN Ia as a function of radius using the equation

$$\left(\frac{M_Z}{M_{\mathrm{Fe}}}\right)_{\mathrm{obs}} = \frac{f\,M_{Z;\mathrm{SNIa}} + (1-f)\,M_{Z;\mathrm{SNCC}}}{f\,M_{\mathrm{Fe};\mathrm{SNIa}} + (1-f)\,M_{\mathrm{Fe};\mathrm{SNCC}}}, \qquad (1)$$

where (M_Z/M_Fe)_obs is the mass ratio of element Z with respect to Fe converted from our abundance measurements using the proto-Solar values by Lodders (2003), f = N_SNIa/(N_SNIa + N_SNCC) is the fraction of all supernovae that explode as SN Ia, and M_Z;SNIa and M_Z;SNCC are the average theoretical yields for SN Ia and SNCC, respectively. For SN Ia, we use the yields of the WDD2 delayed-detonation model by Iwamoto et al. (1999), except where explicitly stated. Because Si is the second most reliably measured chemical element after Fe, we calculate the number fraction of SN Ia, f(r), using the Si/Fe ratio. The average theoretical yields of SNCC are calculated using the values by Nomoto et al. (2006) under the assumption of the Salpeter initial mass function (IMF). In detail,

$$M_{Z;\mathrm{SNCC}} = \frac{\int M_Z(M)\, M^{-\alpha}\, \mathrm{d}M}{\int M^{-\alpha}\, \mathrm{d}M}, \qquad (2)$$

where the integrals run over the SNCC progenitor mass range, M_Z;SNCC is the theoretical yield of element Z for the weighted average of core-collapse supernovae, M_Z(M) are the atomic yields as a function of stellar mass for core-collapse supernovae, and α = 2.35 for the Salpeter IMF.

We calculate a central fraction of SN Ia of f = 0.10 at radius r ~ 50 arcsec (3.9 kpc). This fraction rises to f = 0.19 at radius r ~ 500 arcsec (38.8 kpc). Based on these fractions, we can predict the radial profiles of the abundance ratios for other elements. The results are shown in Fig. 5. The S/Fe and Si/S abundance ratio profiles predicted by the central increase in the relative fraction of SNCC agree well with our observed profiles.
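Equation (1) is linear in f once the observed mass ratio is fixed, so it can be inverted in closed form. The sketch below shows the algebra together with a trapezoidal evaluation of equation (2); the yield numbers are placeholders for illustration only, not the actual Iwamoto et al. (1999) or Nomoto et al. (2006) values.

```python
import numpy as np

# Placeholder yields in M_sun per event (illustrative only; substitute the
# WDD2 values of Iwamoto et al. 1999 and IMF-weighted Nomoto et al. 2006 values).
Y_SI_IA, Y_FE_IA = 0.3, 0.8
Y_SI_CC, Y_FE_CC = 0.1, 0.07

def sn_ia_fraction(mass_ratio_obs):
    """Invert eq. (1) for f. mass_ratio_obs is the Si/Fe *mass* ratio, i.e. the
    measured abundance ratio converted with the Lodders (2003) proto-Solar values."""
    R = mass_ratio_obs
    # R = [f*Y_Si_Ia + (1-f)*Y_Si_cc] / [f*Y_Fe_Ia + (1-f)*Y_Fe_cc] is linear in f.
    return (Y_SI_CC - R * Y_FE_CC) / (R * (Y_FE_IA - Y_FE_CC) - (Y_SI_IA - Y_SI_CC))

def imf_weighted_yield(masses, yields, alpha=2.35):
    """Trapezoidal approximation of eq. (2) on a grid of progenitor masses."""
    w = masses ** -alpha
    return np.trapz(yields * w, masses) / np.trapz(w, masses)
```

With the true yield tables in place of the placeholders, evaluating sn_ia_fraction at the Si/Fe mass ratios measured at 3.9 kpc and 38.8 kpc should recover the f = 0.10 and f = 0.19 values quoted above.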
While the observed slopes of the Ca/Fe and Ar/Fe profiles are also consistent with the centrally peaked SNCC enrichment, their normalizations are higher than those predicted by the model. Our Ca/Fe abundance ratio is, however, consistent with the average value measured in 22 clusters of galaxies by de Plaa et al. (2007) once systematic differences in the plasma codes used are taken into account. These authors showed that the measured abundance ratios can be well fit with a one-dimensional delayed-detonation SN Ia model, calculated on a grid introduced in Badenes et al. (2003), with a deflagration-to-detonation density of 2.2 × 10^7 g cm^-3 and kinetic energy of 1.16 × 10^51 erg, which was shown to fit best the properties of the Tycho supernova remnant (Badenes et al. 2006). The predicted Mg/Fe abundance ratio profile disagrees with the measurements. The Mg abundance measurements are, however, affected by systematic uncertainties arising from the incomplete modeling of the Fe-L shell transitions.

Centrally peaked SNCC products due to intermittent star formation

If both the low entropy gas in the cluster core and the bulk of the surrounding ICM were enriched around the same time, with a similar mixture of SNCC and SN Ia, then the expected Si/Fe and S/Fe ratios would be constant with radius. However, out to a radius of r = 35 kpc, the observed Si/Fe and S/Fe ratios decrease from ~1.3 to ~1.0 Solar, before staying flat out to at least a radius of 200 kpc. In order to produce the observed Si/Fe peak on top of this flat profile, an additional 1.6 × 10^7 SNCC would have to explode in the centre of M 87. This is about 10 per cent of all SNCC that are required for the chemical enrichment of the innermost r < 40 kpc region. Assuming a Salpeter IMF, such a number of supernovae requires ~10^9 M⊙ of star formation. Because of the continuing production of Fe by SN Ia in the cD galaxy, this value is a lower limit. The SNCC enrichment would also have to occur on a time scale shorter than the production of Fe by SN Ia. While there is no evidence for current star formation in M 87, and the 95 per cent confidence upper limit on radiative cooling from the coolest ICM phase is 0.06 M⊙ yr^-1, we cannot rule out past intermittent star forming episodes in the central regions of the galaxy. Star formation, at rates reaching several tens and up to hundreds of solar masses per year, has been observed in the brightest cluster galaxies of some other cooling core clusters (e.g. McNamara et al. 2006; Ogrean et al. 2009; Ehlert et al. 2011). Therefore, several intermittent star forming episodes at ~10 M⊙ yr^-1, lasting for a total of ~2 × 10^8 yr, would not be surprising.

Centrally peaked SNCC products due to stellar winds

Stellar mass loss is an important source of metals in the hot gas surrounding giant elliptical galaxies like M 87. The material that formed the current stellar population of M 87 was most likely pre-enriched primarily by SNCC products. Some of these metals are then returned into the ICM/ISM by stellar winds. Assuming a single-age passively evolving stellar population with a Salpeter initial mass function, Ciotti et al. (1991) predict a stellar mass loss rate of

$$\dot{M}_* = 1.5 \times 10^{-11}\, L_B\, t_{15}^{-1.3}\ M_\odot\,\mathrm{yr}^{-1},$$

where L_B is the present-day B-band luminosity in units of L_B⊙ (L_B = 8.4 × 10^10 L_B⊙ for M 87; Gavazzi et al. 2005) and t_15 is the age of the stellar population in units of 15 Gyr (the formula is valid in the range from ~0.5 to over 15 Gyr).
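Given this rate, the wind mass quoted in the next paragraph (~1 × 10^11 M⊙ over the past 10 Gyr) can be checked by direct numerical integration; a quick sketch, using only values stated in the text:

```python
from scipy.integrate import quad

L_B = 8.4e10   # B-band luminosity of M 87 in units of L_B,sun (Gavazzi et al. 2005)
GYR = 1e9      # years per Gyr

def mdot(t_yr):
    # Ciotti et al. (1991) mass-loss rate in M_sun/yr; t_yr is the population age.
    t15 = t_yr / (15.0 * GYR)
    return 1.5e-11 * L_B * t15 ** -1.3

# Integrate from the ~0.5 Gyr validity floor of the formula to the assumed 10.5 Gyr age.
total_mass, _ = quad(mdot, 0.5 * GYR, 10.5 * GYR)
print(f"{total_mass:.2e} M_sun")  # ~1e11 M_sun, matching the value quoted below
```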
Integrating this equation under the assumption that the current stellar population is 10.5 Gyr old (formed at z = 2.1), we obtain a gas mass contribution from stellar winds of 1 × 10^11 M⊙ during the past 10 Gyr. Assuming that the material from which the current stellar population of M 87 formed was pre-enriched to conservatively Z_Si ~ 0.5 Solar, with a Si/Fe abundance ratio of 1.5 Solar, the total mass of Si returned to the ISM/ICM by the stellar winds is 4.3 × 10^7 M⊙. The total mass of Fe returned is 4.8 × 10^7 M⊙. Under these assumptions, the total mass of Si produced by stellar winds in excess of a Si/Fe ratio of 1 Solar is 1.4 × 10^7 M⊙. The observed total Si mass in excess of a flat profile with Si/Fe = 1 Solar around M 87 is 3.6 × 10^6 M⊙. Therefore, despite all the uncertainties in the estimates of the metal mass loss, the excess Si observed around M 87 could most likely be produced by stellar winds. Because the initial starbursts that enriched the material from which the current stellar population formed produced predominantly SNCC products, and the Mg2 index indicates that the stellar population of M 87 is enriched to more than 1 Solar (Kobayashi & Arimoto 1999; Matsushita et al. 2003), this scenario can plausibly produce centrally peaked abundance profiles of SNCC products.

Figure 5. Predicted abundance ratio profiles using the calculated fraction of SN Ia, f(r), from the Si/Fe abundance ratio profile, with the measured data points at 4 kpc and 35 kpc overplotted. The predicted abundance ratios of S/Fe and Si/S agree well with observations. However, the predicted radial distributions of the abundance ratios of the other elements do not match the observations well.

Centrally peaked SNCC products due to strong early enrichment of the low entropy gas

Another possible mechanism contributing to the observed centrally peaked distribution of SNCC products is strong early enrichment of the lowest entropy X-ray emitting gas and inefficient mixing of this material with the surrounding ICM. If the lowest entropy X-ray emitting gas currently seen in the Virgo Cluster core was at high redshift located in the environments of massive galaxies during their epoch of maximum star formation, then this material may have become enriched significantly by SNCC products. As the cluster formed, this low entropy SNCC enriched gas will naturally sink and accumulate at the base of the cluster potential. Assuming that it does not become well mixed with the surrounding ICM as it does so and does not cool out of the hot phase, this may lead to the observed peak in metal abundances.

Other mechanisms affecting the radial profiles of abundance ratios

We explore other possible explanations for the observed abundance ratio profiles. Each of these mechanisms predicts a central enhancement in the Si/Fe, S/Fe, Ar/Fe, and Ca/Fe ratios, and each of them might, to some degree, contribute to the observed radial trends.

Figure 6. Predicted abundance ratios as a function of the slope of the stellar initial mass function. The value 2.35 corresponds to the Salpeter value. Flatter mass functions, i.e. larger relative fractions of massive stars exploding as SNCC, predict a rise in the Si/Fe and S/Fe ratios, as we observe in the centre of the galaxy. Based on SNCC yields by Nomoto et al. (2006) and assuming that SN Ia (WDD2 model by Iwamoto et al. 1999) make up 15 per cent of all supernovae.
Changing IMF as a function of radius

A stellar IMF which is flatter (has a smaller α in Equation 2, therefore producing more massive stars) in the central regions of the cluster would produce a central increase of Si-group elements. The ratios of chemical elements produced by SNCC are a strong function of the mass of the progenitor. To examine the effect of the IMF on the predicted abundance ratios, we vary its slope α. Fig. 6 shows the predicted abundance ratios assuming a steepening IMF as a function of radius and a radially constant relative fraction of SN Ia, f = 0.15. This fraction was chosen to lie within the range suggested by the Si/Fe abundance ratio profile and it only affects the normalization of the predicted profiles. Explaining the observed range of values in the radial profiles of the Si/Fe and S/Fe ratios by a radial trend in the IMF would require an extremely steep IMF at larger radii. Moreover, this scenario would produce very large central increases of the Ne/Fe and Mg/Fe ratios, which we would clearly detect even given the systematic uncertainties in the Ne and Mg abundance determinations. While we cannot rule out a small radial trend in the stellar IMF, it cannot be responsible for the large observed ranges in the abundance ratios.

Radial trends in the SNCC pre-enrichment

We examine the effects of a possible radial trend in the SNCC pre-enrichment, i.e. the metallicity of the stars that produce the supernovae, on the radial profiles of abundance ratios in the ICM. Fig. 7 shows the predicted abundance ratios assuming that the metallicity of the SNCC progenitors increases as a function of radius. The yields of SNCC as a function of the initial metallicity are from Nomoto et al. (2006), and the absolute metallicity of Z_i = 0.02 is equal to Solar. We assume a constant relative fraction of SN Ia (f = 0.15) and the Salpeter IMF. The plot shows that SNCC with a lower initial metallicity produce higher Si/Fe and S/Fe ratios. Enrichment from infalling, low-entropy systems, which may be dominated by SNCC with relatively low metallicity progenitors during starbursts, could have contributed to the observed central peaks in the abundance ratios.

We note that SNCC with a very low initial metallicity (Z_i < 0.004) would produce large Si/Fe, S/Fe, Mg/Fe, Ca/Fe, Ar/Fe, Si/S, and Ar/Ca ratios and small Ne/Fe and Ni/Fe ratios that do not match our observations. Therefore, the bulk of the metals observed in the ICM could not have been produced by extremely low metallicity stars.

Figure 8. Predicted abundance ratios for a variety of SN Ia yield models. We considered the WDD1, WDD2, WDD3, and W7 models of Iwamoto et al. (1999). For SNCC we use the yields by Nomoto et al. (2006) and we assume that SN Ia make up 15 per cent of all supernovae.

Diversity in the SN Ia population

There is growing evidence for a diversity in SN Ia explosions (see e.g. Sullivan et al. 2006; Pritchet et al. 2008; Mannucci et al. 2006). While a population of brighter SN Ia with a slow luminosity decline is more common in late-type spiral and irregular galaxies with recent star formation (indicating a short delay time between their formation and the explosion), a fainter and more rapidly decaying population of SN Ia is more common in early-type galaxies (Hamuy et al. 1996; Ivanov et al. 2000).
This diversity should be reflected in the abundance yields, with the brighter SN Ia producing more Ni and smaller amounts of Si-group elements than the fainter ones. Based on early work with XMM-Newton, Finoguenov et al. (2002) argue that the diversity in the SN Ia population would explain the distribution of chemical elements in the Virgo Cluster. The variation of the peak brightness, which correlates with the production of ^56Ni and anti-correlates with the production of Si-group elements, can also be explained in the framework of the delayed-detonation models by a variation of the deflagration-to-detonation transition density (the transition from subsonic to supersonic flame velocities).

Fig. 8 presents the expected yields for a variety of SN Ia explosion models from Iwamoto et al. (1999), with the WDD1, WDD2, WDD3 and W7 models on the x-axis. We assume a constant relative fraction of SN Ia, f = 0.15. The W7 model represents a pure deflagration explosion mechanism. The WDD models represent delayed-detonation explosions, and the last digit indicates the density at which the flame velocity becomes supersonic (the deflagration-to-detonation transition density) in units of 10^7 g cm^-3. This transition density is likely dependent on the composition of the progenitor (see Jackson et al. 2010).

Matching our observed Si/Fe and S/Fe profiles, given the existing SN Ia yields, would require that the centre of the galaxy has been enriched almost solely by WDD1 supernovae, while the outer regions almost solely by WDD3. The contribution by SN Ia with longer delay times and larger Si/Fe ratio would be the largest in the centre of the galaxy, but any realistic enrichment scenario predicts an enrichment by a mixture of different types of SN Ia at all radii. This model predicts a large central increase in the Ar/Fe and Ca/Fe abundance ratios, which seems to be in conflict with the observed relatively flat profiles. The predicted Mg/Fe profile is relatively flat, in agreement with observations. The strongest prediction of this model is the ~30 per cent rise in the Ni/Fe abundance ratio with increasing radius. Our observed Ni/Fe profile, which is unfortunately dominated by systematic uncertainties, suggests a relatively flat radial distribution.

The O abundances

The O/Fe ratio of 0.60 ± 0.03 Solar determined in the centre of M 87 with the XMM-Newton Reflection Grating Spectrometers (Werner et al. 2006b; these data resolve the O VIII line and the individual lines of the Fe-L complex) is significantly lower than the values predicted by the proposed enrichment scenarios. These best fit O/Fe ratios are consistent with the values determined using the CCD type detectors on XMM-Newton (Matsushita et al. 2003). Either the measurements strongly underestimate the O abundance or some key aspect of the chemical enrichment of the hot ICM/ISM is not understood. Using the recently updated AtomDB atomic database, the best fitting O/Fe ratios are 50 per cent larger compared to the previous version, indicating that at least part of the discrepancy might be a modeling issue. Furthermore, all of our proposed enrichment scenarios are incompatible with the rising O/Fe abundance profile reported from XMM-Newton (Böhringer et al. 2001; Finoguenov et al. 2002; Matsushita et al. 2003).
However, measurements of the O VIII line emission at E ~ 0.65 keV with CCD detectors suffer from significant systematic uncertainties due to a combination of limited spectral resolution, residual gain uncertainties, incomplete modelling of the detector oxygen edge, and possible incomplete subtraction of the O VIII line emission from the Galactic foreground (which could bias the O abundance measurements in the outskirts of M 87 high). These systematic uncertainties force us to treat all current O abundance measurements with caution. More robust measurements of the O/Fe profile will be possible with the calorimeters on the Astro-H satellite.

SUMMARY

Using a deep (574 ks) Chandra observation of M 87, we performed the best measurements to date of the radial distributions of metals in the ambient central ICM of the Virgo Cluster. We conclude that:

• The abundance profiles of Fe, Si, S, Ar, Ca, Ne, Mg, and Ni are all centrally peaked.

• The abundance profiles of Si and S are more centrally peaked than Fe, challenging the standard picture of chemical enrichment in galaxy clusters, wherein SN Ia products are thought to dominate the central enrichment. Rather, despite a negligible current star formation rate in M 87 and the continuing enrichment by SN Ia, the integrated relative contribution of core-collapse supernovae to the enrichment is higher in the central low entropy core than in the surrounding ICM. The observed abundance patterns are most likely due to one or more of the following processes: continuing enrichment by winds of a stellar population which has been pre-enriched mainly by SNCC products; intermittent formation of massive stars in the central cooling core on time scales shorter than the continuing enrichment by SN Ia; or strong early SNCC enrichment of the lowest entropy X-ray emitting gas that is subsequently not well mixed and does not cool out of the hot ICM phase.

• Other processes, such as a stellar initial mass function that changes with radius, changes in the pre-enrichment of core-collapse supernova progenitors, and a diversity in the elemental yields of SN Ia, might also have contributed to the observed radial profiles.

• Although systematic uncertainties prevent us from measuring the O abundance robustly, indications are that it is about 2 times lower than predicted by the enrichment models.
Not dear neighbours: Antennation and jerking, but not aggression, correlate with genetic relatedness and spatial distance in the ant Lasius niger

Neighbour-stranger response differences (NSRDs) are when individuals are either more aggressive ("Nasty Neighbour") or less aggressive ("Dear Enemy" or "Dear Neighbour") to direct neighbours than to other competitors perceived as "strangers" by the residents. Such effects are often reported in ants which, being fixed-location central-place foragers, may compete directly with their neighbours for resources or raid each other for brood. Overlaid onto this are potential spatial distance and relatedness effects on aggression, which are often not differentiated from NSRDs. The literature on NSRDs and distance effects in ants does not reveal a systematic pattern across all ants due to their diversity of life histories, requiring each species to be evaluated individually. Lasius niger is a common Eurasian ant species, which can form very dense populations of colonies and shows pronounced nestmate recognition, so may be expected to show NSRDs. Here, we take advantage of a semi-regular colony array to examine the effect of spatial distance and relatedness on aggression and probe for NSRDs. Overt aggression does not vary with relatedness or spatial distance, and there is no evidence that direct neighbours represent a special case in terms of aggression. However, antennation and jerking decrease between less related and more spatially distant pairs, but are almost completely absent from allospecific interactions. We tentatively propose that antennation and jerking together represent a 'negotiation' phase, which may either precede or reduce the need for overt aggression. While a Nasty Neighbour effect might occur, a Dear Neighbour effect is unlikely in this species, and overall NSRDs do not play a large role in its ecology. More broadly, this work highlights the importance of considering non-overtly aggressive responses when studying NSRDs.

INTRODUCTION

Competition for resources between animals may result in both inter- and intraspecific aggression. However, since aggression can be costly, aggressive responses are modulated depending on the potential costs and gains each actor faces. For species that protect long-term territories or those that use their territories for multiple purposes, a major proximate aspect modulating aggression is whether the interaction is between direct neighbours or unknown individuals, so-called neighbour-stranger response differences (NSRDs) (reviewed in Christensen & Radford, 2018; Temeles, 1994). Such NSRDs can occur at both the intra- (e.g., Yagound et al., 2017) and interspecific (e.g., Tanner & Adler, 2009) levels. Compared with interactions involving stranger (i.e., non-neighbour) individuals, direct neighbours may either be responded to with reduced aggression, termed the Dear Neighbour effect, or with increased aggression, termed the Nasty Neighbour effect. Whether a Dear Neighbour or Nasty Neighbour effect is found relates to the ecology and environment of the studied system.
A Dear Neighbour effect (sensu Temeles, 1994; also termed the "Dear Enemy" effect) is often to be expected, because the behaviour of established neighbours is better known. They are thus more predictable (Amorim et al., 2022; Getty, 1987) and are less likely to compete for resources, since they have their own (Jaeger, 1981; Switzer et al., 2001). However, in some situations direct neighbours represent a greater threat, and thus a Nasty Neighbour response is expected (Müller & Manser, 2007). This might occur in situations where neighbours can usurp territory and gain by this (e.g., by access to more, better or more reliable feeding sites), or when established neighbours are more powerful than non-neighbours. For example, banded mongooses often usurp the territory of their neighbours (Cant et al., 2002) and pose a bigger threat than roving bands of dispersing splinters (Cant et al., 2001). They thus do not show a Dear Neighbour effect, but rather a Nasty Neighbour effect, where aggression is higher towards direct neighbours (Müller & Manser, 2007). Similarly, less resource-rich areas lead to increasing competition for food, and thus increased aggressiveness, in Formica aquilonia (Sorvari & Hakkarainen, 2004). In essence, whether NSRDs occur, and their direction and strength, depends primarily on the difference in threat levels between neighbours and strangers. This is also influenced by how much the focal animal stands to lose. The strength and direction of NSRDs strongly depend on species and breeding stage (Werba et al., 2022).

While NSRDs were initially studied in solitary species, recently a greater focus has been on the responses of social species (reviewed in Christensen & Radford, 2018). The NSRDs of social animals can differ greatly from those of non-social animals for several reasons. First, inter-individual differences in a group, in terms of personality, status and resources (Beehner & Kitchen, 2007; Desjardins et al., 2008), mean that some individuals in a group are more likely to respond to territorial incursion than others, depending on what they stand to gain or lose (Mares et al., 2011; York et al., 2019). For example, dominant male meerkats respond more strongly to cues of male invaders than females or non-dominants, because they risk losing their position as the main group breeder (Mares et al., 2011). Second, social groups must respond collectively to incursions, both to better handle the threat and to improve information transfer (Graw & Manser, 2007). Finally, cooperative, not just competitive, behaviour can vary between neighbours and strangers. For example, tree ants Oecophylla smaragdina can rescue conspecifics trapped in spiderwebs. They show this behaviour not just to nestmates but also to ants from neighbouring colonies, but not from distant colonies (Uy et al., 2019). Similarly, great tits are more likely to assist conspecifics in nest defence if they are very familiar with the conspecifics (Grabowska-Zhang et al., 2012).
However, since in most social insect colonies reproduction is performed overwhelmingly by the reproductive caste (Wilson, 2000), the motivations of all individuals are usually aligned in foraging and defence contexts, and the colony is in many respects effectively one solitary superorganism (Boomsma & Gawne, 2018). Competition for territory and resources, including tournaments and raids, is repeatedly reported in ants (Czechowski, 1984; Hölldobler, 1976; Pollock & Rissing, 1989), making them especially interesting for the study of NSRDs. However, no clear pattern of NSRDs can be seen for ants, with several studies reporting Dear Neighbour effects, others reporting Nasty Neighbour effects, and yet others reporting complex interactions or no effect (Table 1). This is not surprising, given that the presence, direction and strength of NSRDs depend on the specific ecology of the species: whether conspecifics are the largest competitor (Fogo et al., 2019), whether the species maintains distinct territories (Boulay et al., 2007), whether the distance between these territories is small or large (Zorzal et al., 2021) and whether competition over rare resources or raiding is common (Tumulty & Bee, 2021).

NSRDs in ants are also strongly influenced by seasonal effects (Ichinose, 1991; Katzerke et al., 2006; Thurin & Aron, 2008). We note, however, that several studies do not differentiate spatial distance from neighbour-non-neighbour differences. This is an important distinction, as apparent NSRDs may be a result of spatial distance or genetic differences: in ants and other eusocial insects, nestmate recognition, and thus aggression to incursions by non-nestmates, is driven by colony-specific cuticular hydrocarbon (CHC) profiles (reviewed in van Zweden & d'Ettorre, 2010). CHC profiles may be influenced by genetic and environmental differences. Thus, overlaid on top of NSRDs, aggression is expected to increase with both decreasing relatedness and increasing spatial distance, since spatial distance can correlate with environmental differences. All such correlations have been reported (genetic: Aksoy & Çamlitepe, 2018; Saar et al., 2014; spatial distance: Jutsum et al., 1979; Sanada-Morimura et al., 2003; substrate and forage: Heinze et al., 1996; Jutsum et al., 1979; Stuart, 1987), but importantly, none of these measures are universally predictive of aggression. As such, to understand the ecology of a specific species, direct experimentation is required.

Here, we study the role of being close neighbours, spatial distance, and relatedness on aggression in the ant Lasius niger. We take advantage of a regularly spaced array of L. niger nests, allowing an unusually controlled spatial distance between the colonies. This model species is of particular relevance to study NSRDs for several reasons.

Table 1. Literature summary of neighbour-stranger response differences in ants.

Nasty neighbour effect (higher aggression to neighbours):
- Pristomyrmex pungens: Aggression decreased with spatial distance. Experiential exposure to a colony subsequently increased aggression towards that colony. (Sanada-Morimura et al., 2003)
- Linepithema humile: This ant species forms large supercolonies. Workers from different supercolonies are always aggressive to each other, but aggression is highest between ants from neighbouring nests, implying experience drives increased aggression. Relatedness and genetic similarity do not predict aggression. (Thomas et al., 2007)
- Oecophylla smaragdina: Higher proportion of non-nestmates recognised as such from neighbouring colonies. Once recognised as non-nestmates, higher aggression towards ants from neighbouring colonies. (Newey et al., 2010)
Once recognised as non-nestmates, higher aggression towards ants from neighbouring colonies.(Newey et al., 2010) Formica pratensis Higher aggression between direct neighbours than between 'second neighbours' and non-neighbours.(Benedek & Kobori, 2014) Cataglyphis niger Aggression between colonies from different populations lower than between colonies from the same population.Genetic and CHC profile differences were larger between than within populations.(Saar et al., 2014) Crematogaster scutellaris Aggression decreases with increasing spatial and CHC differences.CHC profiles do not covary with relatedness.(Frizzi et al., 2015) Azteca muelleri Higher aggression between non-sympatric ant pairs than sympatric ant pairs.No relationship between aggression and overall CHC similarity, but signs of higher methylated alkane similarity linked to higher aggression.(Zorzal et al., 2021) Dear neighbour effect (lower aggression to neighbours) Acromyrmex octospinosus Aggression increases with distance between nests.Laboratory studies suggest both forage type and endogenous (presumably genetic) differences drive aggression.(Jutsum et al., 1979) Temnothorax curvispinosus Lower aggression between colonies collected in the same area.After extended housing in the lab, aggression between stranger colonies decreased.(Stuart, 1987) Temnothorax nylanderi Both increasing spatial distance and nesting material differences increased intra-specific aggression.Spatial distances between colonies tested ranged from 0 ->3 meters.(Heinze et al., 1996) Paired colonies transferred into different nesting material increased aggression, pairs with matching nesting materials did not. Iridomyrmex purpureus Experiments using live ants and CHC extracts both find increasing aggression with spatial distance.Aggression increased in areas with a high density of conspecific nests.(Thomas et al., 1999) Formica exsecta Aggression increases with spatial distance, but only in spring, not summer or autumn.Aggression was not correlated with genetic distance or intranest relatedness.(Katzerke et al., 2006) Acromyrmex lobicornis Aggression increases with spatial distance.Genetic distance did not correlate with spatial distance.(Dimarco et al., 2010) Formica pratensis Aggression increases with spatial and/or genetic distance, which themselves covary. Moving ants to the laboratory removes this effect, implying either context dependence or an effect of nesting substrate on aggression.(Aksoy & Çamlitepe, 2018) Oecophylla smaragdina Rescue behaviour is directed towards nestmates and neighbours, but not conspecifics from distant colonies. Pogonomyrmex barbatus Encounters with neighbours on a foraging trail reduce foraging more than encounters with non-neighbours.This stronger response may reduce costly aggression.(Gordon, 1989) Pheidole tucsonica & P. gilvescens P. tucsonica show higher aggression towards conspecific from distant areas.When only ants from within a local area were tested, no effect of distance on aggression was found.(Langen et al., 2000) P. gilvescens ants show no effect of distance on aggression. Iridomyrmex purpureus Higher aggression between ants from adjoining territories than ants from non-adjoining territories, but within these groupings more aggression between more distant colonies. (van Wilgenburg, 2007) No influence of genetic similarity on aggression. No evidence of experience modulating aggression. 
This model species is of particular relevance for the study of NSRDs for several reasons.

Biological material

The ants used for the experiments came from 11 L. niger colonies sampled between 12 and 24 August 2021 in Regensburg, Germany (coordinates: 48.994129 N, 12.091213 E). Here, we took advantage of regularly spaced concrete slabs, which had been placed onto a grassy field to encourage L. niger colonies to settle underneath. The slabs had been in place for over 5 years, and all had been colonised by L. niger. These slabs serve as a source of experimental ants for behavioural experiments. Regular reinforcement of lab colonies, preceded by aggression tests, demonstrated that colony identity has been stable for many years. As such, we could well expect ants from neighbouring colonies to have extensive experience with each other (Czechowski, 1984). Eleven colonies, in a roughly linear array (see Figure 1), were selected for this study. We chose both end colonies (#1 and #11) and the central colony (#6) as target colonies (see Figure 1). Five ants from each target colony were tested individually against five ants from all other colonies, resulting in five pairs of ants per colony combination, with colonies 1 and 11 having one direct neighbour colony tested (2 and 10, respectively), and colony 6 having two (5 and 7). In addition, five pairs of ants from within each target colony were tested (nestmates), and five heterospecific interactions per target colony were tested, using Formica rufibarbis, also found on the University of Regensburg campus.

Experimental procedure

To reduce observer bias, two experimenters conducted the aggression assays. One experimenter randomised the testing and collected ants from the colonies for each test session. The other remained completely blind to all ant identities. Two test sessions were conducted per day, one in the morning and one in the afternoon. Each test session began by collecting ants from the respective test colonies (decided a priori) from the outside slabs and returning them to the lab. We chose to test aggression immediately, rather than allow acclimatisation to the lab, as one factor affecting intraspecific aggression in ants is differences in colony odour derived from the nesting substrate (Heinze et al., 1996). Keeping the ants in artificial nests in the lab may have eliminated a potentially important cue for the ants. As we conducted five encounters per colony pair, five ants of each colony per target-competitor pair were collected, along with some additional ants so that the tested ants would never be alone before testing. Ants were encouraged to leave the nest by briefly raising their nesting slab, causing ants at the surface to exit the nest onto the slab surface, from where they could be collected. Unused ants were released after each test session.
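For clarity, the encounter design described under Biological material tallies as follows; this is a worked count, consistent with the 180 encounters stated in the Figure 1 legend, rather than a formula given explicitly in the source:

$$ 3\ \text{target colonies} \times (10\ \text{non-nestmate colonies} + 1\ \text{nestmate pairing} + 1\ \text{heterospecific pairing}) = 36\ \text{combinations}, $$
$$ 36 \times 5\ \text{replicate pairs} = 180\ \text{pairwise encounters}. $$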
Collected ants were stored in fluon-coated boxes with pieces of paper acting as a shelter. No less than 15 min and no more than 2 h elapsed between collection and testing. Distances between each slab and its neighbour were measured in centimetres from the centre of the slabs, using a field tape measure.

Ant pairs were placed separately in two halves of a divided arena, consisting of a plastic container (floor diameter 65 mm and wall height 25 mm), with the floor covered with a disposable graph paper inlay. The chamber was divided using a piece of fluon-coated acetate sheet. Ants were allowed a 30 s habituation period, after which the divider was removed. The ants were then free to interact for 2 min, and the behaviour of the ants was scored in real time. We scored all behaviours, but not the identity of the ant performing the behaviour. Both experimenters recorded the number of occurrences of each of the behaviours separately; these counts rarely diverged, and on the rare occasions where they did, they were averaged. A set of eight interactions was defined and noted a priori (see Table 2). However, fight duration and avoidance were not analysed and were excluded from this study, since prolonged fights were very rare (see below); when they did occur, they continued for the entire length of the recording session. Fighting duration was removed because extended fighting events were rare (only 10 fights longer than 1 s). Avoidance was removed because it was recorded only six times. After the first day of the experiment, we added a ninth interaction type post hoc, neutral body contact, which we considered to be an informative behaviour about the lack of aggression between two individuals.

After 2 min of observation, each individual was placed separately in a 1.5 mL Eppendorf Safe-Lock tube. Each tube also included a label that identified the date, the colony pair and, if possible, whether the respective ant was from the target or competitor colony. The tubes containing the ants were placed on ice for the remainder of the session (maximum 2 h) and afterwards stored at −20 °C for later genetic analysis.

Note to Table 2: All behaviours were defined a priori, except for neutral body contact, which was added on the second day of data collection. The number of times each behaviour occurred (except for fighting duration, see Table 2 for details) was recorded independently by two experimenters. Note that avoidance behaviour and fight duration were recorded but never analysed (see main text). Jerking and antennation can be seen in Video S1.
Molecular analyses

Due to resource limitations, only a subset of tested pairs underwent molecular analysis. We selected the pairs to be analysed by calculating the proportion of overtly aggressive behaviours (mandible gaping, brief biting, gaster flexing and fighting, in seconds) out of the total behaviours recorded, and then selecting the 22 pairs showing the least and the most aggression. Note that this selection was performed a priori, well before formal statistical analysis. Since this selection did not cover all sampled colonies, several additional pairs were added to the molecular analysis where this was needed to cover the entire range of colonies with at least six individuals per colony. Finally, DNA was extracted from 5 to 22 workers in each of the 11 colonies (see Table S1 for details) using a CTAB method (modified from Green & Sambrook, 2012). Eleven highly variable microsatellite markers were used to determine the genetic relatedness and the colony structure: Ant1343, Ant3993, Ant859, Ant10878, Ant575, Ant8424, Ant2794 (Butler et al., 2014), L10-282, L10-174, L1-5 and L10-53 (Fjerdingstad et al., 2003). Primer sequences are available in Table S2. PCRs were run with a Hot Start Colourless Master Mix (M7433, Promega) together with 4.5 μL ddH2O, 1.0 μL unlabelled reverse primer (10 μM), 1.0 μL labelled forward primer (10 μM; HEX, FAM or TET labels; final concentration of 0.67 μM for each primer) and 1 μL DNA (2-10 ng). The PCR consisted of an initial denaturation at 94 °C (4 min); 33 cycles of 94 °C (denaturation, 30 s), 55 °C (annealing, 30 s) and 72 °C (elongation, 30 s); and a final step at 72 °C (1 min). The PCR products were analysed in an ABI PRISM 310 Genetic Analyser (PE Biosystems) after DNA denaturation at 90 °C (1 min). Allele sizes were determined using the GeneScan 3.1 software (PE Biosystems). In case of PCR failure or unclear results, the molecular analysis was repeated to ensure that genotypic information was obtained for all individuals at all loci. All 11 loci were polymorphic and showed considerable variation, with an average of 11.45 alleles across all samples from the 11 colonies (Ant1343: 6; Ant3993: 4; Ant859: 7; Ant10878: 22; Ant575: 8; Ant8424: 5; Ant2794: 16; L10-282: 11; L10-174: 15; …).

For each locus, we recorded the sample size, the number of alleles, the effective number of alleles, the observed and expected heterozygosities, and the fixation index (GenAlEx). In addition, we confirmed the absence of stuttering and large allele dropout and estimated the frequency of null alleles (Microchecker 2.2.3; Van Oosterhout et al., 2004) (Table S4). Linkage disequilibrium was tested between each pair of markers (Genepop 4.7.5; Rousset, 2008; Table S5).

The relatedness between pairs of colonies was calculated using the estimator of Queller and Goodnight (1989) provided by GenAlEx. This estimator quantifies the genotypic similarity of microsatellite markers between pairs of individuals compared with the expected value for two individuals taken at random from the population. Negative values indicate that the degree of kinship between the two individuals tested is less than that of individuals drawn randomly from the population. The relatedness between two colonies was calculated as the average relatedness over all pairs of individuals belonging to these colonies.

Statistical analyses

As a preliminary analysis, a first principal component analysis (PCA1) was conducted on the seven behavioural variables, using the status of the interactants (nestmate, non-nestmate or heterospecific) as a classification variable. This analysis was used to ensure the consistency of the behaviours recorded during the interactions and to confirm that they varied properly according to the context. In effect, the allospecific interactions acted as the positive control (expected to show high aggression), while the nestmate interactions were the negative control (expected to show no aggression).
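To make the relatedness measure above concrete: in simplified form, the Queller and Goodnight (1989) estimator for a focal individual x and its partner y can be written as

$$ r_{xy} = \frac{\sum_{l}\sum_{a}\left(P_{y,la}-\bar{p}_{la}\right)}{\sum_{l}\sum_{a}\left(P_{x,la}-\bar{p}_{la}\right)}, $$

where the sums run over loci l and the alleles a carried by x, P_{x,la} and P_{y,la} are the within-individual frequencies of allele a at locus l (0, 0.5 or 1 for a diploid), and \bar{p}_{la} is the population allele frequency. This is a sketch of the estimator's general structure rather than the exact GenAlEx implementation, which adds symmetrisation and bias corrections; values near zero indicate random-draw similarity, and negative values indicate pairs less similar than random draws, as noted above.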
We then focused on the intraspecific, non-nestmate interactions (n = 150). Among the seven behavioural variables recorded, only the numbers of antennation and jerking events were not zero-inflated and were not a priori related to aggression. The number of each of the five other behaviours (neutral contacts = no aggression; mandible gaping, brief biting, gaster flexing and fighting = higher aggression level) was highly zero-inflated and, therefore, difficult to analyse separately as a dependent variable of a model. We therefore used the ade4 package (Dray & Dufour, 2007) to perform a second PCA (PCA2) on these five variables, to derive the most informative summary variable describing the aggressiveness of the focal ant. Visual inspection of the scree plots showed an inflexion point that justified retaining two factors, but only the first dimension explained more than 30% of the variability and was biologically meaningful (see Results); it was, therefore, retained as an "aggressiveness level" variable in the subsequent analyses. Note that the variable was reversed in subsequent analyses so that higher numbers represented higher levels of aggression.

The impact of spatial distance (in centimetres) and genetic relatedness between the nests on the behavioural responses was analysed using three (generalised) linear mixed models (M1 for the number of antennation events; M2 for the number of jerking events; and M3 for the level of aggressiveness). As the level of aggressiveness fitted a Gamma distribution (package fitdistrplus; Delignette-Muller & Dutang, 2015), M3 was adjusted using the Gamma family after shifting the variable to positive values by adding the minimal possible value of 1.6. In all three models, the identity of the focal nest was included as a random effect. The spatial distance between the nests and the genetic relatedness between them were included as explanatory (fixed-effect) variables.

We then ran the three models a second time (M'1-M'3), but with the neighbour status of the pair of colonies as an explanatory variable (neighbours: adjacent colonies; non-neighbours, i.e., strangers: all other pairs). Because of an extremely high genetic relatedness between two colonies compared with all other colony pairs (genetic relatedness around 0.2, i.e., circa half siblings, whereas all other pairs presented negative relatedness values varying between −0.3 and 0), we removed this one colony pair from all the models, as it was heavily distorting the results. The results of the models including this pair are reported in Tables S6 and S7. All statistical analyses were carried out using the R v.3.5 software (R Core Team, 2018). All GLMM models were fitted using the package glmmTMB (Magnusson et al., 2020), and the quality of the model estimates was monitored using residual diagnostics from the DHARMa package (Hartig, 2020). Figures were created using the Effects package (Fox et al., 2016). For all statistical tests, the level of significance was set at p < 0.05 (a schematic R sketch of this pipeline is given below).

RESULTS

The preliminary PCA1 (Figure 2) confirmed the relevance of the behavioural variables chosen. The allospecific interactions were characterised by more biting and fighting events, more mandible gaping and fewer neutral body contacts compared with intraspecific interactions. Among the intraspecific dyads, the non-nestmate interactions showed more antennation, jerking and mandible gaping, whereas the interactions between nestmates showed more neutral body contacts.
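The pipeline described under Statistical analyses can be summarised in a minimal R sketch. This is a schematic reconstruction rather than the authors' script: the data frame enc and its column names are hypothetical, only the packages named in the text are used, and the Poisson family and the log link for the Gamma model are assumptions where the text leaves them unspecified.

library(ade4)          # dudi.pca, as used for PCA2 in the text
library(fitdistrplus)  # fitdist, used to check the Gamma assumption
library(glmmTMB)       # mixed models M1-M3
library(DHARMa)        # residual diagnostics

# 'enc' is a hypothetical data frame: one row per non-nestmate encounter
# (n = 150), with columns neutral, gaping, biting, flexing, fighting
# (the five aggression-related variables), antennation, jerking,
# distance (cm), relatedness (Queller-Goodnight) and focal_nest (factor).

# PCA2 on the five aggression-related variables; retain the first axis
pca2 <- dudi.pca(enc[, c("neutral", "gaping", "biting", "flexing", "fighting")],
                 scannf = FALSE, nf = 2)
enc$aggression <- -pca2$li[, 1]       # reversed: higher = more aggressive

# Shift to positive values (adding 1.6, as in the text) for the Gamma fit
enc$aggression_pos <- enc$aggression + 1.6
fitdist(enc$aggression_pos, "gamma")  # values must be strictly positive

# M1-M3: distance and relatedness as fixed effects, focal nest random.
# A Poisson family is assumed for the two count models (not stated in text).
m1 <- glmmTMB(antennation ~ distance + relatedness + (1 | focal_nest),
              family = poisson, data = enc)
m2 <- glmmTMB(jerking ~ distance + relatedness + (1 | focal_nest),
              family = poisson, data = enc)
m3 <- glmmTMB(aggression_pos ~ distance + relatedness + (1 | focal_nest),
              family = Gamma(link = "log"), data = enc)

plot(simulateResiduals(m3))  # DHARMa check of the model fit
summary(m3)

The neighbour-status models M'1-M'3 would be fitted analogously, replacing distance and relatedness with a two-level neighbour factor.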
Regarding the intraspecific interactions between the colonies (corresponding to the "non-nestmate" interactions of the previous analysis), the PCA2 (Figure S1) conducted on the five variables relating to aggressiveness (neutral contacts = no aggression; mandible gaping, brief biting, gaster flexing and fighting = higher aggression level) suggested that only the first axis should be maintained and fed into subsequent analyses. Indeed, the first dimension was relevant both in its contribution to the global variability (38.67%) and in being biologically meaningful, as all the variables relating to aggressiveness were grouped on the left and opposed to the neutral contacts on the right, suggesting a strong division between the aggressive (left) and non-aggressive (right) interactions (Figure S1). The "aggression" variable was, therefore, derived from the first dimension of the PCA2.

The models confirmed that two of the three behavioural variables studied (M1: number of antennation events; M2: number of jerking events) were significantly impacted either by the spatial distance between the colonies or by the genetic relatedness between them, or both (Table 3). Specifically, as spatial distance increased, antennation decreased (M1: estimate = −1.030, p = 0.009), as did the number of jerking events (M2: estimate = −1.394, p = 0.034; Figure 3). As genetic relatedness between the colonies increased, so did the number of jerking events (M2: estimate = 21.027, p = 0.014; Figure 3). However, genetic relatedness did not impact the number of antennation events (M1: estimate = 7.956, p = 0.122). Aggression was not significantly predicted by either spatial distance or relatedness (M3: spatial distance, estimate = −0.046, p = 0.491; relatedness, estimate = −1.294, p = 0.112; Figure 3), although it trended slightly downwards as genetic relatedness increased. None of the three behavioural variables studied (number of antennation events, number of jerking events and level of aggressiveness) was impacted by the neighbour status of the colonies (M1'-M3', Table 3). Spatial distance and relatedness were not correlated (estimate = 0.00543, p = 0.410).

DISCUSSION

The behaviours we measured captured differences in aggression among nestmate, non-nestmate and allospecific encounters (Figure 2): encounters with allospecific ants were much more aggressive, according to the composite aggression score (Figure S1), than the encounters with either nestmates or conspecific non-nestmates. This implies that, perhaps surprisingly, some allospecifics represent a larger threat or competitor than conspecifics. Conversely (see below), Lasius niger may have developed intraspecific communication strategies to avoid costly fighting, although raids and fighting between colonies of different sizes may be lethal (Seifert, 2007). Nestmate and non-nestmate encounters were mostly differentiated by increased jerking and antennation behaviours in non-nestmate encounters and increased neutral body contacts in nestmate encounters. This would seem to imply that jerking and antennation behaviour can be taken as a measure of moderate aggression, as assumed for this and other species previously (Devigne & Detrain, 2002; Holway et al., 1998). However, the pattern of these behaviours in interactions between non-nestmates does not support this assumption: as spatial distance increases, antennation and jerking decrease. This requires a more nuanced interpretation of these behaviours.
Antennation can be assumed to be involved in information gathering. Nestmate recognition is driven by the ants' cuticular hydrocarbon profiles, which are influenced both by genetic factors and by extrinsic environmental factors (van Zweden & d'Ettorre, 2010). It is thus reasonable that antennation, and thus information gathering, increases for non-nestmates from spatially closer nests, as these will be harder to distinguish from nestmates, although we note that the opposite pattern, had it been found, could also have been explained by assuming more experience with closer ants. Jerking behaviour is more difficult to interpret. This behaviour is often observed being performed by active foragers returning to the nest (TJC, AK, unpubl. obs.). It can also often be triggered by allowing light to enter laboratory nests. Similar jerking behaviours have been reported in other ant species as recruitment signals (Hölldobler, 1971, 1976, 1983; Liefke et al., 2001) and as a response to light or puffs of air entering the nest (Weber, 1957). Between nestmates, it thus seems to represent a form of communication, potentially a generalised activity upregulation signal. This is supported by the fact that this behaviour is hardly ever directed towards allospecifics (6.6% of encounters) but is almost ubiquitous in conspecific encounters between non-nestmates (93.3% of encounters) and between nestmates (100%). In Linepithema humile, jerking behaviour has been reported to be more common between nestmates after feeding on higher quality food (Sola & Josens, 2016). It is less clear what this jerking behaviour means between non-nestmate conspecifics. It has been reported to play a role in tournaments between ants in Lasius niger (Czechowski, 1984). According to Devigne and Detrain (2002), jerking behaviour is more common between non-nestmates than nestmates, and they assume that jerking behaviour between non-nestmates is a form of low-level aggression. Given this, and the pattern we describe of jerking and antennation decreasing with decreasing relatedness and with increasing spatial distance, we tentatively propose that the combination of antennation and jerking can be approximated to 'negotiation', where ants are gathering information about each other and attempting to avoid overt aggression. In a normal encounter, the ant pairs would be able to communicate, 'negotiate' and then withdraw. However, withdrawal was not possible in our experimental arenas.

TABLE 3 Effects of the distance (spatial distance between the colonies) and the relatedness (genetic relatedness between the colonies, based on the Queller and Goodnight estimator) on the three behavioural descriptors: M1: number of antennation events; M2: number of jerking events; and M3: aggression level.

The absence of a neighbour-status effect may indicate that neighbours and strangers present equivalent threat levels, or a lack of differentiation in the threat levels posed by neighbours and strangers. The latter case could be partly explained by the homogeneous environmental conditions within the studied area. In Cordonnier et al. (2022), higher levels of aggression were observed between allopatric individuals compared with individuals sharing similar environmental characteristics. Here, the relative similarity between nests in terms of substrate or available food could also have induced a homogenisation of cuticular hydrocarbons, with a consequent reduction in the recognition of non-nestmates and in aggressiveness (van Zweden & d'Ettorre, 2010). Nonetheless, such an environmental impact is not consistent in the literature, with some studies on the relationship between environment and aggressive behaviours suggesting correlations (e.g., Benedek & Kobori, 2014; Frizzi et al., 2015) while others do not (e.g., Langen et al., 2000; Martin et al., 2012). However, absence of evidence is not evidence of absence. Aggression in ants often varies strongly with season (Ichinose, 1991; Katzerke et al., 2006; Thurin & Aron, 2008), as raiding of conspecifics for brood and territorial disputes over valuable food resources likely occur mostly in spring (Czechowski, 1984).
In Plagiolepis pygmaea, significant seasonal variation is expected in the levels of aggression among workers of different colonies, according to the biological cycle of the species (Thurin & Aron, 2008). In Formica exsecta, aggression levels significantly correlated with the spatial distance between nests in spring, but not in summer or in autumn (Katzerke et al., 2006). Aggression among Formica polyctena colonies is highest in the spring when nests become active and tapers off in the summer; indeed, in autumn, neighbouring F. polyctena colonies share foraging trails without aggression (Mabelis, 1978). In Paratrechina flavipes, workers were aggressive to related individuals only during the season when the nest was active (Ichinose, 1991). It is thus reasonable to expect that such a seasonal effect could also occur in other species, including L. niger, which hibernates from October until the end of March; a Nasty Neighbour effect (higher aggression to direct neighbours) might thus manifest only in spring. However, we think it unlikely that a Dear Neighbour effect (lower aggression to direct neighbours) would manifest only at particular seasons, and no record in the literature suggests such a pattern. The experiment was conducted in high summer, which should be the ideal time to detect a Dear Neighbour effect, once territorial disputes are concluded. Thus, while we are not confident about excluding a Nasty Neighbour effect, we believe a Dear Neighbour effect is unlikely to play a role in L. niger. Finally, an important caveat is that we did not attempt to locate all L. niger colonies in the area, and we did not include all the known colonies in the area in the experiment (Figure 1b). Thus, it is possible that what we consider to be direct neighbours may have had a buffer colony in between them. Added to this is the fact that some colonies likely had more neighbours than others, potentially influencing NSRDs. For example, in Iridomyrmex purpureus, aggression towards non-nestmates was influenced by the density of surrounding conspecific nests, with more aggressive behaviour when nest density was higher (Thomas et al., 1999). NSRDs also occur between different species, such as between the dominant Formica integroides and the submissive F. xerophila (Tanner & Adler, 2009). Considering the local ant community and interspecific aggression could, therefore, provide more information, as L. niger shows a stronger dominance and aggressiveness towards other species, allowing a better differentiation of NSRDs at the interspecific level. All experiments are a trade-off between ecological realism and tight control: the current experiment is a field study which, although taking advantage of a semi-regular array to increase control, nonetheless cannot guarantee full control.

It must also be mentioned that, due to limited resources, the study had reduced power. Likewise, we lack data on neutral body contact from the first day of data collection. We note a non-significant trend for antennation to rise with relatedness and for aggression to fall. While we remain ambivalent about whether these represent biologically meaningful patterns, we again caution that an absence of evidence is not evidence of absence.
The fact that aggression did not correlate with spatial distance or relatedness, but that antennation and jerking did, highlights the importance of considering non-overtly aggressive behaviours when examining neighbour-stranger response differences, or the correlation of responses with physical distance and relatedness. While the theoretical framework for the field of NSRDs arises out of competition research (Temeles, 1994), it could also be used to examine the strategies used by animals to avoid aggression, as proposed here. Indeed, perhaps more emphasis needs to be given to increased cooperation between neighbours, which is both expected theoretically (Eliassen & Jørgensen, 2014; Getty, 1987) and observed empirically (Booksmythe et al., 2010; Elfström, 1997).

Overall, we find evidence that jerking and antennation behaviours are better measures for describing non-aggressive or pre-aggressive interactions among conspecifics than traditional measures of aggression such as biting and mandible gaping. These behaviours decrease with physical distance and increase with relatedness. We propose, as a working hypothesis, that these behaviours together can be considered 'negotiation' behaviour. Future studies, in which ants have the possibility of escaping, could shed light on this idea. We found no evidence of either a Dear Neighbour or a Nasty Neighbour effect, although for the latter we suggest that future studies should evaluate whether neighbours and strangers present varying degrees of threat and explore the occurrence of a potential Nasty Neighbour effect in spring. While physical distance and relatedness affect behaviour during encounters in the ecologically important ant L. niger, NSRDs do not seem to play a major role in its behavioural ecology.

FIGURE 1 (a) Encounter schematic (36 combinations × 5 replicates = 180 pairwise encounters tested). Each square represents a colony in the linear array. Colonies 1, 6 and 11 are target colonies and encounter each of the other colonies in the array, as well as ants from their own colony (nestmate encounters) and Formica rufibarbis (heterospecific encounters). Each arrow represents 5 pairwise encounters. (b) Position of all slabs in the test field. Non-numbered slabs were not used in this experiment.

TABLE 2 Behaviours recorded.
FIGURE 3 Results of the predicted effects for the three models M1-M3, investigating the relationship between the behaviour observed during the encounter and the spatial distance (left) and the genetic relatedness (right) between the colonies (n = 150). The figure provides predictions for the response variable (Y-axis) across the range of values of each explanatory variable (X-axis), whilst holding values of the other explanatory variable constant. The confidence band (shaded; 95% CI) and the regression line (bold) have been calculated based on the values predicted by the models. Note that the genetic relatedness axis has been reversed to make spatial distance and genetic relatedness easier to compare.
Improving pentose fermentation by preventing ubiquitination of hexose transporters in Saccharomyces cerevisiae

Background: Engineering of the yeast Saccharomyces cerevisiae for improved utilization of pentose sugars is vital for cost-efficient cellulosic bioethanol production. Although endogenous hexose transporters (Hxt) can be engineered into specific pentose transporters, they remain subject to glucose-regulated protein degradation. Therefore, in the absence of glucose, or when the glucose is exhausted from the medium, some Hxt proteins with high xylose transport capacity are rapidly degraded and removed from the cytoplasmic membrane. Turnover of such Hxt proteins may thus lead to poor growth on solely xylose.

Results: The low affinity hexose transporters Hxt1, Hxt36 (a Hxt3 variant), and Hxt5 are subject to catabolite degradation, as evidenced by a loss of GFP-fused hexose transporters from the membrane upon glucose depletion. Catabolite degradation occurs through ubiquitination, which is a major signaling pathway for turnover. Therefore, N-terminal lysine residues of the aforementioned Hxt proteins, predicted to be the targets of ubiquitination, were replaced by arginine residues. The mutagenesis resulted in improved membrane localization when cells were grown on solely xylose, concomitant with markedly stimulated growth on xylose. The mutagenesis also improved the late stages of sugar fermentation when cells were grown on both glucose and xylose.

Conclusions: Substitution of N-terminal lysine residues in the endogenous hexose transporters Hxt1 and Hxt36, which are subject to catabolite degradation, results in improved retention at the cytoplasmic membrane in the absence of glucose and causes improved xylose fermentation upon the depletion of glucose and when cells are grown on d-xylose alone.

Electronic supplementary material: The online version of this article (doi:10.1186/s13068-016-0573-3) contains supplementary material, which is available to authorized users.

Background

During the last three decades, biofuels have received a lot of attention, since they are derived from renewable lignocellulosic biomass and can potentially replace conventional fossil fuels. However, the lignocellulosic biomass (from hardwood, softwood, and agricultural residues) used to produce bioethanol contains up to 30 % xylose next to the glucose [1]. In industrial fermentation processes, Saccharomyces cerevisiae is generally used for ethanol production, but this yeast cannot naturally ferment pentose sugars such as xylose. Although S. cerevisiae has been engineered into a xylose-fermenting strain via either the insertion of a xylose reductase and xylitol dehydrogenase [2,3] or a fungal xylose isomerase [4-6], glucose remains the preferred sugar and is consumed first. Therefore, during growth of contemporary xylose-fermenting S. cerevisiae strains on a second-generation feedstock, consumption rates of xylose in the presence of high glucose concentrations have always remained moderate [7].
Instead, bi-phasic sugar consumption is observed, which relates to sequential sugar uptake wherein first glucose and subsequently xylose is sequestered by the cells. All wild-type S. cerevisiae hexose transporter (Hxt) proteins show a higher K m value for xylose uptake as compared to glucose, which explains the preference for glucose over xylose [7,8]. More recently, co-fermentation of these sugars has been achieved through the engineering of endogenous Hxt transporters [9,10], yielding xylose transporters that are not inhibited by glucose. A Hxt-deletion strain, in which the HXT1-17 and GAL2 genes were removed, is unable to grow on xylose and glucose, while growth on xylose could be complemented with Hxt4, Hxt5, Hxt7, and Gal2 [7]. Saloheimo and coworkers [8] additionally showed that Hxt1 and Hxt2 are also able to transport xylose in a strain in which the main hexose transporter genes HXT1-7 and GAL2 were deleted. Hxt3 was not analyzed in any of these studies. In general, the HXT family of sugar transporters facilitates glucose transport in S. cerevisiae [11,12], with Hxt1-7 and Gal2 being the main and most highly expressed transporters, exhibiting different affinities for glucose and thus covering a wide range of glucose concentrations [12,13]. Hxt transporters can be divided into three groups on the basis of their glucose affinity and expression: the low affinity glucose transporters Hxt1 and Hxt3 (K m 40-100 mM), which are expressed at high glucose concentrations but disappear from the membrane at low glucose concentration; the moderate affinity glucose transporters Hxt4 and Hxt5 (K m 10-15 mM), with a varied expression profile; and the high affinity transporters Hxt2 and Hxt7 (K m 1-3 mM), which are solely expressed at lower glucose concentrations [11,13-16]. Gal2, which is a galactose transporter, also shows a high affinity for glucose (K m 1.5 mM) [9]. However, the GAL2 gene is expressed only when galactose is present [13,17]. Expression studies have shown that the HXT1-4 genes are mainly repressed by the Rgt1 repressor, which recruits the general corepressor Cyc8-Tup1 complex and the co-repressor Mth1 to the HXT promoters in the absence of glucose [18-23]. HXT genes are induced by three major glucose signaling pathways (Rgt2/Snf3, AMPK, and cAMP-PKA), which bring about glucose induction by inactivating the Rgt1 repressor [24-26]; as demonstrated for Hxt1 and Hxt3, low glucose concentrations subsequently trigger endocytosis and vacuolar degradation of the cytoplasmic membrane-localized transporters [20,27,28]. Protein degradation in S. cerevisiae is brought about via the ubiquitination of the target proteins [28]. Ubiquitin is typically linked to the target protein through an isopeptide bond between the ɛ-amino group of a substrate lysine residue and the carboxyl terminus of ubiquitin [29]. Hxt1 has previously been shown to be ubiquitinated when the glucose levels in the medium are low [20]. A potential issue with the use of Hxt-derived, engineered xylose transporters is that their overexpression does not always match the growth phase and/or carbon source under study.
For example, if a xylose transporter were derived from Hxt3 by protein engineering, one should note that Hxt3 is intrinsically a low-affinity glucose/xylose transporter induced at high glucose concentrations, while the protein is rapidly degraded and removed from the plasma membrane in the absence of glucose [28]. Hxt3 indeed supports only limited or no growth when cells are supplied with solely xylose [7]. The Hxt1 [20] and Hxt3 [28] transporters have in common that, upon depletion of glucose in the medium, they are removed from the membrane, and for Hxt1 it was shown that it is indeed ubiquitinated at two lysine residues in the N-terminus [20]. This pathway involves endocytosis and vacuolar degradation. Hxt36 is a chimeric protein constituting the N-terminus of Hxt3 (438 amino acids) and the C-terminus of Hxt6 (130 amino acids). This chimeric protein occurs in specific xylose-consuming S. cerevisiae strains that have been evolved for industrial bioethanol formation [10]. Hxt36 is highly homologous to Hxt3 and, like Hxt3 [28], also susceptible to degradation in the absence of glucose. Thus, the presence of the C-terminus of Hxt6 did not rescue Hxt36 from turnover. Moreover, in a Hxt-deficient strain, Hxt36 supported only slow xylose consumption in a fermentation in the absence of glucose [10]. Hxt5 is an intermediately expressed hexose transporter at low glucose concentration, exhibiting a moderate affinity for glucose (~10 mM) [30], and is regulated by growth rate [14]. Hxt5 is degraded at high concentrations of glucose in the medium [31]. In stationary-phase cells, Hxt5 is transiently phosphorylated on serine residues and no ubiquitination was detected [31]. As a possible strategy to prevent the protein degradation of the aforementioned transporters Hxt1, Hxt36, and Hxt5, we have mutated lysine residues predicted to be potential targets for ubiquitination and expressed these mutant proteins in xylose-fermenting yeast. The presented data show that the mutagenesis results in a marked retention of these transporters at the cytoplasmic membrane at both high and low glucose concentrations, and in improved growth on solely xylose in anaerobic fermentations. Thereby, a method is proposed to retain potentially interesting HXT-based, engineered transporters with affinity for xylose at the membrane in mixed sugar fermentations with varying glucose concentrations.

Mutagenesis of putative ubiquitination sites on Hxt transporters and growth on xylose

Hxt transporters in S. cerevisiae are regulated both at the transcriptional and at the posttranslational level [16]. Here, individual HXT genes were removed from their native transcriptional regulation and constitutively expressed under control of the truncated (−390) HXT7 promoter in a ∆hxt1-7 ∆gal2 deletion strain, with the aim to investigate the impact of the posttranslational degradation process on sugar fermentation. The Hxt36 amino acid sequence was analyzed for putative ubiquitination sites using UbPred (http://www.ubpred.org/), predicting possible ubiquitination of the lysine residues at positions 12, 35, 420, 557, and 561 (Fig. 1; numbered with Hxt1 as a reference). Lysine 420 showed a low confidence score and is localized to an external loop of Hxt36. Therefore, we focused on the lysine positions in the N- and C-terminus of Hxt36. These residues were mutagenized to arginine, and various combinations of mutagenized Hxt36 proteins were generated.
Via overlap-PCR the following combinations were made: K12R; K12,35,56R; K514,533,557,561,567R; and K12,35,56,514,533,557,561,567R, which were cloned into pRS313-P7T7 [10] and renamed 1K, 3K, 5K, and 8K, respectively. Furthermore, all aforementioned mutations were also introduced into the HXT36 N367A gene, which enables co-fermentation of d-glucose and d-xylose due to an improved substrate specificity towards d-xylose over d-glucose [10]. All mutant and wild-type HXT36 genes were transformed into strain DS68625, which lacks the HXT1-7 and GAL2 genes and is equipped with a xylose fermentation pathway [11]. Subsequently, cells were grown on 2 % d-xylose (Fig. 2). In this strain, growth on xylose is dependent on the introduction of a Hxt transporter. In the case of both Hxt36 variants, the triple lysine mutations in the N-terminus of Hxt36 (3K) enabled efficient growth on d-xylose as the sole carbon source. Notably, with the mutant HXT36 N367A gene, growth solely on d-xylose occurred with an increased lag phase (Fig. 2b). The Hxt36 wild type bearing the mutations in all C- and N-terminal lysine residues (8K) also enabled improved growth on d-xylose, but not as well as the 3K mutant (Fig. 2a). The 5K mutant, with mutations only in the C-terminus, and the 1K mutant showed only small improvements. The data show that replacement of, in particular, the three N-terminal lysine residues of Hxt36 by arginine results in improved growth on d-xylose. A similar approach was followed for Hxt1 and Hxt5, in which the N-terminal lysine residues were replaced by arginine residues at positions 12, 27, 35, and 59, and at positions 28, 48, 61, 69, 77, 78, and 80, respectively (Fig. 1). These mutants are further referred to as Hxt1 K4 and Hxt5 K7. All mutant proteins were cloned into pRS313-P7T7 and transformed into the DS68625 strain. In contrast to the wild-type Hxt1, the strain carrying the Hxt1 K4 mutant was capable of growth on 2 % d-xylose (Additional file 1: Figure S1A). The Hxt5 K7 mutant did not improve upon the wild-type Hxt5 (Additional file 1: Figure S1B), confirming an earlier study [31]. These data show that mutating potential ubiquitination sites on Hxt transporters unmasks the ability of such transporters to support growth of yeast on solely xylose. In the remainder of this study, we therefore focused on the Hxt36 3K mutant, which showed the most prominent effect.

Fig. 1 Alignment of the Hxt1, Hxt36, and Hxt5 transporters. The lysine residues mutated in the respective hexose transporters are boxed, and the transmembrane domains (TMDs) are shaded gray. The position of the asparagine residue in Hxt36 that was mutated to an alanine to obtain the Hxt36-N367A mutant is indicated with an arrow. Numbering of the targeted lysine residues is for Hxt36.

Membrane localization and retention of mutated Hxt transporters

To determine whether the improved growth on d-xylose relates to an improved retention of the mutated Hxt transporters at the cytoplasmic membrane, which is expected when turnover is prevented, the different mutants were fused C-terminally to the fluorescent reporter GFP. The various fusion proteins were expressed in the DS68625 strain, and cells were pre-grown on d-glucose and transferred to a medium with 2 % d-glucose or d-xylose. Next, at various time intervals, the cellular localization of the Hxt-GFP fusion proteins was assessed by fluorescence microscopy. Since the growth rate on d-xylose (Fig. 2)
and on low concentrations of d-glucose (data not shown) was significantly increased for the respective lysine mutants as compared to the wild-type Hxt36 and Hxt36 N367A, it is difficult to obtain S. cerevisiae cells that are in exactly the same, actively budding, state. Therefore, microscopic investigations were performed over a large time span of growth to observe the general trend. On d-glucose, Hxt36 was readily degraded, as a major share of the GFP fluorescence was recovered in the vacuole already after 16 h of growth. At that time point, the d-glucose concentration was close to zero (data not shown). Progressively less GFP signal was retained at the plasma membrane over time, and after 40 h hardly any GFP fluorescence could be localized at the cytoplasmic membrane (Fig. 3a). Wild-type Hxt36 supported only slow growth on xylose (Fig. 2). At the start of the growth experiment on d-xylose (Fig. 3b, T0), a plasma membrane signal was still observed, owing to the pre-culture conditions on glucose, but subsequent degradation was severe, as after 16 h hardly any GFP could be localized to the cytoplasmic membrane (Fig. 3b). In contrast, the Hxt36-3K GFP fusion localized almost exclusively to the plasma membrane, independent of the carbon source and the time the strain was grown (Fig. 3a, b). The Hxt36 N367A mutant showed a similar phenotype as the wild-type protein, and here too, mutagenesis of the N-terminal lysine residues resulted in stable cytoplasmic membrane localization. Thus, the mutagenesis of the three lysines in this mutant had a marked effect on membrane retention of Hxt36. Wild-type Hxt1 was readily degraded when cells were grown on d-xylose or when the d-glucose was exhausted from the medium, whereas the Hxt1 K4 mutant stably localized to the cytoplasmic membrane, even after 40 h (Additional file 1: Figure S2A). In contrast, with the Hxt5-GFP fusion a slower protein degradation rate was noted when the d-glucose was utilized, and even after 40 h some of the GFP still localized to the cytoplasmic membrane. Similar results were obtained when cells were grown on d-xylose (Additional file 1: Figure S2A). The Hxt5 K7 mutant showed improved retention (Additional file 1: Figure S2B), but this effect was not as marked as with Hxt36 and Hxt1. In line with the observation that Hxt5 is more stably expressed than Hxt36 and Hxt1 when cells are grown only on d-xylose, growth on d-xylose was not markedly improved by the Hxt5 K7 mutant (Additional file 1: Figure S2B). It is concluded that mutagenesis of the N-terminal lysine residues to arginine in Hxt36 and Hxt1, which is predicted to interfere with ubiquitination, results in improved cytoplasmic membrane localization of the aforementioned transporters.

Sugar uptake by the mutant Hxt proteins

The improved xylose fermentation characteristics of the cells bearing the Hxt36 and Hxt1 transporters with mutated ubiquitination sites are likely due to decreased protein degradation and hence more stable membrane localization. However, the mutations may also have altered the transport characteristics of the transporters. To circumvent the observed differences in protein degradation in the absence of d-glucose, and to assure identical growth conditions, all strains carrying the wild-type and mutated Hxt transporters were inoculated and grown in minimal medium containing 4 % d-maltose for only 15 h, to prevent depletion of the d-maltose. Although there is a marked difference in the uptake of d-glucose (Fig. 4a) and d-xylose (Fig. 4b)
between the Hxt36 and the Hxt36 N367A mutant, owing to the altered substrate specificity [10], the N-terminal lysine mutations had little impact on the affinity of transport (Table 1). Compared to the previously described Hxt11 transporter [32], the K m value for d-xylose of Hxt36 is similar (i.e., 71.8 ± 3.6 mM for Hxt36 vs 84.2 ± 10.0 mM for Hxt11), whereas the V max for xylose transport is higher for Hxt11 (i.e., 48.1 ± 8.0 nmol/mgDW.min for Hxt36 vs 84.6 ± 2.4 nmol/mgDW.min for Hxt11) (Table 1). A similar difference in V max is apparent when the mutated versions of Hxt36 (N367) and Hxt11 (N366) are compared (Table 1). Hxt1 and Hxt5 were not analyzed in detail with respect to the sugar transport kinetics, but uptake was assessed only at 100 mM of d-glucose or d-xylose, again using cells grown on maltose. Also in this case, the N-terminal lysine mutations did not affect the uptake (see Additional file 1: Figure S3A, B). These data show that the lysine mutations introduced in the N-terminus have little effect on the actual transporter affinity. Rather, the mutations affect the stability of Hxt1 and Hxt36 in the absence of glucose, and thus will support increased transport rates on solely d-xylose.

Sugar fermentations

To investigate whether the low levels of Hxt protein degradation also impact the profile in mixed sugar fermentation, the Hxt36 wild-type and N367A mutant, with and without the N-terminal lysine mutations, were grown anaerobically with 3 % d-glucose and 3 % d-xylose. Wild-type Hxt36 supported the characteristic sequential consumption of d-glucose and d-xylose (Fig. 5a). At the end of the fermentation, i.e., after 48 h, the d-xylose was not yet completely consumed (3.32 g/l d-xylose left). The Hxt36-3K mutant also showed bi-phasic sugar consumption, but with improved d-xylose consumption, such that at the end of the fermentation nearly all d-xylose was utilized, with a concomitant increase in growth (Fig. 5a) and ethanol productivity (0.54 ± 0.03 g/l.h for Hxt36-3K vs 0.48 ± 0.04 g/l.h for Hxt36) (Additional file 1: Table S2). Compared to an earlier study using the wild-type and the N366A mutant of Hxt11 [32], the ethanol productivity appears about three times lower in this study. However, the ethanol productivity depends on the inoculation OD 600 and the total sugar concentration, both of which were higher in the aforementioned study. The ethanol yield (Y EtOH) and the specific ethanol production rate (Q EtOH) were, however, similar (Additional file 1: Table S2). In contrast to Hxt36, the Hxt36 N367A variant and its 3K mutant both showed co-consumption of d-glucose and d-xylose, and there was no increased d-xylose consumption rate at the end of the fermentation (Fig. 5b). Both strains consumed all d-glucose and d-xylose within 48 h, showing in the end similar growth (Fig. 5a) and ethanol productivity as the Hxt36-3K mutant (Additional file 1: Table S2). It should be stressed that the greatest benefit of the lysine mutations for d-xylose consumption is expected in strains that show bi-phasic sugar utilization, as d-xylose consumption commences only once the d-glucose is exhausted, which in turn induces turnover of the respective transporters. When cells co-consume both sugars, d-xylose transport will hardly be affected, as the presence of d-glucose will ensure membrane retention of the Hxt36 transporter.
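For reference, the kinetic and fermentation parameters used in this and the following section can be read against their standard definitions; the formulas below reflect conventional usage and are not spelled out in the source. Facilitated sugar uptake at substrate concentration [S] follows Michaelis-Menten kinetics,

$$ v = \frac{V_{\max}\,[S]}{K_m + [S]}, $$

and the ethanol yield and specific ethanol production rate are conventionally taken as

$$ Y_{\mathrm{EtOH}} = \frac{\text{ethanol formed (g)}}{\text{sugar consumed (g)}}, \qquad Q_{\mathrm{EtOH}} = \frac{1}{X}\,\frac{d[\mathrm{EtOH}]}{dt}, $$

with X the biomass concentration; the volumetric ethanol productivity (g/l.h) is the same rate without the biomass correction, which is why the text notes that Q EtOH is corrected for the biomass.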
When grown solely on 3 % d-xylose, the ethanol production rate (Q EtOH) and the ethanol productivity of both 3K mutants were significantly increased compared to the Hxt36 wild-type and N367A mutant transporters (Additional file 1: Table S2). This is because the wild-type Hxt36 supports only poor growth on solely d-xylose (Fig. 2). The Q EtOH is, however, corrected for the biomass and is therefore less improved. On 3 % d-glucose, no major differences were observed among the various transporters, as expected, since the transporters then remain at the membrane. The yeast strains bearing Hxt1 and Hxt5 and their respective lysine mutants were also subjected to mixed sugar fermentation. Compared to Hxt36, Hxt1 showed an extended fermentation time, but the Hxt1 K4 mutant consumed the d-xylose faster than the wild type (Additional file 1: Figure S4A), also resulting in more biomass (OD 600). The fermentation profile of Hxt5 is similar to that of Hxt36, but both Hxt5 and the Hxt5 K7 mutant consumed both sugars within 48 h. However, the remaining d-xylose was consumed somewhat faster by the Hxt5 K7 mutant than by the wild type (Additional file 1: Figure S4B); the d-xylose consumption rates in the absence of d-glucose (at time points 24-32 h) were 1.35 ± 0.16 g/h and 1.18 ± 0.15 g/h, respectively. Taken together, these data suggest that mutations that prevent ubiquitination of Hxt36, Hxt1, and Hxt5 result in enhanced rates of xylose consumption in the late stages of fermentation when cells are grown on a mixture of d-xylose and d-glucose.

Discussion

In the yeast Saccharomyces cerevisiae, the expressed transporters Hxt1-7 function as facilitators for d-glucose. With a reduced affinity, these systems also mediate influx of d-xylose, which is a critical step when cells grow on processed lignocellulosic biomass that contains both d-glucose and d-xylose. For industrial bioethanol production, xylose-fermenting S. cerevisiae strains are used, but in such strains d-xylose consumption commences only when the d-glucose is exhausted. For a more robust fermentation, co-consumption of both sugars is desired. Although several S. cerevisiae strains metabolize d-xylose efficiently, uptake, and therefore consumption, of d-xylose is strongly inhibited by d-glucose [10]. Although some co-consumption has recently been observed, depending on the specific S. cerevisiae strain examined [33], this problem is much more efficiently tackled by mutagenesis of endogenous Hxt transporters, resulting in specific d-xylose uptake even in the presence of d-glucose [9,10,32,34]. However, many of the Hxt proteins are rapidly degraded in the absence of d-glucose [28], and therefore their capacity to mediate d-xylose transport is underestimated, as these transporters are readily removed from the cytoplasmic membrane by a mechanism that involves the ubiquitination-dependent degradation pathway once the d-glucose is depleted in the medium. Here, we have shown that catabolite degradation of the chimeric Hxt36 transporter can be overcome by substituting the three N-terminally located lysine residues (3K) for arginine, which should effectively prevent ubiquitination. This mutagenesis indeed has a major effect on the growth of S. cerevisiae on solely d-xylose. Previously, to sustain growth on d-xylose, small amounts of d-maltose were needed to shorten the lag phase of cells bearing Hxt36 grown on 2 % d-xylose [10]. Although this allowed a more rapid start of growth, overall growth on d-xylose remained limited.
In contrast, the Hxt36-3K mutant used in the current study supports rapid growth even without the addition of maltose. In an industrial-like fermentation with high levels of d-glucose and d-xylose, the Hxt36-3K mutant showed sequential utilization of d-glucose and d-xylose, but the overall fermentation time was slightly shortened due to an increased rate of d-xylose consumption in the absence of d-glucose at the end of the fermentation. In contrast, our previously reported Hxt36 N367A mutant [10] expressed in the DS68625 strain shows co-consumption of d-glucose and d-xylose. With this mutant, the lysine mutations did not affect the overall fermentation, likely because d-glucose remained present throughout the fermentation. Under those conditions, Hxt36 will remain at the membrane and thus sustain co-consumption. Using GFP fusions to monitor the membrane localization and expression of Hxt36, the mutagenesis of the N-terminal lysine residues was indeed found to stabilize the expression of Hxt36 and Hxt36 N367A at the cytoplasmic membrane when cells are grown only on d-xylose, which is consistent with the presumed role of ubiquitination in catabolite degradation. A recent study that analyzed glucose starvation-induced turnover of Hxt1 showed that the two N-terminally located lysine residues at positions 12 and 39 are required for ubiquitination and thus degradation [20]. We mutated all N-terminal lysine residues (at positions 12, 27, 35 and 59) and obtained similar results with respect to membrane retention. The mutations improved Hxt1-dependent d-xylose fermentation, but overall the effect was smaller than with Hxt36. The latter might be due to the poorer K m value of Hxt1 for d-xylose, i.e., 880 ± 8 mM [8] vs 108 ± 12 mM for Hxt36 [10]. Thus, with Hxt1, the K m value is far above the residual d-xylose concentrations at the end of the fermentation (<50 mM), likely causing slow uptake of the d-xylose in the final stages of fermentation and leading to incomplete fermentation. The hexose transporter Hxt5 shows a moderate affinity for glucose (10 ± 1 mM) [30], and this transporter is regulated differently from Hxt1 and Hxt3 [36], which are low-affinity glucose transporters expressed early during fermentations at high glucose concentrations [20,35]. Hxt5 is mainly expressed on non-fermentable carbon sources and at low growth rates [14,30]. Also the degradation of Hxt5 appears to differ from that of Hxt1 and Hxt36, since in stationary-phase cells, after addition of d-glucose, Hxt5 is transiently phosphorylated on serine residues while no ubiquitination could be detected [31]. However, it was proposed that the ubiquitination might have been below the detection limit and therefore could not be excluded. In this respect, the Hxt5 mutant with multiple lysine mutations at the N-terminus, as reported in this study, clearly showed improved membrane localization and significantly less vacuolar degradation in the late stages of growth, suggesting that ubiquitination may also be involved in the degradation of Hxt5. Nevertheless, this phenomenon has little impact on growth on solely d-xylose or in anaerobic fermentation with d-glucose and d-xylose. Most likely, the amount of Hxt5 at the plasma membrane in the absence of d-glucose is still sufficient to maintain d-xylose uptake and therefore metabolism.
Conclusions

Membrane localization of the low-affinity hexose transporters Hxt1, Hxt36, and Hxt5 is improved by arginine replacement of the N-terminally located lysine residues that are potentially involved in ubiquitination. Interference with ubiquitination results in reduced catabolite degradation and retention of the hexose transporters also in the absence of d-glucose in the medium. Consequently, improved growth on d-xylose occurs with cells bearing Hxt1 and Hxt36 as sole transporters. The improved growth rate on d-xylose, in the absence of d-glucose, also improves the fermentation time in an industrial-like setting when cells are grown on both d-glucose and d-xylose.

Molecular biology techniques and chemicals

DNA polymerase, restriction enzymes, and T4 DNA ligase were acquired from ThermoFisher Scientific and used following the manufacturer's instructions. Oligonucleotides used for strain constructions were purchased from Sigma-Aldrich (Zwijndrecht, the Netherlands).

Strains and growth conditions

All S. cerevisiae strains used in this study were provided by DSM Bio-based Products & Services and described, in detail, elsewhere [10]. They are made available for academic research under a strict Material Transfer Agreement with DSM (contact: paul.waal-de@dsm.com). In short, the xylose-fermenting S. cerevisiae strains are capable of converting xylose into xylulose via an introduced xylose isomerase (XI), whereupon xylulose is phosphorylated into xylulose-5P by xylulose kinase (Xks1). Xylulose-5P then enters the pentose phosphate pathway. In addition, the key enzymes of the pentose phosphate pathway (Tal1, Rpe1, Rki1, and Tkl1) are overexpressed. Yeast expression plasmid pRS313 is described elsewhere [36] and was modified using the promoter and terminator of Hxt7 [10]. Shake flask experiments at 200 rpm were done in minimal medium supplemented with d-maltose, d-xylose, and d-glucose as indicated. For the fermentation experiments, cells were pregrown on 2 % d-glucose for 16 h and then used to inoculate the main fermentation at a starting OD600 of 0.2 using either 3 % d-glucose, 3 % d-xylose, or 3 % d-glucose and 3 % d-xylose. Cells were grown anaerobically for up to 48 h. Cell growth was monitored by optical density (OD) at 600 nm using a UV-visible spectrophotometer (Novaspec PLUS).

Analytical methods

High performance liquid chromatography (HPLC; Shimadzu, Kyoto, Japan) was performed using an Aminex HPX-87H column at 65 °C (Bio-Rad), and a refractive index detector (Shimadzu, Kyoto, Japan) was used to measure the concentrations of d-glucose, d-xylose, and ethanol. The mobile phase was 0.005 N H2SO4 at a flow rate of 0.55 ml/min.

Cloning of HXT36, HXT1, and HXT5 and mutants

The pRS313-P7T7 plasmid, containing the Cen/ARS low copy origin and histidine selection marker, expressing Hxt36 and Hxt36 N367A, was used in a previous study [10]. The genes HXT1 and HXT5 were amplified from genomic DNA of the DS68616 strain [10] using the primers listed in Additional file 1: Table S1 with the Phusion® High-Fidelity PCR Master Mix with HF buffer. The full-length DNA of HXT1 and HXT5 was amplified using primers F HXT1 XbaI, R HXT1 Cfr9I and F HXT5 XbaI, R HXT5 Cfr9I, respectively, and cloned into pRS313-P7T7. Overlap PCR with the Phusion® High-Fidelity PCR Master Mix, in which the original HXTs in the pRS313-P7T7 plasmid were used as template, was used to amplify the hexose transporters and modify the specified lysines into arginines using the primers in Additional file 1: Table S1.
The C-terminal GFP fusions with Hxt36, Hxt1, and Hxt5 and their lysine mutants were made by amplification of the corresponding genes with the Phusion® High-Fidelity PCR Master Mix using the corresponding forward primer (with and without lysine mutations) and the reverse primer without stop codon (Additional file 1: Table S1).

Fluorescence microscopy

Fresh colonies of the transformants expressing the various variants of HXT36 in DS68625 were inoculated in duplicate in minimal medium with 2 % d-glucose and grown overnight. Cultures were subsequently inoculated in 2 % d-glucose and d-xylose at a starting OD600 of 0.1. To determine the cellular localization, the fluorescence was analyzed after 0, 16, 24, and 40 h using a Nikon Eclipse-Ti microscope equipped with a 100× oil immersion objective, a filter set for GFP, and a Nikon DS-5Mc cooled camera.

Uptake measurement

To determine the kinetic parameters of transport, cells were grown for 15 h in shake flasks in minimal medium containing 4 % d-maltose and were collected by centrifugation (3000 rpm, 3 min, 20 °C), washed, and re-suspended in minimal medium without carbon source.
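Kinetic parameters such as the Km values discussed above are typically estimated by fitting initial uptake rates to the Michaelis-Menten equation; the sketch below illustrates such a fit with invented data points, not the measurements of this study.

```python
# Hypothetical sketch: estimating Km and Vmax from initial sugar-uptake
# rates by non-linear least squares. The data points below are invented
# for illustration and are not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([5, 10, 25, 50, 100, 200, 400])        # mM d-xylose
v = np.array([0.9, 1.7, 3.4, 5.2, 7.0, 8.4, 9.3])   # nmol/(min*mg), invented

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(10.0, 100.0))
print(f"Vmax ~ {vmax:.1f} nmol/(min*mg), Km ~ {km:.0f} mM")
```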
Rapid diagnostics of tuberculosis and drug resistance in the industrialized world: clinical and public health benefits and barriers to implementation

In this article, we give an overview of new technologies for the diagnosis of tuberculosis (TB) and drug resistance, consider their advantages over existing methodologies, broad issues of cost, cost-effectiveness and programmatic implementation, and their clinical as well as public health impact, focusing on the industrialized world. Molecular nucleic-acid amplification diagnostic systems have high specificity for TB diagnosis (and rifampicin resistance) but sensitivity for TB detection is more variable. Nevertheless, it is possible to diagnose TB and rifampicin resistance within a day, and commercial automated systems make this possible with minimal training. Although studies are limited, these systems appear to be cost-effective. Most of these tools are of value clinically and for public health use. For example, whole genome sequencing of Mycobacterium tuberculosis offers a powerful new approach to the identification of drug resistance and to mapping transmission at a community and population level.

Introduction

In 2011, 8.7 million people suffered from active tuberculosis (TB) with 1.4 million deaths, over 95% of these deaths occurring in low- and middle-income countries. TB is also a major killer of those co-infected with human immunodeficiency virus (HIV), causing one quarter of all deaths [1]. TB continues to be a significant public health and clinical problem in the industrialized world. Within the World Health Organization (WHO) European Region, countries in the East have much higher notification rates than those in the West. The Region reported 309,648 new episodes of TB (34.0 per 100,000 people), with more than 60,000 deaths estimated as being due to TB, or 6.7 cases per 100,000 people [2]. Notification rates for newly-detected and relapsed TB cases in the WHO 18 High Priority Countries (all from the central and eastern part of the European Region) remained almost eight times higher (68.5 per 100,000 people) than in the rest of the Region (8.4 per 100,000) [2]. Combination drug therapy has been the mainstay of TB treatment for decades, and six-month short-course rifampicin-based regimens will cure almost all cases. However, interrupted and incomplete therapy selects for drug resistant strains, which are more difficult to treat successfully. Multidrug-resistant tuberculosis (MDR-TB) is caused by bacteria resistant to, at least, isoniazid and rifampicin, two key first-line anti-TB drugs. Although treatable with second-line drugs, therapy is less effective, more toxic and prolonged, requiring up to two years of treatment. Further resistance can develop to form extensively drug-resistant TB (XDR-TB), that is, MDR-TB strains resistant to any fluoroquinolone and amikacin, capreomycin or kanamycin. Strains effectively resistant to all available drugs have emerged. There were an estimated 310,000 cases of MDR-TB among notified TB patients with pulmonary TB in the world in 2011, with almost two-thirds of the cases occurring in India, China, and the Russian Federation and Former Soviet States, including the Baltic countries [3]. Extensive travel and migration facilitate transmission of resistant strains even to the countries of Western and Central Europe, where the rates of drug resistance remain low.
Exciting new advances in TB diagnostics offer the hope of earlier diagnosis, increased cure rates and greater public health benefit by reducing disease transmission. For a long time, laboratories were neglected and considered unimportant in the non-industrialized world, with an over-emphasis on the importance of simple microscopy for case diagnosis. In middle- and high-income countries, development continued with innovations in microscopy (for example, light emitting diode (LED) microscopes), microbiological culture (for example, rapid automated liquid culture systems, like the Becton Dickinson MGIT 960; Becton Dickinson, Sparks, Maryland, USA) and nucleic acid amplification systems (for example, Hain Lifescience (Nehren, Germany) line probe assays and automated systems, such as the Cepheid Xpert® MTB/RIF system; Cepheid, Inc., Sunnyvale, CA, USA). Although there are point of care (POC) diagnostic tests under development, accurate diagnosis of TB and drug resistant TB requires some form of laboratory infrastructure (ranging from a simple light microscope to molecular diagnostic instruments and/or multi-room laboratory suites with complex biosafety facilities for handling manual and automated liquid culture). This article gives an overview of new technologies for TB detection as well as drug resistance (including MDR-TB and XDR-TB), focusing on immunocompetent patients in the industrialized world. The role of new diagnostics for TB detection in HIV co-infected individuals and low income countries has been described elsewhere [4][5][6].

New microscopy

Light microscopy (LM) of sputa has been the bedrock of TB laboratory diagnosis for decades. It utilizes cheap equipment and materials but is insensitive, non-specific (especially in the context of industrialized countries, where non-TB mycobacterial infections are more common), and requires patient, time-consuming observation of slides. Fluorescent microscopy (FM) is superior in that it is more sensitive than LM and has a higher throughput, but the equipment and bulbs are expensive [7,8]. Advances in physics led to the development of light emitting diodes (LED) with appropriate fluorescent light output coupled with low power consumption, creating cheaper, robust LED FM microscopes requiring minimal mains or battery power. The WHO has recommended rolling out LED microscopes in lower income settings, where they offer the throughput and sensitivity of more expensive fluorescent microscopes and are, therefore, of benefit in high HIV prevalence environments where sputum samples may carry a lower bacterial load; they can also be used successfully in middle or higher income settings [9,10]. For example, a multicenter study assessing the ease and effectiveness of LED-based fluorescence microscopy for TB detection (using the Primo Star iLED; Carl Zeiss, Oberkochen, Germany) was conducted in the Samara Region, Russia in 2008 and 2009, including two sites with no prior experience in fluorescence microscopy (unpublished data). The first phase ("ZN baseline") aimed to create a control group of Ziehl-Neelsen (ZN) stained slides to evaluate false positivity and negativity rates at the demonstration sites. During the validation phase, both sites switched to LED-FM after training, followed by implementation, where all slides were stained by auramine only. In this Russian study, the overall false positivity and false negativity rates were 5.2% and 1.7%, respectively.
The false positive rates for each successive phase were 9.2% (baseline introduction and comparison with current Ziehl-Neelsen staining), 4.5% (validation), 1.1% (implementation) and 1.0% (continuation); equivalent false negative rates for each successive phase were 1.7% (baseline and comparison with current Ziehl-Neelsen staining), 2.4% (validation), 1.9% (implementation) and 0.9% (continuation). The proportions of false positive and false negative results declined over the stages and the proportion of major errors was almost negligible, demonstrating that LED-FM can be easily implemented in any TB laboratory, even with limited prior staff experience. All participating microscopists demonstrated a high level of satisfaction, explained by the increased speed of the examination and ease of use.

Novel molecular amplification test performance for TB diagnosis

Rapid tools for TB detection developed over the last decade in the industrialized world are largely Nucleic Acid Amplification Tests (NAAT) based on amplification of nucleic acids (DNA or RNA), often combined with highly specific detection systems (hybridization with specific oligonucleotide probes or alternatives) to increase the sensitivity and specificity of an assay. The polymerase chain reaction (PCR) is the most common methodology utilized in NAAT; alternatives include real-time PCR (RT-PCR), isothermal, strand displacement or transcription-mediated amplification and ligase chain reaction [11][12][13][14][15] (Table 1). Speed and improved biosafety are the main advantages of molecular assays: they only require high containment initially and can detect specific nucleotide sequences in processed specimens (crude extracts or treated sputum) within a few hours, so the time for TB detection can be reduced to less than one day. Although NAAT can theoretically detect a single copy of nucleic acid in a specimen, their sensitivity can be significantly compromised by the presence of PCR inhibitors in clinical specimens and loss of nucleic acids during processing of clinical specimens and, therefore, tends to vary; specificity is usually high (Tables 1 and 2) [11][12][13][17][18][19][20][22]. Recently, line-probe assays (LPAs) and Xpert® MTB/RIF (Cepheid Inc.) have been formally endorsed by the WHO and are now in routine use in many TB laboratories in high- and middle-income countries. Two LPAs currently available on the market for TB detection in clinical specimens (INNO-LiPA Rif.TB (Innogenetics, Zwijndrecht, Belgium) and GenoType® MTBDRplus (Hain Lifescience GmbH, Nehren, Germany)) are based on PCR of specific fragments of the Mycobacterium tuberculosis genome followed by hybridization of PCR products to oligonucleotide probes immobilized on membranes. As an example, one large national study in a non-trial context conducted by Seoudi et al. [29] examined 7,836 consecutive patient samples over a decade using INNO-LiPA Rif.TB and compared results with a reference standard (conventional liquid and solid media culture with rapid molecular identification and culture-based drug resistance testing). For all sputum specimens (n = 3,382), the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy for M. tuberculosis complex (MTBC) detection compared to reference microbiology were, respectively, 93.4%, 85.6%, 92.7%, 86.9% and 90.7%; the equivalent values for smear-positive sputum specimens (n = 2,606) were 94.7%, 80.9%, 93.9%, 83.3% and 91.3%. Xpert® MTB/RIF is a fully automated RT-PCR-based assay.
Much of its increased sensitivity is due to the high volume of sputum that is effectively sampled compared to other NAAT systems, which is an important lesson for developers of the next generation of tests.

Diagnosing TB and drug resistance simultaneously

While there have been accurate solid-media-based microbiological tests for drug resistance for decades, the use of commercial (for example, MGIT 960) [35,36] and non-commercial liquid culture systems (for example, microscopic observation drug susceptibility (MODS), Thin Layer Agar (TLA)) [37][38][39][40][41] from cultures or sputum has facilitated more rapid diagnosis. Encouragingly, in May 2009, the 62nd World Health Assembly (WHA62.15) urged member states to take action to achieve universal access to diagnosis and treatment of M/XDR-TB by 2015. However, real advances in the rapid (less than one to two days) diagnosis of clinically-significant drug resistance have been more recent, requiring identification of mutations in the genes responsible for resistance [42][43][44][45][46]. In 1998, an algorithm was proposed for a centralized regional/national service using a combination of novel amplification-based technology for rapidity, coupled with automated liquid culture-based systems for sensitivity of detection and first-line drug susceptibility [47]. The world's first nationally-available service was established in the UK in 1999.

Line probe assays

At that time, in-house and commercial LPAs were available which could detect TB and rifampicin (RMP) resistance (as the overwhelming majority of resistance is caused by mutations in a single gene, rpoB, encoding the β-subunit of the DNA-dependent RNA polymerase). These assays rely on PCR-amplification of the region of interest followed by binding to DNA probes bound to a solid membrane; binding is detected colorimetrically, usually as visible bands corresponding to the presence of TB and a sensitive or resistant genotype. Currently, the main commercial LPAs for the rapid diagnosis of TB (INNO-LiPA Rif.TB (Innogenetics, Zwijndrecht, Belgium), GenoType® MTBDR/MTBDRplus and GenoType® MTBDRsl (both Hain Lifescience)), as well as the Xpert® MTB/RIF system mentioned above, are also capable of rapid detection of resistance to rifampicin and (GenoType® MTBDRplus only) isoniazid. These tests are designed for use on M. tuberculosis isolates and/or primary respiratory specimens [11,29,31,32,48,49]. The GenoType® MTBDRsl is the only available rapid assay for detection of resistance to fluoroquinolones (FQs), injectable second-line drugs (as well as ethambutol (EMB)) and so offers rapid detection of XDR-TB in mycobacterial cultures [50,51]. The introduction of Xpert® MTB/RIF-based diagnosis increased TB case finding in India, South Africa and Uganda from 72%-85% to 95%-99% of the cohort of individuals with suspected TB, compared to the use of simple microscopy and clinical diagnosis [52]. In the excellent WHO-led global roll-out document for Xpert® MTB/RIF [53], the key programmatic issue for countries with a low prevalence of rifampicin resistance (and MDR) TB is the low PPV of a positive resistance result. This is the situation that currently applies in most industrialized countries and means that most "resistant" isolates will be false-positive ones. This necessitates the use of a second, usually microbiological, test to confirm resistance. The corollary is that the NPV is good.
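The dependence of the PPV on the underlying prevalence of resistance follows directly from Bayes' rule; the sketch below uses assumed (hypothetical) assay characteristics to show why a positive rifampicin-resistance call has a low PPV in low-prevalence settings while the NPV remains good.

```python
# Illustrative only: why a rifampicin-resistance "positive" has low PPV
# when resistance is rare. Sensitivity/specificity values are assumed
# for illustration, not taken from any specific assay evaluation.

def ppv_npv(sens: float, spec: float, prev: float) -> tuple[float, float]:
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

sens, spec = 0.95, 0.98          # assumed assay characteristics
for prev in (0.01, 0.05, 0.30):  # rifampicin-resistance prevalence
    ppv, npv = ppv_npv(sens, spec, prev)
    print(f"prevalence {prev:>4.0%}: PPV = {ppv:.0%}, NPV = {npv:.2%}")

# At 1% prevalence the PPV is ~32%: most "resistant" results are false
# positives, while the NPV stays above 99.9%, matching the argument for
# confirmatory testing of resistant calls in industrialized countries.
```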
Cost of new diagnostics

There have been limited studies on the cost and cost-effectiveness of novel rapid tests and, in particular, on the real cost of the entire process (rather than simply the cost of the material) and the overall clinical/diagnostic pathway, which will influence uptake. As of 31 December 2012, a total of 966 Xpert® MTB/RIF instruments (5,017 modules) and 1,891,970 MTB/RIF cartridges had been purchased in the public sector in 77 of the 145 countries eligible for concessionary pricing [54]. Pantoja et al. [55] recently assessed the cost, globally and in 36 high-burden countries, of two strategies for diagnosing TB and multidrug-resistant TB (MDR-TB): Xpert® MTB/RIF with follow-on diagnostics, and conventional diagnostics. They showed that using Xpert® MTB/RIF to diagnose MDR-TB would cost US$0.09 billion/year globally and be of lower cost than conventional diagnostics globally and in all high TB burden countries (HBCs). Diagnosing TB in HIV-positive people using Xpert® MTB/RIF would also cost about US$0.10 billion/year and be of lower cost than conventional diagnostics globally and in 33 of 36 HBCs. Testing everyone with TB signs and symptoms would cost almost US$0.47 billion/year globally, much more than conventional diagnostics. However, in European countries, Brazil and South Africa the cost would represent <10% of overall TB funding. The authors concluded that while using it to test everyone with TB signs and symptoms would be affordable in several middle-income countries, financial viability in low-income countries would require large increases in TB funding and/or further price reductions. Kirwan and colleagues [56] argued that studies on Xpert® MTB/RIF have shown cost-effectiveness in some but not all settings. They pointed out that serial implementation of new technologies can cause ineffective spending and fragmentation of services. The process by which new tests are incorporated into existing diagnostic algorithms affects both outcomes and costs. They argued that more detailed data on performance, patient-important outcomes and costs when used with adjunct tests are needed for each setting before implementation, and that, while awaiting further clarification, it would seem prudent to slow implementation among resource-constrained tuberculosis control programs [56]. Vassall et al. [52] showed that when Xpert® MTB/RIF was used as a screening tool for testing all TB suspects, the cost per TB case detected increased from US$28 to US$49 in India, from US$133 to US$146 in South Africa, and from US$137 to US$151 in Uganda when Xpert® MTB/RIF was used "as a replacement of" rather than "in addition to" smear microscopy. Calculated incremental cost effectiveness ratios (ICERs) for using Xpert® MTB/RIF "in addition to" smear microscopy ranged from US$41 to $110 per disability adjusted life year (DALY) averted and were below the WHO threshold, therefore indicating Xpert® MTB/RIF to be a cost-effective method of TB diagnosis in low- and middle-income settings. However, the scale and range of current TB diagnostic algorithm practice in other settings would determine the extent of the cost-effectiveness of adding this new tool into routine practice [52]. Conversely, in high income countries the diagnostic yield and cost-effectiveness will differ, as microbiological culture and/or DST will form the base case scenario.
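For readers unfamiliar with ICERs: the ratio is simply the additional cost of the new strategy divided by the additional health gain; the sketch below uses hypothetical programme totals, chosen only to fall within the US$41 to US$110 per DALY range quoted above.

```python
# Illustrative only: incremental cost-effectiveness ratio (ICER).
# Costs and DALYs below are hypothetical, chosen to fall in the
# US$41-110/DALY range reported for adding Xpert MTB/RIF.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Extra cost per extra unit of health effect (e.g. DALY averted)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

cost_smear_only   = 1_000_000   # US$, hypothetical programme total
cost_with_xpert   = 1_450_000   # US$, hypothetical programme total
dalys_averted_old = 10_000
dalys_averted_new = 16_000

value = icer(cost_with_xpert, cost_smear_only,
             dalys_averted_new, dalys_averted_old)
print(f"ICER = US${value:.0f} per DALY averted")
# -> US$75 per DALY averted, which would sit below the WHO threshold.
```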
Rapid diagnostics may have an even greater financial impact in identifying those patients with risk factors for MDR-TB who subsequently are shown to have simple drug sensitive TB; that is, the reduced time spent in enhanced isolation facilities within institutions may justify the increased diagnostic cost. For example, in one London, UK teaching hospital, the use of line probe assays would have created potential annual savings of between £50,000 and £150,000 a decade ago [57]. There are no complete cost estimates for the diagnosis and management of TB which also include societal costs. A review in 2007 calculated a UK annual drug bill of £1.95 million/year (2002 costs) [36]. The mean costs (including in- and out-patient stay, drugs, and toxicity monitoring) of managing drug sensitive and MDR-TB cases in London, UK were estimated to be approximately £6,000 and £60,000 in a study from 2000 [58]. In a more recent German study, the costs were comparable, with mean combined in-patient and out-patient costs of €7,364 and €52,259 for the treatment of drug sensitive and MDR-TB cases, respectively [59]. This is in broad agreement with a WHO report which showed that the drug cost for treating drug-resistant patients was approximately 50 to 200 times higher than for treating a drug-susceptible TB patient, with the overall costs of care at least 10 times higher [3]. However, the overall societal costs are more difficult to measure. In the recent German study [59], 4,444 new cases of TB were reported in 2009, of which 2.1% (63 cases) were MDR-TB. The mean cost of treatment per TB case overall, including treatment of MDR-TB, was €7,931, to which were added the mean cost due to loss of productivity (€2,313) and the costs per case for rehabilitation (€74) and contact tracing (€922), giving a total of €11,240 as the overall societal cost. In a report from 2012 [60], the UK reported 8,917 cases and 60 MDR-TB cases in 2009. As the UK and German TB case management approaches are broadly similar, if we use the same societal cost figure, then UK societal costs for TB are approximately €100 million. The equivalent mean treatment cost would be a little over €70 million. There is a real need to model and cost end-to-end services rather than perform simple analyses around the cost of the diagnostic alone. It may be more cost effective in a high income, high cost environment to control the whole process carefully for quality and to adjust workflow. For example, the cost and cost-effectiveness of the entire process in the UK will be influenced by poor transport logistics. Alternative service delivery models involving the leasing of vehicles with mobile phlebotomist-technicians for a blood sample-based test, who can either bring samples to the laboratory rapidly for analysis or perform a POC test on site, may be more cost-effective than current practice.

Barriers to uptake

There has been a relatively slow uptake of new TB diagnostics, some of which have been available since the 1990s [47]. Within the UK health model, greater laboratory costs should be offset against increased hospital (institutional) savings to encourage innovation and reduce barriers to adoption of newer tests by demonstrating rational cost savings in place of the simplistic percentage cost cuts used currently. Other insurance-based health models, such as those in France or Germany, are arguably better at implementing proven diagnostics.
The underlying tension for all diagnostic tests continues to be the debate over the merits of point of care tests versus those performed in a more centralized laboratory environment. Assuming the technical issues are solved, arguably the greatest influence on whether it is more cost-effective to bring samples to a laboratory or use point of care tests depends on transport logistics.

Public health relevance and impact of new diagnostics

Active TB

Any diagnostic tool may be of value clinically, for public health, or both. Clinically, we value a reduction in mortality, and modeling suggests a 100% sensitive and specific test with 100% access could prevent up to 36% of TB-related deaths [61]. Other models estimate that employing more sensitive and rapid tests would produce between a 17% and 23% mortality reduction [62,63]. However, a patient who survives but remains infectious with TB, especially highly drug resistant TB, may be of greater importance and danger from a public health perspective. Globally, introduction of new diagnostics without anticipation of and planning for an increase in the number of cases diagnosed could lead to disaster at the programmatic level as more patients are placed on TB therapy, which then runs out; incomplete therapy remains the overriding cause of clinically relevant drug resistance [64][65][66]. Highly industrialized countries have the financial ability to increase expenditure to compensate for this increased treatment requirement at a programmatic level, but there remains a need to understand this need and plan for it. Equally, diagnostic delay has led to failures in adopting appropriate public health measures and has been documented in many high-income jurisdictions, for example, in New York [67]. Patient and health service delays were identified in a retrospective cohort study of patients with pulmonary tuberculosis notified between 1 April 2001 and 1 March 2002 in London, UK. The median case finding delays were between 78 and 99 days. The median patient-related delay was between 34.5 and 54 days and the median health care-related delay was 29.5 days. Shorter case finding delays were found in patients born in a high prevalence country and patients presenting first to an Accident and Emergency department (A&E), with limitations in TB service capacity and organizational factors accounting for much of the delay [68]. These points and the potential effects of new versus old procedures on public health efforts are summarized in Table 3, and some of them are discussed in more detail below. Clearly a rapid, highly specific and sensitive active TB test would be of equal clinical and public health value. However, sputum smear microscopy has been criticized because it is too insensitive and not specific enough for TB. Nevertheless, because of its relative insensitivity, it is a good test of infectivity, identifying those individuals with the highest bacterial load who are, therefore, the most infectious and a priority for public health intervention. Therefore, the urgent midnight sputum smear examination may be of less clinical diagnostic benefit but is essential to prioritize institutional and community isolation procedures. A new POC diagnostic test for TB, for example, with excellent specificity for MTBC but similar sensitivity to smear microscopy, may be of limited clinical value but excellent public health value, helping to identify priority infectious cases and limit further transmission of TB.
For example, a study in South Africa attempted to use the cycle threshold values of the Xpert system as a rule-in/rule-out test for smear positivity and so infectivity; it had poor value as a rule-in test but moderate value as a rule-out test, although 20% of individuals would have been erroneously ruled out as smear negative [69]. An assay for drug resistance (for rifampicin, isoniazid and MDR-TB) would be equally valuable for clinical management and public health; correct therapy helps the patient and, by rendering the individual non-infectious, reduces disease transmission. However, there is a further dimension in that, by establishing the correct level of isolation, disease transmission will be interrupted. Tests for second line drug therapy, for example, to establish XDR-TB, are arguably of greater clinical value in that the level of isolation and concern would have been established by the MDR-TB diagnosis. Such improvements in the speed and/or sensitivity of diagnosis of TB and drug resistance have the greatest potential impact on the clinical management of those co-infected with HIV, due to the high mortality associated with MDR-TB/XDR-TB in middle or high income countries [70][71][72][73][74].

Latent TB infection (LTBI)

Although this review is focused on active TB, several industrialized countries are entering a phase of (potential) TB eradication in their TB programs; in these states new cases will come from latently infected migrants and the indigenous population as it ages. Better identification of truly latently-infected individuals offers the opportunity to interrupt the onset of active (infectious) TB. Blood tests based on gamma-interferon release (IGRA) provide an improvement in specificity over classical tuberculin skin testing (TST), as they are not influenced by prior Bacillus Calmette-Guérin (BCG) vaccination [75][76][77]. Nevertheless, the value of IGRA tests clinically differs from their public health value, and both are dependent on the willingness of well-feeling individuals to accept and complete TB chemoprophylaxis, which is challenging. Both have value in identifying a significant exposure in a contact investigation and in preventing further harm, but they can also potentially provide wider programmatic understanding of undiagnosed TB transmission in a community. For example, in Baltimore, IGRA implementation in LTBI evaluation in a public health clinic significantly reduced the proportion of referred individuals with preliminary suspicion of LTBI in whom LTBI was finally diagnosed, but IGRA testing had no impact on treatment initiation or completion [75].

Molecular epidemiological typing and Next Generation Sequencing

This is a good example of new "diagnostic" techniques which are primarily of public health importance, establishing unknown routes of disease transmission and confirming or refuting institutional outbreaks. Arguably, the clinical value of these techniques lies primarily in identifying weaknesses in institutional infection control procedures which lead to new TB cases and in establishing the value of TB programmatic changes in reducing transmission. For example, in a city, region or country, improvements in TB control through early diagnosis, effective treatment and improved infection control may be masked by new migration of TB cases.
Techniques such as variable number tandem repeat (VNTR) typing [78] and next generation sequencing [79] may all be of value in establishing improvement by showing that fewer TB cases were clustered together (clustering indicating TB transmission between individuals). Reduced rates of TB case clustering, together with data indicating cure rates over 85% and reduced rates of drug resistance development in cases which were initially drug sensitive, provide a portfolio of indicators demonstrating an effective TB program. In San Francisco, for example, DNA fingerprinting showed a reduction in all TB cases and clustered TB cases, demonstrating an improvement in TB control [80][81][82]. Using whole genome sequencing and analyzing genetic distance between isolates from pairs of household contacts in the UK, Walker et al. [83] deduced that isolates separated by less than five SNPs (single nucleotide polymorphisms) were likely to be the result of recent transmission events and that transmission could be ruled out if isolates were separated by more than 12 SNPs. In a similar study in The Netherlands, a low TB prevalence country with robust procedures for contact tracing, 97 pairs of epidemiologically-linked isolates differed by an average of 3.4 SNPs [84]. However, no epidemiological link could be established between 82 pairs of isolates that had no SNP differences. Within a large population sequencing study in the Samara region of Russia, we established linkages between household contacts, but for many clusters of TB strains with no SNP differences, patients lived in geographically-distant parts of the region, making direct transmission unlikely (data not shown). While cryptic outbreaks may be uncovered in a low incidence setting [83], the degree of relatedness between unlinked isolates in high prevalence regions may make the establishment of epidemiologic links between patients problematic. Illustrating this, we found that when we applied whole-genome sequencing to TB isolates from Estonia, collected in 2008, they differed by only 16 SNPs from the isolate of a Lithuanian-born patient obtained in the UK in 2011, and within the Beijing clade A in Samara, Russia, the genetic distance between isolates belonging to a Latvian and a Russian patient was 13 SNPs (data not shown). None of these individuals could have been in direct contact. However, a direct transmission link may be ruled out based on a significant genetic distance between isolates. Furthermore, in low TB prevalence countries, where one migrant population dominates a putative outbreak, an understanding of the phylogeny of M. tuberculosis in the patients' country of origin may be critical to correctly interpret genetic distances and surmise transmission chains. Whole genome sequencing of M. tuberculosis can offer a powerful new approach to the identification of drug resistance and to mapping transmission at a community and population level when carefully interpreted.

Conclusions

Rapid diagnostics for TB and drug resistance have undergone extraordinary development over the last decade, with a major clinical impact on improving TB diagnosis and the early identification of drug resistant TB. These improvements have led to more rapid implementation of the best therapy for given patients. There remains a need for new diagnostics to improve the sensitivity of detection of active TB in children, HIV co-infected patients and extrapulmonary disease.
The ideal position should be akin to malaria diagnosis, where therapy is no longer automatically given when novel diagnostic tests are negative (TB "ruled out"). Multiple studies in both the industrialized and non-industrialized world showed that early identification of MDR-TB and the institution of therapy based on susceptibility in laboratory drug-resistance assays led to improved survival. The early and more accurate identification of TB cases and drug resistance and the institution of appropriate therapy also remove sources of TB transmission by curing them. This combination of rapid, accurate diagnosis and correct treatment is the root of all successful TB programs and public health strategies. Reducing diagnostic delay remains a high clinical and public health priority.

Competing interests

None of the authors declare any competing interests.

Authors' contributions

FD prepared the initial draft. VN, HM, YB, NC, IK and OI contributed to the final drafting, and all read and approved the final version.

Authors' information

FD is a microbiologist, immunologist and TB physician, Professor of Global Health at Imperial College and Professor of International Health at Queen Mary, and an honorary professor at University College, London. He is Director of the UK PHE National Mycobacterial Reference Laboratory and a Consultant medical microbiologist and TB physician at Barts Health Trust, London. VN is a microbiologist and molecular biologist with extensive expertise in the development and validation of novel diagnostic assays, molecular epidemiology and immunology. HM is a Consultant medical microbiologist. YB is a TB physician and epidemiologist with extensive experience in the implementation and management of multi-center studies and data analysis. IK and OI are microbiologists and molecular biologists involved in diagnostic and clinical field study implementation, coordination and data analysis. NC is a molecular biologist and microbiologist involved in the analysis of DNA sequence data.
The Gouy-Stodola Theorem—From Irreversibility to Sustainability—The Thermodynamic Human Development Index

Today, very complex economic relationships exist between finance, technology, social needs, and so forth, which represent the requirement of sustainability. Sustainable consumption of resources and sustainable production and energy policies are the keys to sustainable development. Moreover, a growing demand for bio-based industrial raw materials requires a reorganization of the chains of the energy and industrial sectors. This is based on new technological choices, with the need for sustainable measurements of their impacts on the environment, society and economy. In this way, social and economic requirements must be taken into account by decision-makers. So, sustainable policies require new indicators. These indicators must link economics, technologies and social well-being together. In this paper, an irreversible thermodynamic approach is developed in order to improve the Human Development Index (HDI) with the Thermodynamic Human Development Index (THDI), an indicator based on the thermodynamic optimisation approach and linked to socio-economic and ecological evaluations. To do so, the entropy production rate is introduced into the HDI, in relation to the CO2 emission flows due to anthropic activities. In this way, the modified HDI, named the Thermodynamic Human Development Index (THDI), results in an indicator that considers the socio-economic needs, equity and the environmental conditions together. Examples of the use of the indicator are presented. In particular, it is possible to highlight that, if environmental actions are introduced in order to reduce CO2 emissions, the HDI remains constant, while the THDI changes its value, pointing out its usefulness for decision makers to evaluate a priori the effectiveness of their decisions.

Introduction

It was the XIII century when St. Thomas Aquinas (1225-1274) introduced into Philosophy the consideration of the impossibility for an effect to be stronger than its cause [1]. This is an implicit statement on the effect of irreversibility in Nature. St. Thomas shows how the concept of irreversibility had always been clear in the history of humans. In 1803, Lazare Carnot (1753-1823) developed the analysis of the efficiency of some pulleys and inclined planes [2], obtaining a general approach to the conservation of mechanical energy. Twenty-one years later, in 1824, his famous son Nicolas Léonard Sadi Carnot (1796-1832) introduced a reference model for the thermal engine and obtained its maximum efficiency, which, against any expectation, always results in being less than 1 and depends on the high and low working temperatures [3]. In particular, for a defined heat source (constant high temperature), the environmental temperature plays a fundamental role in the inefficiency of any engine, but also of any process and transformation [4][5][6]. Real systems are very different from the Carnot engine; indeed, they are finite-size devices and operate in finite time, characterised by dissipation and friction [7][8][9][10][11]. Consequently, theoretical and experimental attempts have been developed in order to evaluate the efficiency of real systems [12][13][14][15][16][17][18][19][20][21][22][23], but they always confirm Carnot's general conclusion about the existence of an upper limit for the conversion of heat into mechanical energy [24].
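Carnot's bound can be stated explicitly; the following is the standard expression, with an illustrative numerical case added here for reference:

```latex
\eta_{\max} = 1 - \frac{T_{\mathrm{low}}}{T_{\mathrm{high}}},
\qquad \text{e.g., } T_{\mathrm{high}} = 500\,\mathrm{K},\ T_{\mathrm{low}} = T_0 = 300\,\mathrm{K}
\ \Rightarrow\ \eta_{\max} = 1 - \frac{300}{500} = 0.40 < 1 .
```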
Now, in the history of the concept of thermodynamics, two scientists appeared, showing a new analytical approach to evaluate irreversibility by considering a global analysis of a general system (closed or open): the French physicist Louis Georges Gouy (1854-1926) [25] and the Slovak engineer and physicist Aurel Boleslav Stodola (1859-1942) [26,27]. Indeed, in 1889, Gouy proved that the exergy lost in a process can be calculated as the product of the environmental temperature and the entropy generation [28][29][30][31], that is, the entropy due to irreversibility [24]. Then, in 1905, Stodola independently obtained the same result in designing a steam turbine [32], giving an experimental proof, too. The Gouy-Stodola theorem is the result of a continuous improvement of thermodynamics, started when Clausius [12] introduced the concept of entropy, precisely to analyse dissipative processes [16,24,33]. Today, this theorem, in addition to being a powerful way to evaluate irreversibility in real processes and systems, could play a new role in sustainability. Indeed, this theorem is useful for optimizing processes [17,34] in engineering design, but optimisation also means a decrease in CO2 emissions and pollutants, and a decrease in the environmental and ecological impact of anthropic activities. Indeed, since the 1970s, when Georgescu-Roegen developed his analysis of the conflict among individual, social, and environmental values [35], the Second Law of Thermodynamics was shown to be a fundamental approach to evaluating the dependence of humans on energy availability, with particular regard to available energy [36]. Moreover, the Nobel laureate Joseph Stiglitz has recently highlighted the unsustainability of the present growth, due to its impact on the environment: a change in our economic and productive system is required to assess economic and social performances [37]. In order to monitor and assess the performance of sustainable policies, indicators have been introduced in socio-economic and ecological analysis [38]. Therefore, to support decision-making towards sustainable development, organizations and researchers have proposed indexes and indicators for sustainable development. In 1989, the Index of Sustainable Economic Welfare (ISEW) [39] was introduced to replace the Gross Domestic Product (GDP), and, later, it was improved [40] to obtain a more detailed analysis of welfare and sustainability. But some criticisms have been made of this indicator, because of its attempt to enclose too much different information in a single index [41]. In the 1990s, the Ecological Footprint (EF) [42] was developed in order to take into consideration the biologically productive land required to support a given population [43] at its current level of consumption [44][45][46][47][48][49]. Criticisms of this indicator have been directed at its bases [50], in relation to its calculability. The Environmental Sustainability Index (ESI) is composed of twenty different indicators, each combining two to eight variables [51], and assesses sustainability by using environmental and socio-economic indicators. Its improvement is the Environmental Performance Index (EPI), which identifies economic and social driving forces and environmental pressures, in order to assess the impacts on human health and on the environment [52].
Since 1990, the United Nations Development Programme (UNDP) has introduced the Human Development Index (HDI) [53,54] as a multidimensional index to measure the development of a country from a socio-economic viewpoint, with the aim of switching the focus from a purely economic development to a more human-centred standpoint [54,55]. This indicator combines three dimensions together: • Life expectancy at birth; • Education, represented by years of schooling; • The gross national income per capita at purchasing power parity rates. Since 2010, the HDI has been improved in relation to the new needs emerging in relation to sustainability, as deeply analysed in Refs. [55][56][57]. Stanton has highlighted the following two fundamental roles played by the HDI [58]: on the one hand, as a tool to understand human development in relation to human well-being, and, on the other hand, as an alternative to GDP per capita in order to measure and compare the levels of development of countries. In Table 1, the indicators considered above are summarised in relation to their chronological introduction. The HDI is a statistical composite index of life expectancy, education and per capita income indicators. A country scores a higher HDI when its lifespan is higher [60], but this index does not take into account any ecological impact and is not related to any physical quantity used in engineering to evaluate the technological level of a country. So, a new approach is required to evaluate human activities in relation to sustainability; indeed, the present economic indicators are not able to take into account the sustainability requirements, and some new social and economic issues are also becoming relevant in energy and industrial engineering. Consequently, the requirements related to sustainability remain without any overall answer [61][62][63][64][65][66][67][68][69][70][71][72][73][74]. In this paper, in order to suggest a response to this problem, we develop an approach based on irreversible thermodynamics, introducing the measurement of pollution and anthropic footprint into the Human Development Index, in order to obtain a new indicator for sustainability, the Thermodynamic Human Development Index (THDI), which takes into account the social, economic and ecological requirements, but is also linked to the optimisation approach to engineering systems.

Materials and Methods

The Human Development Index is an indicator of the developing level of a country in relation to education, health and salary conditions [75]. It is the geometric mean of three normalised indices representative of each dimension [53] and its analytical definition is [76]:

HDI = (LEI · EI · II)^(1/3)    (1)

where LEI is the Life Expectancy Index, EI is the Education Index and II is the Income Index. The Life Expectancy Index LEI is defined as [59,76]:

LEI = (LE − 20)/(85 − 20)    (2)

where LE is the Life Expectancy at birth, which indicates the overall mortality level of a population. It corresponds to the years that a newborn is expected to live at current mortality rates [77]. Therefore, in order to normalise the Life Expectancy at birth, the UN has set its minimum and maximum values to 20 and 85 years, respectively [76]. Indeed, in the XXI century there are no countries with a life expectancy at birth lower than 20 years and, on the other hand, the value of 85 years is set as a realistic aspirational target [76].
The Education Index EI is defined as [76]:

EI = (MYSI + EYSI)/2    (3)

where MYSI = MYS/15 is the Mean Years of Schooling Index and EYSI = EYS/18 is the Expected Years of Schooling Index [76]. The Normalised Income Index II is defined by the United Nations as follows [60]:

II = [ln(GNIpc) − ln(100)] / [ln(75,000) − ln(100)]    (4)

where GNIpc is the gross national income per capita at purchasing power parity (PPP), with minimum and maximum values set by the United Nations [76] as $100.00 and $75,000.00, respectively. The choice of $100 as the GNIpc minimum value is due to the difficulty in capturing the amount of unmeasured subsistence and non-market production within the official data of the economies close to the minimum [76]. The maximum GNIpc value of $75,000 has been chosen as a threshold because, for higher values, no gain in human development and well-being has been shown [76,78]. But this index does not take into account the technological and ecological level of a country. Recently, with the aim of considering the technological level, a thermoeconomic indicator has been introduced in order to link economics to a technical approach (Equation (5)) [79], where η_λ is the inefficiency [80], Ẇ_λ is the power lost due to irreversibility, ExI is the Energy Intensity related to the power really used, Ėx_in is the exergy rate [24], GDP is the Gross Domestic Product and represents the well-being of a country or a productive system, and LP is the Labour Productivity, defined as [81] LP = GDP/n_wh, where n_wh = n_w · n_h is the total number of worked hours needed to obtain the GDP, with n_h the number of worked hours and n_w the number of workers. Now, considering the Gouy-Stodola theorem, the power lost due to irreversibility is related to the entropy generation [24,82,83]:

Ẇ_λ = T_0 Ṡ_g = T_0 ṁ_CO2 s_g    (8)

where T_0 is the environmental temperature, ṁ_CO2 is the CO2 mass flow rate emitted for obtaining the required effect Ẇ, and s_g is the specific entropy generation due to the process developed. In order to improve the HDI by also using the indicator of Equation (5), we now consider that the total number of workers is strictly related to the Gross National Income per capita, GNIpc, and we combine its expression in relation to the Income Index, Equation (4), obtaining the modified income index I_T (Equation (9)). Now, we propose a Thermodynamic Human Development Index by introducing the following definition:

THDI = (LEI · EI · I_T)^(1/3)    (10)

As a result, the THDI improves the usual HDI by also considering the technical and ecological level, introducing the CO2 flows and the s_g quantities, evaluated in I_T.

Results

In this paper, we have introduced the Thermodynamic Human Development Index (THDI), which is an indicator related: • to the physical quantities, that is, the entropy generation due to the anthropic activities, with its related environmental impact; • and to the socio-economic quantities, that is, life expectancy, education and per capita income indicators, all considered as the basis for sustainable development. In particular, as presented in Equation (9), we have considered the irreversibility due to the anthropic carbon dioxide emissions and the Income Index. Subsequently, we have calculated the THDI, as presented in Equation (10). First, we wish to highlight that a fundamental requirement for defining an indicator is access to the updated data of countries, in order to be able to continuously monitor their performances [84].
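Putting the normalisations of Equations (1)-(4) together, the following minimal sketch computes the classic HDI for a hypothetical country (the input values are invented; the bounds are those quoted above). The THDI additionally requires I_T from Equation (9), which depends on the national CO2 emission flow.

```python
# Minimal sketch of the HDI calculation from the normalisations above.
# Input values for the hypothetical country are invented; the bounds
# (20/85 years, 15/18 years of schooling, $100/$75,000) follow the text.
import math

def hdi(le_years, mys, eys, gni_pc):
    lei = (le_years - 20) / (85 - 20)                # Life Expectancy Index
    ei = (min(mys / 15, 1) + min(eys / 18, 1)) / 2   # Education Index (capped at 1)
    ii = (math.log(gni_pc) - math.log(100)) / (math.log(75_000) - math.log(100))
    return (lei * ei * ii) ** (1 / 3)                # geometric mean

print(f"HDI = {hdi(le_years=78, mys=11, eys=16, gni_pc=30_000):.3f}")
# -> HDI = 0.854 for this hypothetical country
```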
Here, in order to make use of the indicator THDI, the following countries are considered as examples: Algeria, Argentina, Australia, Belgium, Brazil, Canada, China, Denmark, Finland, France, Germany, Greece, India, Italy, Japan, Mexico, Norway, South Africa, Spain, Sweden and the United States of America. This analysis considers 1990 as a reference year, as defined by the United Nations [53], which is also the same reference year used for the global carbon dioxide emissions targets [85]. During the period 1990-2019, an overall rise of the HDI has occurred. The increase of this quantity from 1990 up until today can be assessed respectively as: 31% for Algeria, 18% for Argentina, 8% for Australia, 15% for Belgium, 25% for Brazil, 9% for Canada, 53% for China, 17% for Denmark, 19% for Finland, 15% for France, 17% for Germany, 17% for Greece, 50% for India, 15% for Italy, 12% for Japan, 19% for Mexico, 13% for Norway, 13% for South Africa, 19% for Spain, 15% for Sweden and 7% for the United States of America. We can highlight that the countries with lower 1990 HDI values present a higher percentage variation of HDI over time [89]. Among the countries with a high level of HDI in 1990 (higher than 0.790), the Northern European countries have shown the higher percentage increase. In order to consider the national environmental footprint on a global scale, the total carbon dioxide emissions due to anthropic activities have been considered. In Figure 3, it is possible to observe that, during the period 1990-2019, different behaviours in carbon dioxide emissions have occurred for the countries considered, depending also on their starting development level. Only a few of them have reduced their emissions: −17% in Belgium, −27% in Finland, −19% in France, −40% in Denmark, −23% in Italy, −33% in Germany, −19% in Greece, −4% in Japan, −25% in Sweden. On the contrary, most of them have increased their environmental footprint, mostly due to their need for quick social and economic growth (124% for Algeria, 60% for Argentina, 48% for Australia, 125% for Brazil, 25% for Canada, 320% for China, 38% for Mexico, 20% for Norway, 53% for South Africa, 9% for Spain, 3% for the United States of America). The THDI has been calculated by Equation (10), considering the primary energy supply as the useful effect Ẇ. The values of the Life Expectancy Index and of the Education Index have been directly taken from the United Nations data [90,91]. In order to calculate I_T (Equation (9)), the data of the Gross National Income per capita GNIpc, based on purchasing power parity (PPP), referred to 2017, have been taken into account [92]. The GNIpc, based on purchasing power parity (PPP), is an economic indicator converted to international dollars by using the purchasing power parity rates. So, this quantity allows us to compare the income of different countries, considering the same standards of living. For each country, the mean environmental temperature T_0 has been evaluated by considering the data reported by the World Bank [93]. Then, we can obtain the power lost due to irreversibility Ẇ_λ by using Equation (8), where the carbon dioxide emissions [88] and the properties of carbon dioxide (entropy per unit of mass s_CO2 at the calculated mean temperature) have been considered. In accordance with the United Nations indicator, also for the THDI, the higher the value of the indicator, the more sustainable the process considered.
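As a numerical illustration of the Equation (8) step just described, the following sketch evaluates the power lost due to irreversibility from an emission flow; all inputs are hypothetical and chosen only to indicate orders of magnitude.

```python
# Hypothetical sketch of Equation (8): power lost to irreversibility
# from a country's CO2 emission flow. All numbers are invented for
# illustration; s_CO2 must be evaluated at the country's mean T0.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

t0 = 288.0            # K, assumed mean environmental temperature (~15 °C)
co2_per_year = 350e9  # kg CO2 emitted per year (hypothetical country)
s_co2 = 4.86e3        # J/(kg K), assumed specific entropy of CO2 at T0

m_dot = co2_per_year / SECONDS_PER_YEAR   # kg/s, CO2 mass flow rate
w_lost = t0 * m_dot * s_co2               # W, Equation (8)
print(f"W_lambda ~ {w_lost / 1e9:.1f} GW")
# -> ~15.5 GW of power lost due to irreversibility for these inputs
```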
As for the HDI, all countries have increased their Thermodynamic Human Development Index in percentage terms from 1990 to 2019. The relative variation of the THDI during this time period has been, respectively: 40% for Algeria, 40% for Argentina, 29% for Australia, 44% for Belgium, 37% for Brazil, 27% for Canada, 189% for China, 54% for Denmark, 45% for Finland, 36% for France, 43% for Germany, 36% for Greece, 100% for India, 30% for Italy, 23% for Japan, 37% for Mexico, 23% for Norway, 17% for South Africa, 48% for Spain, 42% for Sweden, and 26% for the United States of America. However, the absolute value of the indicator presents significant variations among the different countries, as shown in Figure 2. Indeed, the indicator considers the environmental footprint, in terms of carbon dioxide emissions, that has been produced to obtain the improvement in their HDI. So, the Thermodynamic Human Development Index considers the negative effect on the global environment required in order to improve the national well-being. By considering the exergy losses due to irreversibility, it is possible to obtain a measure of the technological development of each country. In Figure 2, the variation of the Thermodynamic Human Development Index is represented for the years 1990, 2000, 2010 and 2019, for the above listed countries, in relation to their carbon dioxide emissions and to their Human Development Index. Considering the European targets on climate policy strategies, a reduction of at least 40% of the greenhouse gas emissions (from 1990 levels) is expected by 2030 [94]. Thus, in Figure 4, HDI and THDI are represented for 1990 and 2019; furthermore, their evaluation, based on 2019 data but considering the European target of CO2 reduction, has been introduced: it is represented by the series named 2019 mod. We can point out that the THDI varies between 2019 and 2019 mod, due to the reduction of the carbon dioxide emissions, while the HDI is not affected by this environmental action and maintains its value constant. So, we can highlight that the THDI represents an improvement of the HDI, because it includes the information of the HDI while adding the environmental component, too. In summary, the Thermodynamic Human Development Index is an indicator related to the evolution of a process, due to its close link to the entropy generation, the thermodynamic quantity used to describe the spontaneous evolution of natural processes [95][96][97][98]. Moreover, entropy and entropy generation represent the bases of modern engineering thermodynamics and optimisation methods [24]. Up until now, social, environmental and technical systems have always been taken into account separately, but it is clear that they are in continuous interaction. The results obtained go beyond this limit and suggest a holistic indicator, which takes into account economic, social, technical and environmental requirements together. A process results sustainable if the value of the indicator is as high as possible.

Discussion and Conclusions

Huge efforts have been made by the United Nations to build an indicator which measures human progress and the well-being of a country by taking into account not only merely economic growth, but also other fundamental social requirements, such as the educational level (knowledge) and life expectancy (population longevity).
However, some criticisms of the Human Development Index have been raised, due to the lack of information about the effects on the environment and the related responsibilities [55,99-103]. These effects must be considered to assess the level of development both for the present and the future generations [104]. The evaluation of resource consumption can be obtained from the exergy flows [105] but, on the other hand, there is no reference quantity with which to quantify socio-economic parameters and natural capital, with the consequence that the evaluation of sustainability remains an open problem [105]. The present requirement is to understand how to evaluate resources, industrial activities and services in order to consider them as forms of capital [106], for their best use for human well-being. Moreover, environmental issues are fundamental for sustainable development, and irreversibility plays a fundamental role in all human activities, so it must be taken into account in any indicator of sustainability. Here, we have obtained the Thermodynamic Human Development Index, an indicator which links together the entropy generation rate, related to optimisation, and the Human Development Index, related to people's well-being. This indicator contains all the information of the HDI, while also considering the anthropic environmental impact. In this way, we respond to the above-mentioned requirements for a sustainability indicator.
Trust, knowledge sharing, and innovative work behavior: empirical evidence from Poland Purpose – The purpose of this paper is to assess the effects of two types of trust (vertical and horizontal trust) on knowledge sharing (knowledge donating and knowledge collecting) and the impact of knowledge sharing on innovative work behavior (idea generation and idea realization). The study also explores the mediating role of knowledge sharing. Design/methodology/approach – Partial least squares path modeling and data collected from 252 participants at one large Polish capital group were used to test the research hypotheses. Findings – The results showed that both vertical trust and horizontal trust are positively related to knowledge donating and knowledge collecting. Contrary to knowledge collecting, knowledge donating is significantly related to idea generation, which is highly correlated with idea realization. There is no direct relation between knowledge sharing behavior and idea realization. Knowledge donating mediates the relationship between vertical trust and idea generation. Research limitations/implications – Self-reports and the cross-sectional nature of the data collection are the main limitations of this study. Practical implications – The results allow managers to better understand what factors and processes contribute to greater employee innovativeness. Originality/value – To the best of the author's knowledge, the study is the first to examine the relationships among vertical trust, horizontal trust, knowledge donating, knowledge collecting, idea generation and idea realization in an integrated way. This paper answered the questions of (1) which type of trust is more important for knowledge sharing, and (2) which type of knowledge sharing behavior is more important for innovative work behavior. This paper also investigated whether differences in the strength of relationships between constructs are significant. Introduction Trust and its impact on collaboration and different work-related outcomes have attracted much research attention in the last two decades. As previous studies show, a high level of trust among employees greatly benefits an organization. Results show that (interpersonal) trust among employees is positively related to job satisfaction (Guinot et al., 2014; Safari et al., 2020; Straiter, 2005), organizational commitment (Curado and Vieira, 2019), task performance (Kim et al., 2018) and team performance (De Jong et al., 2016), and negatively related to stress (Guinot et al., 2014). Research also suggests that trust is significantly positively associated with knowledge sharing (KS) in the workplace (Abdelwhab Ali et al., 2019; Hsu and Chang, 2014; Nerstad et al., 2018; Ouakouak and Ouedraogo, 2019; Renzl, 2008; Rutten et al., 2016; Staples and Webster, 2008), although contrary findings were also found (Bakker et al., 2006; Chow and Chan, 2008). The mixed results may be due to the fact that researchers usually do not distinguish between different types of trust in terms of their impact on different knowledge-sharing behavior. Therefore, our understanding of how trust affects knowledge sharing is not adequate. In particular, it is important to distinguish between knowledge donating (KD) and knowledge collecting (KC), because these two behaviors are of a different nature and therefore may be affected by different individual, organizational and technological factors (de Vries et al., 2006; Razmerita et al., 2016).
Moreover, trust can be viewed from different perspectives (Feitosa et al., 2020; McCauley and Kuhnert, 1992; Paliszkiewicz, 2018), and separately analyzing trust in co-workers (horizontal trust, HT) and trust in superiors (vertical trust, VT) is justified because it may provide different findings (Hughes et al., 2018). However, there have been few empirical studies on relationships between horizontal/vertical trust and knowledge donating/collecting (Le and Lei, 2018). Furthermore, no studies have clarified the question of whether vertical or horizontal trust is more strongly related to knowledge donating/collecting. Other streams of research focus on trust and KS as factors that may have a positive impact on innovative work behavior (IWB). Under social exchange theory, interpersonal trust between coworkers leads to a greater sense of security in the workplace (Erkutlu and Chafra, 2015), organizational commitment and, consequently, engagement in innovative work behavior (Yu et al., 2018). In turn, KS leads to the exchange of experiences and skills between employees, contributes to collective learning and evokes reflection on current knowledge (Michna, 2018). Thus, KS increases the chances of becoming involved in additional, non-routine activities, such as innovative work behaviors (Anser et al., 2020). As Liua and Phillips (2011) noted, in most cases the employee has too little knowledge and too few opportunities to implement innovations himself/herself. It is only by collaboration with other employees that a synergy effect appears and innovative ideas can be implemented successfully. Generally, previous research confirms the positive relationship between trust and IWB (Afsar et al., 2020; Barczak et al., 2010; Yu et al., 2018) and between KS and IWB (Anser et al., 2020; Kim and Park, 2017; Mura et al., 2013; Radaelli et al., 2014). Surprisingly, no studies exist that show in more detail which forms of trust and knowledge sharing behavior positively affect the separate processes of IWB. Furthermore, to date, research on the relationship between trust and IWB has not considered the mediating role of KS behavior. The present study addresses this gap and develops a research model linking interpersonal trust among employees in an organization, knowledge sharing within an organization and IWB. This study explores the relationships among these constructs more deeply than previous studies by analyzing two types of trust (vertical and horizontal), two KS behaviors (knowledge donating and collecting), and two IWB processes (idea generation and idea realization). Moreover, the study investigates the mediating effect of two KS behaviors on the relationship between trust and IWB. The study aims to answer the following questions: How does interpersonal trust among employees shape knowledge sharing behaviors in the workplace? Which trust, vertical or horizontal, is more strongly related to knowledge donating/collecting? How does knowledge sharing shape innovative work behavior? Which knowledge sharing behavior is more strongly related to idea generation/realization? Which knowledge sharing behavior mediates the relationship between trust and IWB? The main contribution of the paper lies in providing a fuller picture of the relationships among trust, KS and IWB. Although both trust and knowledge sharing have been found to be related to innovative work behavior individually, their integrated impact on IWB has not yet been investigated.
In addition to using relatively less explored dimensions of trust, KS and IWB, the study investigates the mediating effects of two KS behaviors on the relationships between trust and IWB. These extensions create a more detailed picture of the conditions and processes that reinforce specific components of IWB. Therefore, the results can offer useful suggestions to managers interested in promoting KS and increasing innovation. The remainder of the paper is organized as follows. First, the literature review defines the constructs that are the subject of this study and establishes the relationships between them. The 13 hypotheses are then stated. Next, the methodology of the empirical research is described. Finally, the results are presented and discussed. Literature review and hypotheses development This study uses theories of social exchange, cognitive psychology, and the self-learning mechanism to develop a conceptual model of the relationships between trust, knowledge sharing and innovative work behavior. Horizontal and vertical trust The concept of trust has many definitions (see Table 1) and can be viewed from different perspectives (Feitosa et al., 2020; Tomlinson et al., 2020), including individual expectations or as part of a social or economic exchange (Paliszkiewicz, 2018). Table 1. Definitions of trust:
- Rotter (1967, p. 651): "Interpersonal trust is an expectancy held by an individual or a group that the word, promise, verbal or written statement of another individual or group can be relied upon"
- Zand (1972, p. 230): "Trusting behavior is defined here as consisting of actions that (a) increase one's vulnerability, (b) to another whose behavior is not under one's control, (c) in a situation in which the penalty (disutility) one suffers if the other abuses that vulnerability is greater than the benefit (utility) one gains if the other does not abuse that vulnerability"
- Boon and Holmes (1991, p. 194): "Trust is a state involving confident positive expectations about another's motives with respect to oneself in situations entailing risk"
- Mayer et al. (1995, p. 712): "Trust is the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party"
- Mishra (1996, p. 265): "Trust is one party's willingness to be vulnerable to another party based on the belief that the latter party is (1) competent, (2) open, (3) concerned, and (4) reliable"
- Bhattacharya et al. (1998, p. 462): "Trust is an expectancy of positive (or nonnegative) outcomes that one can receive based on the expected action of another party in an interaction characterized by uncertainty"
- Das and Tang (1998, p. 494): "Trust is the degree to which the trustor holds a positive attitude toward the trustee's goodwill and reliability in a risky exchange situation"
- Lewicki et al. (1998, p. 439): "confident positive expectations regarding another's conduct"
- Rousseau et al. (1998, p. 395): "Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another"
- Tschannen-Moran and Hoy (2000, p. 556): "Trust is one party's willingness to be vulnerable to another party based on the confidence that latter party is (a) benevolent, (b) reliable, (c) competent, (d) honest and (e) open"
- Tzafrir and Eitam-Meilik (2005, p. 196):
196 "Trust is the willingness to increase the resources invested in another party, based on positive expectations resulting from past positive mutual interactions" Six (2007), p. 290 "Interpersonal trust is a psychological state comprising the intention to accept vulnerability to the actions of another party, based upon the expectation that the other will perform a particular action that is important to you" Ahteela and Vanhala (2018), p. 4 "Interpersonal trust is defined as the positive expectations of individuals about the competence, benevolence, and reliability of the organizational members on lateral and vertical levels under risk-prone conditions" (Six, 2007). It is related to a belief and confidence that another party will behave in an ethical, predictable and fair manner (Sarwar and Mumtaz, 2017). McCauley and Kuhnert (1992) noticed that trust is a multidimensional construct, and Tschannen-Moran and Hoy (2000) claimed that definitions of trust often include such aspects as, first of all, vulnerability, but also benevolence, reliability, competence, honesty and openness. Because this paper focuses on interpersonal trust between organizational members, it adopts the definition of trust proposed by Ahteela and Vanhala (2018; see Table 1). Their definition of interpersonal trust is based on four elements: an individual disposition ("the positive expectations of individuals"), desirable features in others ("competence, benevolence, and reliability of the organizational members"), distinguishing between vertical and horizontal trust ("on lateral and vertical levels") and situational parameters ("under riskprone conditions"). In the environment of an organization, organization members may trust their supervisors but distrust their coworkers, or the converse. Therefore, it is important to distinguish between vertical and horizontal (also called lateral) trust. Vertical trust refers to trust relations between an employee and his or her immediate supervisor or top management, whereas horizontal trust refers to trust relations between an employee and his or her peers or equals who are in a similar work situation (McCauley and Kuhnert, 1992). Those two types of interpersonal trust may have a different impact on employees' behavior and performance. For example, Hughes et al. (2018) found that the relationship between IWB in a team and team performance is most positive when horizontal trust is high and vertical trust is low. To explain this result, the authors suggested that when vertical trust is high, employees might not critically evaluate their innovative behaviors because they know that their supervisors support them regardless of the results of their initiatives. Knowledge sharing behaviors The concept of knowledge sharing is widely discussed in the management literature. It is one of the key processes in knowledge management that precedes the exploitation of knowledge. Knowledge sharing is viewed as a behavior (process or operation) through which individuals mutually exchange their knowledge (information, skills, and expertise; Mirzaee and Ghaffari, 2018;van den Hooff and de Ridder, 2004). In the context of organizations, knowledge sharing among employees involves valuable implicit or explicit knowledge, leads to new knowledge creation, develops organizational knowledge and brings benefits to the organization. In particular, knowledge sharing enhances innovativeness at an individual (Kim and Park, 2017) and organizational (Lin, 2007;Michna, 2018;Pittino et al., 2018) level. 
Two active processes (or behaviors) are distinguished in knowledge sharing: knowledge collecting and knowledge donating. Knowledge collecting (gaining or receiving) refers to consulting others for their intellectual capital in order to learn what they know, whereas knowledge donating (disseminating or bringing) is communicating one's personal intellectual capital to others (van den Hooff and de Ridder, 2004; de Vries et al., 2006). It is important to distinguish between knowledge sharing behaviors and attitudes. Behaviors can be seen as actual actions that result from attitudes. As de Vries et al. (2006) noted, knowledge sharing behaviors greatly depend on one's attitude, that is, his or her willingness to share knowledge. Willingness reflects an individual's preparation and readiness to grant others access to his or her intellectual capital. People who are willing to share knowledge are focused on the interest of a group and on expected reciprocity; that is, that other members of the group will also share knowledge. de Vries et al. (2006) distinguished between willingness and eagerness to share knowledge, defining the latter as an individual's strong internal drive to donate his or her knowledge without expecting reciprocity. The benefits for eager people are peer recognition and increased reputation (de Vries et al., 2006). As knowledge sharing has the potential to improve firm performance (Peralta and Saldanha, 2014), an important issue for organizations is which factors influence both knowledge donating and knowledge collecting. Those factors may be of an individual, organizational or technological nature (see Razmerita et al., 2016). Generally, information and communication technology is said to directly or indirectly facilitate knowledge sharing (Choi et al., 2010; Mirzaee and Ghaffari, 2018; Yuan et al., 2013). Knowledge sharing is supported by an appropriate organizational culture and climate (Al-Alawi et al., 2007; Mueller, 2012; Suppiah and Singh Sandhu, 2011), including a constructive communication climate (van den Hooff and de Ridder, 2004), management support (Paroutis and Al Saleh, 2009), reward systems (Amayah, 2013) and employees' affective commitment to the organization (Casimir et al., 2012). In addition, one of the factors most often indicated as affecting knowledge sharing is trust (Al-Alawi et al., 2007; Casimir et al., 2012; Chen and Hung, 2010; Paroutis and Al Saleh, 2009; Rutten et al., 2016). Trust and knowledge sharing The influence of trust on the employees' propensity to share knowledge is important for the organization's innovation. The more someone trusts another person, the greater his or her willingness to share knowledge with that person, for several reasons. First, when trusting a person, we believe that the knowledge transmitted to that person will be used appropriately (Staples and Webster, 2008) and not be used against us, even if this knowledge is incomplete, imperfect or contains errors. We believe that the knowledge provided will not be used to criticize or undermine our competences. For example, a subordinate who trusts his or her supervisor will be more willing to reveal his or her limitations in skills, abilities, and knowledge if he or she trusts the superior not to use this knowledge against him or her (McEvily et al., 2003). What's more, if we trust a person, we expect reciprocity and believe that the other party will share knowledge with us.
This expectation of reciprocity is confirmed in the literature on social exchange theory (see Cropanzano and Mitchell, 2005) and social capital theory (see Hsu and Chang, 2014). Finally, trust affects knowledge collecting. The recipient of knowledge is less apt to verify the accuracy and truthfulness of knowledge that comes from a trusted source. Under such conditions, the recipient does not have to spend time and effort verifying the acquired knowledge but can use it immediately, which speeds up organizational learning, alertness and responsiveness (McEvily et al., 2003). Empirical research confirms the existence of a relationship between trust and knowledge sharing. Hsu and Chang (2014), Renzl (2008), and Staples and Webster (2008), among others, found a positive relationship between interpersonal trust and knowledge sharing. However, contrary results have also been found. For example, Chow and Chan (2008) did not find a positive relationship between social trust and knowledge sharing among Hong Kong managers. Bakker et al. (2006) claimed that "trust is a poor explanatory of knowledge sharing" (p. 594). Some researchers distinguish between trust in colleagues and trust in superiors. For example, Dirks and Ferrin (2002) found that trust in a leader is positively related to job satisfaction, organizational commitment, and confidence in information given by that leader. Whisnant and Khasawneh (2014) concluded that trust facilitates the process of supervisors accruing tacit knowledge from their subordinates. When it comes to trust among coworkers, Politis (2003) proved that faith and confidence in one's peers is positively related to communication and understanding problems. Al-Alawi et al. (2007) confirmed that trust and knowledge sharing among coworkers are related. Taking into account the above considerations, both trust between employees and trust in a superior can have a positive impact on the behavior associated with sharing knowledge with colleagues. Thus, the following hypotheses are proposed: H1. Horizontal trust is positively related to (a) knowledge donating and (b) knowledge collecting. H2. Vertical trust is positively related to (a) knowledge donating and (b) knowledge collecting. An interesting question is whether the strength of the relationship between trust and knowledge sharing behaviors depends on whether the trust is in a supervisor or colleagues. On the one hand, when employees trust their direct leaders (that is, their supervisors), they feel confident and comfortable donating their knowledge. Knowledge donating requires courage and faith that the knowledge will not be used against the knowledge sender. As Mayer et al. (1995) suggested, if supervisors have high levels of integrity, ability and benevolence, employees are more willing to trust the supervisors, take risks in their relationships with the supervisors, and reveal information, including sensitive information about mistakes or shortcomings. Hence, employees will be more willing to share knowledge if they feel support from their supervisors. On the other hand, it seems logical to claim that employees are more willing to donate their knowledge to coworkers and collect knowledge from their coworkers when they trust them. The reasoning is similar to that in the case of the relationship between the subordinate and the supervisor.
If an employee communicates knowledge to his or her colleagues, he or she wants to believe that this knowledge will be used properly and will not be used against him or her. The more trust, the stronger the faith. If an employee collects knowledge from a trusted colleague, he or she believes that he or she is not being deliberately misled and that the information obtained is true. Previous research indicates that trust in colleagues is more important for knowledge sharing than trust in superiors. Wu et al. (2009) found that in high-tech industries in Taiwan both trust of colleagues and trust of a supervisor are positively correlated with knowledge sharing behavior, but that trust of colleagues has a stronger relationship with knowledge sharing than trust of a supervisor. Similarly, based on research conducted in a large Australian automotive company, Lee et al. (2010) concluded that "trust in the team is a better predictor of team knowledge sharing than trust in the leader" (p. 485). The cultural context may impact the relationship between trust and knowledge sharing among leaders and employees. Poland is a hierarchical society (Insights, 2018). Centralization is popular in organizations, and subordinates expect to be told what to do. Moreover, in Polish society there is a strong preference to avoid uncertainty; people are intolerant of unconventional behaviors and ideas, and security is very important in their lives (Insights, 2018). Under these conditions, an employee's relationship with a supervisor might have stronger impacts on the employee's attitudes toward knowledge sharing than relationships with coworkers. Although previous research conducted in Poland confirms that trust positively affects knowledge sharing (Kucharska and Kowalczyk, 2016b; Sankowska, 2013; Spałek et al., 2018), no research clearly indicates whether vertical or horizontal trust has a greater impact on knowledge donating and collecting in Polish companies. The above discussion results in the following hypotheses: H3. The strength of the relationship between vertical trust and knowledge donating is significantly different from the strength of the relationship between horizontal trust and knowledge donating. H4. The strength of the relationship between vertical trust and knowledge collecting is significantly different from the strength of the relationship between horizontal trust and knowledge collecting. Innovative work behavior Organizational innovativeness results from individual innovativeness (Hughes et al., 2018; Spanuth and Wald, 2017). Therefore, there is great interest in innovative work behavior as the source of organizational success (Dorenbosch et al., 2005; Janssen, 2000). Following Janssen (2000) and Yuan and Woodman (2010), innovative work behavior is defined here as the intentional creation and application of new ideas or innovations (new products or processes) in the workplace to improve individual, group, or organization performance. The definition indicates that innovative work behavior is closely related to other concepts in the literature, such as employee innovativeness, innovative job performance and on-the-job innovation (Spanuth and Wald, 2017). Innovative work behavior is a complex concept that may include such behavioral activities as idea exploration, generation, promotion and implementation (Dorenbosch et al., 2005; Scott and Bruce, 1994; Spanuth and Wald, 2017).
Consequently, innovative work behavior is regarded as a one- (Jansen, 2000; Scott and Bruce, 1994), two- (Dorenbosch et al., 2005; Krause, 2004), or even four-dimensional (De Jong and Den Hartog, 2010; Spanuth and Wald, 2017) construct. This paper focuses on two components of IWB: idea generation and idea realization. Idea generation is defined as a creative behavior aimed at searching for and generating new, original approaches and solutions to problems, including new working methods and techniques. Idea realization refers to implementing new ideas in the form of new products or processes in an organization. Axtell et al. (2000) noted that the distinction between idea generation and idea realization is grounded in their different etiologies. Idea generation depends more on individual characteristics (an individual's creativity, self-confidence, job knowledge and job demands) than on group and organizational characteristics. In contrast, idea realization, as a social process, affects other employees. Its success depends on others' approval, engagement and support. Simple innovations are usually introduced by individual employees, but more complex ones need cooperation and various knowledge inputs and competences (Jansen, 2000). Innovation is perceived as a multi-stage process, with the general agreement that idea generation is followed by idea realization (De Jong and Den Hartog, 2010; Krause, 2004; Spanuth and Wald, 2017). Therefore, H5. Idea generation is positively related to idea realization. Knowledge sharing and innovative work behavior The literature review confirms that knowledge sharing is an important process influencing the improvement of innovativeness both at the organizational level (cf. Michna, 2018; Pittino et al., 2018; Zhao et al., 2020) and at the individual level (cf. Anser et al., 2020; Kim and Park, 2017; Mura et al., 2013; Radaelli et al., 2014; Rao Jada et al., 2019). This importance is due to the fact that knowledge plays an important role in creating innovation. Expert knowledge, including knowledge about past solutions and events, can be the basis and inspiration for new solutions. By sharing knowledge with colleagues, the knowledge base of other employees is increased and the chance for the emergence of innovative ideas increases. As noted by Radaelli et al. (2014), "idea generation is a process of knowledge creation that requires recombining internal and external knowledge into new forms" (p. 401). The implementation of ideas cannot usually be accomplished by a single person, but requires cooperation and the knowledge, skills and perspectives of various employees, resulting in a synergy effect (Liua and Phillips, 2011). The ability to accumulate knowledge is important for creating new solutions. For example, the knowledge about clients and their needs gathered by the marketing department is passed to the research and development department, where, in addition to technical knowledge, it is the basis for the development of new products. From the point of view of cognitive psychology, the effective absorption of new knowledge requires its cognitive restructuring or elaboration by the learner (Slavin, 1996). Sharing knowledge triggers these processes, because the recipient of the knowledge has to connect and integrate the new knowledge with his or her current knowledge. As a result, knowledge sharing can cause reflection on current knowledge, its verification, and its reinterpretation. As Radaelli et al.
(2014) claimed, the knowledge recombination and re-elaboration embedded in knowledge sharing stimulate idea generation and application. Research conducted in Poland among team members confirmed that tacit knowledge sharing has a positive influence on team creativity (Kucharska and Kowalczyk, 2016a). These considerations lead to the following hypotheses: H6. Knowledge donating is positively related to (a) idea generation and (b) idea realization. H7. Knowledge collecting is positively related to (a) idea generation and (b) idea realization. Previous research suggests that employees' knowledge sharing has a positive impact on the innovative work behavior of both the knowledge receiver (Majchrzak et al., 2004) and knowledge sender (Radaelli et al., 2014). However, an under-researched issue is which knowledge sharing behavior, knowledge collecting or knowledge donating, is more strongly related to IWB. For a knowledge sender, the positive impact of knowledge donating on IWB might be explained by the self-learning mechanism (Lai et al., 2016). The sender plays the role of a teacher who, before passing knowledge to others, should organize his or her knowledge and codify it so that it is understandable to others. This preparation may trigger reflection on the possessed knowledge, and may lead to the rejection of obsolete knowledge and the clarification of doubts as to the possessed knowledge and cause-and-effect relationships. These processes support the creation of new knowledge, which is the base for innovative solutions. Moreover, the sender usually receives feedback on the knowledge donated. In this way, the sender's knowledge is verified in terms of truthfulness, accuracy and comprehensiveness. The transferred knowledge may not apply under certain conditions. In this case, the feedback provided by the recipients of the knowledge allows the sender to update the original knowledge and contribute to innovative behavior. However, a sender who engages in too much knowledge donating might be less innovative, because the sender has no time to develop innovative ideas. Knowledge sharing can also benefit the innovative behavior of knowledge receivers. The recipient collects knowledge gained by other people, including their experience and proven and useful solutions and practices. Through the learning process, the recipient of the knowledge combines the knowledge gained from others with his or her own knowledge, which leads to the reinterpretation of the knowledge, updates and even the questioning and rejection of obsolete knowledge. Consequently, the acquired knowledge stimulates the creativity and innovative behavior of the recipient of the knowledge (Lai et al., 2016). Research conducted among managers and staff of 148 retail units in China showed that the highest level of employees' innovative behavior was achieved when there was a balance between knowledge outflow from the business unit and knowledge inflow into the business unit (Lai et al., 2016). This finding and the above discussion suggest that both knowledge donating and knowledge collecting have an equal impact on an individual's innovative behavior. Therefore the following hypotheses are stated: H8. The strength of the relationship between knowledge donating and idea generation is not significantly different from the strength of the relationship between knowledge collecting and idea generation. H9. 
The strength of the relationship between knowledge donating and idea realization is not significantly different from the strength of the relationship between knowledge collecting and idea realization. Trust and innovative work behavior Trust is a factor that can significantly influence the willingness of employees to undertake non-standard and innovative actions in the workplace. According to social exchange theory (see Cropanzano and Mitchell, 2005), the more employees trust the organization, the more work and energy they are willing to devote to working in the organization. If an employee trusts his or her colleagues and supervisors, then he or she will show greater organizational commitment, pro-activity and risk-taking (Dirks and Ferrin, 2002; Colquitt et al., 2007). Specifically, the greater the employees' trust in the supervisor, the greater the sense of security and comfort when it comes to the supervisor's reaction to the subordinate's behavior (Erkutlu and Chafra, 2015). As a consequence, the chance of an employee engaging in innovative behavior increases (Mayer et al., 1995; Hughes et al., 2018), which is usually associated with the risk of failure. On the other hand, when there is a lack of trust, employees will focus more on self-protection than on entrepreneurial behavior (Hughes et al., 2018). Both horizontal and vertical trust can play an important role for innovative behaviors. As Hughes et al. (2018) noted, "trust among the team members provides the lubricant for individuals to jointly devise new plans and actions" (p. 755). Where employees have a bond of trust, collaborative discussions and debates develop that stimulate new useful ideas (Yu et al., 2018). If an employee trusts his/her supervisor, then he/she is willing to take more risky actions without fear that he/she will be punished by the supervisor if his/her ideas and actions do not bring the intended results (Hughes et al., 2018). The empirical research to date confirms the relationship between trust and innovative behavior (Afsar et al., 2020; Yu et al., 2018) and creativity (Barczak et al., 2010). Thus, the following hypotheses are proposed: H10. Horizontal trust is positively related to (a) idea generation and (b) idea realization. H11. Vertical trust is positively related to (a) idea generation and (b) idea realization. The previous considerations suggest that trust has a positive impact on knowledge donating and collecting, which in turn facilitates idea generation and realization. Therefore, knowledge sharing can play the role of a bridge linking interpersonal trust to employee innovative behavior. This is in line with Hughes et al.'s (2018) claim that "the team conditions generated by higher levels of horizontal trust produce higher levels of information exchange and cooperative behaviour (...) and set a team climate commensurate with innovation activity" (p. 755). Similarly, Afsar et al. (2020) noted that "trust motivates employees to collaborate and support each other's ideas through reciprocity and knowledge sharing". Moreover, under social exchange theory, employees usually reciprocate for high vertical trust through increased knowledge sharing and innovation (Hughes et al., 2018). Khorakian et al. (2019) found that knowledge sharing behaviors (sharing best practices and mistakes) mediate the effect of ethical behavior in an organization (including support, benevolence, and respect) on innovative work behaviors in public organizations in Iran. These considerations lead to the following hypotheses: H12.
Knowledge donating mediates the positive relationship between trust and innovative work behavior. H13. Knowledge collecting mediates the positive relationship between trust and innovative work behavior. 2.7 Conceptual model Figure 1 presents the theoretical model that guides this study. The relationships between components of the model and the associated hypotheses have been elaborated in the previous subsections. Additionally, the literature suggests that employee education has an impact on these relationships. For example, Le and Lei (2018) indicated that the higher the education, the greater the commitment to knowledge sharing, particularly knowledge collecting. They concluded that "employees who have a higher level of education will have a greater ability and willingness to meet the demands of colleagues in knowledge and information" (Le and Lei, 2018, p. 533). Other studies indicate a significant relationship between education and trust (Charron and Rothstein, 2016; Hooghe et al., 2012). This is explained by the fact that better educated people have better cognitive skills and higher social prestige (Hooghe et al., 2012) and are more tolerant and less suspicious of others (Charron and Rothstein, 2016). However, it should be noted that an insignificant relationship between education level and trust has also been reported (Vaughn, 2011). Similarly, mixed results were obtained in research on the relationship between education and innovative work behavior (Bantel and Jackson, 1989; Hanif and Bukhari, 2015; Leong and Rasli, 2014). Research method 3.1 Data collection An anonymous survey was conducted among employees of a large Polish organization. Selection of the organization for research was guided by the following premises. First of all, the organization should be large, since large organizations usually have richer experience in knowledge management than small and medium enterprises (Hutchinson and Quintas, 2008; Durst and Runar, 2012; Zieba et al., 2016). In the European Union, large enterprises are those that employ at least 250 employees. In addition, since white-collar employees were the subject of the study, the organization should employ a large number of white-collar employees in order to provide a large sample size. The research was conducted in one of the largest capital groups in Poland, which is one of the 10 largest employers in Poland. This capital group is a global organization that places special emphasis on knowledge, cooperation, and innovative activity. This emphasis is expressed in official documents such as codes of ethics and business strategies. In accordance with the adopted code of ethics, it is fundamental for this organization to share knowledge and experience by employees and to build mutual trust. In addition, according to strategic documents, the goal of the capital group is to increase innovativeness. The required sample size was calculated using G*Power 3.1.9.7 software (Faul et al., 2009). The following parameters were used: the effect size f² = 0.15, the required significance level α = 0.05, the desired statistical power = 0.95, and the number of predictors = 5. On this basis, the minimum sample in this study was 138 respondents (a minimal reproduction of this computation is sketched below). Respondents were employed in companies belonging to the capital group. The business profiles of these companies included manufacturing machinery and equipment for the mining sector. A link to the questionnaire was sent by e-mail in May 2018 to all 699 white-collar employees from these companies.
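For readers without access to G*Power, the a-priori sample size above can be reproduced with the noncentral F distribution. The sketch below is a minimal Python equivalent for the overall F test in fixed-effects multiple regression, assuming G*Power's convention for the noncentrality parameter (λ = f²·n); it should land on the reported minimum of about 138.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n: int, predictors: int, f2: float, alpha: float) -> float:
    """Power of the overall F test in multiple regression (R^2 deviation from
    zero), using the noncentrality parameter lambda = f^2 * n as in G*Power."""
    df1, df2 = predictors, n - predictors - 1
    f_crit = f_dist.ppf(1.0 - alpha, df1, df2)
    return 1.0 - ncf.cdf(f_crit, df1, df2, f2 * n)

def min_sample_size(predictors: int, f2: float, alpha: float,
                    target_power: float) -> int:
    """Smallest n whose power reaches the target."""
    n = predictors + 2  # smallest n with a positive denominator df
    while regression_power(n, predictors, f2, alpha) < target_power:
        n += 1
    return n

# Parameters reported in the text: f^2 = 0.15, alpha = 0.05, power = 0.95,
# five predictors; expected result: a minimum sample of about 138.
print(min_sample_size(predictors=5, f2=0.15, alpha=0.05, target_power=0.95))
```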
Data collection lasted one month. During this month, the questionnaire was filled in by 252 respondents, which gave a response rate of 36 percent. Table 2 presents the characteristics of the sample. Since each respondent answered all questions (regarding both independent and dependent variables) at the same time (in one study), there was a risk that the results of the survey might suffer from common method bias. In order to examine this effect, a Harman single factor test (Podsakoff et al., 2003) was used. The test results indicated that a single factor explained 47% of the variance, which is below the threshold of 50 percent. Thus, common method bias was not an issue in this study. Measures Constructs in this study were measured with previously established and validated multiple-item scales. All items were translated from English into Polish following the back-translation procedure (Brislin, 1986). Responses were scored on a seven-point Likert scale ranging from 1 = strongly disagree to 7 = strongly agree. The literature review indicates that education might impact knowledge sharing (Le and Lei, 2018). Therefore, education was a control variable in this study. Education level was coded into two categories: (1) higher education (bachelor's degree, master's degree or doctoral degree), and (2) less than higher education. Analyses Research hypotheses were validated using partial least squares path modeling (PLS-PM). PLS-PM allows one to simultaneously analyze different complex relationships between latent variables, but compared with covariance-based structural equation modeling (CBSEM) it is less restrictive in terms of sample size (Henseler et al., 2009) and does not impose any distributional assumptions on the data (Sanchez, 2013). PLS can be applied in both exploratory and confirmatory research (Chin, 2010). Given the above, the use of PLS was appropriate in this study. The use of PLS included (1) the assessment of the measurement model (essentially confirmatory factor analysis), and then (2) the assessment of the structural model (Hair et al., 2011). Measurement model The measurement model only involved constructs with reflective indicators. Validity and reliability tests were conducted to assess the quality of the measurement model. Confirmatory factor analysis showed that the factor loadings exceeded the critical value of 0.7 (Hair et al., 2019). Moreover, the loadings of the indicators on their respective latent variables were higher than the cross-loadings on other latent variables (see Table 3). Average variance extracted (AVE) was above the 0.5 threshold (Hair et al., 2019). The square root of the AVE was greater than all corresponding correlations (Table 4). The above results supported convergent and discriminant validity. Unidimensionality for blocks of indicators was examined using Cronbach's alpha, Dillon-Goldstein's rho, and the first and second eigenvalues of the indicators' correlation matrices. The results in Table 3 show that all these statistics met the recommended values; that is, Cronbach's alpha > 0.7, Dillon-Goldstein's rho > 0.7, first eigenvalue > 1 and second eigenvalue < 1 (Chin, 1998; Sanchez, 2013). In summary, the results of the above analysis suggested a good psychometric quality of the measurement model. The quality of the structural model The R² determination coefficient, the redundancy index, and the goodness of fit (GoF) were investigated to assess the quality of the structural model. The higher the values of these indices, the better the quality of the model.
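The Harman single-factor check and the block-level reliability and validity statistics used above follow standard formulas; the sketch below is a minimal Python version, assuming item responses arranged with rows as respondents and columns as items, and using the first principal component as the usual approximation of Harman's single factor.

```python
import numpy as np

def harman_first_factor_share(items: np.ndarray) -> float:
    """Share of total variance captured by the first principal component of
    all items pooled together (a common approximation of Harman's test);
    a value below 0.50 suggests common method bias is not a major issue."""
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    eig = np.sort(np.linalg.eigvalsh(np.cov(z, rowvar=False)))[::-1]
    return float(eig[0] / eig.sum())

def cronbach_alpha(block: np.ndarray) -> float:
    """Cronbach's alpha for one block of indicators; values above 0.7
    support the reliability of the block."""
    k = block.shape[1]
    return k / (k - 1) * (1.0 - block.var(axis=0, ddof=1).sum()
                          / block.sum(axis=1).var(ddof=1))

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted from standardized outer loadings;
    values above 0.5 support convergent validity."""
    return float(np.mean(np.asarray(loadings) ** 2))
```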
The R² values ranged from 0.277 for knowledge collecting to 0.692 for idea realization (Table 5). This result means that the R² values in this model were low (R² < 0.3), moderate (0.3 < R² < 0.6), or high (R² > 0.6; Sanchez, 2013). The mean redundancy was 0.622 for idea realization, which suggested that the other constructs (horizontal and vertical trust, knowledge donating and collecting, and idea generation) predicted 62.2% of the variability of the idea realization indicators. The general prediction of model performance was assessed using the GoF index (Tenenhaus et al., 2004), which was 0.57. The GoF index was below the suggested cutoff of 0.7 (Sanchez, 2013). Relationships among trust, knowledge sharing, and IWB The bootstrap method with 2,000 subsamples was performed to generate 95% confidence intervals (CIs) and test the statistical significance of the path coefficients. A path coefficient between variables is significant if the CI generated for this estimated coefficient does not contain zero (Henseler et al., 2009). Table 6 gives the results of the bootstrap validation. Figure 2 shows the structural model with path coefficients. The results reveal that both types of trust are positively related to knowledge donating and collecting, lending support to H1 and H2. Results also strongly support H5, because the path coefficient between idea generation and idea realization is significant and amounts to 0.85. As expected in H6a, knowledge donating is positively related to idea generation. In addition to the direct effects of trust on knowledge sharing and innovative work behavior, the indirect effects of horizontal and vertical trust on idea generation and realization through knowledge donating and collecting were investigated (Tables 7 and 8). The results show that horizontal trust has an insignificant indirect effect on idea generation through knowledge donating (0.07) and collecting (0.05), but the total indirect effect (0.12) is significant. Moreover, horizontal trust has insignificant indirect effects, both total and specific, on idea realization. Vertical trust has a significant indirect effect on idea generation through knowledge donating (0.15), but an insignificant indirect effect on idea generation through knowledge collecting (0.05). The total indirect effect of vertical trust on idea generation is also significant (0.20). Vertical trust also has a significant total indirect effect (0.31) on idea realization, mainly through knowledge donating and idea generation (0.13). These findings partly support H12, but there are no grounds for supporting H13. Additionally, according to the approach proposed by Baron and Kenny (1986), it was investigated whether idea generation might be a mediator between knowledge donating and idea realization. An alternative structural model without a relationship between idea generation and idea realization was examined. In the alternative model, the path coefficient between knowledge donating and idea realization has increased from 0.01 to 0.34 and is significant. Therefore, the effect of knowledge donating on idea realization is completely mediated by idea generation. However, such a mediating effect of idea generation was not found in the relationship between knowledge collecting and idea realization.
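The bootstrap validation described above follows a generic recipe that can be sketched briefly. In the sketch below, `estimate_path` is a hypothetical stand-in for one full PLS-PM re-estimation on a resampled data set (the refitting step itself is outside the scope of this sketch); the returned interval drives the CI-excludes-zero significance check and also feeds the CI-overlap comparisons used in the next subsection.

```python
import numpy as np

def bootstrap_path_ci(data: np.ndarray, estimate_path, n_boot: int = 2000,
                      level: float = 0.95, seed: int = 0):
    """Percentile bootstrap CI for one path coefficient. `estimate_path` is
    any callable that refits the model on a resample of the rows of `data`
    and returns the coefficient of interest."""
    rng = np.random.default_rng(seed)
    n = len(data)
    boot = np.array([estimate_path(data[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])
    significant = not (lo <= 0.0 <= hi)  # CI excluding zero => significant
    return lo, hi, significant
```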
Difference between the strengths of the relationships To test the remaining hypotheses (H3, H4, H8, and H9), the overlap of the relevant CIs was analyzed. According to the very conservative rule, two parameters are significantly different if their CIs do not overlap (see Cumming, 2009). Cumming (2009) proposed a less restrictive approach: two path coefficients are significantly different if the corresponding 95 percent CIs overlap by no more than 50 percent of the length of a single arm of a CI, "in other words the overlap of the 95 per cent CIs is no more than about half the average arm length, meaning the average of the two arms that overlap" (Cumming, 2009, p. 219); a minimal implementation of this heuristic is sketched at the end of this passage. The calculations conducted for the path coefficients showed that the appropriate confidence intervals overlapped by more than 50%. Therefore, there is no statistically significant difference between the strengths of the relationships indicated in hypotheses H3, H4, H8 and H9. Hence, there are no grounds for supporting H3 and H4, and no grounds for rejecting H8 and H9. Impact of education The study investigated whether education level impacts the relationships between constructs. The bootstrap t-test was applied. Results showed that at the 5 percent level there were significantly stronger relationships between vertical trust and knowledge collecting and donating among respondents with less than higher education compared to respondents with higher education (Table 9). The study next investigated whether the mean values of the constructs differ between respondents with higher education and respondents with less than higher education. The Mann-Whitney U test was applied. The mean values of horizontal trust and knowledge collecting (p < 0.05), and of vertical trust and knowledge donating (p < 0.1), were significantly higher for respondents with higher education than for respondents with less than higher education. Discussion The purpose of this paper is to assess the effects of two types of trust (vertical and horizontal trust) on knowledge sharing (knowledge donating and knowledge collecting) and the impact of knowledge sharing on innovative work behavior (idea generation and idea realization). This topic was investigated in the context of white-collar employees from a large Polish organization. Relationships between trust and knowledge sharing The study shows that both vertical trust and horizontal trust are related to knowledge donating and knowledge collecting. This finding is consistent with prior studies, which claimed that trust is an important determinant of knowledge sharing (Hsu and Chang, 2014; Renzl, 2008; Staples and Webster, 2008). However, the present study examined this relationship in more detail. Which trust is more strongly related to knowledge sharing The values of the path coefficients suggested that vertical trust (B = 0.40) might be more strongly related to knowledge donating than horizontal trust is (B = 0.19). However, further analysis indicated that the difference between these two path coefficients was not statistically significant. Moreover, the strength of the relationship between vertical trust and knowledge collecting and the strength of the relationship between horizontal trust and knowledge collecting are very similar (0.29 vs. 0.27). The above results indicate that vertical trust and horizontal trust have a similar positive effect on knowledge sharing in the workplace. This finding is inconsistent with a study conducted by Wu et al. (2009).
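As an aside, Cumming's overlap heuristic referenced in the previous subsection can be implemented in a few lines. The sketch below assumes 95% CIs expressed as (lower, upper) pairs and returns True when the two coefficients can be treated as significantly different under the rule.

```python
def differ_by_overlap_rule(est_a: float, ci_a: tuple,
                           est_b: float, ci_b: tuple) -> bool:
    """Cumming's (2009) heuristic: two estimates differ (roughly p < .05)
    when their 95% CIs overlap by no more than about half the average
    length of the two arms that face each other."""
    if est_a > est_b:  # order so that A is the smaller estimate
        est_a, est_b, ci_a, ci_b = est_b, est_a, ci_b, ci_a
    arm_a = ci_a[1] - est_a          # upper arm of A, pointing toward B
    arm_b = est_b - ci_b[0]          # lower arm of B, pointing toward A
    overlap = ci_a[1] - ci_b[0]      # negative: the CIs do not overlap at all
    return overlap <= 0.5 * (arm_a + arm_b) / 2.0
```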
The authors claimed that trust of colleagues plays a more important role for knowledge sharing than trust of supervisors. This inconsistency may have several causes. First, although one coefficient might be greater than the other coefficient, the difference may not be statistically significant. Wu et al. (2009) did not report empirical evidence that the difference between the coefficient (beta weight) for the relation between trust of colleagues and knowledge sharing and the coefficient for the relation between trust of supervisors and knowledge sharing was statistically significant. This lack of evidence suggests that conclusions about which relationship was stronger were based on simplified assumptions. Such simplistic inferences are also observed in other studies (for example, Le and Lei, 2018; Spanuth and Wald, 2017). If such simplistic inferences are applied in the present study, then vertical trust is more strongly related to knowledge sharing, especially knowledge donating. Second, the cultural context may decide which type of trust, vertical or horizontal, has a greater impact on knowledge sharing behaviors. Studies conducted in Taiwan (Wu et al., 2009) and Australia (Lee et al., 2010) suggested that horizontal trust is more strongly related to knowledge sharing. Compared to Taiwan and Australia, however, in Poland employees are less prone to risk, avoid uncertainty and more often expect clear instructions from their supervisors (Insights, 2018). Under such conditions, trust in a superior can be at least as important as trust in colleagues in the context of knowledge sharing behaviors at the workplace. This study reveals that trust in a superior is significantly more strongly related to knowledge sharing among employees with less than a higher education compared to employees with a higher education. This result may occur because employees with a higher education usually are more self-confident, know their value, and have a greater ability to meet the requirements of colleagues in the field of knowledge (Le and Lei, 2018). Trust in their superior is important in order to share knowledge, but not as important as it is for employees with less education. Relationships between knowledge sharing and IWB This study offered an alternative to traditional studies, which mainly concentrated on the effect of knowledge sharing on innovative work behavior without examining relationships between components of knowledge sharing and components of innovative work behavior. Although this study confirms previous empirical studies that proved that knowledge sharing supports IWB (Akram et al., 2018; Kim and Park, 2017; Mura et al., 2013; Radaelli et al., 2014), it also provides some interesting results. As expected, knowledge donating is positively related to idea generation, but, contrary to expectations, the relation between knowledge collecting and idea generation is not significant. This finding is partly inconsistent with a previous study that suggested that both knowledge outflows from and knowledge inflows into the business unit affect employee innovative behavior (Lai et al., 2016). The lack of a significant relationship between knowledge collecting and idea generation can be explained by the phenomenon of homogeneous knowledge collected by employees. It should be noted that the scale for knowledge collecting used in this study concerned only the collection of knowledge from colleagues.
Employees who collect knowledge from colleagues also collect their ideas. It may be said that the greater the knowledge collecting by employees from other employees, the more uniform the knowledge among employees becomes, which hinders the generation of new, innovative ideas. Zhang et al. (2019) drew attention to the difficulty of generating ideas under conditions of homogeneous knowledge. Previous studies have confirmed the positive relationship between heterogeneity of knowledge and creativity (Huang and Liu, 2015) and innovativeness (Rodan and Galunic, 2004). It was also found that groups composed of members who were heterogeneous in their knowledge structure generate more ideas than groups composed of members who were homogeneous with regard to their knowledge (Stroebe and Diehl, 1994). Therefore, it can be concluded that merely collecting knowledge from colleagues is not sufficient for generating new ideas. Also important are collecting heterogeneous knowledge from various sources, the ability to process the collected knowledge, and divergent, independent thinking. A significant relationship between knowledge donating and idea generation is also interesting in the context of the findings of Axtell et al. (2000). According to Axtell et al. (2000), idea generation depends more on individual characteristics, while idea realization is rather a team activity. Similarly, Liua and Phillips (2011) stated that implementing innovation depends on cooperation between employees and combining their knowledge and skills. Therefore, it would seem that knowledge donating, which is a social activity, would be more strongly related to idea realization than to idea generation. Nevertheless, the results of this research do not confirm this reasoning. This can be explained by the fact that knowledge donating requires self-confidence and job knowledge, that is, the same characteristics of the individual that are useful in the idea generation process (Axtell et al., 2000). Thus, there is no contradiction between social activity (knowledge donating) and individual activity (idea generation). It is more justified to say that social activity strengthens individual activity in the area of idea generation. Surprisingly, this study does not provide evidence that knowledge donating or knowledge collecting is directly related to idea realization. Based on such outcomes, it can be controversially claimed that innovativeness, as real actions in organizations, does not exist as the result of trust and knowledge sharing. More building of vertical and horizontal trust and speeding up of knowledge donating only increases the level of ideas, but ideas are not the same as the innovativeness of organizations. Such a claim is consistent with some previous research conducted in Polish companies, which found no direct relationships between knowledge sharing and a firm's innovativeness. Nevertheless, the mediation analysis indicates an indirect impact of knowledge donating on idea realization, via idea generation. In the present study, idea generation is strongly related to idea realization. This result supports previous claims that innovation is a multi-stage process, that idea generation is followed by idea realization (De Jong and Den Hartog, 2010; Krause, 2004; Spanuth and Wald, 2017), and that stages of the innovation process are highly correlated (and therefore some scholars treat innovative behavior as one construct).
Which knowledge sharing behavior is more strongly related to IWB

The present study suggests that knowledge donating is more important for idea generation than knowledge collecting. This finding is explained by the positions of a knowledge sender and receiver. As is commonly known, knowledge is a crucial ingredient in innovation processes (Plessis, 2007). A receiver of knowledge plays the role of a student who has a knowledge deficiency. Only when a receiver gathers and updates his or her own knowledge will he or she be able to offer innovative solutions. However, if knowledge collecting is not accompanied by knowledge donating, a knowledge receiver is perceived as a knowledge parasite, that is, a greedy and unfair person under conditions of competition between employees within the organization (Lai et al., 2016). This negatively affects the willingness of other people to share valuable and relevant knowledge with such a person, which worsens his or her potential for idea generation. On the other hand, a knowledge sender plays the role of a teacher who has appropriate knowledge and passes this knowledge to others. Hence, a knowledge sender already has knowledge. Moreover, when sharing one's own knowledge, a self-learning mechanism appears (Lai et al., 2016) and the sender has an opportunity to update and recombine the knowledge. Consequently, the knowledge is likely to appear as new knowledge, which can trigger new ideas. Hence, a knowledge sender is likely to generate innovative ideas faster than a knowledge receiver.

Relationships between trust and IWB

This study is consistent with previous studies assuming a relationship between trust in the workplace and innovative behavior (Barczak et al., 2010; Yu et al., 2018). However, the present study analyzed this relationship in more detail. It turned out that a direct relationship exists only between vertical trust and idea generation. Moreover, unlike horizontal trust, vertical trust not only has a direct influence on idea generation, but also influences it indirectly through knowledge donating. These results are inconsistent with the results of research carried out in a major Dutch financial services firm, which indicated that, for the relationship between innovative behavior and team performance, the most favorable situation is when horizontal trust is high and vertical trust is low (Hughes et al., 2018). Cultural factors may have influenced the results obtained. In Poland, people are generally very cautious about unconventional behaviors and ideas and are reluctant to take risks (Insights, 2018). For fear of a negative reaction from their superior, they may refrain from proposing innovative solutions. In such conditions, trust in the supervisor is more important than trust in coworkers, because it is mainly the supervisor who determines the sense of security of the subordinate. With high vertical trust, employees' fears of criticism, ridicule, or even punishment for bad ideas are reduced.

Theoretical implications

This study contributes to understanding the processes of trust and knowledge sharing, and their impact on innovative behavior. Previous research exists on the relationship between trust and knowledge sharing (Rutten et al., 2016), but this study distinguishes between two types of trust and two types of knowledge sharing behaviors.
Such an approach is relatively new in the literature; it was used by Le and Lei (2018), who called for further research in this field in other contexts. Hence, this research responds to that call. Thanks to the analysis of specific manifestations of trust and knowledge sharing, the results of the study reveal that vertical trust is more important for knowledge donating than horizontal trust, while vertical trust and horizontal trust have similar impacts on knowledge collecting. In addition, this study extends the research model for IWB. Although the literature suggests that various knowledge sharing behaviors may have different antecedents and implications (Mura et al., 2013; Le and Lei, 2018), empirical research on the relationships between knowledge donating and collecting on the one side, and idea generation and realization on the other, has been virtually non-existent. The present study indicates that knowledge donating is more important for idea generation than knowledge collecting. Moreover, little has been known about whether and how knowledge sharing mediates the link between trust and IWB. The present study reveals that knowledge donating mediates the relationship between vertical trust and idea generation. Such a mediating effect has not been observed in the case of knowledge collecting. Finally, it was not previously clear whether trust and knowledge sharing have a direct impact on idea realization. While this study shows no such direct relationships, it does indicate that idea generation plays the role of mediator between knowledge donating and idea realization.

Practical implications

The study highlights the importance of trust and knowledge donating behavior for individual innovativeness. If managers want to improve employees' innovative behavior, they should build trust in the organization, particularly vertical trust, and create conditions for knowledge sharing, in particular knowledge donating. Hence, managers should not only encourage employees to collect the knowledge needed to innovate, but also encourage them to donate knowledge to others. Managers can stimulate knowledge sharing among employees using means such as giving employees new challenges, encouraging employees to try new approaches, and initiating processes aimed at developing knowledge and sharing experience and expertise (for example, through mentoring or coaching; Lee et al., 2010). Mentoring may be particularly valuable for knowledge sharing (Bryant and Terborg, 2008), including intermentoring. Intermentoring involves the two-way transfer of knowledge between employees; that is, the mentor is also the student and the student is also the mentor, in a specific expertise area. This type of mentoring provides balance between knowledge donating and collecting and creates conditions for a fuller use of the knowledge and skills embedded in employees. Intermentoring is expected to increase employees' self-esteem and the bonds between employees. A well-designed motivational system should encourage employees to share knowledge and participate in mentoring initiatives. Encouraging knowledge donating means supporting the self-learning mechanisms that occur when knowledge is shared. A knowledge sender should be encouraged to reflect on his or her knowledge and to question previous beliefs and assumptions. Such critical reflection leads to rejecting obsolete knowledge through unlearning (Matsuo, 2018), which releases creativity (Cegarra-Navarro et al., 2010).
Knowledge sharing by employees can be supported by building trust in the organization. The relationship between vertical trust and knowledge sharing is especially strong among respondents with less than a higher education. Such employees might feel less confident compared to employees with a higher education, and therefore they need particular support from their supervisors to share knowledge. Building trust among supervisors and employees requires the creation of an organizational culture in which individuals treat each other fairly and honestly, help each other solve complex problems, and are not afraid to express their frustration, opinions, personal beliefs, and feelings, positive as well as negative (Le and Lei, 2018). Training and team building exercises, which can help employees and supervisors understand how they can contribute to a positive work climate and avoid actions that undermine the trust of their coworkers, should also be considered.

Limitations and suggestions for further research

The present study has limitations which may be overcome in future research. First, the measurement of variables was based on self-reports and was conducted at one point in time, which increases the threat that a common method bias influenced the research results. Although some remedies were applied to minimize the potential effects of bias on the findings (e.g., the anonymity and confidentiality of the respondents was ensured), and a Harman single-factor test showed that the bias was not a problem in this study (a minimal sketch of this test is given at the end of this section), in future research, collecting data from different sources is recommended. For example, the employee's innovative behavior may be assessed by the supervisor or using objective data, if available. Second, this study, like many previous studies, assumes that trust has an impact on knowledge sharing. However, future studies should consider a reciprocal effect; that is, to what extent knowledge sharing increases trust among colleagues. Third, this study is cross-sectional. A longitudinal study would be more appropriate to prove causal relationships between trust, knowledge sharing, and innovative behavior. Fourth, in this study only two types of trust were analyzed, vertical and horizontal trust. However, within these two types of trust, reliance-based trust and disclosure-based trust can be distinguished (see Le and Lei, 2018). Fifth, the study was conducted in one large capital group in Poland. Some results, for example, the result regarding which type of trust is more important for knowledge sharing, may depend on the cultural context. Therefore, the generalizability of the findings is limited. Future research in different countries and industries will help address this concern. Finally, high levels of trust (vertical and horizontal) may reduce the control of knowledge donating and collecting, leading to the spread of poor-quality knowledge. Therefore, it would be wise in future research to test the mediating role of knowledge quality.
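For completeness, a minimal sketch of the Harman single-factor test mentioned above. All survey items are loaded onto a single unrotated factor (approximated here by the first principal component), and the variance it explains is inspected; by convention, common method bias is judged unproblematic when that share is well under half. The item responses below are simulated, not the study's data:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_respondents, n_items = 300, 20
method = rng.normal(size=(n_respondents, 1))                       # shared method variance
items = 0.3 * method + rng.normal(size=(n_respondents, n_items))   # simulated survey items

pca = PCA().fit(items)
print(f"variance explained by the first factor: {pca.explained_variance_ratio_[0]:.1%}")

Note that the classical version of the test uses unrotated exploratory factor analysis; the principal-component approximation above is only a convenient stand-in.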
Conclusion

This study investigated relationships between trust, knowledge sharing, and innovative behavior in the workplace. Contrary to many previous studies, this study distinguished between types of trust (vertical and horizontal) and types of knowledge sharing behaviors (donating and collecting), as well as between types of innovative behaviors (idea generation and idea realization). Therefore, the results of this study enable managers to better understand what factors and processes contribute to greater IWB. Idea generation is stimulated by knowledge donating, which is, in turn, supported by vertical and horizontal trust. Hence, managers should create a climate of trust; this is particularly important for knowledge sharing behaviors among less-educated employees. Moreover, conditions for employees' knowledge donating, which is more strongly related to IWB than knowledge collecting, should be developed with due attention in organizations.
Community Engagement Frontier

This is the summary report of the Community Engagement Frontier for the Snowmass 2021 study of the future of particle physics. The report discusses a number of general issues of importance to the particle physics community, including (1) the relation of universities, national laboratories, and industry, (2) career paths for scientists engaged in particle physics, (3) diversity, equity, and inclusion, (4) physics education, (5) public education and outreach, (6) engagement with the government and public policy, and (7) the environmental and social impacts of particle physics.

Introduction

In Snowmass 2013, investigations into HEP community engagement addressed physics Communication, Education and Outreach (CE&O) [1]; this was the first time HEP incorporated such activities into the Snowmass process. The CE&O Frontier organized its Working Groups according to the target audiences of the General Public, Policy Makers, Science Community, and grade 5-12 Teachers and Students. The framework for all CE&O efforts consisted of three main outcome goals: ensuring the resources needed for the US to maintain a leadership role in HEP research, ensuring the public realizes how valuable and exciting particle physics is, and ensuring US HEP produces a talented and diverse pool of STEM professionals. Common themes of action to meet those goals emerged across the Working Groups: making a coherent and unified case for HEP, instituting real recognition for colleagues engaged in CE&O activities, providing more CE&O resources and training for our community, and creating a central team tasked with supporting HEP CE&O efforts. Each Snowmass 2013 CE&O Working Group produced a number of specific recommendations for implementation to help achieve the CE&O goals. In the intervening years since the release of the CE&O Frontier Report, many individuals and institutions accomplished much in following several of those recommendations. Great examples can be found in the area of public policy. A small group of people developed sophisticated, powerful, and efficient database and wiki tools that have transformed and multiplied our HEP Congressional advocacy efforts. Another major communications success for the HEP community during this last P5 era has been the development and consistent delivery of a coherent, unified message about the US HEP program to Congress and other audiences. On the other hand, our field essentially set aside many of those recommendations. The community did not create a central or national team to support CE&O work, nor did it put in place many institutional incentives to encourage that work among our colleagues. Although much good work was done by many, implementation of CE&O recommendations has been spotty and incoherent overall. The Snowmass 2013 experience made absolutely clear that addressing community engagement is crucial for the health of US HEP. In fact, leadership within the field realized from the very beginning of the Snowmass 2021 process that community engagement needed to be expanded into a Community Engagement Frontier (CEF) with full scope over all areas of community engagement (i.e., a much broader scope than CE&O in 2013), co-equal with the physics frontiers.
The structure of this expanded Community Engagement Frontier includes seven Topical Groups defined primarily by the general issues to be addressed: Applications and Industry; Career Pipeline and Development; Diversity, Equity and Inclusion; Physics Education; Public Education and Outreach; Public Policy and Government Engagement; and Environmental and Societal Impacts. The overall objective is to improve and sustain strategic engagements with our communities in order to draw support for and strengthen the field of particle physics (an inward-focused goal carried forward from Snowmass 2013), while playing key roles in serving those communities (an outward-focused goal added for Snowmass 2021). Arranging the Topical Groups by issue proved to be a very efficient structure that brought great focus to the efforts of the Groups which, succinctly, aim to support: practical applications of research in particle physics and technology transfers to industries; career development and job opportunities for young scientists; encouragement and inclusion of diverse physicists to reflect the diversity in our communities; advances in physics education to produce talented and qualified students; engagement with the public to share in the essence and importance of physics research; partnerships with governments and policymakers to grow the scientific enterprise; and improvements in the ways that our field affects the environment and society in which we live. The issues addressed by the CEF Topical Groups are relevant to all the other frontiers of Snowmass 2021; they are cross-cutting topics for all the physics frontiers. About one hundred letters of interest (LOI) were received on these topics. These LOIs were condensed into thirty-five contributed papers, the details of which are mentioned in the Topical Group Reports. In addition to the LOIs, inputs to contributed papers came from town hall discussions, group meetings, expert speaker invitations, workshops, and surveys. Sections 11.2-11.8 of this report summarize the work done by the Topical Groups, and the individual Topical Group Reports for CEF01 [2], CEF02 [3], CEF03 [4], CEF04 [5], CEF05 [6], CEF06 [7], and CEF07 [8] document the details and the specific recommendations for improving the ways our community engages in these areas.

Overall Community Engagement Frontier Goals

Structuring the CEF work into the seven issue-based Topical Groups defined above was a very effective strategy for organizing and maximizing the productivity of the people working in the Frontier. However, two categories of overlap necessitate a different organizing principle for setting overall implementation goals for CEF. First, the shared experiences and overlapping interests among the various Topical Groups and other frontiers mean that different groups are often addressing the same issue from different directions or perspectives, and we need to continue following the guiding principle of coherence in our efforts. Second, there are often two or more very different recommendations related by a common target audience. Bringing those recommendations together could very well result in more efficient and effective engagement. These considerations informed the development and incorporation of strategies and recommendations articulated in a set of overarching goals for HEP engagement with five interrelated communities: HEP itself, K-postdoc education, private industry, government policy, and the broader society (Figure 11-1).
These overall CEF goals, organized by target community for engagement, actually echo the Snowmass 2013 working group structure. The goals, along with references to the specific Topical Group sections informing each goal, are listed below.

HEP Internal Engagement

It is often said that you can tell much about a group of people by considering how they interact with each other and conduct their own affairs. Therefore, a reasonable place to begin a study of HEP Community Engagement is to analyze how the HEP community engages with itself. What are the characteristics of the ways that we build and organize our own community of colleagues? What values do we embody by our choices of which activities are incentivized (or not) in our work? Every single CEF Topical Group confronted these questions at varying levels. In fact, a plurality of the recommendations put forward by the seven Groups were directed inward, suggesting improvements to the standard operating procedures of the US particle physics community. It is widely known that science broadly is a discipline which lags behind most others in its membership diversity along multiple axes. Physics, and HEP in particular, have been less successful than most other science fields in realizing much improvement in this area over the years. It has become clear that a factor contributing to this limited success is the fact that the pool from which we draw professional talent is a small one, dominated by traditional physics programs at a select group of R1 academic institutions. This must change for HEP to access the depth of talent in the broader society. At the same time, individuals who do bring greater diversity to our field often encounter barriers to full participation and advancement in their careers. The norms of interaction developed over decades by a fairly homogeneous community can serve to alienate those who could enrich our field with different backgrounds and perspectives. It is not only beneficial, but also simply good manners, to present a welcoming environment to new colleagues and neighbors. Most particle physicists are receptive to participation in Community Engagement activity, and many are quite active in this work. However, there are strong pressures within HEP that prevent many members, especially early career members, from significant participation. The first of these is time. HEP research is a demanding task requiring great resources, not the least of which is time. There are always schedules to follow, deadlines to meet, tasks to complete. Particularly for postdocs and junior faculty working to establish themselves in the field, it is a tall order to expect them to sacrifice research time for community engagement efforts. The competition for the next job or promotion is fierce, and time "lost" equates to falling behind one's peers in career achievement. This leads to the second pressure, which is the fact that records of successful participation in community engagement tasks have rarely been given strong consideration by hiring or promotion evaluation committees at our labs and universities. There are signs that this may be changing, as the inclusion of work such as "outreach" in the "service" portion of committee evaluations is more common than it once was, and NSF has long encouraged attention to similar efforts through its grant requirement to address broader impacts. In the P5 era since Snowmass 2013, US HEP has become renowned for its record of project management.
Until now, that success has primarily relied on maintaining a proper balance of our projects' scientific capabilities, budgets, and schedules. Over the past decade, the worldwide HEP community has come to realize that another concern must be added to this balancing act: the direct impacts that our activities have on the communities and environment in which we exist. This means that we must plan to limit the specific and sometimes unique impacts that our collaborative projects and individual work have on the climate and broader environment. These and other aspects of internal HEP community engagement resulted in many specific recommendations for changes or improvements within our field, all of which can be found in the various CEF Topical Group Reports. Overlaps and relationships among the inward-directed recommendations of each Topical Group led to the development of the following goals for engagement within the HEP community (each referenced by the Topical Groups whose reports most directly relate to that goal).

• The HEP community should institute a broad array of practices and programs to reach and retain the diverse talent pool needed for success in achieving our scientific vision. In particular, we need to encourage stronger participation in HEP collaborations by faculty and students from non-R1 academic institutions. (CEF02: Section 11.3; CEF03: Section 11.4; CEF04: Section 11.5)

• HEP communities are still plagued by the alienation experienced by marginalized physicists who are part of the community. HEP needs to address these persistent issues by employing robust strategic planning procedures, including a full re-envisioning of our workplace norms and culture, to prioritize eliminating the barriers and negative experiences faced by our marginalized colleagues. (CEF03: Section 11.4)

- Research institutes and universities should do more to maintain the highest standard of work-life balance and mental health among staff. Proper training should be developed to integrate productive work habits that encourage a balance between professional expectations and private affairs, and good mental health. (CEF02: Section 11.3; CEF03: Section 11.4; CEF04: Section 11.5)

• The HEP community needs to address the under-representation of many groups within the field by implementing new modes of community organization and decision-making procedures that promote agency and leadership from all stakeholders within the scientific community. (CEF03: Section 11.4)

- Funding agencies, universities, laboratories, and HEP groups should improve and sustain international outreach, partnerships, schools, workshops, conferences, training, short visits for research, and the development of research consortia; mechanisms should be developed to facilitate the participation of colleagues from developing countries. (CEF03: Section 11.4)

• In addressing the unique needs and issues of marginalized physicists, HEP communities must engage in partnership with scholars, professionals, and other experts in several disciplines, including but not limited to anti-racism, critical race theory, and social science. (CEF03: Section 11.4)
• All HEP communities should create structures to fully open career path opportunities to everyone, and to conduct event planning that ensures events are accessible to all community members, especially those with disabilities. (CEF03: Section 11.4)

• The HEP community should enact structural changes to foster broader, deeper, and more effective participation in community engagement, through policies such as considering community engagement work in hiring, promotion, and grant decisions. (CEF05: Section 11.6, et al.)

- Individual scientists should encourage others, including peers, mentees, and students, by participating in public engagement and discussing its importance. (CEF05: Section 11.6)

Education

Professionals in particle physics must be prepared for careers in the field through instruction in the skills, techniques, and investigation processes characteristic of the discipline. This training primarily occurs within the education community spanning kindergarten through postdoc (K-PD). The development of foundational skills and interests must begin in the early grades of local schools, and should eventually expand to include learning experiences in international settings. To produce colleagues with the specific abilities required for modern HEP research, our academic institutions need to teach content that matches the needs of our discipline. However, US universities, for example, are sometimes towers of tradition, slow to adapt to changing career environments, especially when people active in the field do not communicate the need for change. One particular area of disconnect is the trend of US HEP becoming a more specialized pursuit, weighted more and more heavily toward the academic, analytical side of the work. With changing dynamics in the organization and funding of HEP within the US, many HEP individuals and collaborations have less familiarity with the technical or engineering expertise required for our projects. In addition, modern particle physics analysis depends on the application of specific skills that are not often part of standard degree programs at many universities. These can range from tried-and-true statistics that are often learned "on the job" in our field, to more novel disciplines such as Artificial Intelligence algorithms that are becoming increasingly common in our work. These concerns, along with the need to improve connections with K-12 and international students, are reflected in the following goals for HEP engagement with the Education community, formulated to address educational deficiencies specific to our field.

• Our field cannot absorb all the early career members that it produces, so funding agencies, national laboratories, and universities should work together to provide more education and career opportunities for engineering and industry-focused research within and outside HEP, and update degree programs to better match the skills needed and the career opportunities available in today's HEP and related fields. (CEF01: Section 11.2; CEF02: Section 11.3; CEF04: Section 11.5)

• HEP academia should work with K-12 teachers and students to create supportive local communities that nurture student interest in math and science. (CEF04: Section 11.5)

• Pre-university and university programs for international student collaboration need to be expanded and supported, especially to partner with colleagues in developing countries. (CEF04: Section 11.5)

Industry

In times past, the US HEP community was supported by strong direct relationships with vibrant industries well matched to the technological needs of the field.
Over the years, as modes of project management, funding, licensing, etc. have evolved, it has become increasingly difficult for US HEP to provide, through project partnerships, support adequate to sustain a viable industrial base of US companies capable of the technology production required for successful execution of HEP projects. Often, these mutually beneficial two-way relationships remain stronger in Europe and Asia than they do in the US. For example, industrial accelerator companies in the US represent a rather small community. Improvements in HEP-industry relationships can be achieved at all scales. HEP could strengthen relationships with large microelectronics firms through new models of agency- or field-wide licenses and platform access. Partnerships with smaller startup companies often exist and are much easier to nurture at the lab or university level, so those connections should be leveraged. From the industry perspective, the reduction of cross-agency or cross-office barriers will enhance and accelerate innovation. Many of the proposals envisioned for promoting strong relationships between the Industry community and HEP are represented in the following overarching goals for Industry engagement.

• Funding agencies and the national laboratories should enhance policies and programs to promote a more fertile environment for cross-agency technology development and technology transfer, and to support co-development with industry of specific technologies such as accelerators, microelectronics, and FLASH-RT. (CEF01: Section 11.2; CEF02: Section 11.3)

• Together, laboratories and universities need to help bolster our US industrial support base by pursuing targeted partnerships with early stage scaleup companies on HEP projects. (CEF01: Section 11.2)

• The HEP community needs to strengthen ties with industry and other fields by developing effective alumni networking tools and programs, to facilitate transitions to industry careers and encourage industry collaboration on HEP projects. (CEF02: Section 11.3)

Policy

When we speak of Policy in this context, we do not refer to the official rules by which an organization operates. We refer to a specific target community for HEP engagement: the Policy community, the collection of governments and individuals working within government that have influence on enacted policies directly affecting our field. The HEP, Education, and Industry communities all operate and exist within a milieu of government policy. The core of HEP's Policy engagement has always been its highly effective Congressional advocacy, conducted by the major Users groups representing US particle physics to secure strong funding for US HEP. The primary component of that Congressional advocacy is the "DC Trip," which has been developed, expanded, and refined by a small group of dedicated colleagues for decades. It has become a major, sophisticated operation that is held up to other fields as the gold standard of scientific advocacy. Throughout the P5 era, HEP has successfully garnered strong bipartisan support in Congress for the field's program of research. One weakness of this effort is the wholly volunteer nature of the stewardship of our Congressional advocacy. The HEP community should dedicate resources to put it on a sustainable path for future growth. On the other hand, there is considerable room for improvement in our advocacy to the federal Executive branch. Across most administrations, support for HEP in Presidential budget proposals has typically not reached the level of Congressional support for many years.
Our interactions with the Executive branch are limited in breadth and frequency, so this is a potential area of significant growth. The HEP community has rarely, if ever, mounted any real effort to advocate on behalf of the field to state or local government. While state and local advocacy would likely be selective in its application, it still represents a real growth opportunity for HEP support. Colleagues have expressed interest in advocating on funding issues that may not be HEP-specific but have a direct impact on our field and its health; examples include visa and immigration policy. The Users group Congressional advocacy has never directly promoted policies outside of HEP funding, and likely would not attempt to advocate for non-HEP-specific issues on its own. The main reason is the general principle that advocacy is best carried out by the largest group available with common purpose on a given issue. Therefore, partnership with broader scientific societies such as APS, AAAS, etc., would be a more powerful form of HEP advocacy for more general scientific issues. Consequently, the greatest growth in HEP Policy engagement would come from achieving the following overall goals.

• APS DPF, HEPAP, and the user groups need to review the structure of HEP advocacy, including considering the formation of an HEP community government engagement group with responsibility for expanding government engagement capabilities. (CEF06: Section 11.7)

• The HEP user groups and DPF need to provide the resources for continued growth and sustainability of the annual HEP Congressional advocacy effort. (CEF06: Section 11.7)

• The user groups, DPF, laboratories, and universities should build on our successful HEP Congressional advocacy by expanding our advocacy to the federal executive branch and to state and local governments. (CEF06: Section 11.7)

• HEP should establish a group, in partnership with other science and physics societies, for advocacy on non-HEP funding issues. (CEF06: Section 11.7)

Society

Each of the four target communities described above is a subset of the broader society at large, existing within what is sometimes referred to as the general public. When we speak of broader society as a target community for engagement, typically we are referring to the portion of society that is the complement of the union of the other four communities; in other words, everyone in society outside of HEP, Education, Industry, and Policy. The key theme in Societal engagement is reflected in the term engagement itself. Until recently, interactions with the public were usually spoken of as Public Outreach. However, outreach implies reaching out to some group; it is a one-way activity between groups that are not on the same level. HEP is telling other people something. What we're saying might be good, but we're not listening. This is not an effective means of building relationships. Conversely, most groups involved in outreach communication have evolved to frame what they do as engagement, or engaging with another group. This implies a co-equal partnership, hopefully one that is mutually beneficial. This value leads to the articulation of two major goals for real community engagement with the broader society, one very general and one very specific.
• HEP needs to transition from an ethos of conducting outreach and communication to the public, to a culture of engagement in relationships with the public. This should be done by building lasting relationships with the full breadth of our supporting communities (especially those that have been historically excluded) that are not based on transactional interactions, but rather on real two-way partnerships that consider the needs and interests of the audience and include its members in program design. (CEF05: Section 11.6; CEF07: Section 11.8)

• HEP should build synergistic collaborations with the non-proliferation community that draw on a broader spectrum of funding sources for work on HEP-specific technologies related to nuclear non-proliferation. (CEF07: Section 11.8)

Participation and Implementation

Throughout the Snowmass 2021 process, there was relatively low participation in Community Engagement Frontier efforts. This is true even though it is generally agreed that CEF topics are cross-cutting, i.e., they affect the entire community to various degrees, and thus are important to address. The vast majority of CEF work was carried out by the small number of frontier and topical group conveners, with only a few additional dedicated community members making significant contributions. Indeed, because of low community involvement, many contributed paper study groups were led and conducted by the CEF topical group conveners themselves through to publication. Instead of CEF being a cross-cutting frontier with significant participation, it became an isolated set of activities (in spite of frontier liaisons) carried out by a relatively small group of people, almost all of whom are also physicists with interests in the other "physics" frontiers. These dedicated volunteers were largely prevented from participating in the Snowmass physics frontiers at significant levels by the burden placed on them by the lack of participation in CEF by the majority of their HEP colleagues. As a result, all of the CEF topical group conveners sacrificed career advancement opportunities in order to carry out, on behalf of the entire HEP community, the work necessary for the health of the field. Various reasons have been suggested to explain low involvement in CEF, e.g., lack of time or the fact that career progression depends on the quality of research output rather than community engagement effort. Surveys done in Refs. [9, 6] offer further insights into the low participation issue. It is also possible that Snowmass is the wrong time for the community to focus on developing a plan for addressing CEF issues. Certainly, individuals' concerns about time management and maximizing the potential for career development are heightened during the high-stakes planning and decision-making of Snowmass. Snowmass may be the one time that presents the greatest barriers to a large fraction of the community choosing to participate in CEF, so perhaps this should be the last time that CEF is a part of Snowmass. Whatever the reasons, the HEP community, both corporately through structural change and as individuals through personal reflection, must decide that everyone's participation in CEF issues is required for the field to become healthy and grow. We can no longer rely on a small number of colleagues to shoulder the full burden of this work to the detriment of their own careers. Our field simply will not survive otherwise. If the field of HEP does decide that CEF issues are worth addressing, and furthermore commits to doing so, then a plan for implementation of CEF recommendations must be developed.
In the past, the responsibility for guiding the selection and implementation of Snowmass recommendations has been wholly delegated to the Particle Physics Project Prioritization Panel (P5). This arrangement has worked exceedingly well over the last 8 years for implementing a consensus plan of HEP projects. However, experience has shown that the current P5 mandate and makeup are not suited to adequately shepherd other areas of the HEP enterprise, including community engagement issues. For example, Snowmass 2013 included the Community, Education, and Outreach Frontier, which produced a report that made several recommendations [1]. Neither P5 nor any other HEP organization or leaders took ownership of those issues to ensure that recommendations from that frontier were implemented. As a result, despite the fact that individual people and institutions in our community have made efforts in these areas in the last decade, little overall progress on these recommendations has been realized since the last Snowmass. To avoid a similar fate for the Snowmass 2021 CEF recommendations, US HEP must establish a structure by which designated entities are given ownership of, and responsibility for, ensuring that CEF recommendations are implemented and monitored for progress. One possibility is simply expanding the P5 charge to encompass CEF issues. It is not clear that this is the most appropriate solution, though. P5 was explicitly designed to effectively prioritize (largely experimental) HEP projects. It may be that P5 is not ideally situated in the HEP ecosystem, or appropriately staffed, to evaluate, choose, and monitor the progress of CEF initiatives. Considering the five engaged communities around which the CEF goals are organized, it could be that the American Physical Society Division of Particles and Fields (DPF) is best positioned to shepherd the internally-focused recommendations for HEP, because that organization is most representative of the entire field's membership. Perhaps a group formed in partnership between the funding agencies, the laboratories, and universities is needed to manage engagements with industry, since that is where most of the direct relationships with industry are formed. The Universities Research Association, whose membership consists of university administrations and whose role is to connect academia with the laboratories, could be best suited as a sponsoring organization for a team working toward implementation of education initiatives. The option of forming an HEP community government engagement group, composed of elected community representatives and policy experts, to expand advocacy efforts should be considered. As the focal points for public engagement with HEP, the laboratories themselves may be the ideal choice to manage the recommended programs directed at the broader society. All of these stakeholders should begin the conversation that must lead to an agreed-upon structure for taking responsibility for implementing CEF recommendations. If this does not happen, then we face another decade of no progress on CEF issues in the HEP community. The rest of this report describes the impressive amount of high-quality work that a small number of your colleagues accomplished on your behalf. We hope that as you read and consider this content you will be convinced of two propositions:

1. It is critical that we as individuals agree on the importance of all working together to address CEF issues in HEP.
2. A structure within HEP for taking ownership of and responsibility for implementing CEF recommendations and monitoring their progress must be developed.

If we as a field can make these two ideas a reality, then US HEP will be much stronger and healthier by the time we gather for the next Snowmass process.

CEF01: Applications and Industry

The charge of topical group CEF01: Applications and Industry is to develop strategies to strengthen HEP/Industry relationships in both directions, i.e., forming more partnerships to draw on industry expertise to further HEP goals, and building on programs that facilitate the transfer of HEP technologies and techniques for use in the broader society. This group considered the relationship between HEP laboratories, universities, and industrial stakeholders. In particular, CEF01 pursued the following objectives: (1) how to create an innovation ecosystem mutually beneficial to national laboratories, academia, and industry; (2) how to maximize the benefit of HEP-funded technology outcomes to practical applications; (3) how to encourage co-development of related applications across agencies and programs; and (4) how to leverage HEP project partnerships to enable innovators to become entrepreneurs through tech commercialization. In order to expand the discovery reach of experimental high energy physics, innovations in a variety of technologies are required to push operational and measurement tools and techniques to ever-higher levels of spatial and temporal precision. Not only do these advances propel scientific discovery, but they also enhance industrial capabilities to deliver novel and powerful applications to the benefit of the broader society. Strong development relationships between the laboratories and universities of the HEP community and small to large scale tech companies in the industrial community are key for building an efficient and sustainable ecosystem for advancing technologies such as accelerators, microelectronics, artificial intelligence, and quantum information. With individual modern HEP experiments characterized by industrial scales, such as detectors with more than 1 billion sensors, 70 kilotons of liquid argon, or data rates equivalent to the entire North American internet traffic, it is obvious that a robust and diverse array of industry partners is necessary for HEP to mount almost any project in its portfolio. On the other hand, the engineering design benchmarks needed to handle the extreme radiation, cryogenic, low power, and inaccessible operating environments of HEP projects often exceed those typically found in industry by orders of magnitude. Multidisciplinary technology design and production partnerships for HEP between national labs, academia, and industry accelerate lab-to-fab innovation, prototyping to scale, technology maturation, spin-off development, and rapid adoption, all of which benefits industry capabilities; they also accelerate scientific discovery across the landscape of federally-funded research. Barriers to effective HEP-Industry partnerships do exist. Although Small Business Innovation Research (SBIR) funding can facilitate lab and university partnerships with small business, SBIR timeframes and funding levels are inadequate to support the large-scale HEP projects that require collaboration with big business. HEP technology goals and requirements are often not communicated to industry broadly and effectively. Economy of scale with regard to tool and license purchases from industry is typically not exploited in HEP.
Science goals are not well mapped to technology goals across funding agency offices, often resulting in the loss of synergistic co-development opportunities. Those working in CEF01 produced five contributed papers examining three different modes of collaborative partnership and cooperation on three specific areas of technology:

• Programs enabling deep technology transfer from national labs [10];
• Application-driven engagement with universities, and synergies with other funding agencies [11];
• Big industry engagement to benefit HEP: microelectronics support from large CAD companies [12];
• Transformative technology for FLASH radiation therapy [13];
• Nurturing the industrial accelerator technology base in the US [14].

The following sections summarize the ideas and set forth the suggestions arising from each of those papers, detailed in Ref. [2].

Programs Enabling Deep Tech Transfer from National Labs

To achieve the scientific goals of HEP projects, DOE national laboratories and the experiments they host require innovative ideas and at-scale prototyping for novel technologies. These projects must bridge the academy's drive to push beyond state-of-the-art capabilities and industry's forte in quality control and reliability. Experiments and industry collaborating to move these novel technologies from ideas to robust and cost-effective mid-scale manufacturing can put industry on a path to commercial production. However, a proper environment is necessary to nurture this laboratory-to-fabrication (or lab-to-fab) technology transfer process in a manner that can sustain spin-off development and startup ventures over the long term. Several recommendations are made to develop an effective technology transfer ecosystem for HEP at the national labs:

• DOE should implement specific policy changes to foster deep technology transfer. (CEF01 Recommendation 4)

There are several specific policies within DOE that could be optimized to encourage more efficient technology transfer. Aligning inventor royalty distribution consistently across the DOE complex would simplify commercialization of technology developments. Laboratory and/or Division royalty shares might be used to support additional projects, rewards programs, and innovation/entrepreneurship educational opportunities for laboratory staff. The use of Partnership Intermediaries (PI) can accelerate commercialization, particularly for laboratories supporting HEP facilities with limited Technology Transfer (TT) resources. A PI is a non-profit with specialized skills to assist federal agencies and laboratories in TT and commercialization. Past pilot programs in the Department of Energy Office of Technology Transitions have shown promise in assessing technologies for market pull, marketing HEP-related technologies, and matchmaking technologies at national labs with entrepreneurs in private industry. This can be particularly successful when "dual-use" applications are identified early, which can be leveraged to enable upfront marketplace analysis and speed market acceptance. Other Transaction Authority (OTA) is a special mechanism federal agencies use to obtain or advance R&D or prototypes. The government's procurement regulations and certain procurement statutes do not apply to OTs; thus, OTA gives agencies the flexibility to develop agreements tailored to a particular engagement with companies unwilling or unable to comply with the government's procurement regulations.
While the Energy Policy Act of 2005 granted OTA to DOE at the agency level, it did not authorize the labs to use it directly. OTA could be a more effective model for technology transition if driven at the local laboratory level, where the interaction with industry is vital for success.

• Industry and national laboratories should develop public-private partnerships to accelerate scientific discovery and benefit industry applications, particularly in emerging technologies. (CEF01 Recommendation 1)

Promoting public-private partnerships will accelerate HEP innovations. In specific technology areas such as accelerators, US federal program managers have proposed developing public-private partnerships to foster and support small and large technology businesses that collaborate with the laboratories and serve as commercialization partners for critical technologies developed as part of facilities and experiments in high-energy physics. These public-private partnerships could serve as both advocacy and economic development entities for HEP-derived technologies, as well as matchmakers that aid companies and laboratories in forming collaborations which lead to positive commercialization outcomes. Entrepreneurial Leave Programs (ELP) allow employees to take a leave of absence or separation from the laboratory in order to start or join a new private company. ELPs encourage startup activities by reducing the risks faced by the employee entrepreneur. Elements of an ELP may include business preparation/training, a means for licensing laboratory IP, continuity of health benefits during leave, and a mechanism for returning to work. ELPs are not implemented consistently across the DOE complex; some laboratories have ELPs while others do not. Providing more technology transfer educational opportunities targeted to HEP researchers will facilitate a ramp-up to I-Corps-level engagement for high-energy physics researchers. This would be a great opportunity to give researchers the building blocks that enable more engagement in capturing innovations. Discussions of the types of intellectual property (e.g., patents, copyrights), the rights afforded to researchers from their innovations, and the mechanisms to engage with industry to advance their technologies would provide valuable resources and new perspectives to HEP personnel.

Technology Transfer with Scaleups

HEP collaboration with small businesses is primarily facilitated through SBIR program Phase I awards, which are relatively plentiful and easy to access. However, sustaining development through this channel is difficult because Phase II awards are more limited in number and lack sufficient funding for deep tech transfer. Work with large companies through Cooperative Research and Development Agreements (CRADA) is a slow and lengthy process. In the long term, CRADA partnerships are quite fruitful, but the slow starts and small number of opportunities create a high barrier to execution. A relatively unexploited and overlooked intermediate option is partnership with mid-sized scaleup companies in the post-startup phase. These middle-ground businesses can be identified using a bottom-up approach of scouring online databases such as crunchbase.com and dealroom.com, or a top-down approach of building relationships with venture capitalists (VCs) with vested interests in scaleup firms.

• HEP should leverage multiple programs and relationships to build collaborations with scaleup companies on HEP projects. Laboratories should host "Discovery Days" for scaleups.
National laboratory business development (BD) efforts with larger companies involve hosting Discovery Days, in which product leads and problem owners from companies are invited to visit the lab and hold discussions with technical/domain experts. Success is measured by the number of Discovery Days that convert to project collaborations. Similar Discovery Days should be established to host scaleups at the lab for a deep dive into their technology roadmaps. Venture capital firms generally prefer to procure services from a commercial service provider instead of a lab, partly because of the difficulty of getting technology out of the lab given how hard it is to obtain an exclusive license. Labs find it easier to give an exclusive license to large corporations than to startups, where there is more chance the company might fail. Leveraging contacts at venture capital firms is extremely useful, since they have access to service providers such as lawyers, accountants, government lobbyists, and labs. The labs create a summary description of what they do (i.e., their value proposition), which VCs can then share with companies to see if they are interested. HEP should also leverage existing university relationships to connect with venture capitalists. Some VCs, such as ARCH, look for strong scientific founders who can help their companies at early stages, including by getting an exclusive license from the university, which is not as easy to get from a national lab. The university model, which allows staff members to spend one day a week on external projects, helps facilitate such work.

Application-Driven Engagement with Universities, Synergies with Other Funding Agencies

Laboratory-university HEP partnerships have been very successful, but within the United States they have been limited to university physics departments. However, as technology advances, the level of engineering design for accelerators and detectors keeps growing, and close collaboration between the labs and university engineering departments is becoming more important. Interactions between the labs and engineering departments are opportunistic and transactional rather than systematic and synergistic, as they are with physics departments. In contrast to Europe, the US HEP community does not have the programmatic ability to directly support application engineering research in the academy. In areas such as computation and microelectronics, upcoming HEP technology projects will require significant amounts of engineering R&D. This activity, and those conducting it, need to be considered integral to the HEP community. In addition, HEP labs need to partner with universities to produce a technology workforce.

• Funding agencies should work with universities to create cross-agency engineering initiatives focused on application-driven fundamental technology rather than fundamental science. (CEF01 Recommendation 3)

The DOE Office of HEP should engage more with engineering departments to create explicit representation of engineering partnerships in established listings of funding opportunities, and should clearly label support for HEP-Engineering efforts, such as fellowships reserved for engineering graduate students, or a class of projects designated as science-engineering partnerships, where the project application requires dedicated components for both science and engineering (similar to NSF grant funding, where a research component and an education component are to be described individually).
These engineering collaborations should be expanded across DOE and with other agencies (e.g., NASA, NSF, DOD).

• HEP labs and universities should work together to develop intentional pipelines of engineers for the HEP workforce, from undergraduate students to the professional ranks.

A possible pipeline could be as follows: HEP labs recruit undergrads for internships through partner universities → students are trained in the setups and topics the HEP lab prioritizes → successful undergraduate students are channeled to graduate programs across a network of partner universities → universities recruit these students into engineering PhD programs → co-advising models are used to mentor these students by both an engineering professor and a HEP scientist → the students feed back into the HEP workforce. In addition, universities should welcome input from HEP labs on recruiting the next generation of graduate students from the cohort of international students with interdisciplinary (physics, science, and engineering) backgrounds. Schools could also establish joint academic appointments for HEP lab and industry scientists within engineering departments. These HEP scientists could then also serve as thesis advisors and thesis committee members for students. HEP labs should promote engineering students for awards, such as the URA Visiting Scholar award. Oftentimes, engineering faculty and engineering PhD students are not aware of all the opportunities that exist within the HEP and national lab ecosystems. Guidance from HEP scientists will help lower entry barriers for them.

Big Industry Engagement to Benefit HEP: Microelectronics Support from Large CAD Companies

Only a few large companies have both deep expertise in modern microelectronics and access to CAD-EDA tools. In addition, ASICs designed for the extreme environment requirements characteristic of HEP have little market value for those large companies. Therefore, microelectronics for HEP tend to be designed by partnerships of national lab and university personnel. However, these collaborations typically do not have access to the full suite of CAD tools due to complicated and expensive licensing frameworks, which are negotiated independently by each DOE lab. The DOE needs to develop a centralized licensing framework with CAD vendors to bring economy of scale and flexibility for each lab to procure the set of tools required for its own projects and teams. There are motivating benefits for the microelectronics industry to pursue this business relationship with HEP. Extreme environment microelectronics make up little of the commercial market, but that segment is growing, particularly for QI and AI applications. DOE science users are typically good sources of feedback on cutting edge uses of advanced CAD-EDA tools, and they also develop the talent pool for the microelectronics workforce. The following recommendations arose from DOE HEP hosted meetings with several major CAD-EDA companies.

• A collective all-of-DOE approach for engaging Big Industry should be employed for procurement of common industry tools, licenses, and services. (CEF01 Recommendation 5)

Setting DOE-wide common terms and conditions, with the flexibility for each lab to make the technical choices specific to its program, and negotiating low-cost research licenses for basic science developments will enhance project collaboration with big companies. DOE should consider creating a Collaborative Innovation Hub scoped for cooperative team-shared access to CAD/EDA tools, training, and support.
Establishing a dedicated cloud-based communal participation platform between academia, DOE national labs, and CAD/EDA companies, and leveraging successful solution frameworks (e.g. DARPA Innovation Package, Europractice IC Service, DOD Cloud Access Rights), will bring efficiencies of shared access. It will also be useful to incorporate some aspects of CAD/EDA companies' academia policies for research projects at national labs, in order to create a new class of research licenses. The resulting solutions should keep intact the premise of CAD/EDA companies' contributions, with special arrangements for commercializing research results. The academic network can also be leveraged to cultivate the talent to advance and promote innovations in semiconductor technologies.

Transformative Technology for FLASH Radiation Therapy

Radiation therapy (RT) cancer treatment has arguably delivered the greatest societal impact of any particle accelerator application, and a large share of accelerator science has been enabled through HEP research support. FLASH radiation therapy (FLASH-RT) is a recent development in which ultra-high doses of therapeutic radiation are delivered in less than a second. Experiments show that FLASH-RT effectively destroys tumors while almost completely sparing normal tissue. However, there are technical difficulties in the development of a clinically safe delivery system, and the accelerator capabilities within the HEP community will be needed to resolve them. Most R&D for FLASH has been carried out using 4-6 MeV electrons from clinical linacs, producing strong results. Photon-beam FLASH studies using synchrotron radiation and X-ray tubes have yielded mixed results. Some work has also been done with 230-250 MeV shoot-through proton beams from CW and isochronous cyclotrons. Limitations include intensity requirements preventing the use of energy degraders for proton beams, and synchrotrons lacking the ion-beam intensity required for FLASH.

• Prioritize and simplify high risk, high reward transformative technology opportunities. (CEF01 Recommendation 6)

In some technical areas (e.g. FLASH radiotherapy), high-impact technology incubation by the HEP ecosystem can produce significant, and occasionally disruptive, benefits to society within a decade timeframe. In these scenarios, we recommend prioritizing and simplifying access by all domestic stakeholders to HEP facilities, expertise, and resources. The HEP community should carry out a broad R&D program to clinically realize the curative potential of FLASH-RT with different radiation modalities. Among the relevant projects are:

• The Advanced Compact Carbon Ion Linac (ACCIL), a program initiated by Argonne National Laboratory to develop a compact proton linac with up to 1 kHz repetition rate, capable of delivering FLASH-RT doses;
• Scaling Fixed Field Gradient Accelerators (FFGA), synchro-cyclotron-style proton accelerators that can operate at the high repetition rates and high currents consistent with FLASH needs; most of the current R&D programs on scaling FFGAs are performed by Japanese research groups;
• Non-scaling FFGAs, which are particularly well suited for accelerating other ion species (e.g. carbon); a pilot facility is under construction at the National Particle Beam Therapy Center (Waco, TX);
• Laser-driven accelerators, which can deliver very large doses of protons or high-energy electrons from a compact source (both scenarios are potentially of interest to FLASH-RT).
The bulk of the US program is centered at the LBNL BELLA laboratory;
• The pulsed-power-based linear induction accelerator (LIA) using a multilayered bremsstrahlung conversion target also represents a very promising technology for meeting FLASH-RT requirements; a pilot program is underway at LLNL;
• Multiple groups are also working to develop FLASH-capable X-ray systems, including the ROAD initiative by UCLA/RadiaBeam and the PHASER initiative by SLAC/Tibaray;
• One potential application, which can take advantage of the HEP community's recent interest in novel cold-RF technology, is a compact cold-RF Very High Energy Electron (VHEE) radiotherapy system, with relevant R&D programs initiated at SLAC and at CERN.

Nurturing the Industrial Accelerator Technology Base in the US

It is widely perceived by the HEP accelerator community that accelerator technology transfer to US industry is not a high priority. US HEP commonly develops state-of-the-art accelerator technology, then buys it back later from international firms for domestic projects. Europe and Asia have nurtured vibrant accelerator industrial bases, leaving US firms at a competitive disadvantage. This has resulted in a US accelerator community plagued by increased costs, insufficient component availability, dependence on foreign sources, a small talent pool of technical personnel, and low societal recognition of the benefits of accelerator science. Although industrial firms play critical roles in the scientific enterprise, important US accelerator companies have struggled to survive. Pioneers in SCRF and undulator technology enjoyed initial success, but failed due to the inability to sustain support from DOE research, leading to the loss of capital and unique expertise. Regulatory and policy recommendations are suggested to build a competitive domestic industrial base for accelerator technology.

• DOE should invest in programs to provide direct support to specific critical need industries. (CEF01 Recommendation 7)

There is growing interest in the community to improve support for the domestic industrial vendors providing critical technological capabilities to the HEP ecosystem. We recommend that DOE take a proactive approach in establishing critical technology needs and work directly with qualified vendors to maintain and develop critical industrial capabilities relevant to these needs. The US Small Business Innovation Research (SBIR/STTR) program should be modified to nurture these small businesses across the "Valley of Death". One improvement would be to more closely align the program technical topics with the future procurement needs of the labs, and to encourage the labs benefiting from SBIR-funded work to maintain the momentum and work with industry beyond the SBIR-funded phase. DOE should also establish a method to identify key technologies that will be needed on a decade time frame and create new channels of direct funding to qualified industrial enterprises to develop the expertise, infrastructure, and capacity to meet such needs. It is equally important to help sustain the companies that have already achieved critical capabilities. Specialized industrial vendors should be supported by implementing directed "knowledge transfer" programs. Recent decades saw a proliferation of national-laboratory-based commercialization centers built around technology transfer activities.
Yet few of them can report success, and the idea of technology transfer through funding commercialization activities by the labs is generally counterproductive for the purpose of building the industrial vendor base. We believe it would be more beneficial to deemphasize technology transfer as a means of supporting the labs, and to emphasize knowledge transfer as a means of supporting motivated businesses in expanding capabilities of interest to DOE programs. Laboratories should also simplify some of their procurement practices, and likewise explore creative ways for industry and laboratories to collaborate on prototype development in a manner that minimizes risk and maximizes return to both sides. The accelerator community should promote programs that facilitate direct and open communication channels between laboratory engineering and technical staff and their industrial counterparts (there are many conferences for scientists to attend and share their experiences, but few venues are available to the technicians and engineers whose skills are essential and irreplaceable in our field).

CEF02: Career Pipeline and Development

This working group is not simply about making early career scientists aware of different opportunities, but also about changing the culture of HEP career paths. It aims to identify and encourage career opportunities for high energy physicists in both academia and industry, and to identify useful partnership options between HEP and industry. Smoother pipelines between different types of employment are critical for the success of HEP trainees in the future. One objective is to promote the skill development of physics graduates and young researchers and to encourage career-direction-based scientific majors and skills. Thirty-two LOIs were submitted to this working group and were condensed into three contributed paper topics, namely: 1. Facilitating non-HEP career transitions; 2. Enhancing HEP research in predominantly undergraduate institutions and community colleges; 3. Tackling diversity and inclusiveness in HEP. Topic (3) was integrated into and developed in the topical group on diversity, equity and inclusion in Section 11.4. Ultimately, two contributed papers on topics (1) and (2) were prepared and presented in Refs. [15,16]. Considering that there are fewer academic positions than job seekers, many degree holders will eventually seek jobs outside HEP, where, in sectors such as industry, there is demand for skills acquired in HEP training, e.g. data science or machine learning. However, organized guidance, developed through engagements between the HEP community and alumni who have already transitioned out of HEP, is needed to help with non-HEP career transitions [15]. Another career trajectory may lead to employment at predominantly undergraduate institutions (PUI) and community colleges (CC), with high teaching loads and lack of support for research. PUIs and CCs can serve as pipelines to improve diversity and under-representation in HEP, by facilitating participation of faculty and students at PUIs and CCs in HEP activities [16].

Facilitating Non-HEP Career Transitions

It is noted in Section 2 of Ref. [15] that more than two-thirds of trained physicists will eventually transition to employment in the private or government sectors, collectively referred to as "industry"; moves to the industry sector may occur at various stages of a HEP career, and proper planning is needed to facilitate the transition.
A survey conducted by the Snowmass Early Career (SEC) physicists included questions about career pipeline and development [9], offering insights on existing efforts, support, networking, preparation, and attitudes towards career change and alumni participation or eventual return to HEP. Details about the SEC survey and findings related to career pipelines and development are documented in Refs. [9,15]. We recall here the suggestions:

• Supervisors and mentors should be directly involved in planning the career of their mentees early on. This career plan should be based not on the desires of the mentor but on the skills and interests of the mentee. A commensurate effort in the job search process is also needed.
• Supervisors should allow a certain fraction of working time for their mentees to pursue opportunities and preparation activities for a possible industry career.
• HEP experiments, laboratories, or university departments should provide training for supervisors so that they can better understand and be more sensitive to the needs of their mentees in terms of their career goals and preparation.
• HEP experiments and/or laboratories should provide workshops on industry job preparation: translating HEP skills and examples into industry language, converting CVs to resumes suitable for different fields, finding successful job search phrases (for example, "Engineer" or "Data Scientist" as opposed to "Physicist"). This will be most successful when paired with the recommendations below for deepening connections with HEP alumni.
• HEP experiments and/or laboratories should develop innovative opportunities for networking with HEP alumni in various fields to strengthen industry job search success. Alumni are more than willing and happy to respond and engage. This will be most successful when paired with the recommendations below for deepening connections with HEP alumni.

The survey revealed that about 50% of respondents tried to find jobs in their field before moving on to opportunities in industry; they exited at the student or postdoctoral level and went on to STEM-related roles outside academia. These transitions were mostly facilitated through networking, but the difficult step is leaving at a relatively late stage of an academic career. Returning to HEP is challenging, but alumni are open to joint projects, and this can help strengthen partnerships between HEP and industry [15]:

• Supervisors and mentors should actively communicate with alumni and highlight their experiences for current students and postdocs, to normalize the reality of transitioning to an industry career.
• The US HEP community should develop tools and portals for connecting with alumni. Existing programs for networking with alumni, such as those at CERN, should be studied and adapted. This effort should be supported and strengthened by funding agencies through a small amount of continuous funding for technical and personnel staff who can organize and build a framework serving as a hub to facilitate networking with alumni. A DOE lab such as Fermilab, a hub for US particle physics, would be an ideal place to host this effort.

HEP experiments and laboratories should take creative steps to reverse "brain drain" from HEP by exploring mechanisms for collaboration with alumni on HEP projects:

• Alumni are a relatively low cost but very valuable asset, with an abundance of experience from transitioning to an industry career.
Their goodwill to contribute and strengthen ties with HEP can be tapped to facilitate industry job transitions and further the goals of both groups.
• Individual scientific collaboration can be extended to an alumnus's company itself; this can strengthen knowledge transfer between labs, universities, and companies in both directions, with work done by HEP research benefiting companies and vice versa.

HEP training offered at universities and laboratories could be extended to industry to provide opportunities to apply HEP skills in a different environment and culture, which may facilitate eventual career transitions. The survey showed support for developing such HEP-industry partnerships [15]:

• HEP laboratories should create targeted internships or training programs in the areas of Accelerator Technology, Computer and Information Science, Detector and Engineering Technology, Environmental Safety and Health, and Radiation Therapies. This would expand access to industry-focused training to students and postdocs who are not based at national laboratories.
• HEP laboratories should leverage existing public-private partnerships with industries in Accelerator Technologies, Computer and Information Science, Detector and Engineering Technologies, and Environmental Safety to create experiences for resident students and early career scientists to build skills and connections for a future industry career.
• Funding agencies should evaluate funding rules and regulations to allow HEP students and postdocs to pursue industry-focused training that can be integrated with their core research curriculum.
• Supervisors must adopt a mindset that industry partnerships and career transitions are valuable options for their students and postdocs, and should support their participation in training opportunities whenever possible.

Enhancing HEP research in predominantly undergraduate institutions and community colleges

HEP activities are carried out primarily by people at laboratories and research-focused non-PUIs. However, about 40% of undergraduate students in the United States are enrolled in CCs, where ∼80% are from demographics under-represented in STEM. It is therefore important for the HEP community to engage the vast community at PUIs and CCs, which may serve as pipelines to improve diversity and under-representation in HEP. For such engagements to be productive, barriers to participation of PUI and CC faculty in HEP activities must be addressed. These barriers include heavy teaching loads, lack of guidance and research funds, lack of research infrastructure and equipment, and lack of administrative support and understanding of the regulations and requirements for successful participation in HEP; see Ref. [16] and the references therein. To address these barriers, we suggest the following set of recommendations on institutional culture [3]. The HEP community should encourage a global shift in perception, acknowledging that:

• Undergraduate research experiences are key to engaging a broader section of the student population.
• PUI and CC faculty have much to offer their collaborations, particularly in experiment-wide training and educational activities.

HEP experiments should offer coordinated communication from leadership to PUI administrators, extolling the features of high energy physics research and highlighting PUI participation. The HEP community should offer special sessions for PUI and CC faculty at national meetings to develop a deeper sense of community.
We also have suggestions for research funding [3]:

• Funding agencies should strengthen participation by PUIs in HEP by allocating funds for grants from these institutions, and HEP experiments or laboratories should fund grant-writing workshops.
• Funding agencies should allow course buyouts in proposals by PUI/CC faculty in order to boost productivity and establish continuity in PUI research programs.
• Funding agencies, HEP experiments, and laboratories should create or support paid summer programs for PUI faculty to work at national labs or non-PUIs, as well as research opportunities for students not enrolled at major HEP institutions.
• Supervisors and HEP experiments should provide training to interested students and postdocs on US-specific research funding procedures.

Finally, we make the following suggestions for participation in HEP activities [3]:

• Non-PUI senior-level researchers should investigate how their groups could offer opportunities for short-term and long-term collaboration on their experiment to faculty and/or students at local PUIs.
• HEP experiments must reevaluate large fixed "entry fees" per institution, if they exist, and consider implementing "light" membership forms that are low cost but not time limited.
• US HEP experiment leaders should advocate with international experiment leadership for pathways to sustainable membership for PUIs, which are most common in the US. Postdocs should be made aware of options for entering these pathways so they are not discouraged from applying to PUI faculty positions.
• HEP experiments must continue to improve options for remote participation in experiment meetings and service tasks, especially operational shift work.

Connections with other Frontiers and Topical Groups

Improvement in career pipeline and development requires improving physics education, as discussed in Section 11.5, to prepare the skilled workforce needed for HEP and for career migration to industry, and improving diversity, under-representation, and inclusion in HEP, as discussed in Section 11.4. HEP experimental physicists have developed expertise in and transitioned into accelerator physics; this is essential to HEP operations and to applications in industry, such as the medical, materials, pharmaceutical, chemical, and biological areas. Small-scale experiments in neutrino, dark matter, nuclear, and rare processes are sources of training for HEP physicists and of transitions to industry. Technology transfer and applications to industry, as discussed in Section 11.2, in addition to training in instrumentation and detector technologies, can enhance fruitful career transitions [3].

CEF03: Diversity, Equity and Inclusion

This topical group focused on issues and projects related to (1) Diversity, (2) Inclusion, (3) Equity and (4) Accessibility; all are essential not only to professional success in our field, but to developing a better society at large. The group gathered information concerning diversity, equity, inclusion and accessibility in high energy physics, instances of success and failure, and actions that have been taken to promote these tenets. Thirty-two letters of interest were tagged to this group; other inputs came from surveys, town hall meetings and discussions. Ultimately, twelve contributed papers were developed, as detailed in Ref. [4].
The contributed papers may be categorized as follows:

• Accessibility in High Energy Physics: Lessons from the Snowmass Process [17];
• Lifestyle and personal wellness in particle physics research [18];
• Climate of the Field: Snowmass 2021 [19];
• Why should the United States care about high energy physics in Africa and Latin America [20];
• Experiences of Marginalized Communities in HEP [21,22,23,24,25];
• In Search of Excellence and Equity in Physics [26];
• Strategies in Education, Outreach, and Inclusion to Enhance the US Workforce in Accelerator Science and Engineering [27].

Accessibility in High Energy Physics: Lessons from the Snowmass Process

Various barriers may impede full participation in HEP activities. In Ref. [17], using the results of surveys, experiences and additional feedback from community members, and best-practice guidelines, the authors studied accessibility of engagement in HEP and offered recommendations for improvement, along with the resources and funding needed to implement them. Barriers to accessibility include lack of financial support, mental health issues, being deaf or hard of hearing, being blind or having a visual disability, caretaker responsibilities, and limited virtual access. These barriers affect the community as a whole by impairing the ability to collaborate with members who face accessibility challenges. Survey respondents said that the logistics of accessibility should not be the burden of the persons who need access. The availability of transcripts and auto-captioning was noted; however, these fail to correctly transcribe the full range of human expression. Furthermore, available resources often require advance planning and funding, and securing these is at the core of the recommendations for organizing accessible physics events. More details on the studies done and the recommendations can be found in Ref. [17].

Lifestyle and personal wellness in particle physics research activities

The demands of particle physics activities may result in an unhealthy imbalance between work and personal life. Unequal remuneration, living conditions, caretaker responsibilities, and competition for career progression, visibility, and grants are among the causes of poor work-life balance. These career pressures lead to working after hours and during weekends and holidays; teleworking may be impacted by living conditions and may blur the boundaries between work and personal time. Such an imbalance may result in mental health issues, burnout, poor job satisfaction, and poor performance [18]. Other triggers of work-life imbalance are expected activities that do not translate into research outputs or are not compensated in career evaluations; these include work for community engagement as noted in Section 11.6, DEI initiatives, mentorship, refereeing, reviews, hiring committees, etc. Furthermore, as noted in Refs. [22,23,24,25], an unwelcoming working environment that translates into discrimination, harassment, non-inclusion, code of conduct violations, etc., places an undue burden on victims and members of marginalized communities and leads to work-life imbalance. In Ref. [18], the authors propose recommendations and actions to improve work-life balance.

Climate of the Field: Snowmass 2021

The state of existing policies and their effectiveness in creating an inclusive, equitable and safe environment for HEP engagements, the "climate of the field", are discussed in Ref. [19]. In many scientific engagements, code of conduct guidelines are in place to define respectful interactions.
Mechanisms to address violations are also defined. Yet, the implementation of these guidelines, and how violations are reported and addressed, are affected by the climate of the field. An example is how a violation is handled when it occurs in a collaboration and the concerned parties (perpetrator and victim) have different institutional affiliations. Often, group dynamics and power dynamics lead to an inability to adhere to code of conduct guidelines and address violations; this creates an unwelcoming environment and alienates victims and marginalized folks. As noted in Ref. [18], it also impacts work-life balance. The contributed paper of Ref. [19] provides recommendations for several top-down approaches that should be implemented by the community, as well as recommendations for funding agencies to support these approaches.

Why should the United States care about high energy physics in Africa and Latin America?

Contributions of developing countries to high energy physics activities are hampered by limited resources and national priorities. Title VI of the 1965 Higher Education Act [28], designed "to support US national interests and maintain global competitive edge in the international arena", is a compelling reason for the United States to support HEP in developing countries. Mechanisms and recommendations to improve HEP engagements with Africa and Latin America are articulated in Ref. [20], where it is argued that such sustained engagements will help international development, improve diversity, and increase the participation of developing countries in HEP.

Experiences of Marginalized Communities in HEP

Power dynamics, informal socialization, policing, and gatekeeping in HEP create an environment and culture that alienate under-represented physicists and negatively affect their participation. Often, privileged folks lack awareness and attention, or focus on perception rather than reality; they hang on to claims of objectivity in physics, which only serve to maintain the culture of under-representation and non-inclusion, to deny the negative experiences of marginalized physicists, and to expect them to shrug off these bad experiences, despite the harm caused, in order to be taken seriously as physicists. The key to improving the experiences of BIPOC physicists consists of addressing the complexity and impact of power dynamics, policing, and gatekeeping [23,24], and implementing actions individuals and organizations can take to lower barriers for early career BIPOC physicists [25]. Despite all the efforts and investment to improve DEI in HEP, these issues remain, as demonstrated in Refs. [22,23,24,25,21]. We offer concrete suggestions towards improving DEI; these include effective approaches to reach members of marginalized communities through engagement, as discussed further in Section 11.6 and Ref. [21].

In Search of Excellence and Equity in Physics

The claims of objectivity in physics lead to meritocracy, i.e. the idea that "scientific work is judged on its merits and that opportunities in physics are equitably available to all aspirants". However, as demonstrated in Ref. [26], there is far more under-representation than would be expected under meritocracy. This further challenges the claims of objectivity in physics, along similar lines as Refs. [22,23,24,25,21]. To address this, changes in community practices are needed; we should challenge or verify organizational claims of equitable access.
Focused efforts, continuous measurement, and frequent corrections are required to achieve fair procedures, eliminate barriers, and improve under-representation [26]. The physics community should seek best practices on how to combine equity and excellence; the private sector has made some progress in matching best practices to company values, thereby mitigating the damage resulting from public exposure of misconduct. The need to enforce codes of conduct and to address violations, mentioned in Ref. [19], is also echoed in Ref. [26]. The American Physical Society has made efforts towards an equitable, diverse and inclusive field; however, community participation is required to improve excellence and equity. Meritocracy, instead of cronyism, is important for identifying leaders. In Ref. [26], the authors go further and suggest recommendations towards an ethical hiring process for excellent leaders who will uphold the values of equity.

Strategies in Education, Outreach, and Inclusion to Enhance the US Workforce in Accelerator Science and Engineering

Accelerators, large or small, play important roles in fundamental research and applications; they are essential to discovery science and high technology, and thus can help to train the strong technical workforce needed for particle physics research. In Ref. [27], the educational and outreach opportunities available in accelerator science and engineering are reviewed, with the objectives of attracting talent and developing capacity for future R&D. In this process, the need to improve diversity, equity and inclusion is noted: the participation of women in the US Particle Accelerator School (USPAS) has increased; however, under-representation of women and historically marginalized groups still persists. Recommendations are proposed to improve diversity, equity and inclusion in accelerator science and engineering [27].

Suggestions to Improve Diversity, Equity and Inclusion in HEP

From the aforementioned work in Section 11.4, we have prepared suggestions and resources that are tailored to particle physics, cosmology, and astrophysics, to further promote diversity and encourage equity, inclusion and accessibility at all levels of scientific discourse, engagement and management.

Suggestions for Funding Agencies

HEP communities should improve their use of robust strategic planning procedures, including a full re-envisioning of science workplace norms and culture:

• Prioritize community-related issues at the funding level, e.g. inclusion of community-related topics into safety parts of collaboration "Operational Readiness Reviews," "Conceptual Design Reviews," or similar documentation submitted to funding agencies. Funding agencies should provide clear and enforceable requirements for the advancement of DEI issues in grants, programs, and evaluations [19].
• Funding agencies should provide formal recommendations for institutions, research groups and collaborations for handling violations of their codes of conduct. This should include advice on handling community threats, removal of collaboration affiliates, leadership rights and responsibilities, and protections against legal liability for the leadership responsible for that enforcement. This should also include advice on reporting to the funding agency itself; if there is no mechanism for reporting misconduct to a funding agency, one should be developed [19].
• Funding and structural aid should be made available to develop "Collaboration Services" offices at host laboratories.
Such offices should provide HEP collaborations and other physics communities of practice with the following: a) advice on legal and policy topics, b) training in project management and ombudsperson training, c) logistical tools including facilitation of victim-centered investigation and mediation, d) resources and funding for local meeting accommodations, and other topics as described in Section 11.4.8 [19,17,25,23,26].
• Funding agencies should use their leverage to promote community-focused policies at funded institutions. Funding agencies should require institutions that receive funding to implement policies on vacation time, parental & family leave (for all genders), and health leave at all levels. Funding agencies should require institutions to prohibit confidentiality in settlements for egregious behavior (e.g. harassment); this promotes accountability and prevents known perpetrators from continuing to harm their communities [18,19].
• Funding agencies should establish a dedicated Office of Diversity, Equity, and Inclusion to work with Program Officers to strategize and prioritize funding decisions and develop equitable practices for the review processes [21].

HEP communities must implement new modes of community organizing and decision-making that promote agency and leadership from all stakeholders within the scientific community:

• Funding agencies should facilitate Climate Community Studies. Such studies should not be the responsibility of individual communities, and should be informed by expertise in social and organizational dynamics [19].
• Grant calls and assessments should include clear definitions of the tasks expected of PIs, including DEI-related tasks, and provide grant funding for each. Alternatively, agencies could provide specific grants and awards for EDI and mentorship work. Agencies should ensure that they pay those on their grant review panels for their time [18].
• Funding agencies should collect, analyze, and publish demographic information on grant proposals and funded grants. Demographic information on PIs and on funded and unfunded researchers should be collected and used to track the effectiveness of these measures; it is necessary to inform any additional policy changes needed to advance DEI policies and structures [18,25].
• Pay for fellowship- or grant-funded student and postdoctoral researcher positions must increase. Pay should include cost-of-living adjustments, health / wellness / leave benefits, and relocation expenses [18].
• The US HEP community should maintain current engagements and increase investments in Africa and Latin America to improve the reach of HEP in these regions. Funding agencies and international collaborations should acknowledge the disparity in economic capabilities of countries in Africa and Latin America compared to what is available in the United States. Funding agencies should support the development of HEP in these countries, and should support and lead initiatives for more equitable contributions (e.g. membership and operations fees for participation in large collaborations, conference fee waivers, and travel support to US-based meetings) [20].

HEP communities should develop partnerships with scholars, professionals, and other experts in several disciplines, including but not limited to anti-racism, critical race theory, and social science:

• Funding should be made available to both engage with and compensate experts in DEI, anti-racism, critical race theory, and social science.
This can take the form of independent grants, but more effective would be the inclusion of climate-related topics into safety components of collaboration "Operational Readiness Reviews," "Conceptual Design Reviews," or similar documentation submitted to funding agencies [19].
• Community studies should be run by, and receive advice from, experts in sociology and organizational psychology. The tools used to evaluate the climate of HEP need to be adequate, effective, and informative. These studies and the accompanying expertise should be funded at the federal and institutional levels. They should include evaluation of leadership selection, development of junior scientists and their trajectories, and the existence of detrimental power dynamics that specifically affect underrepresented groups. Undesirable systems should be addressed with direct intervention [19,23,25].
• Grant calls and assessments should include the advice of professionals in DEI and education. Such experts should review the entire process, including portfolios in their entirety, with specific attention to mentorship and DEI plans. Experts should be paid for their time [18].

Suggestions for HEP Communities

HEP communities should develop or improve robust strategic planning procedures, including a full re-envisioning of science workplace norms and culture:

• HEP communities should support and take advantage of existing support structures and informational networks. Tools exist to support efforts to improve diversity and inclusion, as well as to address injustices in our communities. These include the American Association for the Advancement of Science (AAAS) Diversity and the Law program [29], which hosts resources to enable promotion of legal and policy goals related to DEI. Knowledge like that collected by the American Institute of Physics' National Task Force to Elevate African American Representation in Undergraduate Physics & Astronomy (TEAM-UP) Project [30,31] and the AAAS's STEMM Equity Achievement (SEA) Change [32] should also be promoted [23].
• Institutions and HEP communities must develop reporting mechanisms and sanctions for egregious behavior. These institutions and communities should transparently describe those mechanisms in full for the benefit of all affiliates, and must be prepared to exercise them. Future HEP community codes of conduct should align with new recommendations from funding agencies regarding enforcement and disciplinary measures, and current codes of conduct should be reviewed in light of such recommendations [19].
• The community should prioritize the implementation of best-practices networks across institutions and communities of physics practice. This may be facilitated through Collaboration Services offices, but may also include the facilitation of networks between DEI groups at similar collaborations [19].
• All community affiliates should reject harmful rhetoric and behavior related to work-life balance. This includes "ideas around 'lone geniuses', the need for unhealthy work schedules, and the idea that sacrifice of personal wellness demonstrates your commitment to science" [18]. Senior scientists are responsible for ensuring that they manage their own time and the time of those in their group properly to respect work-life balance (including reducing meetings outside of working hours, or rotating meetings to accommodate varying time zones) [18].
• Departments and institutions should have clear definitions of job responsibilities and ensure that they are funding all functions of the job.
This includes any DEI work. Assessments should weight work in these areas equally, and individuals should be awarded and/or recognized when they excel. Evaluation for employment should be based on carefully developed, public rubrics that include DEI, outreach, and service. Such rubrics should be created with considerable care and be research-driven (e.g. checking whether any of the criteria are biased in a way that would limit access or promotion of people who identify with an underrepresented group) [18,19,25].
• Departments and institutions should reject the use of standardized exams in favor of holistic rubrics for admission. Evaluation for admission should reject the use of standardized exams and instead be based on carefully developed, public rubrics that are tailored to the department. Such rubrics should be created with considerable care and be research-driven (e.g. checking whether any of the criteria are biased in a way that would limit access or promotion of people who identify with an underrepresented group) [18,19,25].
• Graduate students and postdoctoral researchers should be paid commensurately with their skill levels. This includes benefits like relocation services, health coverage (including families), retirement savings, subsidized family housing, and exemption from taxation on fellowship money they do not receive as pay. These benefits must apply while students and their families are abroad on behalf of HEP activities [18].
• Institutions should have accessible, clear, robust, and flexible policies for parental / family leave (for all genders) and vacation time. These should be guaranteed at all levels. Junior scientists should be made aware of, and encouraged to take advantage of, institutional policies and resources on diversity, health, leave, vacation, and wellness [18].

HEP communities must implement new modes of community organizing and decision-making that promote agency and leadership from all stakeholders within the scientific community:

• Reviews of community climate should include an evaluation of how leadership is selected within HEP collaborations, as well as the valuation of sub-community contributions. This should include an expert-advised review of the assignment of high-impact analysis and thesis topics, convenership of working groups, and public-facing roles representing the collaboration, such as spokespersons or analysis announcement seminars. Power dynamics within communities should also be evaluated, considering the impact that senior scientists can have, especially on junior scientists of color. It should also include reviews of the participation of "non-scientists" in community engagement and authorship, community perceptions of operations and service work, the development of onboarding and early-career networks, and the implementation of policies toward equity in information sharing and software [19,23,25,26].
• Collaborations should train members in the standards of the field and offer mentorship programs to ensure that postdocs and students (especially from underrepresented groups) have additional support and resources. Mentorship programs should be research-driven and should make access to information as ubiquitous as possible. Mentors should help novices navigate the complicated landscape of the community, and care should be taken to address the "untold rules", like non-academic career trajectories. Information sharing, especially about collaboration policies, procedures, and code bases, should be evaluated through an equity lens [18,25,19].
• Conferences should offer financial assistance to individuals with hardships. Conferences should offer limited travel grants through an application procedure overseen by an ethics group associated with the conference. To promote the engagement of under-resourced and early-career scientists, conferences should also strongly consider developing an application for sliding-scale fees or waivers of conference registration fees. Conferences should accommodate care-giving responsibilities by providing childcare onsite, or by supporting the travel of an accompanying person. In both situations, extra funding should be budgeted by the conference to fully or partially cover those costs [17].
• The organizers of all HEP activities should ensure that people with accessibility barriers are truly accommodated, with guaranteed, low-friction, dignified access to all aspects of the experience. All conferences, collaborations, universities, and labs should be made accessible to people with disabilities. For example, conferences (including virtual meetings) should be announced with enough time to arrange accommodations for any individual needs, and organizers should plan to secure funding and book services far enough ahead of time. Accommodations should include both steno-captioning and ASL interpretation, which should be fully funded as part of the conference budget. Conferences should also be accessible to the blind / low-vision community, which may include screen-reader-accessible tools and "color-blind-friendly" plots. Other accommodations include: seating and accessible access to amenities like check-in and meals; locating the conference in ADA-compliant buildings with no obstructions to seating, entrances / exits, or accessible pathways; quiet spaces; and designated contacts for troubleshooting accessibility. An extensive list of recommendations can be found in [17].
• US universities and research labs should encourage and support the participation of their personnel, faculty and research staff in the HEP education and research efforts of African and Latin American countries. US institutes need to partner with Latin America and Africa in establishing bridge programs and in supporting community members from Africa and Latin America who come to United States laboratories and universities for research experience programs. Collaborations and conferences should seriously consider decreasing or waiving membership and operations fees for participation, and should provide financial assistance for travel to the United States [20].

HEP communities must engage in partnership with scholars, professionals, and other experts in several disciplines, including but not limited to anti-racism, critical race theory, and social science:

• Experts should be adequately integrated into HEP communities. This is motivated by the need to apply their expertise effectively, and should include collaboration communities. This may take the form of an official collaboration role, such as a non-voting member of a collaboration council [19].
• Identification of leaders within HEP communities should be research-driven. HEP organizations and institutions require leaders who will promote policies and practices that support underrepresented and historically marginalized groups instead of favoring "politics and convenience". Best practices have been developed by industrial & organizational psychologists and are under study at NSF (e.g. [33]). Details on necessary search practices can be found in [26].
Suggestions for Future Snowmass Activities

• Community Engagement topics should be better integrated into other frontiers. This work is the responsibility of all HEP community members, and should not be relegated entirely to an independent, volunteer-driven frontier.
• Funding for Snowmass activities should include critical infrastructure for accessibility. This includes live captioning for all public events, and infrastructure for hybrid meetings to support those who cannot travel to attend workshops.

CEF04: Physics Education

CEF04, the Physics Education (PE) topical group, examined the role that physics education at all levels plays in advancing the field of HEP. Two goals were identified as critical for the long-term health of the field: 1) attracting students across all demographics to the study of physics, and 2) providing them with the education, training, and skills they will need to pursue any career in STEM or related fields. The CEF04 Topical Group Report puts forth recommendations intended to achieve these goals by strengthening ties between researchers and teachers, the academy and the private sector, and domestic and international students [5]. The PE group framed its studies according to the pyramidal scheme displayed in Figure 11-2, which rises from the relatively large number of K-12 science students at the base up to the small apex of faculty-level physicists. The work was organized into four groups, each of which produced a contributed paper presenting a detailed examination of physics education challenges and opportunities at each level. The working group contributed papers are:

• Opportunities for Particle Physics Engagement in K-12 Schools and Undergraduate Education [34];
• Transforming US Particle Physics Education: A Snowmass 2021 Study [35];
• Broadening the Scope of Education, Career and Open Science in HEP [36];
• The Necessity of International Particle Physics Opportunities for American Education [37].

Particle Physics Engagement Opportunities in K-12 Education

The first working group found that partnerships between academia and K-12 teachers and students are effective in nurturing early student interest in math and science. It is important to provide broad exposure to the different STEM fields at all levels in order to properly develop students' scientific literacy. The key recommendation for forming these partnerships is:

• At the local level, form collaborative communities ("fora") of academics of all backgrounds (physicists, engineers, technicians and K-12 teachers). (CEF04 Recommendation 1)

To create these fora, a minimal amount of support for coordination and logistics will need to be available. To function well and avoid isolation, the fora then need to be supported by a nationally, or even internationally, organized online repository for sharing resources. To be sustainable, it will be important to have a steady source of continued support, which could come from colleges, universities or institutes, but might also come from or be supplemented by outreach support in the form of research grants. Another important ingredient for sustainability is that the efforts of the fora participants are appropriately and regularly recognized.
(CEF04 Recommendations 2,3)

Educational Opportunities at the Undergraduate, Graduate and Postdoctoral Levels

Education and training specific to particle physics generally begins with college undergraduates, continues during the graduate student and postdoctoral researcher stages, and usually takes place at universities and laboratories. An online survey of the US HEP community was conducted to gain insight into students' experiences during this phase. On one hand, respondents indicated that this is the point at which students are expected to gain the skills necessary to launch a career in HEP or related fields. However, they also expressed that the training they received through formal education did not match the skills needed to succeed in physics. In contrast, many of the most important skills they routinely use in their research were learned "on the job." To better prepare students for physics careers, updates to formal university training are recommended [34].

• University degree programs should normalize training for particle physics and a broad range of STEM careers through inclusion of appropriate formal courses and career mentoring. (CEF04 Recommendations 4,5)

Graduate programs in particle physics need to provide formal courses with a strong grounding in particle physics and mathematics, but also in computation, statistics and instrumentation. This course instruction will benefit student careers in physics, industry or education, because students will not be forced to resort prematurely to self-teaching or peer learning alone. Universities should provide undergraduate students with a more complete picture of what particle physics researchers do. By presenting a realistic view of common career paths after the baccalaureate and after graduate school, students will be better prepared to pursue options including theoretical and experimental positions as well as non-academic careers. The survey data were limited due to minimal undergraduate participation; this low response was due in part to a lack of contact information for the undergraduate demographic. Support from professional societies and physics departments could provide opportunities to strengthen connections and networking for undergraduate students with HEP community activities, and to develop a future survey focused on undergraduate participation. (CEF04 Recommendation 6)

The physics community, and HEP in particular, tends to be fairly "PhD driven." It is worth investigating whether a Master's degree program in particle physics could fill an important career path need in our field. Master's programs often have the opportunity for more cross-disciplinary work in adjacent fields such as engineering or computer science. Professional Master's degrees could attract students working in private companies who are pursuing career advancement through continuing education, and build stronger bi-directional ties between HEP and the industry sector. PhD programs in physics often present strong structural barriers to many students from traditionally underrepresented groups, while a Master's degree could offer an intermediate and more achievable goal, and perhaps lead to greater diversity in HEP.

• Universities, especially non-research universities, should consider setting up Master's degree programs in particle physics and related areas, such as hardware and software technology for Big Science experiments.
(CEF04 Recommendation 7)

Collaborative Opportunities Across Academia

An important challenge for HEP is the need to broaden and diversify the pool of talent and expertise drawn to the field. A crucial part of the solution to this problem is to build more collaborations between groups at the R1 research institutions, which traditionally represent the vast majority of the HEP community, and R2 institutions, Predominantly Undergraduate Institutions (PUI), and Community Colleges (CC).

• Expand the benefits of faculty collaboration and research opportunities across the broad spectrum of academia and give equivalent opportunities for all in technical and scientific leadership on projects, with appropriate recognition for contributions. (CEF04 Recommendation 8)

A study of new models of collaboration or cooperation that would allow R2/PUI/CC faculty and their students to participate effectively in experiment collaborations could help address the challenges of teaching loads, student training and funding availability that directly impact our non-R1 colleagues' ability to fully contribute to projects. Making data and analysis platforms broadly accessible will benefit student access and participation. The HEP community should embrace the value of Open Science by defining the scope of making our data and resources publicly available, and the hardware, software and person-power costs associated with such implementation. (CEF04 Recommendations 9,10)

Fields such as instrumentation, computation, and machine learning have become critical components of the HEP enterprise. However, career paths at the intersection of particle physics and these specialized fields are not very clear or easy to navigate, nor are they universally recognized as "physics" work. Improving this situation would simultaneously address pipeline and retention issues within HEP and equip colleagues for careers outside the field.

• Qualification for HEP faculty jobs should not be based solely on physics analysis but rather expanded to include computing, software and/or hardware contributions. (CEF04 Recommendation 11)

International Opportunities for Particle Physics Education

High energy physics is conducted through global collaborations. These diverse partnerships enrich the intellectual environment of our field. As such, training in international collaboration throughout the educational process will facilitate more productive integration of talent and resources in future projects:

• U.S.-based pre-university particle physics collaborations should expand collaboration with international partners. (CEF04 Recommendation 12)

Collaborations such as QuarkNet and other outreach programs, such as the International Particle Physics Outreach Group (IPPOG), the CERN Beamline for Schools (BL4S) and teacher summer school programs in Europe, should partner with counterparts in the developing world, such as the African School of Fundamental Physics and Applications. Participation in the Global Cosmics portal should be enhanced by developing low-cost cosmic ray detectors for educational use.

• Student exchange programs should be fostered and supported. (CEF04 Recommendation 13)

These programs include the NSF Research Experience for Undergraduates (REU), which funds participation of U.S. students in the CERN Summer Student program, and the DOE-INFN summer student exchange program between the U.S. and Italy.
Where possible, these should be extended, in particular with student exchange programs and summer schools in developing countries, such as the African School of Fundamental Physics and Applications.

CEF05: Public Education and Outreach

The CEF05 working group focused on enabling members of the physics community to effectively communicate about scientific research through public engagement. Thirteen LOIs were tagged to the Public Education and Outreach topical group. Some of them were consolidated and developed into contributed papers in Section 11.4 about diversity, equity and inclusion [21,27] and Section 11.5 about physics education [34,36,37]. Other LOIs were condensed into two contributed paper topics, namely: 1. "The need for structural changes to create impactful public engagement in US particle physics" [38]; 2. "Particle Physics Outreach at Non-traditional Venues" [39]. CEF05 collected input through a variety of methods. In addition to reviewing the LOIs, the group invited experts to its regular meetings for discussion and conducted a survey of the physics community. The majority of the survey's 358 respondents said they had participated in outreach activities. They mentioned that they were discouraged from participating in public engagement because they did not have enough time and because it generally did not benefit their careers. They were motivated to participate, though, because they wanted to reach underserved groups, to show openness or explain the scientific method to the public, to share their enthusiasm, and to inspire the next generation of physicists. In their engagement, they used storytelling and shared their own reasons for pursuing physics. Respondents said the best way to get involved in public engagement is to start small and gain experience by finding and plugging into established engagement programs. Details on the group's activities are compiled in the topical group report [6]. The group identified several structural and cultural barriers to participation in public engagement. To remove those barriers and encourage physicists to engage the public, the group made specific recommendations aimed at research groups, experimental collaborations, conferences, universities and colleges, national laboratories, OSTP, Congress, DOE, NSF, private foundations, AAAS, APS, and DPF [38]. In general, the group recommends:

• Providing or financially supporting training in effective public engagement.

The group also recommends that individual scientists encourage others, including peers, mentees and students, by participating in public engagement themselves and discussing its importance. For the next Snowmass process, CEF05 recommends a shift in focus from "public outreach" to "public engagement": two-way interactions that ensure mutual learning, going beyond the acquisition or transmission of knowledge to include the understanding of perspectives, worldviews and socioeconomic backgrounds. Some innovative ideas on public engagement are discussed in Ref. [39], and details are provided in the topical group report [6]. The group recommends updating the topical group name from "Public Education and Outreach" to simply "Public Engagement" to reflect this shift in priorities, and also to clear up confusion between the goals of CEF05 and CEF04, the topical group focused on education. Public engagement can help recruit and retain scientists from diverse backgrounds, thus improving diversity as discussed in Section 11.4.
Therefore, learning how to reach members of marginalized communities via public engagement is essential. Public engagement conducted without proper preparation can be counterproductive and harmful [40]. The objective must be to address the needs of the intended community, which is achievable through building relationships and inclusion [21]. A detailed checklist of questions for institutions to address when preparing to engage marginalized communities is mentioned in Section 11.4 and recalled here.

Consider the audience:
• Who specifically are we hoping to reach with this event? Why are we hoping to reach these communities?
• How can we plan this event to make it maximally beneficial to these communities? What elements of this plan can we continue to use in other events?
• What are the best ways to communicate about this event with members of these communities? Can we continue going to those same channels to communicate about other events?
• Have we created a process by which we take time to evaluate the success of the event after it concludes?
• What metrics (both qualitative and quantitative) will we use? Which of these metrics will we continue to use in evaluating other events?

Identify and remove barriers:
• Are there logistical barriers (e.g., time of day, day of the week, public transportation access, affordability, safety concerns, financial barriers) to our events that make them inaccessible to these communities? What will we do to address these barriers?
• Have we allowed adequate lead time and budget to make this event accessible to all members of these communities, including those with disabilities? Have we identified partnership or staffing needs required to make the event accessible?

Value partnerships:
• What members of these communities will make good partners in this event? Have we made sure they're involved in planning the event? Have we secured an adequate budget to support fair compensation for our partners as co-creators of the event, prior to requesting their labor?
• Do any members of these communities work for our institution? If they do, do they work in roles with decision-making power (e.g., managerial positions), or do they work primarily in service roles? If members of these communities do not work at our institution, or work only in lower-level positions, is our institution making any effort to change this?
• Are members of these communities who work for our institution participating in this event? If so, are they receiving the support they need to take on this effort and fulfill their other job duties? Do they have decision-making power over the planning and execution of the event? Are they being fairly compensated and recognized for their efforts?

Build lasting relationships:
• Is this event a part of a larger effort to build relationships with members of these communities? If so, what is the long-term plan? Who will be responsible for enacting it?
• Are there ways in which our institution is causing harm to members of these communities? If so, how is our organization working to change this?
• How are representatives of our institution involved in these communities outside of this event? Are there ways our institution can work with members of these communities on their priorities, even ones that do not directly benefit our institution?

CEF05 recommends finding ways to implement the structural changes needed for improved public engagement.
The group further recommends that the physics community build lasting relationships with marginalized communities through public engagement; this will contribute to improving diversity in HEP as discussed in Section 11.4. Finally, CEF05 recommends that the American Physical Society's Division of Particles and Fields monitor progress toward these goals leading up to the next Snowmass process.

CEF06: Public Policy and Government Engagement

The topical group CEF06: Public Policy and Government Engagement (PPGE) was tasked with conducting a review of all current interactions between the HEP community and government offices and individuals. This enterprise includes identification of consensus positions on policies with direct impact on our field, development of unified messages from HEP to those determining and implementing policy, and creation and deployment of tools and resources to communicate those messages effectively in a manner resulting in positive policy outcomes. Those working in CEF06 identified areas of HEP government engagement that are missing or in need of improvement, and developed recommendations to address these opportunities. Three contributed papers produced by CEF06 document this work:

• Congressional Advocacy for HEP Funding [41];
• Congressional Advocacy for Areas Beyond HEP Funding [42];
• Non-congressional Government Engagement [43].

Details of the analysis, synthesis, and recommendations based on those papers are presented in Ref. [7].

HEP Funding and Advocacy Organization

Over the past few decades, HEP communication with government has largely focused on advocacy for strong federal budget support for HEP, which comes almost exclusively through the Department of Energy (DOE) Office of Science (OS) and the National Science Foundation (NSF). Funding of federal government programs is an extremely complex cyclical process; however, there are three basic steps that provide target points for our advocacy. The first is the creation of the annual President's Budget Request (PBR), which is formulated by the Office of Management and Budget (OMB) with advice from the Office of Science and Technology Policy (OSTP), which works closely with DOE and NSF. The second step is for Congress to pass a budget, which sets topline numbers for funding each major area of government spending. Although the Budget Committees create the budget, it is informed and guided by individual authorization bills, which specify what Congress may spend money on; these bills come out of authorization committees. Getting specific language supporting HEP programs into authorization bills greatly increases the likelihood of positive funding outcomes. The third major step is appropriations. Appropriations Committees make the decisions on the actual yearly allocation of funds to all government agencies and programs within the constraints of the Congressional budget topline numbers.

The DOE OS and NSF receive HEP program planning advice from a federal advisory committee, the High Energy Physics Advisory Panel (HEPAP). HEPAP has a subpanel known as the Particle Physics Project Prioritization Panel (P5), which produces reports detailing long-range strategic plans for US HEP that are largely based on studies resulting from the Snowmass community planning process. The most recent P5 report, from 2014, has served as the core of our field's message and advocacy to government for nearly a decade.
The effectiveness of our messaging and advocacy over this time is indicated by the fact that DOE funding of HEP has grown by 36%, or roughly $300M, since 2015. This advocacy is carried out jointly by the Fermilab Users Executive Committee (UEC), the US-LHC Users Association (USLUA), and the SLAC Users Organization (SLUO), with help from the American Physical Society Division of Particles and Fields (APS DPF). However, this group has neither the mandate nor the resources to address any aspects of engagement with the government beyond federal funding advocacy (nor even enough to fully sustain that activity). Some new structure must be put in place to broaden HEP's engagement with government to work effectively for policies that strengthen our field.

• Representatives of APS DPF, HEPAP, and the user groups, as appropriate, should have dedicated discussions to determine what actions can be taken to advance the recommendations outlined in this report [7]. (CEF06 Recommendation 1)

Message Unity Around P5

Advocacy for the 2014 P5 plan has been very successful, leading to a current DOE budget for HEP in excess of $1B. Critical elements of that success were that P5 represented a single comprehensive plan for the entire US HEP program with community-wide buy-in, and that this led to one unified message that our field delivered to Congress and the Executive Branch. Prior to the 2014 P5, our messaging was fragmented, with people inside and outside the HEP community bringing their own takes on the HEP program to policymakers that were inconsistent with our community-organized advocacy. HEPAP and the 2023 P5 have to lead the effort to build a consensus message around the new P5 plan. This has to include educating both the community and the government about the new plan, and about the updates and changes to the plan that will inevitably occur.

• HEPAP should build community unity around the 2023 P5 plan and develop a clear messaging strategy spanning the next 10 years. (CEF06 Recommendation 2)

Building consensus will require short-term steps related to the drafting and roll-out of the P5 plan. There must be ample opportunity (outside of HEPAP meetings) for internal presentation and community feedback on the draft plan before its release. Once the plan has been finalized, P5 will need to launch an "education campaign" to communicate details about the plan to HEP community members and other stakeholders such as the funding agencies and policy makers.

Long-term actions to maintain message unity consist largely of ensuring good communication. Each year since 2014, the P5 chair has produced a one-page status report that has proved invaluable for Congressional advocacy, among other things. A formal commitment should be made to continue producing those reports. More detailed regular reports and feedback opportunities for the community on P5 plan implementation progress, modifications, and impacts will be crucial for keeping the field united behind the plan. (CEF06 Recommendations 2.3-2.6)

Because P5, by definition, is focused exclusively on projects, it has never before directly addressed issues of community engagement, and it is not structurally set up to do so. However, community engagement issues have become extremely important to the healthy functioning of our field, including our projects. P5 will need to explicitly consider relevant community engagement concerns in its work in order to keep the field unified.
(CEF06 Recommendation 2.7)

Congressional Advocacy for HEP Funding

The community's advocacy for federal funding support of HEP largely consists of the annual trip to Washington, DC, during which a group of our colleagues meet with as many Congressional offices as possible, as well as with OMB, OSTP, DOE, and NSF. The 'DC Trip' is organized by UEC, USLUA, and SLUO each spring, timed to fall between the release of the PBR and the markup of appropriations bills. These users organizations are composed of elected representatives of our community, and the roster of trip attendees is intentionally selected to broadly represent the entire field. However, the users groups are not fully representative of the field, so the formation of a new, broader "HEP Congressional advocacy" group should be considered. The DC Trip has grown dramatically over the past two decades. Around 2004, roughly 25 attendees visited about 150 offices. With increased funding support, by 2019 almost 70 attendees visited all 541 Congress members' offices as well as 8 subcommittee staff and the Executive Branch offices.

• Representatives of UEC, SLUO, USLUA, and APS DPF should facilitate discussions to consider the formation of a more formal "HEP Congressional advocacy" group to assume responsibility for organizing the annual advocacy trip to Washington, D.C. (CEF06 Recommendation 3)

• The HEP Congressional advocacy group should continue to support, and should aim to grow, the annual HEP community-driven advocacy activities. (CEF06 Recommendation 4)

The DC Trip requires a huge organizational effort that has been made possible by the development of a number of tools and resources. Foremost among these is the Washington-HEP Integrated Planning System (WHIPS), a framework that automates most of the planning, execution, and documentation logistics. WHIPS compiles information on Congressional districts, offices, and committees, tracks all past and future meetings, and uses data on trip attendees' personal, work, and family connections to districts to assign attendees to specific Congressional offices. It is the key tool that has enabled HEP advocacy visits to achieve complete coverage of Congress. There is also a twiki repository of trip information and an HEP funding and grant database, both of which could be expanded to include more granular district-level information. These resources have all been developed and maintained by the volunteer effort of a handful of early career colleagues without permanent positions, some of whom are no longer in the field. A more permanent plan for the maintenance, development, and sustainability of these and new tools must be implemented to ensure their continued availability. (CEF06 Recommendations 4.1-4.2)

One question colleagues often ask about HEP advocacy is "How do we know it is effective?" What are the diagnostics and metrics that we use to measure the benefit of the advocacy efforts? WHIPS can track information such as Congress members' voting records on specific legislation and signatures on Dear Colleague letters, and even match those members' activities to the HEP trip attendees who visited those offices. However, long-term collection and analysis of the data will require resources beyond current trip planning and participation. (CEF06 Recommendation 4.3)

There are many professionally produced communication materials created for the DC Trip.
These documents convey our advocacy messages to government offices, and many also serve to share different aspects and benefits of HEP with other audiences. They have been developed by the user organizations in concert with the Fermilab Office of Communication, DOE, and the P5 chair, but there is no guarantee that those groups and individuals will be able to continue providing that support. Investments in maintaining those production partnerships should continue. (CEF06 Recommendation 5)

Training materials have also been created to prepare community members for their participation in the DC Trip, and this training has been crucial for enhancing the professionalism and effectiveness of our advocacy. These materials must be regularly updated and deployed. In addition, making them available to the wider HEP community will enable expansion of our advocacy and help ensure unified messaging. Further inreach efforts to inform the field about our advocacy, through more frequent talks and annual reports, would also help achieve these goals. (CEF06 Recommendations 4.4-4.5)

Some specific aspects of the DC Trip require specialized knowledge and experience that is currently held by a small number of long-term participants. These include organizing meetings and building relationships with OMB, OSTP, and Congressional subcommittee staff, and the development and use of WHIPS and other tools. Through documentation, this knowledge base needs to be expanded to more participants and archived for future leaders. (CEF06 Recommendation 4.6)

Because the users groups have organized the DC Trip, participation has skewed toward experimental Energy and Intensity Frontier colleagues. Although efforts are made to achieve broad representation of the community on the trip roster, more needs to be done to ensure representative participation from segments of our field such as Theory and Computation, from colleagues at all career stages, and from underrepresented groups. (CEF06 Recommendation 4.7)

Finally, opportunities for year-round advocacy should be pursued to engage specific offices (including Congressional local district offices) at other key points in the budget cycle. These opportunities must be weighed against the additional resource costs that would be required. (CEF06 Recommendation 4.8)

Non-Congressional Advocacy

As mentioned previously, the DC Trip includes meetings with staff from OSTP and OMB. These are the people who provide policy and budgetary guidance concerning science funding during the formulation of the PBR. These meetings are opportunities for HEP to convey the priorities of our field to the Administration, and also for us to learn about the Administration's science priorities. The materials for, and timing of, the DC Trip are chosen primarily for Congressional advocacy. These choices are not necessarily optimal for Executive Branch advocacy. For example, our meetings with OSTP and OMB take place immediately after the completed PBR is released. The potential impact of these meetings is very high, and could be maximized with materials, messages, and timing specifically targeted at these offices.

• The HEP Congressional advocacy group should work to improve HEP community engagement of the Executive Branch, especially OMB and OSTP. (CEF06 Recommendation 8)

No HEP-wide advocacy efforts exist that are directed to state or local governments. However, there are state and local engagement efforts carried out between individual facilities and the communities in which they are located.
These include the Fermilab Community Advisory Board, which provides community input and feedback to Fermilab regarding its programs and projects; similar arrangements exist at Berkeley Lab and SURF. In all cases, these engagements have been mutually beneficial to the facilities and their communities.

• The HEP Congressional advocacy group and the APS DPF executive committee should facilitate discussions to explore the potential advantages of systematic engagement of local and state governments. (CEF06 Recommendation 9)

Advocacy for Issues Beyond HEP Funding

All of the current HEP community-wide advocacy is directed toward the support and growth of federal funding for HEP. There are many other policy issues not directly tied to HEP funding that nevertheless impact our colleagues and programs of research. Most are, or can be, addressed with federal legislation. Among these are DEI concerns about limited access to national research facilities for non-R1 institutions, visa and immigration policies that present barriers to foreign scientists wishing to study at or visit US institutions, and balancing research security with open collaboration. While some of these issues are referenced in our DC Trip materials within the context of supporting particle physics, our advocacy infrastructure does not have the resources, procedures, or mandate to advocate for specific non-funding policy positions. Conversations throughout Snowmass on this type of advocacy yielded no consensus: some wished to leverage our funding advocacy infrastructure for these issues, while others were concerned that building consensus on policy positions would prove problematic and could negatively impact the field.

However, there are external groups with larger resources, constituencies, and infrastructure that we have access to for broader advocacy. Among these are the APS, the American Institute of Physics (AIP), and the American Association for the Advancement of Science (AAAS). All of these groups employ government relations staff and possess the resources and experience to mobilize advocacy for non-funding issues. They also run very active Congressional Fellowship programs, which offer opportunities for HEP community members to participate in much more direct and deeper government engagement on policy. These opportunities should be much more widely promoted to the HEP community.

• The HEP Congressional advocacy group and the APS DPF executive committee should identify an existing community group, or create a new one, to take ownership of strengthening connections between the HEP community and science and physics societies, including APS, AIP, and AAAS. (CEF06 Recommendation 6)

Engagement with Funding Agencies

Another area of government engagement by the HEP community that needs to be improved is direct communication with the funding agencies, DOE and NSF. Some communication channels exist, but there are issues with each that prevent them from adequately serving the community. Foremost among the challenges to open communication is the power dynamic between the funding agencies and individual scientists. Groups like HEPAP and the Committees of Visitors are explicitly directed to serve as communication channels between the community and the agencies, but their memberships are appointed and skewed toward senior colleagues, presenting somewhat of a barrier to younger colleagues feeling comfortable sharing feedback.
This is exacerbated by the fact that HEPAP meetings usually have Congressional and/or Executive government officials in attendance. Meetings between DOE/NSF program managers and individual PIs or groups, as well as grant reviews, also provide opportunities for communication, but suffer from a lack of participation from early career scientists and from concerns that negative feedback could have a negative impact on grant applications. None of these channels offers anonymous communication.

• DOE and NSF should improve existing channels, and create new ones as necessary, to enable HEP community feedback to the funding agencies. (CEF06 Recommendation 7)

There are actions that DOE and NSF could take to remove barriers created by the power dynamic between the agencies and researchers, particularly those in early career stages or from marginalized groups. Chief among these are creating anonymous feedback channels and partnering with community leaders such as the users groups, the DPF executive committee, and collaboration spokespeople to advertise and encourage the use of communication paths. A particular topic of concern that was frequently expressed is the need to open channels of communication regarding details of the granting process and how that process could be improved. (CEF06 Recommendations 7.1-7.3)

CEF07: Environmental and Societal Impacts

This topical group focused on ideas and projects related to how particle physics research impacts society and the environment. Examples of impacts on society include collaboration between particle physics research facilities and the indigenous communities connected with the land that hosts those facilities, or the ethical usage of software tools in particle physics research. Examples of impacts on the environment range from the local environment of a research facility (pollution, regional development, international visibility, etc.) all the way to the carbon footprints of particle physics research (experiments, facilities, institutions, etc.). Given the long time scale of some particle physics experiment proposals, consideration of the implications of climate change and of the various commitments on carbon emissions reductions by host countries is of paramount importance to ensure the success of particle physics research in the future. Ideas and suggestions on all of those issues were encouraged within this topical group.

Environmental Impacts of Particle Physics

Particle physics activities include the construction and operation of large-scale facilities, detectors, and computing farms, and travel for various types of physics engagement. Doing particle physics impacts the environment, and this must be considered in the global context of climate change, as discussed in Ref. [44]. Future progress in the field will require the construction of new facilities. The environmental impacts of facility construction to advance particle physics research need to be understood within the global effort to reduce global warming. The projected carbon impact of just the construction of the main tunnel of the Future Circular Collider (FCC), or any similar-scale facility, would be comparable to that of a redevelopment of a major city neighborhood; that level of emission will not go unnoticed. The field needs to invest in carbon reduction R&D and anticipate environmental impact reviews [44].
The green ILC is an effort to include carbon reduction in the design, and later in the construction and operation, of this machine; a working group has been organized to study efficient design of ILC components and a sustainable ILC City around the laboratory. Should this machine be constructed, its design will be adapted, in consultation with local authorities, to offset excess carbon emission [45, 8]. In Ref. [46], the necessity of environmental sustainability in the development of next-generation accelerators is articulated; energy-efficient components and conceptual designs are focus areas for energy efficiency and power consumption in large-scale accelerators.

Carbon emissions from greenhouse gases used for detectors and cooling are another area of environmental impact of particle physics activities, as mentioned in Ref. [44]. For future particle physics projects, investment in R&D is needed for alternative gases, with low global warming potential, for detector operation and facility cooling, without compromising physics performance. Particle physics research relies on large-scale computing; efforts to mitigate computing-related carbon emissions should be developed, e.g., by optimizing computing-intensive code and task scheduling [44].

Greenhouse gas emissions associated with laboratory or university activities are categorized into direct emissions (from the organization), indirect emissions (electricity, heating, etc.), and other indirect emissions (business travel, commutes, catering, etc.). Across many institutes, much remains to be done to reduce per-capita emissions below the 1 t CO2e per year needed to prevent excessive warming. Travel for physics activities is an essential part of doing particle physics, and aircraft emissions are increasing. It is important to understand which travel is important or necessary and to develop the infrastructure for effective remote engagement. The experience gained during the COVID-19 pandemic can serve as a guide to optimizing in-person versus remote engagement [45, 8].

The following guidelines are advanced to reduce the environmental impacts of particle physics activities:

• New experiments and facility construction projects should report on their planned emissions and energy usage as part of their environmental assessment, which will be part of their evaluation criteria. These reports should be inclusive of all aspects of activities, including construction, detector operations, computing, and researcher activities.

• US laboratories should be involved in a review across all international laboratories to ascertain whether emissions are reported clearly and in a standardized way. This will also allow other US particle physics research centers (including universities) to use those standards for calculating their emissions across all scopes.

• Using the reported information as a guide, all participants in particle physics (laboratories, experiments, universities, and individual researchers) should take steps to mitigate their impact on climate change by setting concrete reduction goals and defining pathways to reaching them by means of an open and transparent process involving all relevant members of the community. This may include spending a portion of research time on directly tackling challenges related to climate change in the context of particle physics.

• US laboratories should invest in the development and affordable deployment of next-generation digital meeting spaces in order to minimize the travel emissions of their users.
Moreover, the particle physics community should actively promote hybrid or virtual research meetings, and travel should be more fairly distributed between junior and senior members of the community. For in-person meetings, the meeting location should be chosen carefully so as to minimize the number of long-distance flights and avoid layovers.

• Long-term projects should consider the evolving social and economic context, such as the expectation of de-carbonized electricity production by 2040 and the possibility of carbon pricing that will have an impact on total project costs.

• All US particle physics researchers should actively engage in learning about the climate emergency and about the climate impact of particle physics research.

• The US particle physics community should promote and publicize its actions surrounding the climate emergency to the general public and other scientific communities.

• The US particle physics community and funding agencies should engage with the broader international community to collectively reduce emissions.

Impact on Local Communities

Physics engagement with local communities is important to build relationships and draw long-lasting community support for particle physics projects and activities; as noted in Section 11.6, successful and impactful engagements should be foundational rather than transactional. In Ref. [47], the local community engagement efforts of three laboratories are studied, namely Lawrence Berkeley National Laboratory (Berkeley Lab), Fermi National Accelerator Laboratory (Fermilab), and the Sanford Underground Research Facility (SURF). These laboratories have different local environments, from urban (Berkeley Lab), to suburban (Fermilab), to rural (SURF). In all three cases, foundational local engagements that promote diversity, communication, and lasting relationships are shown to be mutually beneficial to the community and the laboratory. We propose the following recommendations for foundational engagements between laboratories and their surrounding communities.

Laboratories should engage with their local communities in order to create awareness about their work and build lasting, positive relationships. Community engagement plays an essential role in local decision-making, building relationships, and important discussions about the implementation of key projects. Large particle physics projects funded by the US Government require an evaluation and mitigation of each project's potential impacts on the local communities. In addition to satisfying governmental requirements, working alongside their local communities can foster lasting change that broadens the positive societal impacts of particle physics research.

Laboratories should have consistent outreach and engagement efforts that provide regular opportunities for feedback to help establish trust. Through its Community Advisory Board, Fermilab offered regularly scheduled meetings to gain feedback from local communities. In addition, SURF ensured its communication with stakeholders at Isna Wica Owayawa was consistent and persistent in order to overcome scheduling and other barriers.

Laboratories should promote diversity of membership and collaborative efforts in their outreach initiatives to bring a variety of perspectives to the table and create a better end project. SURF's work with tribal elders and other leaders in its local community helps ensure that the perspectives of indigenous populations in the region are represented and reflected in the work of the Sacred Circle Garden.
Meanwhile, Fermilab regularly refreshes and expands its CAB membership to ensure it remains representative of the diversity of its suburban area.

Laboratories should avoid transactional relationships when developing relationships with stakeholders, and instead focus on approaches that provide value to each entity. Laboratories will be best served by making a long-term commitment to working with collaborators, rather than engaging in one-time interactions. Opportunities to receive feedback and consider changes can have lasting impacts on the collaborative efforts. SURF has continued to see improvement in program outcomes with Isna Wica Owayawa using this approach. Berkeley Lab has seen success by utilizing small investments in staff time, small-scale donations, and other resources as a launch pad for lasting collaborations with organizations with shared goals and values.

Laboratories should utilize methods that promote honest, two-way communication when engaging in collaborative efforts with stakeholders. All three case studies exemplify the benefits of open communication. The CAB at Fermilab creates a space where local community members and the lab are able to air concerns and discuss solutions. Berkeley Lab ensures that its community engagement interactions provide a space for members of the community and partners to voice their opinions, while Berkeley Lab listens and reflects on the opinions shared. Finally, SURF seeks indigenous perspectives even though, in some instances, the resulting dialogue can be uncomfortable. By promoting difficult conversations in a safe environment, SURF was able to arrive at a design for its ethnobotanical garden that was approved by all involved.

Impact on Nuclear Non-proliferation

Detector technologies for neutrino physics can find applications in, or benefit from R&D in, nuclear non-proliferation, where detection of reactor anti-neutrinos offers promising reactor monitoring systems that can be remotely operated, robust, non-intrusive, and persistent. These ideas are explored in Ref. [48], and the following recommendations are advanced:

The High Energy Physics community should continue to engage in a natural synergy in research activities into next-generation large-scale water and scintillator neutrino detectors, now being studied for remote reactor monitoring, discovery, and exclusion applications in cooperative non-proliferation contexts. Examples of ongoing synergistic work at US national laboratories and universities should continue and be expanded upon. These include prototype gadolinium-doped water and water-based and opaque scintillator test-beds and demonstrators, extensive testing and industry partnerships related to large-area fast position-sensitive photomultiplier tubes, and the development of concepts for a possible underground kiloton-scale water-based detector for reactor monitoring and technology demonstrations.

Opportunities for engagement between the particle physics and non-proliferation communities should be encouraged. Examples include the bi-annual Applied Antineutrino Physics conferences, collaboration with US national laboratories engaging in this research, and occasional NNSA funding opportunities supporting a blend of non-proliferation and basic science R&D, directed at the US academic community.

Links with Other Topical Groups

This topical group is interlinked with other Community Engagement Frontier topical groups.
For example, the impacts on society include issues involving inclusion and diversity, and the impacts on the environment and the sustainability of the field involve components associated with the Applications and Industry topical group.

Conclusions

During Snowmass 2021, participants in the Community Engagement Frontier attempted to address the importance of engaging members of various communities to generate support for, and sustainability of, high energy physics. The CEF efforts were organized into seven topical groups, and connections with the other Snowmass frontiers were established through frontier liaisons to exchange feedback. Each CEF topical group studied specific focus areas of engagement and their importance to society and the future of our field. The focus areas were Applications and Industry; Career Pipeline and Development; Diversity, Equity and Inclusion; Physics Education; Public Education and Outreach; Public Policy and Government Engagement; and Environmental and Societal Impacts. Topical groups collected and studied inputs from letters of interest, surveys, town hall meetings, workshops, invited expert discussions, and regular working group meetings. These efforts produced thirty-five contributed papers and seven topical group reports containing recommendations to improve engagement between HEP and related communities. A few ideas that were not developed into contributed papers because of a lack of person-power are nevertheless noted in the texts.

To facilitate maximum impact and efficiency of implementation, the suggested recommendations for action have been directed to different entities within the HEP community, namely government and funding agencies, academic and research institutions, research collaborations, professional societies, and individual physicists, and have been organized into overall goals categorized by five target communities for engagement.

We, the topical group and frontier conveners of CEF, lament the persistently low participation in community engagement. Regrettably, despite efforts throughout Snowmass 2021 to motivate participation on cross-cutting CEF issues, the work in this frontier was carried out by a relatively small number of colleagues, most of whom are physicists who also had interests in other physics frontiers that they were largely unable to pursue. Various reasons, some quite understandable, have been advanced to explain this lack of interest. However, until due importance is given to community engagement efforts and mechanisms are implemented for support, encouragement, and rewards, no meaningful progress will be achieved in spite of expressed well-meaning intentions. We call upon the members of each specified entity who are serious about improving HEP community engagement to take ownership of the CEF suggestions and act on their implementation within structures developed to foster and gauge progress. We hope that at the next Snowmass, we do not find a repeat of this past decade's inaction on these issues, but rather that we inherit a vibrant program of HEP community engagement on which to build.

We thank those in charge of communications with other frontiers, and the topical group and frontier conveners who carried the heavy loads so diligently and efficiently.
Effect of D-Alpha Tocopherol Therapy towards Malondialdehyde Level and Histology Analysis of Kidney in Rattus norvegicus with MLD-STZ Induction

Marissa Agnestiansyah, Aulanni’am and Chanif Mahdi

Department of Chemistry, Faculty of Science, Brawijaya University, Jl. Veteran, Malang 65145, East Java, Indonesia; Corresponding author: ichaansya@gmail.com; Phone: +62341575838

Received 29 March 2013; Re-submitted 10 April 2013; First revised 28 May 2013; Second revised 3 June 2013; Published online 5 June 2013 for edition May-August 2013

ABSTRACT

Diabetic nephropathy is a kidney disease which occurs as a complication of diabetes mellitus, as a consequence of damage to the kidney endothelial cells. The hyperglycemia condition in patients with diabetes mellitus, which induces oxidative stress, is related to endothelial cell damage. Oxidative stress as a result of hyperglycemia activates a number of signal transduction pathways, resulting in an increase of free radicals. D-alpha tocopherol, an antioxidant substance that can act as an inhibitor of free radical chain reactions, plays an important role in reducing the effect of oxidative stress. The effect of D-alpha tocopherol in reducing oxidative stress was identified by measuring the levels of malondialdehyde (MDA) in the kidney and by kidney histology. This study used five groups of rats: a control group, a diabetic group induced with MLD-STZ, and therapeutic groups given various doses of D-alpha tocopherol (100 mg/kg BW, 200 mg/kg BW and 300 mg/kg BW). The results showed that D-alpha tocopherol was able to reduce the levels of malondialdehyde (MDA) and repair the kidney histology of rats induced with MLD-STZ.

Keywords: diabetic nephropathy, diabetes mellitus, MLD-STZ, malondialdehyde, D-alpha tocopherol

INTRODUCTION

Complication of diabetes in the kidney is known as diabetic nephropathy. Damage to the kidney's filter, or glomerulus, occurs in patients with diabetic nephropathy. Glomerular damage will cause some blood proteins to be excreted abnormally in urine. This situation is called glomerular hyperfiltration 1.
Glomerular hyperfiltration is due to kidney endothelial cell damage. One of the causes of endothelial cell damage is the oxidative stress that occurs in diabetic people with hyperglycemia 2. Hyperglycemia stimulates the release of superoxide in mitochondria, triggering early oxidative stress in patients with diabetes mellitus (DM). Sources of oxidative stress in diabetic patients proceed via non-enzymatic, enzymatic and mitochondrial pathways. Enzymatic sources of oxidative stress are derived from enzymatic glucose metabolism. Glucose can undergo autooxidation and generate hydroxyl radicals (•OH). In addition, glucose reacts non-enzymatically with proteins to produce Amadori products, followed by the formation of Advanced Glycation End Products (AGEs), which increase oxidative stress. The polyol pathway in hyperglycemia also produces the radical •O2−. The autooxidation process in hyperglycemia and glycation reactions will trigger the formation of free radicals, particularly the superoxide radical (•O2−) and hydrogen peroxide (H2O2); the Haber-Weiss and Fenton reactions will then convert these into hydroxyl radicals (•OH). Hydroxyl radicals attack polyunsaturated fatty acids (PUFAs) in cell membranes, resulting in the formation of lipid hydroperoxides and MDA. The latter compound will cause oxidative damage to kidney cells 3.

The damage from oxidative stress in people with diabetes mellitus can be resisted by a diet high in antioxidants. One antioxidant that serves to reduce oxidative damage in diabetics is vitamin E. According to Aggarwal et al. 4, vitamin E has been shown to reduce microalbuminuria and repair kidney damage in patients with diabetic nephropathy. The majority of natural supplements of vitamin E are in the form of D-alpha tocopherol. D-alpha tocopherol can work as a scavenger of oxygen free radicals, lipid peroxyl radicals and singlet oxygen. D-alpha tocopherol is also known as an antioxidant that can maintain the integrity of the cell membrane 5.
Vitamin E supplementation of 100 IU/day significantly increases glutathione and lowers lipid peroxidation and glycosylated hemoglobin (HbA1c) concentrations in the erythrocytes of children with type 1 diabetes 6. Alpha tocopherol supplementation was beneficial in decreasing blood lipid peroxide concentrations, without altering antioxidant enzyme activities, in Korean patients with type 2 diabetes treated with continuous subcutaneous insulin infusion (CSII) 7. Streptozotocin-induced diabetic rats receiving 200 mg/kg BW alpha tocopherol daily showed, after 10 days, reduced plasma malondialdehyde levels, increased glutathione peroxidase activity and an accelerated rate of wound closure 8. Erythrocyte malondialdehyde decreased and serum total antioxidant status increased after alpha tocopherol treatment of 800 IU/day for 6 weeks in female type 2 diabetics 9. Vitamin E supplementation of 1000 IU/day given to type 2 diabetic patients for 2 months significantly increased GSH levels and lowered MDA levels, which are markers of oxidative stress, and this may reduce the risk of microvascular and macrovascular complications associated with diabetes mellitus 10. To the best of our knowledge, the use of D-alpha tocopherol in reducing oxidative stress in diabetic nephropathy has not been studied. Therefore, in this study, we observed MDA levels isolated from the kidney and the histology of kidney tissue in type 1 diabetic rats treated with D-alpha tocopherol.

EXPERIMENTAL

Animals and experimental design

Twenty-five Rattus norvegicus (male, body weight 130-160 g) were housed at room temperature in the animal house of the Cellular and Molecular Biology Laboratory, Mathematics and Sciences Faculty, Brawijaya University, Malang, and were exposed to alternate cycles of 12 h light and darkness. The rats were divided into five groups as follows: a control (non-diabetic) group (n = 5) and a diabetic group (n = 5), which was induced with multiple low dose-streptozotocin (MLD-STZ) for five days and kept for fourteen days until the blood glucose level exceeded 300 mg/dl. The STZ dose used was 20 mg/kg BW for five consecutive days 11. Therapeutic groups were treated with various doses of D-alpha tocopherol (100, 200 and 300 mg/kg BW) after induction with MLD-STZ. Each D-alpha tocopherol dose group contained 5 rats. At the end of the experiment, the animals were sacrificed by cervical dislocation and the kidneys were collected. The kidneys were washed with 0.9% NaCl and the left kidneys were immersed in PBS for five minutes. The right kidneys were immersed in 4% PFA for seven days for further kidney tissue observation. All conditions and handling of animals were conducted with protocols approved by the Ethical Clearance Committee of Brawijaya University (121-KEP-UB).

MDA Measurement using the Thiobarbituric Acid (TBA) Test

Kidney tissue (1.8 gram) was homogenized with 1 mL of 0.9% NaCl in cold conditions, using an ice block for conditioning. The homogenate was centrifuged at 8000 rpm for 20 minutes and the supernatant was taken. Then, to 100 μL of kidney supernatant were added 550 μL of distilled water, 100 μL of 100% TCA, 250 μL of 1 N HCl, and 100 μL of Na-Thio (sodium thiobarbiturate). The mixture was homogenized with a vortex after each reagent addition. The mixture was then centrifuged at 500 rpm for 10 minutes and the supernatant was taken. The solution was incubated in a water bath at 100 °C for 30 minutes and left to reach room temperature. The samples were measured at 541 nm for the TBA test.
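The conversion from absorbance at 541 nm to an MDA level can be sketched as below, assuming a linear standard curve of the kind normally prepared for TBARS assays. The slope, intercept and sample readings are illustrative placeholders, not values from this study.

# Minimal sketch: converting A541 readings from the TBA test into MDA levels
# via a linear standard curve (A541 = slope * [MDA] + intercept).
# All numeric values below are assumed for illustration only.

def mda_from_absorbance(a541, slope, intercept):
    """Return the MDA concentration corresponding to an absorbance at 541 nm."""
    return (a541 - intercept) / slope

SLOPE = 0.012      # absorbance units per (ug/mL MDA) -- assumed
INTERCEPT = 0.005  # blank-corrected offset -- assumed

readings = {"control": 0.21, "diabetic": 0.58, "therapy_300": 0.29}  # assumed A541 values
for group, a in readings.items():
    print(group, round(mda_from_absorbance(a, SLOPE, INTERCEPT), 2), "ug/mL")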
Histological analysis of kidney tissues

Kidneys were fixed in paraformaldehyde solution, dehydrated with a graded ethanol series, and then embedded in paraffin to produce ultrathin sections. The ultrathin sections were stained with Hematoxylin-Eosin. First, the sections were deparaffinized with xylol and rehydrated with a graded ethanol series (absolute, 95, 90, 80 and 70%), each step for 5 minutes. They were then soaked in distilled water for 5 minutes. Next, the sections were stained with hematoxylin and incubated for 10 minutes to obtain the best color results. The sections were then washed with flowing water for 30 minutes and rinsed with distilled water. Next, the sections were stained with alcoholic eosin for 5 minutes. The last steps were dehydration using a graded ethanol series (80%, 90%, 95%, and absolute) and clearing with xylol, followed by drying. The dried and stained sections were mounted with entellan and observed under a microscope (Olympus BX53) at a magnification of 600 times.

Therapeutic Effect of D-alpha Tocopherol on MDA Levels of White Rat Kidney Induced with MLD-STZ

A number of diabetic nephropathy pathogenesis pathways caused by hyperglycemia increase the amount of free radicals in the body. An imbalance between free radicals and cellular antioxidants in the body induces oxidative stress and is related to oxidative damage.

One of these pathogenesis pathways is the sorbitol (polyol) pathway. Polyol pathway activation reduces the amount of reduced nicotinamide adenine dinucleotide phosphate (NADPH), which is required to convert glutathione disulfide (GSSG) into glutathione (GSH). GSH is an important cellular antioxidant, and GSH reduction will lead to oxidative stress. Glucose autooxidation, which occurs due to hyperglycemia, is also a source of hydrogen peroxide (H2O2) and superoxide (•O2−). Hydrogen peroxide and superoxide will be converted into hydroxyl radicals via the Haber-Weiss reaction, including the Fenton reaction step 12.

Lipid peroxidation is one cause of oxidative damage, involving the reaction between hydroxyl radicals and polyunsaturated fatty acids (PUFAs) 13. Lipid peroxidation in the cell membranes of the kidney will cause kidney dysfunction, leading to the end-stage condition called renal failure. The level of oxidative damage caused by lipid peroxidation can be checked through the measurement of MDA 14.

The unsaturated double bonds in PUFAs facilitate hydroxyl radical attack on the acyl chain. A PUFA becomes a lipid radical through the abstraction of one hydrogen atom from a methylene group. Lipid radicals react with oxygen in the body, forming lipid peroxyl radicals. Peroxyl lipid radicals attack other lipids, generating lipid peroxides and new lipid radicals. This reaction occurs continuously, forming a chain reaction. Lipid peroxyl radicals rearrange through a cyclization reaction to form MDA 13. Lipid hydroperoxide is an unstable compound, and its fragmentation will produce products such as MDA 15.

The MDA level of kidney tissue was measured by the TBA test. The principle of the TBA test is a condensation reaction between one molecule of MDA and two molecules of TBA under acidic conditions, as displayed in Fig. 1.
The MDA-TBA complex produces a pink color that can be measured at a maximum wavelength of 541 nm. The MDA level indicates the amount of lipid peroxidation and cell damage that occurred; the higher the MDA level, the more severe the cell damage.

As shown in Table 1, the levels of MDA in diabetic rats were significantly higher compared with non-diabetic rats. Therapy with D-alpha tocopherol reduced the elevated levels of MDA, and MDA levels declined with increasing doses of D-alpha tocopherol. Statistical testing showed significant differences (P < 0.01) between the MDA levels of diabetic rats and therapy rats. This suggests that D-alpha tocopherol is able to act as an antioxidant, especially as a hydroxyl radical scavenger.

The inhibition mechanism of lipid peroxidation by D-alpha tocopherol is initiated when lipids (LH) lose a hydrogen atom and become lipid radicals (L•). Lipid radicals react with molecular oxygen to produce lipid peroxyl radicals (LOO•). Lipid peroxyl radicals can react with other unsaturated lipids and cause a radical chain reaction. At this stage, D-alpha tocopherol donates one H atom from its hydroxyl (OH) group to the lipid peroxyl radical. In doing so, D-alpha tocopherol becomes a non-reactive alpha tocopherol radical that can be excreted from the body.

Histology of Kidney Tissue from Control Rats, Diabetic Rats, and Therapy Rats

Free radicals are a normal product of cell metabolism. However, some circumstances may disturb the balance between ROS production and cellular defense mechanisms, leading to cell dysfunction and cell damage. Fibrosis and endothelial cell damage due to oxidative stress can damage kidney tissue and impair kidney function. The histology of kidney tissue was observed to determine both the level of damage and organ repair. A comparison of kidney tissue damage between control rats, diabetic rats and therapy rats can be seen in the Hematoxylin-Eosin staining results displayed in Fig. 2. Glomerulus cells and tissues in control rat kidneys look intact and compact, and the boundaries between one cell and another are clearly visible. In diabetic rats, the boundary between one cell and another cannot be seen, and the glomerulus cells are not intact. This indicates that MLD-STZ induction damaged the endothelial cells of the diabetic rats' kidneys.

After receiving D-alpha tocopherol therapy, the glomerulus looked better and the boundaries between the cells became clearly visible. Higher doses of D-alpha tocopherol therapy brought about better repair of the kidney tissue histology, and the therapeutic dose of 300 mg/kg BW in diabetic rats could restore the kidney tissue structure to almost that of normal rat kidney. D-alpha tocopherol can indeed maintain the integrity of cell membranes by inhibiting the lipid peroxidation reaction.

CONCLUSION

Therapy with D-alpha tocopherol at various doses (100, 200 and 300 mg/kg BW) in diabetic rats induced with MLD-STZ showed a decrease in MDA levels and a repair of kidney tissue histology in accordance with the increasing dose given.

ACKNOWLEDGEMENT

This study is part of development research on Herbal Therapy for DM Diseases. The authors would like to thank Dr. Ora Et Labora Immanuel Palandeng, SpTHT-KL, as a member of the Herbal Therapies Development for DM Diseases research team. The authors thank Dr. Sasangka Prasetyawan, MS and Dra. Anna Roosdiana, M.App.Sc. for the discussion, and Dr.
Sc. Akhmad Sabarudin, who helped with the preparation of this manuscript.

Figure 1. Reaction between Malondialdehyde and Thiobarbituric Acid

Table 1. Profile of MDA level in control, diabetic, and therapeutic rat kidney
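The group comparison reported above (P < 0.01) can be sketched with a one-way analysis of variance across treatment groups, as below. The study does not name the specific test it used, and the MDA values here are hypothetical placeholders (n = 5 per group), not data from Table 1.

# Sketch of a group comparison of kidney MDA levels, using a one-way ANOVA.
# Group values are invented for illustration; the study's own data differ.
from scipy.stats import f_oneway

control     = [1.9, 2.1, 2.0, 2.2, 1.8]   # assumed MDA values, n = 5 rats
diabetic    = [4.8, 5.1, 4.6, 5.3, 4.9]   # assumed
therapy_300 = [2.6, 2.4, 2.8, 2.5, 2.7]   # assumed

f_stat, p_value = f_oneway(control, diabetic, therapy_300)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")  # p << 0.01 for these values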
Nutritional evaluation of palm kernel meal types: 1. Proximate composition and metabolizable energy values

Studies were conducted to determine the proximate composition and metabolizable energy values of palm kernel meal (PKM) types. The PKM types studied were obtained from Okomu, Presco and Envoy Oil Mills and were either mechanically or solvent extracted using different varieties of palm kernels. Samples of PKM types were assayed for proximate composition, and the results obtained indicated that Okomu, Presco and Envoy PKM had crude protein values of 14.50, 16.60 and 19.24%, respectively. Crude fibre values were 10.00, 12.29 and 17.96%, respectively, for Okomu, Presco and Envoy PKM types. Envoy PKM had the lowest fat content (1.30%), while Okomu and Presco PKM gave fat values of 9.48 and 7.59%, respectively. The values of ash ranged from 3.40 to 4.34% and nitrogen free extract from 50.05 to 53.42%. Apparent metabolizable energy values were 2654, 2423 and 1817 kcal/kg for Okomu, Presco and Envoy PKM, respectively. It can be concluded that Okomu and Presco PKM, which were mechanically extracted, had close nutrient values and were particularly higher in fat but lower in protein compared to Envoy PKM.

*Corresponding author. E-mail: ev.ezieshi@yahoo.com. Phone: +234-803-418-7347.

INTRODUCTION

Experience has shown that a wide range of palm kernel meal (PKM) types exist, depending on the processing method and type of palm kernel used. Over the years, PKM has been used indiscriminately in broiler chicken diets without due regard to type or source. This has given rise to inconsistent results and may be responsible for the low productivity of broiler chickens fed PKM-based diets. Attempts have been made to chemically characterize PKM without regard to type. Consequently, random chemical analyses have resulted in a wide range of nutrient values. For instance, PKM has been found to contain between 16.0 and 21.3% crude protein, with low contents of lysine, methionine, histidine and threonine (Nwokolo et al., 1977; Olomu, 1995). The crude fibre content ranges from 6.7% (Babatunde et al., 1975) to 17.5% (Olomu, 1995). It has an estimated ash content of 4.30% (Yeong, 1980; NIFOR, 1995). The values of ether extract range between 0.80% (Yeong et al., 1981) and 10.33% (Nwokolo et al., 1977; Onwudike, 1986; Olomu, 1995). The nitrogen free extract ranges between 38.7% (Olomu, 1995) and 63.5% (Yeong et al., 1981). The values of metabolizable energy are in the range of 1481.8 kcal/kg (Nwokolo et al., 1977) to 2500.0 kcal/kg (Olomu, 1995).

An in-depth understanding of the nutro-chemical characteristics of PKM types would ensure a more judicious use of the feedstuff. In addition, it would go a long way toward standardizing research results in relation to specific types of PKM.

Source of test materials

In this study, three types of PKM were tested. The first type was obtained from Okomu Oil Palm Company Plc, Benin City. The varieties of palm kernel used here are dura and tenera. After mechanical cracking of the nuts, the kernels are separated from the shells with the aid of kaolin solution. The processing method adopted in the extraction of palm kernel oil is the mechanical method. The second source of PKM is Presco Oil Plc, Benin City. Here, already processed palm kernels obtained from different sources are used, suggesting that the processing method used to obtain the palm kernels and the type of kernel are not known. Palm kernel oil extraction is also carried out mechanically. The third source of PKM is Envoy Oil Mill Plc, Port Harcourt. Here, already processed palm kernels are procured from different sources. The palm kernel oil is extracted using hexane (the solvent extraction method).

Proximate analysis

To determine the proximate composition, representative samples of differently processed PKM were assayed for moisture, crude protein, crude fibre, fat and ash. Nitrogen free extract was computed accordingly (A.O.A.C., 2001).
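The nitrogen free extract calculation by difference can be sketched as below, on the assumption that NFE = 100 minus the sum of moisture, crude protein, crude fibre, fat and ash (all as percentages, as-fed basis). The moisture value used in the example is an assumption, since moisture contents are not quoted in this excerpt; the other inputs are the reported Okomu values.

# Sketch of the NFE calculation by difference (A.O.A.C.-style proximate scheme).
def nfe_percent(moisture, crude_protein, crude_fibre, fat, ash):
    """Nitrogen free extract (%) by difference, all inputs in % as-fed."""
    return 100.0 - (moisture + crude_protein + crude_fibre + fat + ash)

# Okomu PKM proximate values from the paper; moisture is assumed.
okomu_nfe = nfe_percent(moisture=9.3, crude_protein=14.50, crude_fibre=10.00,
                        fat=9.48, ash=3.40)
print(round(okomu_nfe, 2))  # ~53.3%, within the 50.05-53.42% range reported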
Metabolizable energy study

The ME values were determined using five-week-old Hybro broiler chickens. At the beginning of the studies, the birds were divided into 12 similar groups of three birds each. Three groups were randomly placed on each of the 4 diets used in the experiment. A standard broiler diet without any of the test ingredients served as the control or basal diet (Diet 1, Table 1). In diets 2, 3 and 4, Okomu, Presco and Envoy PKM, respectively, was substituted at 20% into the control diet; that is, each of the test diets constituted 80% basal diet and 20% test ingredient. The treatments were thus the basal diet and the three 20% PKM diets.

The apparent metabolizable energy (AME) of the basal diet and the substituted diets was calculated using the algebraic equation

AME = (Fi × GEf − E × GEe) / Fi

where Fi = feed intake (g), E = excreta output (g), GEf = gross energy of feed (kcal/kg), and GEe = gross energy of excreta (kcal/kg). From the ME of the basal and substituted diets, the ME of the test ingredients was calculated on the assumption that diet ME combines additively across ingredients, so that at the 20% inclusion level:

ME(test ingredient) = [AME(test diet) − 0.80 × AME(basal diet)] / 0.20
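As a rough illustration of this substitution calculation, here is a minimal Python sketch; all input numbers are hypothetical placeholders, not data from the study.

```python
# Sketch of the AME and ingredient-ME calculation described above.
# All numbers are hypothetical illustrations, not data from the study.

def ame(feed_intake_g, excreta_g, ge_feed_kcal_per_kg, ge_excreta_kcal_per_kg):
    """Apparent metabolizable energy of a diet (kcal/kg of feed)."""
    energy_in = feed_intake_g * ge_feed_kcal_per_kg
    energy_out = excreta_g * ge_excreta_kcal_per_kg
    return (energy_in - energy_out) / feed_intake_g

def ingredient_me(ame_test_diet, ame_basal_diet, inclusion=0.20):
    """ME of the test ingredient, assuming diet ME combines additively."""
    return (ame_test_diet - (1 - inclusion) * ame_basal_diet) / inclusion

basal = ame(feed_intake_g=1000, excreta_g=400,
            ge_feed_kcal_per_kg=4000, ge_excreta_kcal_per_kg=3500)
test = ame(feed_intake_g=1000, excreta_g=420,
           ge_feed_kcal_per_kg=4100, ge_excreta_kcal_per_kg=3600)
print(round(ingredient_me(test, basal)))  # ME of the 20% test ingredient
```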
RESULTS AND DISCUSSION

The proximate composition of the PKM types is presented in Table 2. The results indicated that none of the PKM samples had a crude protein content up to 20%. The crude protein contents obtained for the Okomu, Presco and Envoy PKM types were 14.50, 16.60 and 19.24%, respectively. Envoy PKM gave the highest crude fibre value (17.96%), while Okomu and Presco PKM gave fibre values of 10.00 and 12.29%, respectively. The crude fat content varied between 1.3% (Envoy PKM) and 9.48% (Okomu PKM). The percentage ash content of PKM was in the range of 3.40 to 4.34%, and the percentage nitrogen free extract of the PKM types ranged between 50.05 and 53.43%.

The results of the metabolizable energy study are also presented in Table 2. Okomu PKM gave the highest AME value of 2654 kcal/kg (11.11 MJ/kg), while Envoy PKM gave the lowest value of 1817 kcal/kg (7.60 MJ/kg). Presco PKM gave an intermediate value of 2423 kcal/kg (10.14 MJ/kg). The nitrogen-corrected apparent metabolizable energy (AMEn) followed the same trend as AME.

The proximate chemical analysis of the PKM types indicated that the crude protein values range between 14 and 20%. The variation in crude protein composition can be attributed to differences in the processing method employed and the type of palm kernel used. Envoy PKM, which is solvent extracted, gave the highest crude protein value compared to Okomu and Presco PKM. This is due to the greater concentration of the nutrient resulting from the lower amount of fat left after the solvent extraction process. The crude fibre values observed were higher than those earlier reported by Babatunde et al. (1975), while the crude fibre value reported by Olomu (1995) was within the range of values observed in this study. The wide range of crude fibre values may be related to the type of palm kernel used, the method of separating shell from kernel, the amount of shell left in the kernel before oil extraction, and the method of processing the palm kernel before use. The solvent extracted PKM yielded higher crude fibre than the mechanically extracted PKM, which may be related to the higher degree of oil extraction achieved by the solvent extraction method.

Okomu and Presco PKM had comparable fat contents of 9.48 and 7.59%, suggesting that mechanically processed PKM has a fat content of roughly 8 to 9%, in agreement with the findings of Palmer-Jones and Halliday (1971). The low fat level (1.3%) recorded for Envoy PKM (solvent extracted) also agreed with earlier reports (Palmer-Jones and Halliday, 1971; Yeong et al., 1981); the lower value reflects the fact that solvent extraction removes more fat from the product than mechanical extraction. The ash contents observed were within the reported range of values (Yeong, 1980; Onwudike, 1986; NIFOR, 1995), and the NFE values were in agreement with earlier reports (Yeong et al., 1981).

The metabolizable energy studies showed that Okomu and Presco PKM, which were mechanically processed, gave higher metabolizable energy than Envoy PKM, which was solvent extracted. This may be related to the processing method employed in the production of the PKM types: mechanical extraction left more residual oil (about 8%) in the cake than the solvent extraction process (about 1% residual oil). It is not surprising, therefore, that Okomu and Presco PKM gave higher ME values than Envoy PKM. The ME values obtained for Okomu and Presco PKM are close to the value reported earlier (Olomu, 1995).

Conclusion

From the results of the foregoing studies, it can be concluded that Okomu and Presco PKM, which were mechanically processed, gave similar values of proximate composition and metabolizable energy, and that these values were distinct from those obtained for Envoy PKM (solvent extracted PKM).

Table 1. Percentage composition of basal diet.
Table 2. Proximate composition and metabolizable energy values of different types of palm kernel meal.
Graph parsing with s-graph grammars

A key problem in semantic parsing with graph-based semantic representations is graph parsing, i.e. computing all possible analyses of a given graph according to a grammar. This problem arises in training synchronous string-to-graph grammars, and when generating strings from them. We present two algorithms for graph parsing (bottom-up and top-down) with s-graph grammars. On the related problem of graph parsing with hyperedge replacement grammars, our implementations outperform the best previous system by several orders of magnitude.

Introduction

The recent years have seen an increased interest in semantic parsing, the problem of deriving a semantic representation for natural-language expressions with data-driven methods. With the recent availability of graph-based meaning banks (Banarescu et al., 2013; Oepen et al., 2014), much work has focused on computing graph-based semantic representations from strings (Jones et al., 2012; Flanigan et al., 2014; Martins and Almeida, 2014).

One major approach to graph-based semantic parsing is to learn an explicit synchronous grammar which relates strings with graphs. One can then apply methods from statistical parsing to parse the string and read off the graph. Chiang et al. and Quernheim and Knight (2012) represent this mapping of a (latent) syntactic structure to a graph with a grammar formalism called hyperedge replacement grammar (HRG; Drewes et al., 1997). As an alternative to HRG, Koller (2015) introduced s-graph grammars and showed that they support linguistically reasonable grammars for graph-based semantics construction.

One problem that is only partially understood in the context of semantic parsing with explicit grammars is graph parsing, i.e. the computation of the possible analyses the grammar assigns to an input graph (as opposed to a string). This problem arises whenever one tries to generate a string from a graph (e.g., on the generation side of an MT system), but also in the context of extracting and training a synchronous grammar, e.g. in EM training. The state of the art is defined by the bottom-up graph parsing algorithm for HRG by Chiang et al., implemented in the Bolinas tool.

We present two graph parsing algorithms (top-down and bottom-up) for s-graph grammars. S-graph grammars are equivalent to HRGs, but employ a more fine-grained perspective on graph-combining operations. This simplifies the parsing algorithms and facilitates reasoning about them. Our bottom-up algorithm is similar to Chiang et al.'s and derives the same asymptotic number of rule instances. The top-down algorithm is novel, and achieves the same asymptotic runtime as the bottom-up algorithm by reasoning about the biconnected components of the graph. Our evaluation on the "Little Prince" graph-bank shows that our implementations of both algorithms outperform Bolinas by several orders of magnitude. Furthermore, the top-down algorithm can be more memory-efficient in practice.

Related work

The AMR-Bank (Banarescu et al., 2013) annotates sentences with abstract meaning representations (AMRs), like the one shown in Fig. 1(a). These are graphs that represent the predicate-argument structure of a sentence; notably, phenomena such as control are represented by reentrancies in the graph. Another major graph-bank is the dataset of the SemEval-2014 shared task on semantic dependency parsing (Oepen et al., 2014).
The primary grammar formalism currently in use for synchronous graph grammars is hyperedge replacement grammar (HRG) (Drewes et al., 1997), which we sketch in Section 4.3. An alternative is offered by Koller (2015), who introduced s-graph grammars and showed that they lend themselves to manually written grammars for semantic construction. In this paper, we show the equivalence of HRG and s-graph grammars and work out graph parsing for s-graph grammars.

The first polynomial graph parsing algorithm for HRGs on graphs with limited connectivity was presented by Lautemann (1988). Lautemann's original algorithm is a top-down parser, presented at a rather abstract level that does not directly support implementation or detailed complexity analysis. We extend Lautemann's work by showing how new parse items can be represented and constructed efficiently. Finally, Chiang et al. presented a bottom-up graph parser for HRGs, in which the representation and construction of items was worked out for the first time. It produces O((n · 3^d)^(k+1)) instances of the rules in a parsing schema, where n is the number of nodes of the graph, d is the maximum degree of any node, and k is a quantity called the tree-width of the grammar.

An algebra of graphs

We start by introducing the exact type of graphs that our grammars and parsers manipulate, and by developing some theory. Throughout this paper, we define a graph G = (V, E) as a directed graph with edge labels from some label alphabet L. The graph consists of a finite set V of nodes and a finite set E ⊆ V × V × L of edges e = (u, v, l), where u and v are the nodes connected by e, and l ∈ L is the edge label. We say that e is incident to both u and v, and call the number of edges incident to a node its degree. We write u ↔e v if either e = (u, v, l) or e = (v, u, l) for some l; we drop the e if the identity of the edge is irrelevant. Edges with u = v are called loops; we use them here to encode node labels. Given a graph G, we write n = |V|, m = |E|, and d for the maximum degree of any node in V.

If f : A ⇀ B and g : A ⇀ B are partial functions, we let the partial function f ∪ g be defined if for all a ∈ A with both f(a) and g(a) defined, we have f(a) = g(a). In that case, (f ∪ g)(a) equals f(a) where f(a) is defined and g(a) where g(a) is defined, and is undefined otherwise.

The HR algebra of graphs with sources

Our grammars describe how to build graphs from smaller pieces. They do this by accessing nodes (called source nodes) which are assigned "public names". We define an s-graph (Courcelle and Engelfriet, 2012) as a pair SG = (G, φ) of a graph G and a source assignment, i.e. a partial, injective function φ : S ⇀ V that maps some source names from a finite set S to the nodes of G. We call the nodes in φ(S) the source nodes or sources of SG; all other nodes are internal nodes. If φ is defined on the source name σ, we call φ(σ) the σ-source of SG. Throughout, we let s = |S|.

Examples of s-graphs are given in Fig. 1. We use numbers as node names and lowercase strings for edge names (except in the concrete graphs of Fig. 1, where the edges are marked with edge labels instead). Source nodes are drawn in black, with source names drawn on the inside. Fig. 1(b) shows an s-graph SG_want with three nodes and four edges; the three nodes are marked as the R-, S-, and O-source, respectively. Likewise, the s-graph SG_sleep in (c) has two nodes (one of which is an R-source and the other an S-source) and two edges. We can now apply operations to these graphs. First, we can rename the R-source of (c) to an O-source.
The result, denoted SG_d = SG_sleep[R → O], is shown in (d). Next, we can merge SG_d with SG_want. This copies the edges and nodes of SG_d and SG_want into a new s-graph; but crucially, for every source name σ the two s-graphs have in common, the σ-sources of the graphs are fused into a single node (and become a σ-source of the result). We write || for the merge operation; thus we obtain SG_e = SG_d || SG_want, shown in (e). Finally, we can forget source names. The graph SG_f = f_S(f_O(SG_e)), in which we forgot S and O, is shown in (f). We refer to Courcelle and Engelfriet (2012) for technical details. [Footnote 1: Note that the rename operation of Courcelle and Engelfriet (2012) allows for swapping source assignments and making multiple renames in one step. We simplify the presentation here, but all of our techniques extend easily.]

We can take the set of all s-graphs, together with these operations, as an algebra of s-graphs. In addition to the binary merge operation and the unary operations for forget and rename, we fix some finite set of atomic s-graphs and take them as constants of the algebra, which evaluate to themselves. Following Courcelle and Engelfriet, we call this algebra the HR algebra. We can evaluate any term τ consisting of these operation symbols into an s-graph as usual. For instance, the following term encodes the merge, forget, and rename operations from the example above, and evaluates to the s-graph in Fig. 1(f):

(1) f_S(f_O(SG_sleep[R → O] || SG_want))

The set of s-graphs that can be represented as the value of some term τ over the HR algebra depends on the source set S and on the constants. For simplicity, we assume here that we have a constant for each s-graph consisting of a single labeled edge (or loop), and that the values of all other constants can be expressed by combining these using merge, rename, and forget.

S-components

A central question in graph parsing is how some s-graph that is a subgraph of a larger s-graph SG (a sub-s-graph) can be represented as the merge of two smaller sub-s-graphs of SG. In general, SG_1 || SG_2 is defined for any two s-graphs SG_1 and SG_2. However, if we see SG_1 and SG_2 as subgraphs of SG, then SG_1 || SG_2 may no longer be a subgraph of SG. For instance, we cannot merge the s-graphs (b) and (c) in Fig. 2 as part of the graph (a): the startpoints of the edges a and d are both A-sources and would thus become the same node (unlike in (a)), and furthermore the edge d would have to be duplicated. In graph parsing, we already know the identity of all nodes and edges in sub-s-graphs (as nodes and edges in SG), and must thus pay attention that merge operations do not accidentally fuse or duplicate them. In particular, two sub-s-graphs cannot be merged if they have edges in common.

Figure 2: (a) An s-graph with (b,c) some sub-s-graphs, (d) its BCCs, and (e) its block-cutpoint graph.

We call a sub-s-graph SG_1 of SG extensible if there is another sub-s-graph SG_2 of SG such that SG_1 || SG_2 contains the same edges as SG. An example of a sub-s-graph that is not extensible is the sub-s-graph (b) of the s-graph in (a) in Fig. 2. Because sources can only be renamed or forgotten by the algebra operations, but never introduced, we can never attach the missing edge a: this can only happen when 1 and 2 are sources. As a general rule, a sub-s-graph can only be extensible if it contains all edges that are adjacent to all of its internal nodes in SG. Obviously, a graph parser need only concern itself with sub-s-graphs that are extensible.
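To make the algebra operations concrete, here is a minimal Python sketch of merge, rename, and forget on s-graphs. The data layout (edge triples plus a source-name dictionary) and the node ids in the example are our own illustration, not code or data from the paper, and the example structure is simplified relative to Fig. 1.

```python
# Illustrative s-graph: edges are (u, v, label) triples; sources maps
# source names to nodes. A sketch of the HR algebra, not the paper's code.

def rename(sg, old, new):
    """sg[old -> new]: the old-source gets the source name new."""
    edges, sources = sg
    assert old in sources and new not in sources
    out = dict(sources)
    out[new] = out.pop(old)
    return (edges, out)

def forget(sg, name):
    """f_name(sg): the name-source becomes an internal node."""
    edges, sources = sg
    return (edges, {k: v for k, v in sources.items() if k != name})

def merge(sg1, sg2):
    """sg1 || sg2: fuse nodes that carry the same source name."""
    e1, s1 = sg1
    e2, s2 = sg2
    # Identify each sg2 source node with the sg1 node of the same name.
    node_map = {s2[name]: s1[name] for name in s2 if name in s1}
    e2m = {(node_map.get(u, u), node_map.get(v, v), l) for (u, v, l) in e2}
    sources = dict(s1)
    sources.update({n: node_map.get(v, v) for n, v in s2.items()})
    return (e1 | e2m, sources)

# The running example f_S(f_O(SG_sleep[R -> O] || SG_want)), with
# made-up node ids standing in for the nodes of Fig. 1.
sg_want = ({(1, 2, "ARG0"), (1, 3, "ARG1")}, {"R": 1, "S": 2, "O": 3})
sg_sleep = ({(4, 5, "ARG0")}, {"R": 4, "S": 5})
result = forget(forget(merge(rename(sg_sleep, "R", "O"), sg_want), "O"), "S")
print(result)  # only the R-source remains, as in Fig. 1(f)
```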
We can further clarify the structure of extensible sub-s-graphs by looking at the s-components of a graph. Let U ⊆ V be some set of nodes. This set splits the edges of G into equivalence classes that are separated by U: we say that two edges e and f are equivalent, written e ∼U f, if we can reach f from an endpoint of e without visiting a node in U. We call the equivalence classes of E with respect to ∼U the s-components of G and denote the s-component that contains an edge e by [e]. It can be shown that for any s-graph SG = (G, φ), a sub-s-graph SH with source nodes U is extensible iff its edge set is the union of a set of s-components of G with respect to U.

We let an s-component representation C = (C, φ) in the s-graph SG = (G, φ′) consist of a source assignment φ : S ⇀ V and a set C of s-components of G with respect to the set VS_C = φ(S) ⊆ V of source nodes of φ. Then we can represent every extensible sub-s-graph SH = (H, φ) of SG by the s-component representation C = (C, φ), where C is the set of s-components of which SH consists. Conversely, we write T(C) for the unique extensible sub-s-graph of SG represented by the s-component representation C. The utility of s-component representations derives from the fact that merge can be evaluated on these representations alone, as Lemma 1 states. If there is no C such that all conditions of Lemma 1 are satisfied, then T(C_1) || T(C_2) is not defined.

Boundary representations

In order to check this efficiently in the bottom-up parser, it will be useful to represent s-components explicitly via their boundary. Consider an s-component representation C = (C, φ) in SG and let E be the set of all edges that are adjacent to a source node in VS_C and contained in an s-component in C. Then we let the boundary representation (BR) β of C in the s-graph SG be the pair β = (E, φ). That is, β represents the s-components through the in-boundary edges, i.e. those edges inside the s-components (and thus the sub-s-graph) which are adjacent to a source. The BR β specifies C uniquely if the base graph SG is connected, so we write T(β) for T(C) and VS_β for VS_C. In Fig. 2(a), the bold sub-s-graph is represented by β = ({d, e, f, g}, {A:4, B:5}), indicating that it contains the A-source 4 and the B-source 5, and further, that the in-boundary edges of the sub-s-graph are d, e, f, and g. The edge h (which is also incident to 5) is not specified, and therefore not in the sub-s-graph.

The following lemma can be shown about computing merge on boundary representations. Intuitively, conditions (b) and (c) guarantee that the component sets are disjoint; the lemma then follows from Lemma 1.

Lemma 2. Let SG be an s-graph, and let β_1 = (E_1, φ_1) and β_2 = (E_2, φ_2) be two boundary representations in SG. Then T(β_1) || T(β_2) is defined within SG iff the following conditions hold: (a) φ_1 ∪ φ_2 is defined and injective; (b) the two BRs have no in-boundary edges in common, i.e. E_1 ∩ E_2 = ∅; (c) for every source node v of β_1, the last edge on the path in SG from v to the closest source node of β_2 is not an in-boundary edge of β_2, and vice versa. Furthermore, if these conditions hold, the result is represented by (E_1 ∪ E_2, φ_1 ∪ φ_2).

S-graph grammars

We are now ready to define s-graph grammars, which describe languages of s-graphs. We also introduce graph parsing and relate s-graph grammars to HRGs.

Grammars for languages of s-graphs

We use interpreted regular tree grammars (IRTGs; Koller and Kuhlmann (2011)) to describe languages of s-graphs. IRTGs are a very general mechanism for describing languages over, and relations between, arbitrary algebras.
They separate conceptually the generation of a grammatical derivation from its interpretation as a string, tree, graph, or some other object. Consider, as an example, the tiny grammar in Fig. 3; see Koller (2015) for linguistically meaningful grammars. The left column consists of a regular tree grammar G (RTG; see e.g. Comon et al. (2008)) with two rules. This RTG describes a regular language L(G) of derivation trees (in general, it may be infinite). In the example, we can derive S ⇒ r_1(VP) ⇒ r_1(r_2); therefore we have t = r_1(r_2) ∈ L(G). We then use a tree homomorphism h to rewrite the derivation trees into terms over an algebra, in this case the HR algebra. In the example, the values h(r_1) and h(r_2) are specified in the second column of Fig. 3. We compute h(t) by substituting the variable x_1 in h(r_1) with h(r_2). The term h(t) is thus the one shown in (1); it evaluates to the s-graph SG_f in Fig. 1(f).

Figure 3: An example s-graph grammar.

In general, the IRTG G = (G, h, A) generates the language L(G) = { ⟦h(t)⟧ | t ∈ L(G) }, where ⟦·⟧ is evaluation in the algebra A. Thus, in the example, we have L(G) = {SG_f}. In this paper, we focus on IRTGs that describe languages L(G) ⊆ A of objects in an algebra; specifically, of s-graphs in the HR algebra. However, IRTGs extend naturally to a synchronous grammar formalism by adding more homomorphisms and algebras. For instance, the grammars in Koller (2015) map each derivation tree simultaneously to a string and an s-graph, and therefore describe a binary relation between strings and s-graphs. We call IRTGs where at least one algebra is the HR algebra s-graph grammars.

Parsing with s-graph grammars

In this paper, we are concerned with the parsing problem of s-graph grammars. In the context of IRTGs, parsing means that we are looking for those derivation trees t that are (a) grammatically correct, i.e. t ∈ L(G), and (b) match some given input object a, i.e. h(t) evaluates to a in the algebra. Because the set P of such derivation trees may be large or infinite, we aim to compute an RTG G_a such that L(G_a) = P. This RTG plays the role of a parse chart, which represents the possible derivation trees compactly.

In order to compute G_a, we need to solve two problems. First, we need to determine all the possible ways in which a can be represented by terms τ over the algebra A. This is familiar from string parsing, where a CKY parse chart spells out all the ways in which larger substrings can be decomposed into smaller parts by concatenation. Second, we need to identify all those derivation trees t ∈ L(G) that map to such a decomposition τ, i.e. for which h(t) evaluates to a. In string parsing, this corresponds to retaining only those decompositions into substrings that are justified by the grammar rules.

While any parsing algorithm must address both of these issues, they are usually conflated, in that parse items combine information about the decomposition of a (such as a string span) with information about grammaticality (such as nonterminal symbols). In IRTG parsing, we take a different, more generic approach. We assume that the set D of all decompositions τ, i.e. of all terms τ that evaluate to a in the algebra, can be represented as the language D = L(D_a) of a decomposition grammar D_a. D_a is an RTG over the signature of the algebra. Crucially, D_a depends only on the algebra and on a itself, and not on G or h, because D contains all terms that evaluate to a and not just those that are licensed by the grammar.
However, we can compute G_a from D_a efficiently by exploiting the closure of regular tree languages under intersection and inverse homomorphism; see Koller and Kuhlmann (2011) for details. In practice, this means that whenever we want to apply IRTGs to a new algebra (as, in this paper, to the HR algebra), we can obtain a parsing algorithm by specifying how to compute decomposition grammars over this algebra. This is the topic of Section 5.

Relationship to HRG

We close our exposition of s-graph grammars by relating them to HRGs. It is known that the graph languages that can be described with s-graph grammars are the same as the HRG languages (Courcelle and Engelfriet, 2012, Prop. 4.27). Here we establish a more precise equivalence result, so that we can compare our asymptotic runtimes directly to those of HRG parsers.

An HRG rule, such as the one shown in Fig. 4, rewrites a nonterminal symbol into a graph. The example rule constructs a graph for the nonterminal S by combining the graph G_r in the middle (with nodes 1, 2, 3 and edges e, f) with graphs G_X and G_Y that are recursively derived from the nonterminals X and Y. The combination happens by merging the external nodes of G_X and G_Y with nodes of G_r: the squiggly lines indicate that the external node I of G_X should be 1, and the external node II should be 2. Similarly, the external nodes of G_Y are unified with 1 and 3. Finally, the external nodes I and II of the HRG rule for S itself, shaded gray, are 1 and 3.

The fundamental idea of the HRG-to-IRTG translation is to encode external nodes as sources, and to use rename and merge to unify the nodes of the different graphs. In the example, we might say that the external nodes of G_X and G_Y are represented using the source names I and II, and extend G_r to an s-graph by saying that the nodes 1, 2, and 3 are its I-source, III-source, and II-source, respectively. This yields a term, (2), in which we write "I -e-> III" for the s-graph consisting of the edge e, with node 1 as I-source and node 2 as III-source. However, this encoding requires the use of three source names (I, II, and III). An alternative encoding of the rule, (3), uses the sources more economically and needs only two source names: it forgets II as soon as we are finished with the node 2, and frees the name up for reuse for 3. The complete encoding of the HRG rule consists of the RTG rule S → r(X, Y) with h(r) = (3).

In the general case, one can "read off" possible term encodings of an HRG rule from its tree decompositions; see Chiang et al. or Def. 2.80 of Courcelle and Engelfriet (2012) for details. A tree decomposition is a tree, each of whose nodes π is labeled with a subset V_π of the nodes in the HRG rule. We can construct a term encoding from a tree decomposition bottom-up: leaves map to variables or constants; binary nodes introduce merge operations; and we use rename and forget operations to ensure that the subterm for the node π evaluates to an s-graph in which exactly the nodes in V_π are source nodes. [Footnote 2: This uses the swap operations mentioned in Footnote 1.] In the example, we obtain (3) from the tree decomposition in Fig. 4 in this way.

The tree-width k of an HRG rule is measured by finding the tree decomposition of the rule whose node sets have the lowest maximum size s, and setting k = s − 1. It is a crucial measure because Chiang et al.'s parsing algorithm is exponential in k. The translation we just sketched uses s source names. Thus we see that an HRG with rules of tree-width ≤ k can be encoded into an s-graph grammar with k + 1 source names. (The converse is also true.)
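Before turning to the decomposition algorithms, a small Python sketch may help make boundary representations concrete. It checks conditions (a) and (b) of Lemma 2 on the (edge set, source map) encoding used in the earlier sketch; condition (c), which consults a precomputed shortest-path table, is omitted. This is our own illustration, not the actual parser implementation.

```python
# Boundary representation: (in_boundary_edges, sources), where sources
# maps source names to nodes and in_boundary_edges is a set of edge ids.
# Sketch of Lemma 2, conditions (a) and (b); condition (c) would use the
# precomputed last-edge-on-shortest-path table and is omitted here.

def merge_brs(br1, br2):
    e1, s1 = br1
    e2, s2 = br2
    # (a) s1 and s2 must agree where both are defined, and the union
    # must remain injective.
    merged = dict(s1)
    for name, node in s2.items():
        if merged.get(name, node) != node:
            return None                      # same name, different nodes
        merged[name] = node
    if len(set(merged.values())) != len(merged):
        return None                          # not injective
    # (b) no in-boundary edges in common.
    if e1 & e2:
        return None
    return (e1 | e2, merged)

# Example with edge ids as in Fig. 2(a):
br1 = ({"d", "e"}, {"A": 4})
br2 = ({"f", "g"}, {"B": 5})
print(merge_brs(br1, br2))   # ({'d','e','f','g'}, {'A': 4, 'B': 5})
```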
Graph parsing with s-graph grammars

Now we show how to compute decomposition grammars for the s-graph algebra. As we explained in Section 4.2, we can then obtain a complete parser for s-graph grammars through generic methods.

Given an s-graph SG, the language of the decomposition grammar D_SG is the set of all terms over the HR algebra that evaluate to SG. For example, the decomposition grammar for the graph SG in Fig. 1(a) contains, among many others, the following two rules:

(4) SG → f_R(SG_f)
(5) SG_e → ||(SG_b, SG_d)

where SG_f, SG_e, SG_b, and SG_d are the graphs from Fig. 1 (see Section 3.1). In other words, D_SG keeps track of sub-s-graphs in the nonterminals, and the rules spell out how "larger" sub-s-graphs can be constructed from "smaller" sub-s-graphs using the operations of the HR algebra. The algorithms below represent sub-s-graphs compactly using s-component and boundary representations.

Because the decomposition grammars in the s-graph algebra can be very large (see Section 6), we will not usually compute the entire decomposition grammar explicitly. Instead, it is sufficient to maintain a lazy representation of D_SG, which allows us to answer queries to the decomposition grammar efficiently. During parsing, such queries are generated by the generic part of the parsing algorithm. Specifically, we will show how to answer the following types of query:

• Top-down: given an s-component representation C of some s-graph and an algebra operation o, enumerate all the rules C → o(C_1, ..., C_k) in D_SG. This asks how a larger sub-s-graph can be derived from other sub-s-graphs using the operation o. In the example above, a query for SG and f_R(·) should yield, among others, the rule in (4).

• Bottom-up: given boundary representations β_1, ..., β_k and an algebra operation o, enumerate all the rules β → o(β_1, ..., β_k) in D_SG. This asks how smaller sub-s-graphs can be combined into a bigger one using the operation o. In the example above, a merge query for SG_b and SG_d should yield the rule in (5). Unlike in the top-down case, every bottom-up query returns at most one rule.

The runtime of the complete parsing algorithm is bounded by the number I of different queries to D_SG that we receive, multiplied by the per-rule runtime T that we need to answer each query. The factor I is analogous to the number of rule instances in schema-based parsing (Shieber et al., 1995). The factor T is often ignored in the analysis of parsing algorithms, because in parsing schemata for strings we typically have T = O(1). This need not be the case for graph parsers. In the HRG parsing schema of Chiang et al., we have I = O(n^(k+1) · 3^(d(k+1))), where k is the tree-width of the HRG. In addition, each of their rule instances takes time T = O(d(k + 1)) to actually calculate the new item.

Below, we show how we can efficiently answer both bottom-up and top-down queries to D_SG. Every s-graph grammar has an equivalent normal form in which every constant describes an s-graph with a single edge. Assuming that the grammar is in this normal form, queries of the form β → g (resp. C → g), where g is a constant of the HR algebra, are trivial and we will not consider them further. Table 1 summarizes our results.
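The bottom-up forget query described in the next subsection can be phrased compactly on the (edge set, source map) encoding used in the earlier sketches. The following Python fragment is our own illustration, with incident_edges standing in for a hypothetical precomputed adjacency lookup on the input graph.

```python
# Bottom-up forget query beta -> f_name(beta'), per the description
# in the next subsection. incident_edges(node) returns the set of
# edge ids incident to a node in the input graph SG (assumed helper).

def forget_query(br, name, incident_edges):
    edges, sources = br
    node = sources[name]
    # The result is extensible only if every edge at the forgotten
    # source is already in-boundary.
    if not incident_edges(node) <= edges:
        return None
    new_sources = {k: v for k, v in sources.items() if k != name}
    # Keep only the edges still incident to a remaining source.
    keep = set().union(*(incident_edges(v) for v in new_sources.values()))
    return (edges & keep, new_sources)

# Example on Fig. 2(a)-style data (adjacency is hypothetical):
adj = {4: {"d", "e"}, 5: {"f", "g", "h"}}
br = ({"d", "e", "f", "g"}, {"A": 4, "B": 5})
print(forget_query(br, "A", lambda v: adj.get(v, set())))
# -> ({'f', 'g'}, {'B': 5})
```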
Bottom-up decomposition

Forget and rename. Given a boundary representation β′ = (E′, φ′), answering the bottom-up forget query β → f_A(β′) amounts to verifying that all edges incident to φ′(A) are in-boundary in β′, since otherwise the result would not be extensible. This takes time O(d). We then let β = (E, φ), where φ is like φ′ but undefined on A, and E is the set of edges in E′ that are still incident to a source in φ. Computing β thus takes time O(d + s). The rename operation works similarly, but since the edge set remains unmodified, the per-rule runtime is O(s). A BR is fully determined by specifying, for each source name, its node and in-boundary edges, so there are at most O((n · 2^d)^s) different BRs. Since the result of a forget or rename rule is determined by the child β′, this is an upper bound on the number I of rule instances of forget or rename.

Merge. We can check whether T(β_1) || T(β_2) is defined by going through the conditions of Lemma 2. The only nontrivial condition is (c). In order to check it efficiently, we precompute a data structure which contains, for any two nodes u, v ∈ V, the length of the shortest undirected path u = v_1 ↔ ... ↔ v_k = v and the last edge on this path. This can be done in time O(n^3) using the Floyd-Warshall algorithm. Checking (c) for every source pair then takes time O(s^2) per rule, but because sources that are common to both β_1 and β_2 automatically satisfy (c) due to (a), one can show that the total runtime of checking (c) for all merge rules of D_SG is O(n^s · 3^(ds) · s). Observe finally that there are I = O(n^s · 3^(ds)) instances of the merge rule, because each of the O(ds) edges that are incident to a source node can be either in β_1, in β_2, or in neither. Therefore the runtime for checking (c) amortizes to O(s) per rule. The Floyd-Warshall step amortizes to O(1) per rule for s ≥ 3; for s ≤ 2 the node table can be computed in amortized O(1) using more specialized algorithms. This yields a total amortized per-rule runtime T for bottom-up merge of O(ds).

Top-down decomposition

For the top-down queries, we specify sub-s-graphs in terms of their s-component representations. The number I of instances of each rule type is the same as in the bottom-up case because of the one-to-one correspondence of s-component and boundary representations. We focus on merge and forget queries; rename is as above.

Merge. Given an s-component representation C = (C, φ), a top-down merge query asks us to enumerate the rules C → ||(C_1, C_2) such that T(C_1) || T(C_2) = T(C). By Lemma 1, we can do this by using every distribution of the s-components in C over C_1 and C_2 and restricting φ accordingly. This brings the per-rule time of top-down merge to O(ds), the maximum number of s-components in C.

Block-cutpoint graphs. The challenging query to answer top-down is forget. We first describe the problem and introduce a data structure that supports efficient top-down forget queries. Consider top-down forget queries on the sub-s-graph SG_1 drawn in bold in Fig. 2(a). An algorithm for top-down forget must be able to determine whether promoting a node, i.e. making an internal node into a source, splits an s-component or not. To do this, let G be the input graph. We create an undirected auxiliary graph G_U from G and a set U of (source) nodes. G_U contains all nodes in V \ U, and for each edge e that is incident to a node u ∈ U, it contains a node (u, e). Furthermore, G_U contains undirected versions of all edges in G; if an edge e ∈ E is incident to a node u ∈ U, it becomes incident to (u, e) in G_U instead. The auxiliary graph G_{4,5} for our example graph is shown in Fig. 2(d). Two edges are connected in G_U if and only if they are equivalent with respect to U in G. Therefore, promotion of u splits s-components iff u is a cutpoint in G_U, i.e. a node whose removal disconnects the graph.
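Cutpoints (articulation points) and biconnected components are standard graph-theoretic notions, so the check above is easy to prototype. The sketch below uses the networkx library on a small hypothetical auxiliary graph loosely modeled on Fig. 2(d); in the actual parser this information is precomputed for all source sets U with |U| ≤ s − 1.

```python
# Promotion of a node u splits an s-component iff u is a cutpoint
# (articulation point) of the auxiliary graph G_U. The graph below is
# a hypothetical stand-in for Fig. 2(d), not data from the paper.
import networkx as nx

G_U = nx.Graph()
G_U.add_edges_from([(1, 2), (2, 3), (3, 1),      # a biconnected triangle
                    (3, "(4,d)"), (3, "(4,e)")]) # edges hanging off node 3

cutpoints = set(nx.articulation_points(G_U))
print(cutpoints)                        # {3}: promoting 3 splits parts off
print(list(nx.biconnected_components(G_U)))  # the BCCs of G_U
```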
Cutpoints can be characterized as those nodes that belong to multiple biconnected components (BCCs) of G_U, i.e. the maximal subgraphs in which any node can be removed without disconnecting the segment. In Fig. 2(d), the BCCs are indicated by the dotted boxes; observe that 3 is a cutpoint and 1 is not.

For any given U, we can represent the structure of the BCCs of G_U in its block-cutpoint graph. This is a bipartite graph whose nodes are the cutpoints and BCCs of G_U, and a BCC is connected to all of its cutpoints; see Fig. 2(e) for the block-cutpoint graph of the example. Block-cutpoint graphs are always forests, with the individual trees representing the s-components of G. Promoting a cutpoint u splits the s-component into smaller parts, each corresponding to an incident edge of u; we annotate each edge with that part.

Forget. We can now answer a top-down forget query C → f_A(C′) efficiently from the block-cutpoint graph for the sources of C = (C, φ). We iterate over all components c ∈ C, and then over all internal nodes u of c. If u is not a cutpoint, we simply obtain C′ from C by making u an A-source and leaving the component set unchanged. Otherwise, we also remove c from the component set and add the new s-components annotated on the edges adjacent to u in the block-cutpoint graph. The query returns rules for all C′ that can be constructed in this way.

The per-rule runtime of top-down forget is O(ds), the time needed to compute C′ in the cutpoint case. We furthermore precompute the block-cutpoint graphs for the input graph with respect to all sets U ⊆ V of nodes with |U| ≤ s − 1. For each U, we can compute the block-cutpoint graph and annotate its edges in time O(n · d^2 · s). Thus the total time for the precomputation is O(n^s · d^2 · s), which amortizes to O(1) per rule.

Evaluation

Top-down versus bottom-up. Fig. 5 compares the performance of the top-down and the bottom-up algorithm, on a grammar with three source names, sampled from all 1261 graphs with up to 10 nodes. Each point in the figure is the geometric mean of runtimes for all graphs with a given number of nodes; note the log scale. We aborted the top-down parser after its runtimes grew too large. We observe that the bottom-up algorithm outperforms the top-down algorithm and yields practical runtimes even for nontrivial graphs. One possible explanation for the difference is that the top-down algorithm spends more time analyzing ungrammatical s-graphs, particularly subgraphs that are not connected.

Comparison to Bolinas. We also compare our implementations to Bolinas. Because Bolinas is much slower than Alto, we restrict ourselves to two source names (= treewidth 1) and sampled the grammar from 30 randomly chosen AMRs each of size 2 to 8, plus the 21 AMRs of size one. Fig. 6 shows the runtimes. Our parsers are generally much faster than in Fig. 5, due to the decreased number of sources and grammar size. They are also both much faster than Bolinas. Measuring the total time for parsing all 231 AMRs, our bottom-up algorithm outperforms Bolinas by a factor of 6722. The top-down algorithm is slower, but still outperforms Bolinas by a factor of 340.

Further analysis. In practice, memory use can be a serious issue. For example, the decomposition grammar for s = 3 for AMR #194 in the corpus has over 300 million rules. However, many uses of decomposition grammars, such as sampling for grammar induction, can be phrased purely in terms of top-down queries.
The top-down algorithm can answer these without computing the entire grammar, alleviating the memory problem.

Finally, we analyzed the asymptotic runtimes in Table 1 in terms of the maximum number d · s of in-boundary edges. However, the top-down parser does not manipulate individual edges, but entire s-components. The maximum number D_s of s-components into which a set of s sources can split a graph is called the s-separability of G by Lautemann (1990). We can analyze the runtime of the top-down parser more carefully as O(n^s · 3^(D_s) · ds); as the dotted line in Fig. 5 shows, this predicts the runtime well. Interestingly, D_s is much lower in practice than its theoretical maximum. In the "Little Prince" AMR-Bank, the mean of D_3 is 6.0, whereas the mean of 3 · d is 12.7. Thus exploiting the s-component structure of the graph can improve parsing times.

Conclusion

We presented two new graph parsing algorithms for s-graph grammars, framed in terms of top-down and bottom-up queries to a decomposition grammar for the HR algebra. Our implementations outperform Bolinas, the previously best system, by several orders of magnitude. We have made them available as part of the Alto parser.

A challenge for grammar-based semantic parsing is grammar induction from data. We will explore this problem in future work. Furthermore, we will investigate methods for speeding up graph parsing further, e.g. with different heuristics.
Memory-impairing effects of local anesthetics in an elevated plus-maze test in mice

Post-training intracerebroventricular administration of procaine (20 μg/μl) and dimethocaine (10 or 20 μg/μl), local anesthetics of the ester class, prolonged the latency (s) in the retention test of male and female 3-month-old Swiss albino mice (25-35 g body weight; N = 140) in the elevated plus-maze (mean ± SEM for 10 male mice: control = 41.2 ± 8.1; procaine = 78.5 ± 10.3; 10 μg/μl dimethocaine = 58.7 ± 12.3; 20 μg/μl dimethocaine = 109.6 ± 5.73; for 10 female mice: control = 34.8 ± 5.8; procaine = 55.3 ± 13.4; 10 μg/μl dimethocaine = 59.9 ± 12.3; and 20 μg/μl dimethocaine = 61.3 ± 11.1). However, lidocaine (10 or 20 μg/μl), a local anesthetic of the amide class, failed to influence this parameter. Local anesthetics at the dose range used did not affect the motor coordination of mice exposed to the rota-rod test. These results suggest that procaine and dimethocaine impair some memory process(es) in the plus-maze test. These findings are interpreted in terms of non-anesthetic mechanisms of action of these drugs on memory impairment and also confirm the validity of the elevated plus-maze for the evaluation of drugs affecting learning and memory in mice.

Introduction

Several investigators have shown that learning and memory can be modified by drugs which affect the central dopamine (DA) neuronal system (1-5). Thus, in active or passive avoidance tasks, activation of postsynaptic DA receptors has been reported to induce an impairment of memory (2,3,5). Moreover, it is known that the mesolimbic DA system is involved in the processes underlying memory consolidation (6), and there is a growing body of evidence that some chemically related local anesthetics, besides cocaine, have appreciable DA agonist activity. For example, the ester class local anesthetics procaine and dimethocaine have been reported to be self-administered by laboratory animals (7,8). Among them, dimethocaine was shown to be the local anesthetic with the greatest affinity for the DA uptake binding site (9) and to produce other behavioral effects consistent with DA agonist activity, such as rotational behavior in rats with lesions of the substantia nigra (10), reinforcing anxiogenic effects in mice (11) and other cocaine-like discriminative responses (12,13). Procaine is probably the most extensively studied local anesthetic after cocaine and has been shown to display comparatively low affinity for the DA transporter; its reinforcing effects seem to be less potent than those of dimethocaine (8).

Recent studies have suggested that the elevated plus-maze test may be useful for evaluating learning and memory in mice (14-17). In light of these considerations, the present study was designed to compare the ability of local anesthetics of both the ester and the amide class, procaine, dimethocaine and lidocaine, to affect elevated plus-maze learning in male and female mice. In addition, we examined the possible interference of these drugs with motor coordination.

Animals

Male and female Swiss albino mice weighing 25-35 g, from our own colony, were kept in cages in groups of 15-20 with free access to food and water and maintained in a room with controlled temperature (23 ± 1 °C) and on a 12:12-h light-dark cycle (lights on 7:00 a.m.). Female mice were tested without monitoring the estrous cycle.

Drugs

Procaine HCl, lidocaine HCl (Sigma Chemical Co., St.
Louis, MO) and dimethocaine HCl (Hoffman-La Roche, Nutley, NJ) were dissolved in saline and injected intracerebroventricularly (icv). All drugs were administered at doses of 10 or 20 µg/µl. A Hamilton microsyringe and injection needle were used for icv injections by the "free hand" technique, according to the procedure described by Laursen and Belknap (18). The drugs were injected in a volume of 5 µl/mouse over 1 min under brief ether anesthesia, with the bregma fissure as the reference for the injection needle. The control group received a similar volume of saline under the same conditions. After the experiment, the site of injection was checked by histological examination; animals presenting any signs of needle misplacement or hemorrhage were discarded. Neither insertion of the needle nor injection of 5 µl of saline had a significant influence on gross behavioral responses.

Elevated plus-maze test

The plus-maze was made of plywood and consisted of two open arms (21.5 x 7.5 cm) and two enclosed arms (21.5 x 7.5 x 20 cm) which extended from a central 7.5 x 7.5 cm platform. The plus-maze was elevated 38 cm above the floor, and the enclosed arms were painted black.

The test procedure was similar to that described by Itoh et al. (14). On the 1st day (training), a mouse was placed at the end of one open arm, facing away from the central platform, and the latency for the mouse to move from the open arm to one of the enclosed arms was recorded. Following entry into the arm, the animal was allowed to explore the apparatus for 30 s. Twenty-four hours later, the second trial (retention test) was performed. The drugs were administered immediately after the 1st training day, i.e., soon after the mouse was removed from the maze.

For the plus-maze task, 140 mice were divided into two sets (males and females), each consisting of seven groups of 10 mice.

Rota-rod test

Additional groups of mice were divided into seven experimental groups of 7-10 animals of each sex and tested on the rota-rod. The rota-rod apparatus consisted of a rotating bar (2.5 cm in diameter) covered with sandpaper and revolving at 7 rpm. Mice were placed upon the bar and the time spent on the rotating bar was recorded up to 120 s (day 1). Immediately after the session on day 1, mice were injected icv with procaine, dimethocaine or lidocaine. Twenty-four hours later, performance on the rota-rod was again assessed for all animals.
A quite similar effect of these drugs on performance of mice in the retention test was also found in male mice (Figure 1B).Thus, procaine and dimethocaine prolonged the latency of mice to move from the open arm to the enclosed arm of the plus-maze on day 2.Although no statistical difference in terms of gender was detected by two-way ANOVA, the prolongation of latency of male mice was statistically significant compared to control animals (Figure 1B).Again, lidocaine (10 or 20 µg/µl) did not affect the shortened latency on the 2nd day, thus showing the same pattern presented by control animals (Figure 1B). In order to examine the eventual interference of motor impairing effects of these treatments with plus-maze performance, additional groups of mice were tested on the rota-rod apparatus.Local anesthetics at the Table 1 -Influence of local anesthetics injected icv immediately after the session on day 1 on the ability of male mice to remain on the rota-rod.dose range used in the present study did not affect the motor coordination of mice exposed to the rota-rod test (Table 1). Discussion The local anesthetics of the ester class procaine and dimethocaine injected icv into mice of both sexes prolonged the latency to move from the open to the one enclosed arm of the elevated plus-maze.However, the same doses of lidocaine, an amide class anesthetic, did not alter this response.Itoh et al. (14) and other investigators (15)(16)(17) have suggested that the changes in the latency of mice and rats to go from an open arm to an enclosed arm of the elevated plus-maze are an indicator of learning and memory.However, it should be kept in mind that such an experimental situation involves an aversive experience of height and openness of the maze that may induce escape learning responses in the animals.Indeed, in a recent serie of studies, Graeff et al. (19)(20)(21) reported an interesting procedure using the socalled T-maze in which the same rat is tested for escape latency (time to escape from open arm) and for inhibitory avoidance latency (time to withdrawal from the enclosed arm).Thus, these investigators suggested that the elevated T-maze is a potentially useful model for the simultaneous study of anxiety and memory. In the present study it is clear that mice were tested only for one task, the one-way escape from the open arm, a response which may be considered to measure the unconditioned fear of the animals.Therefore, we could not separate the influence of fear/ anxiety levels on the evaluation of memory in the elevated plus-maze.Nevertheless, taking into account the finding of Miyazaki et al. (17) that anxiolytic and anxiogenic drugs do not affect learning or memory as measured by the elevated plus-maze test in mice, and considering that in our study the retention test was carried out 24 h after drug treatment and that the changes in latency were achieved without overt motor incoordination of mice, it seems reasonable to suggest that procaine and dimethocaine impaired some memory process(es) involved in the plus-maze task. 
Although the exact mechanism by which procaine and dimethocaine exert their memory-impairing effects has yet to be identified, it is tempting to speculate about a role for dopaminergic rather than local anesthetic mechanisms in these responses.As mentioned before, both drugs display various behavioral effects consistent with DA agonist activity (8,10,11,13).Regarding biochemical properties, it is known that procaine and dimethocaine can affect the binding site of DA (9).Indeed, the different affinities reported by Ritz et al. (9) for the DA-binding site of dimethocaine (K i = 1.29 µM), procaine (K i = 104 µM) and lidocaine (K i = 3298 µM) may explain the different results of the present study.It is noteworthy that lidocaine injection failed to impair the performance of mice in the elevated plus-maze task.Additional support for this hypothesis comes from the recent study of Graham and Balster (12) showing that local anesthetics belonging to the amide class, such as lidocaine, do not affect the dopaminergic system.Other mechanisms must definitely exist regarding procaine and dimethocaine modulation of memory processes. The present data suggest that procaine and dimethocaine prolong the latency of retention in mice submitted to the plus-maze test, a memory impairing effect probably involving a non-anesthetic mechanism, and that this response is not gender related. Figure 1 - Figure 1 -Effects of intracerebroventricular administration of lidocaine, dimethocaine and procaine on the latency of female (A) and male (B) mice to move from the open arm to the enclosed arm of the plus-maze.Data are reported as means ± SEM of 10 mice.*P≤0.05 compared to the 1st day (Student ttest).**P≤0.05compared to the control group (Newman-Keuls test). Day 2 , performance measured 24 h after the training session.There were no statistical differences amongst anesthetics or days.TreatmentDose (µg/µl) Time (s) on the rota-
Method for the Isolation and Identification of mRNAs, microRNAs and Protein Components of Ribonucleoprotein Complexes from Cell Extracts using RIP-Chip

As a result of the development of high-throughput sequencing and efficient microarray analysis, global gene expression analysis has become an easy and readily available form of data collection. In many research and disease models, however, steady state levels of target gene mRNA do not always directly correlate with steady state protein levels. Post-transcriptional gene regulation is a likely explanation of the divergence between the two. Driven by the binding of RNA binding proteins (RBPs), post-transcriptional regulation affects mRNA localization, stability and translation by forming a ribonucleoprotein (RNP) complex with target mRNAs. Identifying these unknown de novo mRNA targets in the RNP complex from cellular extracts is pivotal to understanding the mechanisms and functions of the RBP and their resulting effect on protein output. This protocol outlines a method termed RNP immunoprecipitation-microarray (RIP-Chip), which allows for the identification of specific mRNAs associated with the ribonucleoprotein complex under changing experimental conditions, along with options to further optimize an experiment for the individual researcher. With this important experimental tool, researchers can explore the intricate mechanisms associated with post-transcriptional gene regulation as well as other ribonucleoprotein interactions.

Prepare mRNP Lysate

1. Grow and harvest exponentially growing tissue culture cells to produce between 2-5 mg of total protein for each RIP.
   a. Two P150 culture dishes are usually sufficient.
   b. For each RBP being investigated, total cell number and protein amount must be optimized to maximize appropriate target mRNA and RBP interactions.
   c. RIP for qRT-PCR analysis may require less cellular lysate (approximately 400 μg total protein) due to its amplification and detection method, especially for high-abundance RBPs such as HuR, AUF1 and TIAR.

Prepare Antibody-Coated Beads

1. Pre-swell protein A Sepharose (PAS) beads overnight in NT2 buffer (3-4 volumes) with 5% BSA. Store at 4 °C.
   a. Long-term storage of up to several months at 4 °C is possible when supplemented with 0.1% sodium azide.
   b. Protein Sepharose beads should be chosen based on the isotype of the target RBP antibody. Proteins A, G and A/G all have specific isotype targets and vary in target affinity. Protein A Sepharose beads were used here based on isotype specificity and affinity for the HuR protein antibody.
2. Before use, remove excess NT2 buffer so that the final beads-to-buffer ratio is 1:1.
   a. The mixture may be stored for several weeks at 4 °C when supplemented with 0.1% sodium azide.
   b. An isotype-matched antibody or whole normal serum from the same species should be used in parallel as an antibody control against background RNA.
5. Add the appropriate antibody to the bead mix and incubate overnight, tumbling end over end at 4 °C.
   a. Optimize the antibody titer for the specific protein being investigated (1, 5, 10 or 30 μg of antibody is usually sufficient).
6. Prepare antibody-coated beads immediately before use by washing 5 times with 1 ml of ice-cold NT2 buffer.
   a. Wash the bead mix by centrifugation at 13,000 x g for 1-2 min at 4 °C.
   b. Carefully remove the maximum amount of supernatant with a hand pipettor or aspirator, taking care not to disturb the pellet.
   c. Washing helps remove unbound antibody as well as RNase contaminants from the antibody mixture.
7. Once the final wash has been completed, resuspend the beads in 700 μl of ice-cold NT2 buffer and treat with RNase inhibitors to protect the target mRNAs: 10 μl of RNase Out (40 U/μl), 10 μl of 100 mM DTT and 15 μl of EDTA (15 mM). Bring the volume to 1,000 μl with NT2 buffer.
   a. Vanadyl ribonucleoside complexes are not used because of the inhibitory effect of EDTA.

Immunoprecipitation

1. Optional preclear step:
   a. To reduce background, beads and control antibody may be used to preclear the lysate. This may reduce signal in the output.
   b. This step may be necessary to reduce background when doing IP followed by microarray; it is generally not necessary for IP followed by qRT-PCR.
   c. Preclear with 15 μg of isotype control for 30 min at 4 °C, tumbling end over end.
   d. Add 50 μl of pre-swollen PAS beads not coated with antibody, from step 2.1.
   e. Incubate 30 min at 4 °C with rotation end over end.
   f. Centrifuge at 10,000 x g at 4 °C and save the supernatant for IP.
2. Add 100 μl of isolated cleared lysate (approximately 2-5 mg) to the prepared antibody mix.
   a. Diluting the lysate will help to reduce background and nonspecific binding.
   b. The amount of lysate input may vary depending on the detection method and on the abundance of the RBP or the efficiency of the RBP antibody, as noted in step 1.1.c.
3. (Optional) Immediately mix the tube by gently flicking it several times, briefly centrifuge at 10,000 x g at 4 °C to pellet the beads, and immediately remove 100 μl of supernatant as a total input mRNA representation for qPCR analysis using standard RNA isolation techniques.
   a. This step is to confirm that the input lysate RNA is optimal for IP and should only be performed as a check step or as a troubleshooting step following a RIP with poor RNA results.
4. Wrap the tube in parafilm to ensure tight sealing and incubate at 4 °C for 2 to 4 hr, tumbling end over end.
   a. The incubation time should be optimized based on target abundance and minimized to avoid complex rearrangement or degradation; for some RBPs, shorter incubations may be more optimal.
5. Pellet the beads at 5,000 x g for 5 min at 4 °C and save the supernatant for potential analysis by Western blot. Store aliquots at -80 °C.
   a. Aliquots of supernatant with high amounts of residual target protein may indicate a failure of the protein to be precipitated by the Sepharose beads.
6. As described previously, wash the beads 5 times with 1 ml of ice-cold NT2 buffer and centrifugation (5,000 x g for 5 min at 4 °C), then remove the supernatant with a hand pipettor or an aspirator.
   a. More stringent wash methods may be used to reduce background by supplementing the NT2 buffer with sodium deoxycholate, urea or SDS.

Representative Results

If the procedure is optimized and performed correctly, the immunoprecipitation should yield significant enrichment of mRNA targets. Typically, depending on the RBP and its mRNA target(s), we see enrichment of approximately 10- to 50-fold when assessed by qRT-PCR. Many targets of RBPs can be discovered en masse using microarray analysis; however, this method is more sensitive to degradation than qRT-PCR. Depending on the RBP, the number of targets and the efficiency of the reaction, microarray analysis can reveal hundreds of novel targets, or it may only uncover a few, if any. For example, one of the better characterized RNA binding proteins, HuR, post-transcriptionally regulates the expression and translation of many important physiological genes (1,2).
Isolation of the HuR ribonucleoprotein complex via RIP-Chip in breast cancer cell lines, for instance, revealed enrichment of several important known HuR targets, including β-actin, using qRT-PCR as shown in Figure 1. In both cancer cell lines β-actin is enriched 12- to 15-fold. Typically, when performed properly, we see a significant enrichment of β-actin in a variety of cell lines. However, if the RIP does not reveal any significant enrichment for β-actin, this indicates a problem with the RIP and the procedure may need to be repeated. Furthermore, microarray analysis of immunoprecipitated samples from these cell lines revealed distinct expression subsets of HuR targets in estrogen receptor (ER) positive MCF-7 cancer cells versus ER negative MB-231 breast cancer cell lines, as demonstrated in Figure 2 [1]. These targets fall into several categories: known and unknown HuR targets that were either associated or not associated with cancer. For example, CALM2 and CD9 are both cancer genes which were not previously identified as HuR targets. Using the microarray and confirming with qRT-PCR, CALM2 and CD9 were found to be 5- to 180-fold enriched in the HuR pellet, indicating a prominent interaction between the HuR protein and these target genes.

Figure 1. Immunoprecipitation and RIP in MB-231 (ER-) and MCF-7 (ER+) breast cancer cells. Immunoprecipitations were performed from MB-231 or MCF-7 cell lysates using anti-HuR monoclonal antibody (3A2) and an IgG1 isotype control. A. IP Western of HuR revealed the expected size band as detected by 3A2. The panel on the right shows the amounts of HuR in the input lysates used from both cell lines, with β-tubulin as a loading control. B. Verification by quantitative RT-PCR showed fifteen- and eleven-fold enrichments of β-actin, a known HuR target, in the 3A2 IPs from MB-231 and MCF-7, respectively. All ΔΔCT values were normalized to GAPDH. Experiments were done in duplicate (n = 2).

Figure 2. HuR RIP-Chip identifies distinct genetic profiles in ER+ and ER- breast cancer cells. HuR immunoprecipitations were performed from MB-231 or MCF-7 cell lysates using HuR antibody and an IgG1 isotype control, hybridized to Illumina Sentrix arrays (47,000 genes). Control signals were subtracted. Results represent cumulative data from 12 different arrays. Experiments were done in triplicate (n = 3) for each cell line with matching controls. Scales are log2.
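As a rough illustration of the fold-enrichment arithmetic reported in Figure 1, the sketch below applies the standard 2^(-ΔΔCT) method with GAPDH as the reference transcript; the CT values are hypothetical placeholders, not data from the experiments above.

```python
def fold_enrichment(ct_target_ip, ct_ref_ip, ct_target_ctrl, ct_ref_ctrl):
    """Fold enrichment of a target mRNA in the RBP IP over the isotype-control IP,
    normalized to a reference transcript (e.g. GAPDH), via 2^(-DeltaDeltaCT)."""
    delta_ip = ct_target_ip - ct_ref_ip        # DeltaCT in the RBP (3A2) IP
    delta_ctrl = ct_target_ctrl - ct_ref_ctrl  # DeltaCT in the IgG control IP
    return 2.0 ** (-(delta_ip - delta_ctrl))

# hypothetical CT values; gives roughly 14-fold enrichment, in the 12-15x range
print(fold_enrichment(22.1, 18.0, 26.3, 18.4))
```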
Discussion
Due to the nature of this experiment, optimization and experience will be the only guaranteed ways to successfully acquire the intended results. In many steps of this procedure, temperature and efficient handling of the reagents and products are critically important. Proper planning and execution of technique will help ensure that the experiment is performed in an appropriate timeframe at the optimal temperatures recommended. A major issue with RNA isolation experiments is the sensitivity of RNAs to degradation by RNases. All reagents need to be RNase-free and stored or used in RNase-free containers. This is a critical step in ensuring the integrity of your mRNA sample.

One potential problem is having low or even no signal from the RNA isolated by RIP-Chip. Although there may be signal from total RNA, this may be the result of inadequate binding protein being pulled down by the beads. The first troubleshooting step is to confirm that the cellular lysate being used has adequate expression of the specific RBP. Upon confirmation, protein may be isolated after the final NT2 wash, resuspended in Laemmli buffer or another appropriate denaturing buffer, and heated at 95 °C for 5 min. Western blot analysis may be used on these samples in coordination with input lysate as well as negative controls to ensure sufficient pull-down of the associated protein. Furthermore, because lysing the cell is required to access these components, the potential for abnormal and unwanted interactions between normally separated proteins and mRNA may be introduced. These interactions could potentially bind and "soak up" your target mRNAs or binding proteins through nonspecific interactions. Additionally, proteins in these varying conditions can fold in multiple variations and their binding motifs may become inaccessible to their target mRNAs, preventing their interactions. Both of these strengthen the importance of working efficiently, as well as utilizing the optimal temperatures listed, to limit these unwanted interactions. Additionally, optimization of washing conditions for each specific target protein will be critical to maximize the purity of the interaction. More stringent washing conditions may be needed. For example, the wash buffer may be supplemented with SDS or an appropriate amount of urea to reduce nonspecific interactions and background in your signal output. This will be completely dependent on the experimenter's target RBP as well as the target mRNA in their unique physiological conditions. Some conditions will not be suitable for certain mRNA analysis tools, which should be noted in the preparation of samples.

Finally, though RIP is successful in the enrichment of RNA-RBP interactions, a well-known issue with the RIP-Chip method is the inability to identify the specific binding domains of the RBP on the transient mRNA targets. Several cross-linking techniques can be used followed by RIP to isolate unique sequence targets; however, the use of short-wave UV tends to lead to nucleic acid damage. A new method known as PAR-CLIP, or photoactivatable ribonucleoside crosslinking and immunoprecipitation, employs long-wave UV to incorporate thiouridine into nascent RNA, allowing the identification of unique binding sites from both stable and transient RNA interactions. Overall, RIP-Chip has been established as an excellent tool used to isolate and study the interactions between RNA binding proteins and their mRNA targets by our group as well as many other research groups. Though sensitive in nature and practice, proper execution of this procedure will yield the isolation of these RNP complexes, which until recently have been inaccessible for discovery and analysis.

Disclosures
The authors declare that they have no competing interests.
2018-04-03T03:31:49.277Z
2012-09-29T00:00:00.000
{ "year": 2012, "sha1": "11d53c93ad06df62e46645b876ccf46a6be74312", "oa_license": "CCBYNCND", "oa_url": "https://www.jove.com/pdf/3851/method-for-isolation-identification-mrnas-micrornas-protein", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "11d53c93ad06df62e46645b876ccf46a6be74312", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
33901282
pes2o/s2orc
v3-fos-license
A plethora of generalised solitary gravity-capillary water waves

The present study describes, first, an efficient algorithm for computing capillary-gravity solitary wave solutions of the irrotational Euler equations with a free surface and, second, provides numerical evidence of the existence of an infinite number of generalised solitary waves (solitary waves with undamped oscillatory wings). Using conformal mapping, the unknown fluid domain, which is to be determined, is mapped into a uniform strip of the complex plane. In the transformed domain, a Babenko-like equation is then derived and solved numerically.

Introduction
Despite numerous studies devoted to capillary-gravity waves, this topic still fascinates researchers. The review by Dias and Kharif [21] summarises in broad lines what was known on this subject by the end of the twentieth century. Monographs by Okamoto and Shōji [44] and by Vanden-Broeck [51] are other exhaustive sources of information on various types of capillary-gravity waves. The travelling capillary-gravity waves of permanent form have been most deeply understood in the framework of weakly nonlinear and weakly dispersive equations, such as the Korteweg-de Vries (KdV) equation and the extended Korteweg-de Vries equation with fifth-order derivatives (KdV5). These equations model the unidirectional propagation of long waves in shallow water with some weak capillary effects. Despite the apparent simplicity of the KdV5 model, it possesses a rich family of solutions. One of them consists of the so-called generalised solitary waves. These are solitary-wave pulses that are homoclinic to small-amplitude oscillatory waves. The formation of these waves is mathematically justified by the presence of a resonance in the dynamical system corresponding to the travelling waves [31]. The existence of generalised solitary waves can be deduced from the existence of the phase shift when the action of the stationary problem changes sign [8]. One can also use the existence of Smale's horseshoe dynamics on the zero energy set [9]. Then, it is relatively straightforward to construct a symbolic orbit which represents a generalised solitary wave. By using the first method one obtains a continuum family of solutions, while the latter gives "only" a countable set of orbits. Benilov et al. [5] showed that, for sufficiently weak surface tensions (i.e. small magnitude of the fifth-order term in KdV5), there cannot exist generalised solitary waves. In fact, the central core necessarily 'radiates' into the oscillatory tail (also known as the "wings"). On the other hand, for non-vanishing values of the surface tension, Grimshaw and Joshi [26] constructed a one-parameter family of generalised solitary waves for the KdV5 equation using the methods of exponential asymptotics. This family is characterised by the phase shift of the trailing oscillations. Finally, the existence of multi-pulse solutions was shown numerically and analytically by Champneys and Groves [13]. The main difficulties in computing numerically (to spectral accuracy) the generalised solitary waves for the KdV5 equation are well described by Boyd [7]. The stability of multi-pulse solitary waves was studied by Chardard et al. [15] using the Maslov theory. Much less is known for the full water wave problem. However, the existence of generalised capillary-gravity solitary waves for the full Euler equations was shown by Sun [48,49].
As highlighted by Beale [4], the generalised solitary waves stem from a resonance with periodic waves of the same speed. By using the method of boundary integral equations and Newton iterations to solve the resulting discrete system, Vanden-Broeck and his collaborators computed a plethora of various capillary-gravity travelling waves [27] (see also his monograph [51] and the references therein). Recently, generalised solitary waves were computed for the full water wave problem [14]. The main purpose of the present paper is to show that there is a plethora (likely an infinite number) of generalised solitary waves for the full Euler equations.

In the present study, we consider a formulation for travelling capillary-gravity solitary waves by following the pioneering work of Babenko [3]. This formulation is based on the conformal mapping technique on the one hand [23,24,45] and, on the other hand, on a variational principle [36] to derive the equations. The conformal mapping technique has been successfully used to compute numerically periodic gravity waves in deep water [16] and in finite depth [30,34]. More recently, this approach was adapted also to periodic capillary-gravity waves in deep water [40]. The advantage of the Babenko formulation is that it does not add nonlinearities to the Euler equations in the conformally mapped domain. For instance, the Babenko equation being quadratic in nonlinearity for pure gravity waves, like the Euler equation, it is easily solved numerically for solitary waves [20,22]. Moreover, the conformal mapping allows one to compute efficiently all the physical fields of interest even in the bulk of the fluid, to unveil the underlying internal structure of the flow [22]. In order to solve numerically the Babenko equation for capillary-gravity solitary waves, it is discretised using a Fourier-type pseudo-spectral method [6]. The resulting system of nonlinear equations is solved using the well-known Levenberg-Marquardt (LM) method [29,38]. This algorithm represents a mixture between the steepest descent far from the solution and the classical Newton method in the vicinity of the root [41]. It has been shown to be a robust nonlinear solver even in problems with millions of unknowns [35].

The paper is organised as follows. In Section 2, we present the main constitutive assumptions of the mathematical model, with the conformal mapping technique detailed in Section 2.1, the Lagrangian formulation in Section 2.2 and the Babenko equation in Section 2.3. The numerical resolution is explained in Section 3 and concerns the Fourier collocation discretization and the description of the LM algorithm. Numerical experiments are presented in Section 4; the study is focused on the generation of classical, generalised and multi-pulse solitary waves. Finally, the main conclusions and perspectives of this study are outlined in Section 5.

Mathematical model
We consider a steady two-dimensional potential flow induced by a solitary wave in a horizontal channel of constant depth. The fluid is assumed to be inviscid and homogeneous. The pressure is equal to the surface tension at the impermeable free surface, and the fixed horizontal seabed is impermeable as well. The flow is driven by the volumetric gravity force (directed downward) and by the capillary forces at the free surface. Let (x, y) be a Cartesian coordinate system moving with the wave, x being the horizontal coordinate and y being the upward vertical one.
The wave is aperiodic such that x = 0 is the abscissa of the main crest (or trough for waves of depression). By convention, y = -d, y = η(x) and y = 0 denote the position of the bottom, of the free surface and of the mean water level, respectively. The latter implies that the Eulerian average ⟨η⟩ of the free surface is zero (equation (2.1)), and a ≡ η(0) denotes the wave amplitude. The sketch of the domain is represented in Figure 1. Note that in practical numerical computations it is not possible to take L = ∞ for generalised solitary waves, so a large L is used instead and the mean water level is not exactly zero. This important point is further discussed below.

Let φ, ψ, u and v be the velocity potential, the stream function, and the horizontal and vertical velocities, correspondingly, such that u ≡ ∂_x φ = ∂_y ψ and v ≡ ∂_y φ = -∂_x ψ. It is convenient to introduce the complex potential f ≡ φ + iψ (with i² = -1) and the complex velocity w ≡ u - iv, which are holomorphic functions of z ≡ x + iy (i.e., w = df/dz). The complex conjugate is denoted with a star (e.g., z* = x - iy), while subscripts 'b' denote quantities written at the seabed (e.g., z_b(x) = x - id, φ_b(x) = φ(x, y = -d)) and subscripts 's' denote quantities written at the surface (e.g., z_s(x) = x + iη(x), φ_s(x) = φ(x, y = η(x))). The traces of ψ on the upper and lower boundaries, ψ_s and ψ_b, are constants because the surface and the bottom are streamlines.

The dynamic condition can be expressed in terms of the Bernoulli equation

    u² + v² + 2gy + 2p = B,    (2.2)

where p is the pressure divided by the density, g > 0 is the acceleration due to gravity and B is a Bernoulli constant. At the free surface y = η(x) the pressure p reduces to the effect of the surface tension, i.e., p_s = -τ η_xx (1 + η_x²)^{-3/2}, τ being a (constant) surface tension coefficient. Averaging (2.2) written at the free surface, the definition of the mean level (2.1) yields an equation (2.3) for the Bernoulli constant B.

Let -c be the mean flow velocity. Thus, c is the phase velocity of the wave observed in the frame of reference without the mean flow. For solitary waves without wings (i.e., not for generalised solitary waves), -c is also the horizontal velocity in the far field (i.e., u → -c as x → ±∞). It follows that B = c² for (localised) solitary waves, but this is not the case for generalised solitary waves. In order to characterise the waves, we introduce the dimensionless parameters Fr² ≡ c²/(gd) and Bo ≡ τ/(gd²), Fr and Bo being, respectively, the Froude and Bond numbers.

Conformal mapping
The fluid domain is conformally mapped onto a uniform strip of the complex ζ-plane, ζ ≡ α + iβ, where α ≡ Re(ζ) and β ≡ Im(ζ). The conformal mapping yields the Cauchy-Riemann relations x_α = y_β and x_β = -y_α, from which the complex velocity and the velocity components follow. With the change of dependent variables, the Cauchy-Riemann relations X_α = Y_β and X_β = -Y_α hold, while the bottom (β = -d) and the free surface (β = 0) impermeabilities yield the corresponding boundary conditions. We also define the mean surface elevation η̄ in the conformal domain; note that, as for the physical space, in practice Λ is finite. The functions X and Y can be expressed in terms of X_b (i.e., the function X written at the bottom) as in [17,18], where a star denotes the complex conjugate. Thus, the Cauchy-Riemann relations and the bottom impermeability are fulfilled identically. At the free surface β = 0, (2.6) yields a relation that can be inverted, and hence the relation (2.7) yields an expression (2.8) which relates quantities written at the free surface only.
The relation (2.8) can be trivially inverted giving, in particular, expressions involving the pseudo-differential operators T and C, whose action on a pure frequency is given by the symbols in (2.9). Note that, in the far field of a generalised solitary wave, ∂_α X_s is a periodic function oscillating around (in general) a non-zero constant. Thus, X_s is unbounded as α → ±∞.

Integral quantities
The wave can be characterised by several integral parameters [32,33,39,47]. These quantities are defined relative to the uniform flow of speed -c, i.e., in the laboratory 'fixed' frame of reference where the mean flow is zero. This choice is made because the kinetic energy, for example, is infinite in the reference frame moving with the wave. The Lagrangian density L is defined from these integral relations. The equalities in the integral relations are easily obtained via some trivial derivations, except perhaps (i.v). The latter, first derived by Starr (1947) [47] without surface tension, can be obtained following the derivation of Longuet-Higgins (1974) [32] and exploiting the relation resulting from the momentum flux equation.

Remark 1. Luke's Lagrangian for water waves [36,19] reduces to the Hamilton principle (i.e., the kinetic minus potential energies) if the Laplace equation and the bottom and surface impermeabilities are identically fulfilled. This is precisely the case when using the conformal mapping and the relations derived in Section 2.1, leading in particular to the relation K = ∫_{-∞}^{+∞} ½ c² η C{η} dα, which can then be substituted into the Lagrangian L. Conversely, the relation (i.v) holds only if the equation for the momentum flux is fulfilled. Thus, (i.v) cannot be substituted into L, but it can be used to monitor the accuracy of any resolution procedure.

Remark 2. Some of the integral quantities above are bounded for classical solitary waves only. For periodic waves, similar relations can be derived by averaging over one wavelength instead of the whole domain. This is not so simple for generalised solitary waves.

Babenko's equation
Since we have a Lagrangian at our disposal, an equation for η can be obtained from the variational principle δL = 0, leading to the Euler-Lagrange equation (2.11), which is the Babenko equation [3] for capillary-gravity solitary surface waves. Taking the limit τ → 0, one recovers the Babenko equation for pure gravity solitary waves derived earlier [20,22]. One of the major advantages of Babenko's formulation (compared to other conformal-mapping-based techniques [16,30,40]) consists in reducing the degree of nonlinearity of the original Euler equations. Applying the operator C⁻¹ to both sides of equation (2.11) and splitting linear and nonlinear parts, the Babenko equation can be rewritten in the fixed-point form (2.12), where the linear operator (2.13) acts on a pure frequency through its Fourier symbol. One can easily recover in the linear operator the dispersion relation for capillary-gravity waves [21,51].
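The explicit symbols in (2.9) were lost in extraction. The sketch below shows how such pseudo-differential operators are typically applied on a periodic grid via the FFT; the finite-depth symbols Ĉ(k) = k coth(kd) and T̂(k) = i tanh(kd) used here are the customary ones for Babenko-type formulations in finite depth, and are an assumption (as are the function names and the normalisation), not a quotation of (2.9).

```python
import numpy as np

def apply_multiplier(eta, symbol, Lam):
    """Apply a Fourier-multiplier operator to samples of a 2*Lam-periodic function."""
    N = eta.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * Lam / N)  # collocation wavenumbers
    return np.fft.ifft(symbol(k) * np.fft.fft(eta))

d = 1.0  # water depth in the conformal strip

def C_symbol(k):
    """Assumed symbol k*coth(k*d), with the k -> 0 limit equal to 1/d."""
    out = np.empty_like(k)
    nz = k != 0.0
    out[nz] = k[nz] / np.tanh(k[nz] * d)
    out[~nz] = 1.0 / d
    return out

def T_symbol(k):
    """Assumed symbol i*tanh(k*d)."""
    return 1j * np.tanh(k * d)

# example: act on a localised bump sampled on a uniform grid
Lam = 50.0
alpha = np.linspace(-Lam, Lam, 1024, endpoint=False)
eta = 0.1 / np.cosh(alpha) ** 2
C_eta = np.real(apply_multiplier(eta, C_symbol, Lam))
```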
The numerical method
Here, we describe a numerical procedure for discretising and solving the generalised Babenko equation (2.11).

Pseudospectral discretisation
For Λ > 0 sufficiently large, the periodic problem associated to (2.12) on (-Λ, Λ) is discretised with Fourier collocation techniques. For N ⩾ 1 and on a uniform grid α_j = -Λ + jh, h = 2Λ/N, j = 1, ..., N, the values η(α_j) of the solution of (2.12) are approximated. Here D_α stands for the pseudo-spectral differentiation matrix [11,6,12,50], while C_h and T_h are the discrete versions of the operators defined in (2.9). They are constructed, as D_α, by using the discrete Fourier transform matrix with diagonal entries given by the coefficients displayed in (2.9); the operator T_h is defined in the same way. Thus, the discrete system (3.1)-(3.3) is implemented in Fourier space. Finally, the nonlinear terms are computed by using the Hadamard product of vectors in C^N. For these steady computations the use of the anti-aliasing rule was not necessary.

The Levenberg-Marquardt algorithm
We attempted to treat the system (3.1)-(3.3) using several techniques. When τ = 0 (pure gravity waves) it was successfully solved using the Petviashvili scheme [20,22]. Note that when τ = 0, the nonlinear term (2.14) is homogeneous with degree two, while the Fourier symbol associated to (2.13) is positive for all k since c² > gd. These two properties are key arguments to explain this success [46] (see also [1,2]). However, they are not retained when τ ≠ 0, and the various theories about the behaviour of the Petviashvili scheme cannot be applied here. For the general case τ ≠ 0, several variants of the Newton method emerge as alternatives. The reasons for the failure of classical implementations of this method (explained, e.g., in [7]) were taken into account. Otherwise, the system (3.1)-(3.3) was solved in a nonlinear, least-squares sense, with some standard techniques for this kind of problem presented in the literature (see, e.g., [43] and references therein). The most robust results were obtained by employing the so-called Levenberg-Marquardt (LM) algorithm [29,38]. This is one of the most widely used methods for data-fitting nonlinear problems [35]. For the reader's interest, a brief description of it in this context is given now.

Let r_j(η_h) denote the residual of the j-th component of (3.1), j = 1, ..., N, at any η_h ∈ R^N. The associated least-squares problem consists of minimising f(η_h) = ½ ‖r(η_h)‖², where ‖·‖ stands for the Euclidean norm in R^N. As established in [41], the LM algorithm can be formulated as a damped Gauß-Newton method combined with a trust-region strategy. The Gauß-Newton method is a variant of the Newton method with line search, where the search direction p^(ν) = p_GN^(ν), at each Newton step ν, is obtained by solving the linear least-squares problem

    min_p ‖ J^(ν) p + r^(ν) ‖²,    (3.4)

where r^(ν) and J^(ν) stand for the residual vector and its Jacobian at the ν-th iteration η_h^(ν), respectively. (In our case, the Jacobian was approximated with finite differences, leading in fact to a quasi-Newton method.) Instead of (3.4), the LM algorithm computes p^(ν) = p_LM^(ν) as a minimiser of the model function subject to the condition ‖p‖ ⩽ Δ^(ν), where Δ^(ν) > 0 denotes the radius of the trust region in which the problem is constrained to ensure convergence of the algorithm (see [37,41,43] for details). Any solution p^(ν) is characterised by the existence of a scalar λ^(ν) ⩾ 0 (the damping parameter) satisfying

    (J^(ν)ᵀ J^(ν) + λ^(ν) I_N) p^(ν) = -J^(ν)ᵀ r^(ν),    λ^(ν) (Δ^(ν) - ‖p^(ν)‖) = 0,    (3.6)-(3.7)

where I_N stands for the N × N identity matrix. (Condition (3.7) simply states that at least one of the nonnegative quantities, λ^(ν) or Δ^(ν) - ‖p^(ν)‖, must vanish.) The literature contains several strategies for the computation of the damping parameter [38,42,43]. (The one proposed in the second reference was considered here.)
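A minimal, self-contained sketch of the Levenberg-Marquardt iteration just described, with a finite-difference Jacobian and a simple damping update standing in for a full trust-region strategy; the solver, tolerances and toy residual are illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np

def levenberg_marquardt(residual, x0, lam=1e-3, tol=1e-12, max_iter=200):
    """Damped Gauss-Newton (LM) iteration for min 0.5*||r(x)||^2."""
    x = np.asarray(x0, dtype=float).copy()
    r = residual(x)
    for _ in range(max_iter):
        # finite-difference Jacobian, as in the quasi-Newton variant of the text
        eps = 1e-7
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        g = J.T @ r
        if np.linalg.norm(g, np.inf) < tol:
            break
        # damped normal equations: (J^T J + lam*I) p = -J^T r
        p = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        r_new = residual(x + p)
        if r_new @ r_new < r @ r:
            x, r, lam = x + p, r_new, 0.5 * lam   # accept step, relax damping
        else:
            lam *= 2.0                            # reject step, increase damping
    return x

# toy test: solve the small nonlinear system x0^2 + x1 = 2, x1^3 = 1
sol = levenberg_marquardt(lambda x: np.array([x[0]**2 + x[1] - 2.0, x[1]**3 - 1.0]),
                          np.array([2.0, 2.0]))
```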
The local convergence of the resulting method is linear [43], but close to quadratic [37], when the damping parameter is sufficiently small. In most of the cases, our implementation of the method makes use of natural numerical continuation in the Bond number Bo, as a way to improve the performance and to accelerate the convergence. This technique is illustrated in the following section.

Numerical results
In the numerical computations below, the parameters given in Table 1 are used. The first two concern the model and the rest are associated with the numerical procedure described above. The numerical study is focused on classical, generalised and multi-pulse solitary waves, in several physical regimes.

Classical solitary wave of depression
The first group of numerical experiments concerns the generation of classical solitary waves of depression. By this name, we mean aperiodic solutions of (2.12) decaying to zero at infinity (in contrast with the generalised solitary waves, which will be considered later on) and with negative amplitude. For supercritical Bond numbers Bo > 1/3 and suitable values of the Froude number Fr < 1, the existence of isolated solitary waves of depression is known, analytically and numerically (see [14] and references therein). A first check of the code is the computation of one of these waves. This is shown in Figure 2, corresponding to the values Bo = 0.45 and Fr = 0.6. The initial iteration of the procedure was chosen simply as a negative localised bump. The convergence of the algorithm does not seem to be sensitive to the choice of the initial guess for such simple solutions. The classical character of this computed wave is confirmed by Figure 2 (right), which shows that the associated orbit in the phase portrait (computed via the spectral approximation) is homoclinic to the origin.

The convergence of the numerical method is illustrated by the following experiments, which monitor the residual error (4.1), ‖F_h{η_h}‖, where the Euclidean norm is considered and F_h{·} is given by (3.1). The vertical scale is logarithmic and the results confirm the convergence. On the other hand, Figure 3 (right panel) shows, in log-log scale, the relation between two consecutive residual errors. This serves as an estimate of the order of convergence. By fitting the logarithmic data to a line, the corresponding slope (with 95% confidence bound) is 1.179. When only errors up to 10⁻¹⁵ are considered, the corresponding fitting line has a slope of approximately 2.156. Then, up to this level of errors, Figure 3 (right) suggests a quadratic order of convergence, becoming linear below this error tolerance. Moreover, the residual magnitude is known to be a rather pessimistic estimation of the accuracy in pseudo-spectral methods [6]. Consequently, the convergence of the residual (4.1) is a robust diagnostic of the algorithm convergence.

This experiment ends by visualising the internal hydrodynamics under this solitary wave. The total and dynamic pressures are represented in Figure 4, while the horizontal and vertical speed and acceleration distributions are displayed in Figures 5 and 6, respectively. All are implemented by using the Cauchy integral formula, which allows one to compute all physical fields of interest in the bulk of the fluid [22], for the case of the classical solitary wave displayed in Figure 2.
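The slope estimates quoted above can be reproduced in a few lines; the residual history below is a hypothetical stand-in for the data behind Figure 3 (right).

```python
import numpy as np

# hypothetical residual history ||F_h{eta_h}|| along the iterations
res = np.array([1e-1, 1e-2, 1e-4, 1e-8, 1e-15])

# fit log(r_{nu+1}) against log(r_nu); slope ~ 2 suggests quadratic convergence,
# slope ~ 1 linear convergence
slope = np.polyfit(np.log(res[:-1]), np.log(res[1:]), 1)[0]
print(slope)
```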
Multi-modal solitary waves
Another type of solutions computed by the code and shown here consists of multimodal solitary waves. The existence of these solutions for the KdV5 equation was shown analytically and numerically in [13]. An example is given by the generation of multi-pulse solitary waves with negative amplitudes for Bo > 1/3, which is represented in Figure 7. These examples also illustrate the abilities of our numerical procedure. (The computation of the lower Figure 8 pushes the limits of our hardware capabilities, but more powerful computers will permit the computation of more complicated solutions.) We end this section by noting that, for the exact same set of physical and numerical parameters, many different solutions can be obtained by only varying the initial guess of the iterative procedure.

Generalised solitary waves
The question of the existence of classical solitary waves of elevation is not solved, to our knowledge, although some references suggest a negative answer [14]. For Bo < 1/3 and Fr > 1, what is known is the existence of generalised solitary waves, i.e. solitary waves that are homoclinic to exponentially small-amplitude oscillatory waves. Furthermore, the amplitude of the corresponding asymptotic oscillations is of order less than the exponentially small bound (4.2), for some L ∈ (0, π) and where ω ≠ 0 satisfies the equation ω cosh(ω) = (1 + ω² Bo) sinh(ω) [31]. Here, we focus on the numerical generation of this kind of waves.

Our first experiments concern the range Bo < 1/3 and Fr > 1, thus illustrating the influence of the capillary effects (smaller or larger values of Bo) on the computations. Taking as initial guess a third-order asymptotic solution for gravity solitary waves [25], the method has been run in the cases of weak and strong capillary effects (Fr = 1.1 and Bo = 0.02; Fr = 1.15 and Bo = 0.22). The resulting numerical profiles are shown in Figure 9. The convergence process is illustrated as in the previous section. Figure 10 displays the logarithm of the residual error (4.1) as a function of the number of iterations for the two cases considered in Figure 9. Note that the method performs better in the case of the nonlinear wave with weaker capillary effects, when the oscillations at infinity of the wave are of smaller amplitude; see (4.2). The order of convergence is suggested by Figure 10 (right) (where the ratio between two consecutive errors is shown in log-log scale) in the same sense as in Figure 3 (right): for errors above 10⁻¹⁵, the slopes of the corresponding fitting lines suggest quadratic convergence, becoming linear from this error tolerance. The generalised character of the waves is also confirmed in Figure 11. For the wave computed with Fr = 1.17 and Bo = 0.12 (Figure 11, left), the corresponding phase portrait (Figure 11, right) suggests the homoclinic behaviour to oscillations.

As mentioned above, most of the results presented in this study are obtained by making the continuation in the Bond number Bo. This process is illustrated in Figure 12 for the Froude number Fr = 1.1 (moderately nonlinear case). As a result, starting from Bo ∼ 0.15 a generalised solitary wave emerges, since a resonance occurs between the solitary and the periodic waves of the same speed [4]. For higher Froude numbers it happens even earlier. The internal flow structure is shown in Figures 14-16 for the same profile, with the total (Fig. 14, left) and dynamic (Fig. 14, right) pressures, horizontal (Fig. 15, left) and vertical (Fig. 15, right) velocities, and accelerations (Fig. 16).
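The resonance condition ω cosh(ω) = (1 + ω² Bo) sinh(ω) quoted above is easy to solve numerically; the sketch below (with a bracket and Bond number chosen for illustration) finds the positive root with a standard bracketing solver.

```python
import numpy as np
from scipy.optimize import brentq

def resonance_omega(Bo, bracket=(1e-3, 50.0)):
    """Positive root of w*cosh(w) - (1 + Bo*w**2)*sinh(w) = 0, for Bo < 1/3."""
    f = lambda w: w * np.cosh(w) - (1.0 + Bo * w**2) * np.sinh(w)
    return brentq(f, *bracket)

# for Bo = 0.12 this gives the wavenumber of the far-field oscillations
print(resonance_omega(0.12))
```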
Multi-hump generalised solitary waves
A final type of waves computed and shown here consists of multi-hump generalised solitary waves. The existence of homoclinic connections with several loops near a resonance for a family of Hamiltonian systems has been recently analysed in [28]. In the case of the water wave problem, these solutions correspond to multi-hump generalised solitary waves, and here we offer some numerical evidence of their existence. Taking a finite number of separated bumps as the initial iteration with Fr = 1.17 and Bo = 0.12, a generalised two-pulse capillary-gravity solitary wave is shown in Figure 17 (upper-left) while, for the same values of the parameters, a three-pulse solution is depicted in Figure 17 (upper-right). The corresponding phase maps are shown in Figure 17 (lower). Our experiments suggest that the process of adding new pulses can be continued indefinitely, provided that the length and resolution of the computational domain are gradually increased as well. As in the previous cases, the convergence of the iteration can be illustrated by the reduction of the corresponding residual error (4.1) with the iterations (figures will not be shown here). This experiment provides numerical evidence that the solutions are not unique for a given set of Froude and Bond numbers, the number of solutions being likely infinite. The variety of possible solutions is illustrated in Figures 18, 19 and 20. In Figure 18, the pulse in the middle is lower than the two other pulses, i.e. the middle peak coincides with a trough of the far-field oscillation, while the side peaks coincide with ambient crests. Finally, some more novel solutions are shown in Figures 19 and 20 with corresponding phase portraits. Note that they are all obtained for Fr = 1.1 (except Figure 19(e,f)) and Bo = 0.25, by varying the choice of the initial guess of the solution. This is further evidence of the non-uniqueness of the solutions and the likely existence of an infinite number of them.

Conclusions and perspectives
In this study, the problem of solitary capillary-gravity waves was reformulated on a fixed domain and the corresponding Babenko equation was derived [3] using the classical Lagrangian variational principle [36]. Then, this equation was discretised with a Fourier-type pseudo-spectral method [6]. The resulting discrete nonlinear and nonlocal equation was solved using the Levenberg-Marquardt (LM) method, as an alternative to overcome the drawbacks of the direct application of Newton's method in this and related problems [7]. Using this formulation, we succeeded in computing, by continuation in the Bond number Bo, the generalised solitary waves of elevation for Bo < 1/3 (cf. [14]). Above the critical Bond number Bo = 1/3, we found the classical localised solitary waves of depression, which propagate with subcritical speeds Fr < 1, in agreement with the predictions of the KdV5 model. The internal flow structure (velocities, pressure, accelerations) under a generalised solitary wave of elevation (or depression) was also shown by taking advantage of Cauchy-type integral representations, available thanks to the conformal mapping technique [22]. We also showed that various (generalised) multi-pulse solitary waves of elevation, but also of depression (localised), can be successfully computed by our method. To our knowledge, these generalised multi-pulse solitary waves have never been computed before in the context of capillary-gravity surface waves. The numerical simulations suggest the existence of an infinite number of such waves.
Concerning the perspectives: so far, in recent studies [20,22] as well as in the present work, we have considered the formulation for aperiodic waves. The next step will consist in extending the Babenko-equation approach to gravity and capillary-gravity periodic waves. Moreover, the problem of the very accurate computation of limiting waves also remains essentially open. The variability of solutions arising in several directions is a subject to be explored in future works.
2015-10-07T10:36:28.000Z
2014-11-20T00:00:00.000
{ "year": 2014, "sha1": "614774a246f2ad9e5053c7b928242f2785ca0611", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1411.5519.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "614774a246f2ad9e5053c7b928242f2785ca0611", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
235056030
pes2o/s2orc
v3-fos-license
THE ECONOMIC VIABILITY OF THE ENERGY PRODUCTION FROM BIOMASS VIA ANAEROBIC DIGESTION

Anaerobic digestion is a microbial process that occurs in the absence of oxygen, in which a community of microbial species breaks down both complex and simple organic materials, ultimately producing methane and carbon dioxide. Biogas refers to a secondary energy carrier that can be produced out of many different kinds of organic materials, and its options for utilization can be equally versatile: biogas can be used to generate electricity, heat and biofuels. It is clear that the introduction of subsidies for BGPs in 2009 initiated the usage of AD technology for generating electric energy. The sharpest increase in the number of BGPs was recorded in 2013; however, there was a major downsizing in their installation in 2014 due to a change in the subsidy system. The main aim of the paper is to forecast the economic viability of biogas plants in Slovakia based on the net present value indicator, estimation of the payback period of the technology and assessment of the maximum economic price of the input material.

INTRODUCTION
"Anaerobic digestion is a microbial process that occurs in the absence of oxygen. In the process, a community of microbial species breaks down both complex and simple organic materials, ultimately producing methane and carbon dioxide" (Engler et al., 2013). The European Biomass Association (2013) defines biogas as a secondary energy carrier that can be produced out of many different kinds of organic materials and whose options for utilization can be equally versatile. Biogas can be used to generate electricity, heat and biofuels. Also, the fermentation residues, called digestate, can be used for example as a fertilizer. Pepich et al. (2010) describe biogas as a product of the transformation of biomass into energy via anaerobic digestion (AD), where the resulting product is a biogas serving as fuel for cogeneration units, reaching about 70% of the energy content of natural gas. 2 178 kWh of electricity or 11.4 GJ of heat can be obtained by burning 1 000 m³ of biogas. Additionally, 1 m³ of biogas contains as much energy as 0.6 to 0.7 dm³ of fuel oil for heating. Compared with conventional heat and electricity, up to 40% of fuel can be saved.
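As a quick sanity check of the figures just quoted, the back-of-the-envelope sketch below converts an assumed annual biogas output into electricity and heating-oil equivalents; the 4 million m³ plant is a hypothetical example, not data from the paper.

```python
kwh_per_1000_m3 = 2178     # electricity from burning 1000 m^3 of biogas
gj_per_1000_m3 = 11.4      # heat alternative
oil_dm3_per_m3 = 0.65      # 1 m^3 of biogas ~ 0.6-0.7 dm^3 of heating oil (midpoint)

annual_biogas_m3 = 4.0e6   # hypothetical annual production

electricity_gwh = annual_biogas_m3 / 1000 * kwh_per_1000_m3 / 1e6
heat_tj = annual_biogas_m3 / 1000 * gj_per_1000_m3 / 1e3
oil_equiv_m3 = annual_biogas_m3 * oil_dm3_per_m3 / 1000

print(electricity_gwh, "GWh of electricity")       # ~8.7 GWh
print(heat_tj, "TJ of heat")                       # ~45.6 TJ
print(oil_equiv_m3, "m^3 heating-oil equivalent")  # ~2600 m^3
```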
Compressed and upgraded biogas can be supplied to the grid as natural gas; only the additional costs of treating the biogas are a barrier, even though technologies for such treatment have already been developed (Holm-Nielsen et al., 2008). Baxter (2014) points out that there are many ways to realise the flexible output produced by biogas plants. It is possible to store biogas with local storage capacity and also via pipelines connecting several biogas plants. Excess-capacity cogeneration or combined heat and power (CHP) units might be used in times of deficit of an irregular renewable electricity generator or at times of highest demand. A concept of many biogas plants linked together for flexible operation has already been created. Another alternative is to upgrade biogas to natural gas quality. Braun et al. (2014) believe that the economic viability of energy production from energy crops is possible only if high crop and biogas yields are achieved while keeping investment, raw material and production costs low. In addition, other incentives, such as subsidies and feed-in tariffs, are provided to improve the economics of the process. Gebrezgabher et al. (2010) underline that the financial viability of the system also depends on the transport of input materials. Some researchers indicate that the maximum economical transport distance is 15-25 km. The logistics of inputs and outputs is a crucial indicator for a biogas system to be economically, environmentally and socially viable. Long-distance transportation generates transportation costs as well as environmental costs in the form of GHG emissions, odour and noise. Therefore, these externalities of transport should be kept to a minimum. Wellinger (2014) states that biogas plant operators have to ensure the sustainability of biomass production and higher yields per hectare via catch crops or multiple cropping on arable land. Other possibilities are permanent grasslands. There are also mechanical, physical and biochemical pre-treatment techniques to raise the efficiency of biomass degradation. On the other hand, Dollhofer (2014) reminds us that these mechanical and chemical pre-treatment techniques come hand in hand with significant energy losses, as they require a high energy input.

The paper intends to forecast the economic viability of biogas plants in Slovakia based on the net present value indicator, estimation of the payback period of the technology and assessment of the maximum economic price of the input material. The contribution of the paper is twofold. First, the paper provides an analysis of the biogas sector over a specific time period in Slovakia. Second, the paper gives empirical evidence of the economic viability of the biogas sector and its benefits for investors by using a simplified model of a biogas plant scenario as a representative sample.

MATERIALS AND METHODS
In order to forecast the economic viability of biogas plants in Slovakia, the following steps are taken:

Step 1: A biogas plant model is constructed on the basis of an analysis of the Slovak biogas sector and a literature review.

Step 2: A forecast of annual grain maize prices (EUR) is performed. In order to predict the values, a VECM model is estimated based on the long-run relationship between annual grain maize prices (EUR) and the amount of maize production (tonnes) in Slovakia from 1993 to 2018; data were taken from the Food and Agriculture Organization of the United Nations (FAOSTAT). Anderson et al. (2002) explain that the VECM is a policy-oriented vector autoregressive model that is anchored by long-run equilibrium relations suggested by economic theory, and that VECM forecasts are considerably more accurate than the simple random-walk alternative. Gangopadhyay et al. (2016) suggest that the VECM expresses an n×1 vector of stationary time series (y_t) in terms of a constant, lagged values of itself and an error correction term. The standard VECM model can be expressed as follows:

Δy_t = c + Σ_{i=1}^{p-1} Γ_i Δy_{t-i} + ECT_{t-1} + ε_t,    (1)

where ECT refers to the Error Correction Term, i.e. the product of an adjustment factor (α) and the cointegrating vector (β): ECT_{t-1} = α β′ y_{t-1}. The cointegrating vector shows the long-term equilibrium relationship between the examined variables, while the adjustment factors indicate the speed of adjustment towards equilibrium in case there is any deviation.

Step 3: Prediction of annual grain maize yields (tonnes/hectare) for the subsequent calculation of annual grain yields of grain maize (bushels/tonne). The Sample Mean method is used to forecast annual grain maize yields. The formula is as follows:

F_t = (1/n) Σ_{t=1}^{n} Y_t,    (2)

where F is the forecasted value in year t, n is the number of observations and Y_t is the actual value in year t. The data were taken from the Food and Agriculture Organization of the United Nations (FAOSTAT).
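A sketch of Steps 2-3 using statsmodels' VECM implementation; the synthetic price and production series are placeholders for the FAOSTAT data, and the lag order, cointegration rank and deterministic term are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
years = np.arange(1993, 2019)

# placeholder series standing in for FAOSTAT grain maize data for Slovakia
price = 100 + np.cumsum(rng.normal(0, 5, years.size))         # EUR/t
production = 8e5 + np.cumsum(rng.normal(0, 2e4, years.size))  # tonnes
data = pd.DataFrame({"price": price, "production": production})

# Step 2: VECM with one cointegrating relation; forecast the 2019-2027 price path
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
price_forecast = res.predict(steps=9)[:, 0]

# Step 3: sample-mean forecast of yields, F = (1/n) * sum(Y_t)
yields = rng.normal(6.0, 0.8, years.size)  # placeholder T/ha series
yield_forecast = yields.mean()             # the paper obtains 6.051 T/ha
```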
Step 4: Annual grain yields of grain maize (bushels/tonne) are estimated on the basis of their relationship with annual grain maize yields (tonnes/hectare), given by the approximate bushels of grain content in a tonne of corn silage (Lauer, 2005; as cited in Lippert, n.d.). Table 1 depicts the approximate bushels of grain contained in a tonne of corn silage. Data were used from the Food and Agriculture Organization of the United Nations (FAOSTAT). Source: Lauer, 2005 (as cited in Lippert, n.d.)

Step 5: Calculation of future annual maize silage prices, derived from the grain maize price and grain yield forecasts, via formula (3), where M is the price of grain maize in year t and C is the grain yield in one tonne of maize in year t.

Step 6: The net present value (NPV) is used as the valuation criterion to forecast the economic viability of biogas plants, and the Payback Period is computed as a tool that compares revenues with costs and determines the expected number of years required to recover the original investment. The NPV determines the present value of an investment and represents the sum of estimated future cash flows in today's value of money (Mészáros and Jašňák, 2014). The formulas are as follows (Patinvoh et al., 2017):

NPV = Σ_{t=1}^{T} CF_t / (1 + r)^t - I,    (4)
CF_t = P·o_t - c·m_t - FC,    (5)

where CF is the estimated cash flow in year t, r is the discount factor and I is the initial investment. CF is a function of income, variable costs and fixed costs in year t: P is the price of the output, o is the amount of output produced at time t, c is the cost of the input and m is the amount of input in year t. FC are the fixed costs, including the annuity, labour costs, the G-component, material-handling services, and maintenance and service costs. The following formula for the Payback Period (PBP), which calculates the time required for the payback of the investment, can be used for even cash inflows of a project (Santadkha and Skolpap, 2017):

PBP = I / CF.    (6)

However, if the cash flows of a project are uneven, the payback period is computed by adding the annual cash flows until such time as the original investment is recovered.

Step 7: To find the ceiling price of maize silage as the input for AD, a Break-Even Analysis was used. The ceiling price of the input is calculated at the point where total costs equal total revenues. The formulas are as follows (Weil and Maher, 2005):

TR = FC + VC,    (7)
VC = A · C,    (8)

where TR (Total Revenues) = cumulated quantity of electricity produced during the lifetime of the project (MW) × guaranteed selling price of the electricity (EUR/MW); VC (Variable Costs) = cumulated amount of maize silage used for AD during the lifetime of the project (A) in tonnes × price of maize silage (C) in EUR/tonne; FC (Fixed Costs) = cumulated fixed costs during the lifetime of the project, including investment, capital costs, labour costs, the G-component, service and maintenance costs, and handling-services costs. The final formula is then derived as follows:

C = (TR - FC) / A,    (9)

where C is the ceiling price (EUR) of one tonne of maize silage at which the biogas plant generates neither profit nor loss.

RESULTS
In the case of Slovakia, the sharpest increase in the number of biogas plants was recorded in 2013.

Forecast of the economic viability of biogas plants in Slovakia
Considering the fact that no two BGPs are exactly the same, since each BGP is tailored to a specific environment, capacity, location, etc., a general model representing the majority of BGPs in Slovakia was constructed according to the literature and an analysis of the biogas sector and its development.

a. Scenario description
The plant is located near a farm to minimize the transport costs for input and output materials.
To simplify the model, the plant is considered an economically autonomous entity. The plant uses 100% maize silage as the input material for wet anaerobic fermentation and was launched in 2013, the year in which the most BGPs were activated. Its size is 1 MW of electric energy capacity. The input material is bought from the farm at market prices and the final output is electricity, which is sold at guaranteed prices for 15 years; heat is used only for internal needs and the digestate is provided to the farm for free as a fertilizer, transported to the fields at the expense of the farm. Handling of input and output materials is provided by the farm, and the plant covers the costs, which are estimated at 500 EUR per month. The lifetime of the project is 15 years. The project involves an initial investment of 3.5 million EUR, where 30% is financed by own capital and 70% is financed with debt at an interest rate of 3.2%, with a maturity of 10 years in monthly payments. 55% of the investment costs cover technology included in the second depreciation group with accelerated depreciation for the first two years; the technology is then included in the third group (depreciation period of 8 years) with accelerated depreciation since 2015. The remaining 45% of the investment is included in the fourth depreciation group with accelerated depreciation, and since 2015 it is included in the fifth group with linear depreciation. Operating costs cover all elements that are necessary to keep the BGP running.

b. Forecast of grain maize prices and grain maize yields in Slovakia, and calculation of future maize silage prices
The forecast of grain maize prices for the years 2019 to 2027 is shown in Figure 1. The prices were predicted on the basis of the relationship between the annual price of grain maize and the volume of grain maize production in Slovakia. The forecast estimates a steadily increasing trend for the grain maize price over the next years. It is also shown that the 95 percent intervals include a relatively wide range of values; the actual future values may differ significantly from the estimated ones, in which case the following calculations might yield inaccurate deductions. Historical values and the Sample Mean method were used to forecast the yields of grain maize in Slovakia, because these yields are mainly affected by weather conditions, fertilizer and pesticide usage, and the technology of production. The mean value of the data used is 6.051 T/ha, which is transformed into 96.39 Bu/A for the next calculations (Figure 2).

Figure 2. Actual and forecasted values of grain maize yields (T/ha) in Slovakia. Source: authors' processing, FAOSTAT

Using Table 1 of the approximate bushels of grain content in a tonne of corn silage, we identified the relationship from which we derived a formula to calculate the grain yields of grain maize in order to project the prices of silage maize. From the data in Table 1, the relationship was derived as shown in Figure 3. Grain yields are estimated according to maize yields per hectare via formula (10), where y is the grain yield in bushels per tonne of grain maize silage in year t and x is the grain maize yield in bushels per acre in year t.
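The body of formula (10) could not be recovered from the extracted text; the sketch below shows the kind of linear fit it plausibly encodes, using made-up (yield, grain content) pairs in place of the actual Table 1 values.

```python
import numpy as np

# hypothetical stand-ins for Table 1: grain maize yield (Bu/A) versus
# bushels of grain per tonne of corn silage (Bu/t)
x = np.array([80.0, 100.0, 120.0, 140.0])   # Bu/A
y = np.array([5.5, 6.4, 7.3, 8.2])          # Bu/t of silage

b, a = np.polyfit(x, y, 1)        # assumed linear form y = b*x + a for formula (10)
grain_per_tonne = b * 96.39 + a   # evaluated at the forecast yield from Step 3
```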
c. Determining the NPV of the project and estimating the payback period of biogas plants in Slovakia
To forecast the economic viability of the biogas sector, the NPV tool was used. The model of a biogas plant was constructed in such a way that it represents as many BGPs in Slovakia as possible and contains their general similarities. The scenario does not include any income other than that from the sale of electricity, and only one single input is used. Some BGPs benefit from using different types of inputs that are less costly than maize silage, even though they may not be as effective as maize silage, and there are also options of selling heat and fertilizers; however, these investments are costly and extremely difficult to generalize. According to Figure 4, the sum of all discounted cash flows is 745 048.01 EUR. The NPV value for the project is positive, which means that it is worth investing and the Slovak biogas industry is economically viable. However, the result is only as accurate as the silage maize price forecast, which is the most questionable element apart from the technical condition of the BGPs. Even though we can easily estimate the cash inflow due to the guaranteed electricity prices for 15 years, the government can still interfere and decrease or increase the cash inflow. An example is the implementation of the G-component in 2014, the effect of which is that of a tax levied on electricity produced from RES.

The payback period was calculated at 10 years, and the initial investment, including the cost of capital, is estimated to be recovered by the time of the debt's maturity, as the debt is to be repaid by the 10th year. The BGP starts to generate profit in the 11th year of the project life, with a cumulative profit in the last projected year of 1 375 251.19 EUR (Figure 5). Source: authors' processing

One of the most criticized weak spots of the payback period tool is the fact that it does not take into account the time value of money. To overcome this drawback, the payback period was also calculated on the basis of the discounted cash flow. The result is very similar to the previous one: it estimates 10 years for the investment to be recovered, and from the 11th year the plant starts to generate profit, as shown in Figure 6. The cumulative profit generated in the 15th year, in today's value of money, is 745 049.01 EUR.

Figure 6. Discounted cumulative cash flow (in EUR). Source: authors' processing

The ceiling price of the main input for AD - maize silage
Using the calculation to determine the ceiling price of the input for AD, we discovered that if one tonne of maize silage costs 38.913 EUR in the selected period, the model biogas plant generates zero profit. If the price is lower, the plant becomes profitable; the lower the price, the more profitable the plant is. On the other hand, any price above 38.913 EUR/T makes the whole project unprofitable. According to these findings, the average price of maize silage during the lifetime of BGPs needs to be lower than the calculated ceiling price to keep the sector profitable. The risk of an increasing maize silage price endangers the majority of biogas plants not only in Slovakia but also in the whole of Europe. This finding is confirmed by reports from 2013, when the price of maize silage went up to 40 EUR/T and biogas plants were generating losses, as some of them admitted. Although they were making a loss, the price was not too high, so they kept on producing electricity to lower the loss. It can be concluded that the economic viability of the biogas sector in Slovakia highly depends not only on subsidies but also on the price of maize silage, which recorded significant variations over the past years. Figure 7 shows how profitable most of the biogas plants in Slovakia are at any given price of the input, maize silage.
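A compact sketch of the valuation arithmetic of Steps 6-7; the cash-flow vector and the discount rate are hypothetical placeholders, while the 3.5 million EUR investment and the 15-year horizon follow the scenario above.

```python
import numpy as np

def npv(cash_flows, r, investment):
    """Sum of discounted yearly cash flows CF_1..CF_T minus the initial outlay, eq. (4)."""
    t = np.arange(1, len(cash_flows) + 1)
    return float(np.sum(np.asarray(cash_flows) / (1.0 + r) ** t) - investment)

def payback_period(cash_flows, investment):
    """First year in which the cumulative cash flow recovers the investment (uneven flows)."""
    cum = np.cumsum(cash_flows)
    hits = np.nonzero(cum >= investment)[0]
    return int(hits[0]) + 1 if hits.size else None

def ceiling_price(total_revenue, fixed_costs, silage_tonnes):
    """Break-even input price from TR = FC + A*C, i.e. eq. (9): C = (TR - FC)/A."""
    return (total_revenue - fixed_costs) / silage_tonnes

# hypothetical yearly cash flows over the 15-year lifetime, 5% discount rate
cf = np.full(15, 4.0e5)
print(npv(cf, 0.05, 3.5e6), payback_period(cf, 3.5e6))
```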
When the revenues and the non-input costs are fixed, we can estimate the economic condition of the biogas sector and its financial benefits for farms or firms according to the price development of maize silage.

CONCLUSION
The economic viability of the sector and its benefits for investors were examined using a simplified model of a biogas plant scenario as a representative sample. A BGP is not only an electric power source; most importantly, it is supposed to be a stable source of income for farmers, designed to help them financially, as their core business is extremely dependent on weather conditions and therefore very risky. An economically viable biogas sector in Slovakia is thus critical for the investors, the farmers. According to the literature and the analysis of the biogas sector's development, the biogas plant model was constructed to match the majority of Slovak BGPs, so as to provide a representative sample of the biogas sector in Slovakia: a BGP launched in 2013 with legislative support from the very same year, applying fixed subsidies for the next 15 years, with the later changes in legislation (the G-component) included. The scale of the plant is 1 MW of electric capacity, its lifetime is 15 years and the input material is silage maize bought at market prices. The prices were estimated on the basis of annual grain maize prices and the outlook for their grain content in a tonne of corn silage. The calculation based on the silage maize forecast predicts a positive net present value. More precisely, the investment of about 3.5 million EUR made in 2013, with additional operating costs, is expected to be worth 745 049.01 EUR in 15 years. A payback period of 10 years is estimated according to the cumulated cash flow and also the cumulated discounted cash flow. The analysis indicates the economic viability of the sector, which is, however, based on the accuracy of the silage maize price outlook. Such a long-term price prediction tends to be unreliable due to the many random factors affecting the price development. The critical price of silage maize is calculated at the level of 38.92 EUR per tonne. The former subsidy system was established when the price of silage was about 26 EUR per tonne, while in 2013 the price went up to 40 EUR per tonne and BGPs were generating losses. In the case that the average price of silage maize stays above 38.92 EUR over the next 12 years, the users of an average Slovak BGP, and the Slovak biogas sector overall, will be unprofitable, a burden for farmers, and the whole concept unsuccessful, a waste of money and resources. In that case the question will be whether to let farmers deal with the unfavourable market conditions alone or to subsidise the sector further. Moreover, it is also necessary to look for suitable substitutes for the standardly used corn silage.
2021-01-07T09:05:07.817Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "706fdd2a2ae06fa31c93ce83a7653b9e25eba906", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.31410/eraz.s.p.2020.41", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "fb25daf1e59a1677813c680a361c9b6659d3332c", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
2579231
pes2o/s2orc
v3-fos-license
Supporting Student Learning in Computer Science Education via the Adaptive Learning Environment ALMA

This study presents the ALMA environment (Adaptive Learning Models from texts and Activities). ALMA supports the processes of learning and assessment via: (1) texts differing in local and global cohesion for students with low, medium, and high background knowledge; (2) activities corresponding to different levels of comprehension, which prompt the student to practically implement different text-reading strategies, with the recommended activity sequence adapted to the student's learning style; (3) an overall framework for informing, guiding, and supporting students in performing the activities; and (4) individualized support and guidance according to student-specific characteristics. ALMA also supports students in distance learning or in blended learning, in which students receive face-to-face instruction supported by computer technology. The adaptive techniques provided via ALMA are: (a) adaptive presentation and (b) adaptive navigation. Digital learning material, in accordance with the text comprehension model described by Kintsch, was introduced into the ALMA environment. This material can be exploited in either distance or blended learning.

Introduction
There is a growing literature of studies focusing on assisting comprehension through personalized learning environments. In the early 1990s, the system Point & Query (P & Q), a hypertext/hypermedia system, was developed [1]. Students learned entirely by asking questions and interpreting answers to questions. On average, a learner ends up asking 120 questions per hour, which is approximately 700 times the rate of questions in the classroom. Evaluations of the P & Q software revealed, however, that it is not sufficient to simply expose the students to a series of questions associated with hot spots in a large landscape of hypertext/hypermedia content, because a large percentage of the learners' P & Q choices were shallow questions [1]. The original P & Q software was developed for the subject matter of woodwind instruments and was suitable for high school and college students [2].

AutoTutor is a computer tutor that attempts to simulate the dialogue moves of a human tutor [3][4][5]. AutoTutor holds a conversation in natural language that coaches the student in constructing a good explanation in an answer, corrects misconceptions, and answers student questions. AutoTutor delivers its dialogue moves with an animated conversational agent that has a text-to-speech engine, facial expressions, gestures, and pointing. One goal of the tutor is to coach the student in covering the list of 10 expectations. A second goal is to correct misconceptions that are manifested in the students' talk by simply correcting the errors as soon as they are manifested. A third goal is to adaptively respond to the student by giving short feedback on the quality of student contributions (positive, negative, or neutral) and by answering the student's questions. A fourth goal is to manage the dialogue in a fashion that appears coherent and accommodates unusual speech acts by learners. AutoTutor has been evaluated on learning gains in several experiments on the topics of computer literacy [2] and conceptual physics [6]. The results of these studies have been quite positive [2].
MetaTutor is a hypermedia learning environment that is designed to detect, model, trace, and foster students' self-regulated learning about human body systems such as the circulatory, digestive, and nervous systems [7,8]. Theoretically, it is based on cognitive models of self-regulated learning [9-12]. The underlying assumption of MetaTutor is that students should regulate key cognitive and metacognitive processes in order to learn about complex and challenging science topics [2].

SimStudents, an integrated learner model for history and equation problem solving, uses an ACT-R based cognitive model [13]. Other systems include the Empirical Assessment of Comprehension [14] and the model of comprehension and recall that is based on Trabasso and Van den Broek's model [15]. In this model, the reader, in order to understand the text, has to find the causal path that links the text from the beginning to the end. Recently, various approaches have been proposed [16] which involve learners in negotiating dialogues, as well as learner models that encourage inspection and modification of the model.

W-ReTuDis (Web-Reflective Tutorial Dialogue System) is a web-based open learner modeling system designed to support tutorial dialogue through reflective learning. It models the human diagnosis of the learner's cognitive learning and cognitive text comprehension. The learner model is open for inspection, discussion, and negotiation. The system promotes learners' personalized reflection through tutorial dialogue, helps learners become aware of their reasoning, and leads them toward scientific thought. The system offers a two-level open interactive environment: learner level and tutor level. At the learner level, the learner participates in the construction of his/her learner model through dialogue activities, which promote reflective learning. At the tutor level, the tutor, based on the learner model, makes decisions concerning the appropriate activity, reflective dialogue, and dialogue strategy for the learner. The evaluation results are encouraging for the system's educational impact on learners [17].

iSTART (Interactive Strategy Training for Active Reading and Thinking) is a web-based tutoring program that uses animated agents to teach reading strategies to young adolescent (Grades 8-12) and college-aged students [18]. The program is based on a live intervention called Self-Explanation Reading Training (SERT) that teaches metacognitive reading strategies in the context of self-explanation [19]. SERT was motivated by empirical findings that students who self-explain text develop a deeper understanding of the concepts covered in the text, combined with a large body of research showing the importance of reading strategies such as comprehension monitoring, making inferences, and elaboration. SERT was designed to improve self-explanation by teaching reading strategies and, in turn, to facilitate the learning of reading strategies in the context of self-explanation. SERT has been found to successfully improve students' comprehension and course performance at both the college and high school levels. iSTART was designed to deliver an automated version of SERT that could be more widely available and could adapt training to the needs of the student. The research has shown that SERT is most beneficial for students with the least knowledge about the domain, as well as for students who are less strategic or less skilled readers.
Our review of these initiatives reveals that the existing learning environments support students' text comprehension via activities linked with educational material on the web (Point & Query), via texts (MetaTutor), via texts, questions, and dialogue between the tutor and the students (W-ReTuDis), and via the teaching of reading strategies (iSTART). Nevertheless, the above learning environments do not support the following:

• The adaptation of the learning environment to students' background-knowledge;
• The adaptation of the learning environment to students' learning preferences;
• Students' text comprehension with texts of different cohesion; and
• Students' support and assessment with activities which activate students' application of various reading strategies such as paraphrasing, bridging, and elaboration.

In this line of research, the learning environment ALMA (Adaptive Learning Models from texts and Activities) was designed and developed. Its design was motivated by the results of previous studies in the field of text comprehension. Gasparinatou and Grigoriadou investigated the role of text cohesion and learners' background knowledge in the comprehension of texts in the domain of computer science [20-22]. The results showed that high-knowledge readers benefit from a minimally cohesive text, in contrast to low-knowledge readers, who learn better from a maximally cohesive text. These empirical findings motivated the design and development of ALMA. ALMA supports students' text comprehension via texts and activities. It supports learning via: (1) texts with various local and global cohesion for students with low, median, and high background-knowledge; (2) activities which correspond to different levels of comprehension and activate the student to apply different reading strategies, while the proposed learning sequence of activities is adapted to the student's learning style; (3) feedback during the performance of activities, in order to inform, guide, and support students in discovering and correcting their mistakes; and (4) individual support and guidance according to students' special characteristics.

ALMA can be exploited both in distance and in blended learning, where students are supported with both face-to-face and computer-based teaching. The adaptation techniques provided via ALMA are: (1) adaptive presentation: the learning environment proposes that the student read the text version which is most appropriate for him according to his background-knowledge; and (2) adaptive navigation: the environment helps students find paths in the hyperspace of the educational material via the adaptation of the page links to the characteristics of the learner model. In this context, ALMA contributes to the development of principles for the design of learning environments which support text comprehension and provide both individual help and guidance. Furthermore, ALMA constitutes a prototype adaptive learning environment which supports text comprehension and follows these principles.
The first objective of this paper is to present the learning environment ALMA (Adaptive Learning Models from texts and Activities), which is based on Kintsch's Construction-Integration model for text comprehension [23] and also on Kolb's Learning Style Inventory (LSI) [24]. ALMA supports students with four text versions of the same content but with different cohesion. It also supports students with a series of activities. Both texts and activities help students to reach deep comprehension. The second objective of this paper is to present the assessment of the ALMA environment by the students who interacted with it.

The Construction-Integration Model

The Adaptive Learning Models from texts and Activities environment (ALMA) is based on Kintsch's Construction-Integration model for text comprehension [23]. This model proposes that reading primarily involves the surface, text-based, and situation model levels of comprehension. Most relevant for our research are the text-based and situation model levels. A good text-based understanding relies on a coherent and well-structured representation of the text, whereas a good situation model relies on different processes, primarily on the active use of long-term memory or world knowledge during reading. Links between the text base and background-knowledge must be activated in the reader's mental representation of the text. Motivated readers encountering a gap in the text will attempt to fill it, and doing so requires accessing information from their background-knowledge, which in turn results in the text information being integrated with long-term memory. This gap-filling process can only be successful if readers possess the necessary background-knowledge.

The degree to which the concepts, ideas, and relations within a text are explicit has been referred to as text cohesion, whereas the effect of text cohesion on readers' comprehension has been referred to as text coherence [25,26]. Text coherence refers to the extent to which a reader is able to understand the relations between ideas in a text, and this is generally dependent on whether these relations are explicit in the text.

Texts have local and global structure. Micro-structure refers to local text properties and macro-structure to the global organization of the text. Micro-structure is generally cued by the text via explicit indicators of relations between concepts and ideas (e.g., connectives, argument overlap, and pronominal reference). Micro-structure can also be constructed on the basis of the learner's knowledge when there are details or relations left unstated in the text. A text's macro-structure can be cued directly by the text via topic headers and sentences. Thus, for a good situational understanding, a single text cannot be optimal for every reader: low-knowledge readers benefit more from an easier, cohesive text, whereas high-knowledge readers should be allowed to make inferences with harder, less cohesive texts. McNamara et al. examined students' comprehension of four versions of a biology text, orthogonally varying local and global cohesion. They found that readers with low and high background-knowledge benefit from a cohesive and a minimally cohesive text, respectively [26]. Gasparinatou and Grigoriadou investigated the role of text cohesion and learners' background-knowledge in the comprehension of texts in the domain of computer science [20,21]. The results agree with those of McNamara et al. [26] and motivated the design and development of ALMA.
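The notion of local cohesion cued by argument overlap can be made concrete with a small illustration. The sketch below is not part of ALMA's implementation; it computes a crude argument-overlap score (shared content words between adjacent sentences), one of several indicators of micro-structure cohesion mentioned above. The tokenization and the stop-word list are simplifying assumptions.

```python
# A crude indicator of local (micro-structure) cohesion: the fraction of
# adjacent sentence pairs that share at least one content word. This is a
# toy illustration of "argument overlap", not ALMA's actual machinery.
import re

STOP_WORDS = {"the", "a", "an", "of", "in", "to", "is", "are", "and", "which"}

def content_words(sentence: str) -> set:
    """Lower-case alphabetic tokens minus a small stop-word list."""
    tokens = re.findall(r"[a-zA-Z]+", sentence.lower())
    return {t for t in tokens if t not in STOP_WORDS}

def argument_overlap_score(text: str) -> float:
    """Fraction of adjacent sentence pairs sharing a content word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) < 2:
        return 1.0
    pairs = list(zip(sentences, sentences[1:]))
    overlapping = sum(1 for a, b in pairs if content_words(a) & content_words(b))
    return overlapping / len(pairs)

low = "Nodes exchange packets. The weather was pleasant."
high = "Nodes exchange packets. These packets carry addresses."
print(argument_overlap_score(low), argument_overlap_score(high))  # 0.0 vs 1.0
```

A fuller treatment would also track pronominal reference and connectives, but even this toy score separates the two example passages.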
Kolb's Learning Style Inventory (LSI)

According to Kolb, "Learning is the process whereby knowledge is created through the transformation of experience. Knowledge results from the combination of grasping experience and transforming it" [24]. He proposes that experiential learning has six characteristic features: (1) Learning is best conceived as a process, not in terms of outcomes; (2) Learning is a continuous process grounded in experience; (3) Learning requires the resolution of conflicts between dialectically opposed modes of adaptation to the world. For Kolb, learning is by its very nature full of tension, because new knowledge is constructed by learners choosing the particular type of abilities they need. Effective learners need four kinds of ability to learn: concrete experience (CE), reflective observation (RO), abstract conceptualization (AC), and active experimentation (AE). These four capacities are structured along two independent axes, with the concrete experiencing of events at one end of the first axis and abstract conceptualization at the other. The second axis has active experimentation at one end and reflective observation at the other. Conflicts are resolved by choosing one of these adaptive modes, and over time, we develop preferred ways of choosing; (4) Learning is a holistic process of adaptation to the world; (5) Learning involves transactions between the person and the environment; and (6) Learning is the process of creating knowledge, which is the result of the transaction between social knowledge and personal knowledge.

Kolb describes the process of experiential learning as a four-stage cycle. This involves the four adaptive learning modes mentioned above (CE, RO, AC, and AE) and the transactions and resolutions among them. The tension in the abstract-concrete dimension is between relying on conceptual interpretation (what Kolb calls "comprehension") or on immediate experience (apprehension) in order to grasp hold of experience. The tension in the active-reflective dimension is between relying on internal reflection (intention) or external manipulation (extension) in order to transform experience [27].

Kolb defines four different types of knowledge and four corresponding learning styles. The main characteristics of the four styles are summarized below:
1. Type 1: the converging style (abstract, active) relies primarily on abstract conceptualization and active experimentation; is good at problem solving, decision making, and the practical application of ideas; does best in situations like conventional intelligence tests; is controlled in the expression of emotion and prefers dealing with technical problems rather than interpersonal issues.
2. Type 2: the diverging style (concrete, reflective) emphasizes concrete experience and reflective observation; is imaginative and aware of meanings and values; views concrete situations from many perspectives; adapts by observation rather than by action; is interested in people and tends to be feeling-oriented.
3. Type 3: the assimilating style (abstract, reflective) prefers abstract conceptualization and reflective observation; likes to reason inductively and to create theoretical models; is more concerned with ideas and abstract concepts than with people; thinks it more important that ideas be logically sound than practical.
4. Type 4: the accommodating style (concrete, active) emphasizes concrete experience and active experimentation; likes doing things, carrying out plans, and getting involved in new experiences; is good at adapting to changing circumstances; solves problems in an intuitive, trial-and-error manner; is at ease with people but sometimes seen as impatient and "pushy" [27].

An Outline of the ALMA Environment

ALMA actively engages students in the learning process (Figure 1). Learners differ in their experiences, expectations, skills, interests, preferences, and cognitive or learning style. The basic principle of individualized learning is that a single teaching strategy is not sufficient for all students. Therefore, students will be better able to achieve their learning goals when the pedagogical processes are adapted to their individual differences [28]. According to Kolb: "Students with different learning styles respond differently to different teaching approaches and therefore the teaching strategies need to match their learning style" [24]. The benefits that arise when designing courses with the learning styles of learners in mind are: (a) the response of learners to the educational material and (b) the improvement of their performance. According to Sampson and Karagiannidis, the criteria for selecting a learning style model, apart from theoretical and empirical justification, are that the selected model: (a) must include an evaluation tool; (b) describes teaching strategies related to each category of learning style; and (c) is appropriate for the content and its cost [29]. Moreover, Merill suggests that, in teaching systems (in-person or technology-based) in which a learning style model is adopted, it is necessary first to choose appropriate teaching strategies for the cognitive objective of teaching and, secondarily, based on these strategies, to choose the most appropriate strategy for each learning style [30]. According to Ferraro, the learning style may be more effective for trainees where the technology fits the principles of instructional design, wherein the application of the teaching criteria is essential for the selection of the most appropriate learning style model [31].

ALMA takes into account readers' learning preferences in order to propose that they start from activities that match their learning preferences and continue with less well-matched activities in order to develop new capabilities [24]. To achieve this goal, it suggests that the student complete the "Learning-Style Inventory (LSI © 1993 David A. Kolb, Experience-Based Learning Systems, Inc.: Boston, MA, USA)". The Learning-Style Inventory describes the way a student learns and how he/she deals with ideas and day-to-day situations in his/her life. It includes 12 sentences with a choice of endings. Consequently, ALMA is adapted to students' learning style, resulting in personalized learning. Kolb's model was selected for reasons including the following:
3. Such an approach focuses on adult preferences for specific types of activities and educational material, and thus it is considered suitable for an adaptive web-based educational environment where students are usually adults with a common interest in the subject of the courses they follow;
4. The approach fits student-centered teaching by providing useful guidance for correlating the sequence of specific types of educational material with students' preferences, in order to achieve specific learning objectives;
5. It is supported by Kolb's Learning Style Inventory (LSI) questionnaire, which consists of 12 multiple-choice questions and is easy for trainees to use;
6. It is low cost and available for research purposes; and
7. It focuses on the behavior and beliefs of students in the workplace, and so it has great potential in the field of distance education.

The design of ALMA is also based on Kintsch's Construction-Integration Model. According to Kintsch, text comprehension always requires the student to apply knowledge: lexical, syntactic, semantic, and domain knowledge, personal experience, and so on. Ideally, a text should contain the new information a student needs to know plus just enough of the old information to allow the reader to link the new information with what is already known. Texts that contain too much of what the student already knows are boring to read and, indeed, confusing (e.g., legal and insurance documents that leave nothing to be taken for granted). Consequently, too much coherence and explication may not necessarily be a good thing [23]. Gasparinatou and Grigoriadou investigated the role of text cohesion and learners' background knowledge in the comprehension of texts in the domain of computer science. The results showed that high-knowledge readers benefit from a minimally cohesive text, in contrast to low-knowledge readers, who learn better from a maximally cohesive text. Furthermore, the students perform activities which correspond to different levels of comprehension and activate them to apply different reading strategies, while the proposed learning sequence of activities is adapted to students' learning preferences [20,22].

ALMA also takes into account readers' background-knowledge in order to propose the appropriate text version from four versions of a text with the same content but different cohesion at the local and global level. As soon as the student selects the learning goal, ALMA suggests that the student perform a background-knowledge assessment test, with scores characterized as "high", "median", and "low". ALMA motivates high-knowledge students to read the minimally cohesive text at both local and global levels (lg), median-knowledge students to read the text with maximum local and minimum global cohesion (Lg) or with minimum local and maximum global cohesion (lG), and low-knowledge students to read the maximally cohesive text (LG). Thus, ALMA offers individualized support via the technique of adaptive presentation. ALMA also allows the student to choose the preferred version of the text and records the time spent reading it (Figure 2).

Educational Material-Texts

For each learning goal, four text versions are developed through the authoring tool of ALMA (ALMA_auth), which provides the author with the option of developing and uploading the educational material. The author first develops the original text lg (the text with minimal local and global cohesion). By varying the cohesion of the original text, according to the rules described below, the author develops four texts with the same content but with different cohesion.
The following three types of rules are used to maximize local cohesion [26,35]:
• Replacing pronouns with noun phrases when the referent is potentially ambiguous (e.g., in the phrase "Having determined a packet's next destination, the network layer appends this address to it as an intermediate address and hands it to the link layer", we replace both occurrences of "it" with "the packet").
• Adding descriptive elaborations that link unfamiliar concepts with familiar ones (e.g., "the network topology determines the way in which the nodes are connected" is elaborated to "the network topology determines the way in which the nodes are connected, which means the data paths and, consequently, the possible ways of interconnecting any two network nodes").
• Adding sentence connectives (however, therefore, because, so that) to specify the relationship between sentences or ideas.

The following two types of rules are used to maximize global cohesion [26,35]:
• Adding topic headers (e.g., ring topology, access control methods in the medium).
• Adding macro-propositions serving to link each paragraph to the rest of the text and to the overall topic (e.g., "subsequently, the main topologies referring to wired local networks, and their main advantages and disadvantages, will be examined in more detail").

Educational Material-Activities

ALMA supports and assesses students' comprehension through a series of activities such as: text recall, summaries, text-based questions, bridging-inference questions, elaborative-inference questions, problem solving, case studies, active experimentation, and sorting tasks.

Text recall helps students remember the basic ideas in the text by translating it into more familiar words. The students are also encouraged to go beyond basic sentence-focused processing by linking the content of the sentences to other information, either from the text or from the students' background knowledge. Empirical findings have shown that students who are able to recall the text and go beyond basic sentence-focused processing are more successful at solving problems, more likely to generate inferences, construct more coherent mental models, and develop a deeper understanding of the concepts covered in the text [36] (e.g., "Describe in your own words the operation of a network based on the client-server model").

Summaries also encourage students to go beyond the text and, like text recall, can be perfectly good indicators of well-developed situation models [23] (e.g., "Describe briefly the ways in which networks are interconnected").
Text-based questions, as they demand only a specific detail from the text, measure text memory (e.g., "Which device is used to connect two incompatible networks?"). Bridging-inference questions motivate students to make bridging inferences, which improve comprehension by linking the current sentence to the material previously covered in the text [37]. Such inferences allow the reader to form a more cohesive global representation of the text content [23] (e.g., "Compare the advantages and disadvantages of networks based on the client-server model and on the peer-to-peer model"). Elaborative-inference questions motivate students to associate the current sentence with their own related background knowledge. Most importantly, students are encouraged to engage in a logical or analogical reasoning process to relate the content of the sentence to domain-general knowledge or to any experiences related to the subject matter, particularly when they do not have sufficient knowledge about the topic of the text. Research has established that both domain knowledge and elaborations based on more general knowledge are associated with improved learning and comprehension [38].

Elaborations essentially ensure that the information in the text is linked to information that the reader already knows. These connections to background knowledge result in a more coherent and stable representation of the text content [23,26] (e.g., "Could the internet function properly if we replaced the routers with bridges?"). In order to answer this question, the student has to link the information in the text, according to which "compatible networks are interconnected with a bridge whereas incompatible networks are interconnected with a router", with the information from background knowledge according to which "the Internet consists of incompatible networks". Problem-solving questions motivate students to use the information acquired from the text productively in novel environments. This requires that the text information be integrated with the students' background knowledge and become a part of it, so that it can support comprehension and problem solving in new situations [23] (e.g., "In the following figure, nodes 01 and 02 constitute network 1, whereas nodes 03, 04, 05, and 06 constitute network 2. The two networks are interconnected with a bridge. Let us assume that node 03 intends to send a message to node 02. Describe the process which will be followed") (Figure 3). The sorting task has great potential as a simple task and can be used both as a method of assessment and as a mode of instruction. Students are asked to sort a set of key words, contained and not contained in the text, into certain groups. They are encouraged to do this task twice, once before reading the text and once more after reading the text. The sorting data are used to determine how strongly reading the text affected students' conceptual structure concerning the information in the text. We are interested in the degree to which the information presented in the text influences their sorting. The sorting task is an alternative method for assessing situation model understanding (e.g., "Sort each of the following concepts: client, server, administrator, into one of the following categories: client-server model, peer-to-peer model, distributed systems") [23].
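The sorting task lends itself to a simple quantitative treatment. The sketch below is a hypothetical scoring scheme, not ALMA's documented one: it scores a student's sort against a reference categorization and compares the pre-reading and post-reading attempts. The reference category assignments are invented purely for illustration.

```python
# Hypothetical scoring of the sorting task: the fraction of key words the
# student places in the same category as a reference sort. Comparing the
# pre-reading and post-reading scores gives a rough measure of how much
# the text reshaped the student's conceptual structure. The reference
# assignments below are illustrative only.

REFERENCE = {
    "client": "client-server model",
    "server": "client-server model",
    "administrator": "client-server model",
}

def sorting_score(student_sort: dict) -> float:
    """Fraction of reference key words sorted into the reference category."""
    correct = sum(
        1 for word, category in REFERENCE.items()
        if student_sort.get(word) == category
    )
    return correct / len(REFERENCE)

pre = {"client": "peer-to-peer model", "server": "client-server model",
       "administrator": "distributed systems"}
post = {"client": "client-server model", "server": "client-server model",
        "administrator": "client-server model"}
print(f"pre: {sorting_score(pre):.2f}, post: {sorting_score(post):.2f}")
# pre: 0.33, post: 1.00
```

A pre-to-post gain under such a scheme would correspond to the kind of improvement the empirical study (Section 9) reports for the post-reading sorting activity.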
Active experimentation activities motivate students to undertake an active role and, through experimentation, to construct their own internal representations of the concept they are studying [39,40] (e.g., students are given the diagram of a home network (Figure 4) and are asked whether the network operates properly. Next, they are asked to design the same network themselves via a software tool (e.g., Network Notepad) and to check via the software whether their original answer was correct). Case studies motivate students to engage in the solution of an authentic, and thus interesting, problem. They are asked to analyze it and propose solutions. The problem is described in detail and is followed by a series of questions aiming to guide the students through the problem-solving procedure (e.g., students are given the process of transmission and reception of a message to study. Then they are given the solution and clarifications about the solution of the problem. Afterwards, they are given a similar problem, e.g., concerning the web-based game World of Warcraft. The game exploits the internet, specifically the client-server model, and it permits students to play in a virtual environment possessing an agent. There are also other users in this environment with whom the students are able to chat when they are online, or to whom they can send a message. Next, students are asked to describe the process which will be followed in order for a user to connect to the specific network and to communicate with another user when the other user is: (a) online and (b) offline (Figure 5)). Moreover, ALMA supports multiple Informative, Tutoring, and Reflective Feedback Components, aiming to stimulate learners to reflect on their beliefs, to guide and tutor them towards the achievement of specific learning outcomes, and to inform them about their performance [41] (e.g., "Your answer is correct!" or "Your answer is not correct! You may have to read again carefully the paragraph concerning the peer-to-peer model").

Adaptive Navigation

As mentioned before, ALMA actively engages students in the learning process. It takes into account readers' learning preferences in order to propose that they start from activities that match their learning preferences and continue with less well-matched activities in order to develop new capabilities [24]. For example, the activity-based view of the content provided for the converging learning style suggests that the learner should start with the activity of active experimentation (Figure 6). If the learner needs help, he can study the theory or a case activity. Afterwards, he can do the other activities for further practice. The activity-based view of the content provided for the diverging learning style suggests that the learner should start by studying a case activity, continuing with the theory and then trying to complete the other activities. The activity-based view of the content provided for the assimilating learning style suggests that the learner should start by reading the theory, continuing with a case activity and then completing the other activities. The activity-based view of the content provided for the accommodating learning style suggests that the learner should start with a problem-solving activity. The learner continues with a case activity and an activity of active experimentation. Afterwards, he completes the other activities. Thus, ALMA offers individualized guidance according to student-specific characteristics using the technique of adaptive navigation.
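The activity sequencing just described is essentially a lookup from learning style to a recommended ordering. A minimal sketch of such a mapping is shown below; the sequences follow the descriptions in this section, while the function and constant names are ours, not ALMA's API.

```python
# Adaptive navigation as a simple mapping from Kolb learning style to a
# recommended activity sequence, following the orderings described above.
# "remaining activities" stands for the other activity types in any order.

ACTIVITY_SEQUENCES = {
    "converging":    ["active experimentation", "theory or case study",
                      "remaining activities"],
    "diverging":     ["case study", "theory", "remaining activities"],
    "assimilating":  ["theory", "case study", "remaining activities"],
    "accommodating": ["problem solving", "case study",
                      "active experimentation", "remaining activities"],
}

def recommended_sequence(learning_style: str) -> list:
    """Return the suggested activity ordering for a Kolb learning style."""
    return ACTIVITY_SEQUENCES[learning_style.lower()]

print(recommended_sequence("Converging"))
# ['active experimentation', 'theory or case study', 'remaining activities']
```

In ALMA, the recommendation is a suggestion rather than a constraint: the learner remains free to choose a different path, and the chosen path is recorded in the learner model.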
Adaptive Presentation

As soon as the student selects the learning goal, ALMA suggests that the student perform a background-knowledge assessment test, with scores characterized as "high", "median", and "low". ALMA motivates high-knowledge students to read the minimally cohesive text at both local and global levels (lg), median-knowledge students to read the text with maximum local and minimum global cohesion (Lg) or with minimum local and maximum global cohesion (lG), and low-knowledge students to read the maximally cohesive text (LG). Thus, ALMA offers individualized support via the technique of adaptive presentation. ALMA also allows the student to choose the preferred version of the text and records the time spent reading it (Figure 2).

The Learner Model

The learner model in ALMA keeps information about: (1) the learner's background-knowledge level and learning style; and (2) the learner's behavior during interaction with the environment, in terms of the learning sequence chosen, time spent on reading the text, time spent on an activity, etc. The learner model is dynamically updated during interaction with the system in order to keep track of the learner's present status. During interaction, learners may access their model and view the information kept concerning their progress and interaction behavior (Figure 7). The model which supports ALMA is the overlay model [42]. The overlay model represents the knowledge of the student as an overlay of the domain knowledge. For each section of the learning objective, the learner model maintains a value which is an estimate of the trainee's level of knowledge. The learner model is updated dynamically during interaction in order to always reflect the current status of the trainee. The adaptive environment ALMA maintains a model of each student working in the environment and updates it throughout the interaction. The learner model:
• provides general information about the student, such as user name, gender, learning style, background knowledge, knowledge level, and other characteristics of the learner;
• includes data on the interaction of the learner with the learning content, relating to the course in relation to the didactic design of the environment and the opportunities it offers;
• is updated dynamically during the interaction, in order to always maintain the current status of the student; and
• can be accessed by the learner.
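A minimal sketch of the two mechanisms just described, the text-version recommendation and an overlay learner model holding a knowledge estimate per section, is given below. The score thresholds are borrowed from the empirical study later in the paper (below 40% low, 40-60% median, above 60% high); the update rule and all class and function names are our illustrative assumptions, not ALMA's actual code.

```python
# A toy overlay learner model plus the adaptive-presentation rule described
# above. Thresholds follow the study's low/median/high cut-offs; the
# blending update rule is an assumption for illustration.

def knowledge_level(score_pct: float) -> str:
    """Map a background-knowledge test score (%) to a level label."""
    if score_pct < 40:
        return "low"
    if score_pct <= 60:
        return "median"
    return "high"

def recommend_text_versions(level: str) -> list:
    """Map knowledge level to cohesion version(s): l/L = local, g/G = global."""
    return {"high": ["lg"], "median": ["Lg", "lG"], "low": ["LG"]}[level]

class OverlayLearnerModel:
    """Overlay model: one knowledge estimate per section of the learning goal."""
    def __init__(self, sections):
        self.knowledge = {s: 0.0 for s in sections}  # estimates in [0, 1]

    def update(self, section: str, activity_score: float, weight: float = 0.3):
        """Blend a new activity score into the running estimate."""
        old = self.knowledge[section]
        self.knowledge[section] = (1 - weight) * old + weight * activity_score

model = OverlayLearnerModel(["topologies", "client-server", "routing"])
model.update("client-server", 0.8)
print(knowledge_level(72), recommend_text_versions("median"), model.knowledge)
```

The essential point is that the recommendation is a pure function of the knowledge level, while the overlay model evolves with every scored activity, which is what allows the environment to "always maintain the current status of the student".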
As we can see in Figure 7, the student is informed by the model about his learning style, his background knowledge, the text which ALMA suggests he study according to his background knowledge, his performance in the activities, the performance of his colleagues, etc. In particular, the characteristics that the system maintains for each student are:
• the learning target;
• performance in the background-knowledge questionnaire;
• the text version or versions that have been read;
• the performance in each activity (quantitative characterization);
• the average performance of other learners in the activity;
• the number of activities which the student elaborated;
• the average number of activities which were elaborated by other students;
• the feedback requested; and
• the browsing history.

The fact that the learner is informed of the average performance of his colleagues through his model is very important, because the student feels part of a group of students who have a common goal, while healthy competition is cultivated. The learner model is a useful tool for teachers because: (1) it facilitates the assessment of the behavior and performance of students during the performance of activities; and (2) it provides individualized feedback, where necessary, through the ALMA_auth tool. Moreover, the study and evaluation of learners' preferences with respect to the supplied material provides useful information for the assessment of the texts, activities, and provided feedback units. Learner characteristics retained in the model, such as background knowledge and learning style, are a source of adaptation for the environment.

ALMA Authoring Tool (ALMA_auth)

ALMA also includes the authoring tool ALMA_auth. This tool provides the author with the option of developing and uploading the educational material. ALMA_auth first gives the teacher the possibility to upload educational material which satisfies the rules described above (see Sections 4.1 and 4.2) and, secondly, defines the adaptive techniques (adaptive navigation and adaptive presentation). Thus, it is a valuable tool for the teacher to develop and upload the learning material for the ALMA environment (Figure 8).

Specifically, the knowledge field of the environment is populated through the ALMA_auth tool, and the teacher introduces: (1) the texts and activities; (2) the correct answer for each activity; and (3) the units of feedback. For each learning target, ALMA_auth supports the author in writing four versions of a text with different local and global cohesion according to the rules discussed in Section 4. Furthermore, the authoring tool supports the author in the development of activities that support learning by activating the student to apply the reading strategies of paraphrasing, bridging, and elaboration. The activities follow the specifications mentioned in Section 4. In addition, for each activity, ALMA_auth supports the author in developing three units of feedback according to the specifications mentioned in Section 4.

ALMA Forum

Finally, ALMA includes a forum where students have the possibility to collaborate with each other and also with the teacher (Figure 9).
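The authoring workflow described above implies a simple content schema: each learning goal carries four text versions, and each activity carries a correct answer plus three feedback units. The sketch below is a hypothetical rendering of such a schema; ALMA_auth's real data model is not published here, so the field names are assumptions.

```python
# A hypothetical data schema mirroring what ALMA_auth collects from the
# author: four text versions per learning goal, and for each activity a
# correct answer plus three feedback units (informative, tutoring,
# reflective). Field names are illustrative, not ALMA's actual schema.
from dataclasses import dataclass, field

@dataclass
class Activity:
    kind: str                 # e.g., "text recall", "bridging inference"
    prompt: str
    correct_answer: str
    feedback: dict            # keys: "informative", "tutoring", "reflective"

@dataclass
class LearningGoal:
    title: str
    texts: dict = field(default_factory=dict)        # keys: lg, Lg, lG, LG
    activities: list = field(default_factory=list)

goal = LearningGoal(title="Computer Networks' Principles")
goal.texts["LG"] = "Maximally cohesive text..."
goal.activities.append(Activity(
    kind="text-based question",
    prompt="Which device is used to connect two incompatible networks?",
    correct_answer="A router",
    feedback={"informative": "Your answer is correct!",
              "tutoring": "Re-read the paragraph on network interconnection.",
              "reflective": "Why would a bridge not suffice here?"},
))
print(goal.title, list(goal.texts), goal.activities[0].kind)
```

Structuring the material this way makes the two adaptive techniques straightforward to apply: adaptive presentation selects a key from `texts`, and adaptive navigation orders the entries of `activities`.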
The Empirical Study

The aim of the empirical study was to investigate how the learning design of ALMA can support the learning process of students with a wide range of backgrounds and different learning preferences, in the context of introductory computer science courses. The study was conducted during the winter semester of the academic year 2009-2010 in the context of the undergraduate course "Introduction to Informatics and Telecommunications". The course objective is to give students strong background knowledge in the computer science topics: data storage, data manipulation, operating systems, networking and the Internet, and algorithms and programming languages. Specifically, the main research questions were: (1) Do the students agree with the text version proposed by ALMA according to their background knowledge? (2) Do the students agree with the learning sequence proposed by ALMA according to their learning style?

Participants

The study sample consisted of 77 first-year students who were taking the course "Introduction to Informatics and Telecommunications" at the Department of Informatics and Telecommunications of the National and Kapodistrian University of Athens, Greece. Their participation was in the context of an activity with the following objectives: (1) to study the learning goal "Computer Networks' Principles"; and (2) to assess the course designed via ALMA.

Procedure

The empirical study took place over three weeks and consisted of the following phases: (1) presenting the ALMA environment in the classroom; (2) interacting with ALMA and working out activities, which took place over two weeks; and (3) completing a questionnaire on the effectiveness of ALMA in supporting the learning process in such a course. This phase lasted one week. During these three weeks, students cooperated with each other and with the instructor via the ALMA forum.

Materials and Tasks

In order to investigate how to support the learning and teaching process in the context of the course "Introduction to Informatics and Telecommunications", educational material in the form of the texts and activities described in Section 4 was developed. Students studied the learning goal "Computer Networks' Principles". All tasks were completed remotely.

Data Collection

In order to answer the research questions, we analyzed: (1) the ALMA log files created automatically by the environment. In particular, students' sequence of actions during interaction with the environment and their performance in the activities were identified. This way, we obtained an indication of how ALMA supports students in deepening their knowledge and developing an adequate situation model; and (2) the assessment questionnaire completed by the students.
Achievement Measures

With the objective of investigating students' exploitation of ALMA facilities, and particularly of identifying the sequences of actions that students performed in order to study the aforementioned topic, we analyzed the ALMA log files.

9.6.2. Questionnaire

The evaluation questionnaire, filled in by the students, consisted of Likert-scale and other types of questions asking students to express their opinion on the effectiveness of ALMA in supporting the learning process (an indicative question is: "Do you agree to study the text version proposed by ALMA according to your background knowledge?"). Students' answers to Likert-scale questions varied from 1 to 5 (1 indicates "I strongly disagree", 5 indicates "I strongly agree"). Additionally, the students were given the option to express their opinion about each of the questions, as well as to make comments and suggestions for the improvement of ALMA. Cronbach's alpha for the learning style questionnaire and the assessment questionnaire was 0.70 and 0.81, respectively, implying a reasonable level of internal reliability.

Results

The results showed the following:

9.7.1. Learning Style

Seventeen students (22.1%) had the diverging learning style, 30 (39%) the assimilating learning style, 19 (24.7%) the converging learning style, and 11 (14.3%) the accommodating learning style.

9.7.2. Background-Knowledge Questionnaire

Thirteen students (16.9%) scored less than 40% (low background-knowledge), 17 (22.1%) scored between 40% and 60% (median), and the remaining 47 (61%) scored more than 60% (high).

9.7.3. Pre-Reading and Post-Reading Sorting Activity

We performed a one-way ANOVA. The results are shown in Table 1. According to the results in Table 1, the students performed better in the post-reading sorting activity, and the difference was statistically significant (p = 0.00). The performance of the students in the pre-reading and post-reading activity according to their learning style is presented in Table 2 and is independent of the learning style (F(3,73) = 1.910, p = 0.135 for the pre-reading sorting activity; F(3,73) = 1.205, p = 0.314 for the post-reading sorting activity). These results were expected, because there are experimental results which suggest that learners have preferences regarding the kind of interaction/presentation of information they receive [43-46]. Specifically, in a web-based learning environment, data about the usage of the system during interaction is very important, as it allows direct observation of a learner's behaviour [28].

Furthermore, one-way ANOVA showed that the students with low background knowledge improved the most in the post-reading sorting activity, followed by the students with median background knowledge, and lastly the high-knowledge readers, who improved the least. The difference in improvement was statistically significant (F(2,74) = 12.603, p = 0.000). The results are shown in Table 3. A one-way ANOVA was conducted. The performance of the students in the rest of the activities according to their background knowledge and their learning style is shown in Tables 4 and 5, respectively.

ANOVA showed that most of the students performed very well in all types of activities. Thus, via the learning environment ALMA, the students were able to construct both a good text-based model and a good situation model. The performance in the activities was independent of background knowledge.
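For readers who wish to reproduce this kind of analysis, the sketch below runs a one-way ANOVA with scipy on improvement scores for the three background-knowledge groups. The numbers are invented placeholders for illustration and do not reproduce the study's data.

```python
# One-way ANOVA comparing post-reading sorting improvement across the three
# background-knowledge groups, as in the analysis described above. The
# scores below are invented placeholders, not the study's data.
from scipy.stats import f_oneway

improvement_low    = [0.45, 0.50, 0.42, 0.48, 0.52]   # low background knowledge
improvement_median = [0.30, 0.28, 0.35, 0.25, 0.33]   # median
improvement_high   = [0.10, 0.15, 0.08, 0.12, 0.11]   # high

f_stat, p_value = f_oneway(improvement_low, improvement_median, improvement_high)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# A small p-value would indicate that improvement differs across groups,
# mirroring the reported F(2,74) = 12.603, p = 0.000.
```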
The results show that a statistically significant difference was not observed in the performance of the students according to their learning style; this was expected, as we mentioned in Section 9.7.3. The answers of the students to the assessment questionnaire are presented in Table 6. Seventy-eight percent (78%) of the students agreed with the text version proposed by ALMA according to their background-knowledge, whereas 77.9% agreed with the learning sequence proposed by ALMA according to their learning style. Eighty-six percent (86%) appreciated that access to the proposed text was easy, whereas 93.5% appreciated that the learning sequence proposed by ALMA is clear.

A significant proportion of students (67.6%) considered the completion of the background-knowledge questionnaire easy, whereas 77% considered that the completion of the learning style questionnaire was also easy. As concerns the completion of the pre-reading sorting activity, only 33.8% considered it easy, whereas a significant proportion (88.2%) considered the completion of the post-reading sorting activity easy.

As concerns the activity of active experimentation, a proportion of 61.1% considered its completion easy. Moreover, 54.6% considered the completion of the elaborative-inference questions activity easy, 53.3% considered the completion of the text-recall activity easy, and 57.3% considered the completion of the summary activity easy.

Moreover, 70.6% of students were satisfied with the Informative Feedback, whereas 69% were satisfied with the Tutoring Feedback. An important proportion of students, 88.1%, stated that the feedback about their knowledge level was comprehensible, whereas 84.2% stated that the feedback about their knowledge level was useful. A significant proportion, 80.8%, considered the information offered by ALMA in the HELP menu comprehensible, whereas 79.5% considered the information in the HELP menu useful.

Students also answered the following question: "If you had the option to study the learning goal 'Networking and Internet' via: (a) the traditional teaching method; (b) the learning environment ALMA; or (c) a combination of the traditional teaching method and the learning environment ALMA, what would you prefer for: (1) your undergraduate studies? (2) your postgraduate studies?" Percentages of 83.8% and 81.1% of students would prefer to study the above learning goal via a combination of the traditional teaching method and the learning environment ALMA for their undergraduate and postgraduate studies, respectively. Proportions of 14.9% and 12.2% of students would prefer to study the learning goal via ALMA for their undergraduate and postgraduate studies, respectively, and finally, only 1.4% and 6.8% would prefer the traditional teaching method for undergraduate and postgraduate studies, respectively.
Conclusion and Future Plans

In conclusion, ALMA could be a valuable tool for supporting the learning process in introductory computer science courses and helping students to deepen their understanding in the undergraduate curricula of Computer Science. Students had a positive opinion about the ALMA environment because they were encouraged to use their background knowledge while reading, and they believe that ALMA gives them the opportunity to achieve better results in learning from texts in computer science than reading a single text targeted at the level of an average reader. Moreover, students had a positive opinion about the learning sequence proposed by ALMA, and they believe that a combination of the traditional teaching method and the ALMA environment would be the best for their undergraduate and postgraduate studies.

Consequently, ALMA supports both text comprehension and learning preferences. It differs from the other learning environments for text comprehension, which we mentioned in Section 1, in the following ways:
• It supports distance learning.
• It offers four versions of a text according to learners' background-knowledge.
• It offers a variety of activities in order to support students' comprehension.
• It suggests a different learning sequence according to learners' learning style.
• It includes an authoring tool (ALMA_auth) which provides the author with the option of developing and uploading the educational material.
• It includes a forum (ALMA_forum) where students have the possibility to collaborate with each other and also with the instructor.

Our future plans include the summative evaluation of the ALMA environment by undergraduate and postgraduate students, and also by specialists in the assessment of web-based learning environments. We further intend to design and develop educational material for other learning goals, both in higher and in secondary education.

Figure 2. ALMA suggests the appropriate text to the student according to his background-knowledge.
Figure 3. Delivery of a message between the nodes of two networks which are interconnected with a bridge.
Figure 5. The route of data in the network.
Figure 6. Learning sequence for a student with the converging learning style.
Figure 8. The author creates the learning goal via ALMA_auth.
Table 1. Performance in the sorting activity.
Table 2. Performance in the sorting activity according to learning style.
Table 3. Improvement in the post-reading sorting activity according to background knowledge.
Table 4. Performance in comprehension activities in relation to background knowledge.
Table 5. Performance in comprehension activities in relation to learning style.
Suboptimal Chest Radiography and Artificial Intelligence: The Problem and the Solution

Chest radiographs (CXRs) are the most performed imaging tests and rank high among the radiographic exams with suboptimal quality and high rejection rates. Suboptimal CXRs can cause delays in patient care and pitfalls in radiographic interpretation, given their ubiquitous use in the diagnosis and management of acute and chronic ailments. Suboptimal CXRs can also compound and lead to high inter-radiologist variation in CXR interpretation. While advances in radiography, with transitions to computerized and digital radiography, have reduced the prevalence of suboptimal exams, the problem persists. Advances in machine learning and artificial intelligence (AI), particularly in the radiographic acquisition, triage, and interpretation of CXRs, could offer a plausible solution for suboptimal CXRs. We review the literature on suboptimal CXRs and the potential use of AI to help reduce their prevalence.

Introduction

The best introduction to suboptimal chest radiographs (CXRs) and artificial intelligence (AI) might start with the words of the famous American composer Duke Ellington: "a problem is a chance for you to do your best". In the context of suboptimal CXRs, the words imply a dire need for the best solutions, including education and AI. At the same time, a growing body of evidence urges a cautionary approach to AI and reminds us of the words of the legendary World War II correspondent Edward R. Murrow: "Our major obligation is not to mistake slogans for solutions". While several AI-related studies report promising use-case scenarios for AI applications in CXRs, users must recognize the limitations of AI as well. Prior studies have reported on AI applications in the triaging, segmentation, detection, and diagnosis of radiographic findings, as well as risk stratification, outcome prediction, and image optimization of CXRs [1-4]. Conversely, others draw attention to flaws in research and commercial CXR-AI models with regulatory clearance from the United States Food and Drug Administration (FDA) [5]. In this article, we discuss the issues, causes, impact, and potential solutions related to suboptimal CXRs; a similar approach can apply to radiographs of other body parts as well as to other imaging modalities. We review the literature on suboptimal CXRs and the potential use of AI to decrease their prevalence.

Optimal and Suboptimal CXRs: The Criteria

The American College of Radiology (ACR)-Society for Pediatric Radiology (SPR)-Society of Thoracic Radiology (STR) Practice Parameters for the performance of chest radiography and the European guidelines on quality criteria for diagnostic radiographic images provide guidelines for the specifications of the exam [6,7]. These guidance documents define optimal CXRs as those with: optimal exposure, with visibility of the lung parenchyma at a mid-gray level; inclusion of both lung apices and costophrenic angles; optimal positioning, without overlap of the scapulae and arms on the lungs; centering of the vertebral column between the clavicles; adequate definition of the lower thoracic vertebrae and retrocardiac pulmonary vessels; and collimation to limit exposure of body parts beyond the thorax. Figure 1 illustrates various causes of suboptimal CXRs resulting from deviations from optimal radiography technique.
Suboptimal CXRs can be related to: low or high gray-level exposure of the lung fields (from under- or over-exposure); non-inclusion of the entire lungs from the apices to the costophrenic angles; rotated or oblique acquisition without centering of the vertebral column between the clavicles; and the chin, arms, or removable foreign bodies (such as lockets, zippers, coins, and watches) obscuring parts of the anatomy. Other deficiencies include inadequate definition of the lower thoracic vertebrae and retrocardiac pulmonary vessels, low lung volumes from poor inspiratory breath-hold, technical inadequacy resulting in increased noise and processing- and cassette-related artifacts, lack of proper collimation to limit exposure beyond the lungs, and unintended lordotic or angulated projections.

Suboptimal CXRs: The Problem

Poor-quality exams not only affect diagnostic interpretation but also have an economic impact. The national average cost of a CXR in the US is $420, with substantial variations across different locations and sometimes within the same region [8]. Given that 40% of the 3.6 billion worldwide imaging studies performed every year are CXRs, the cost implication of rejected and repeated suboptimal CXRs can be enormous [9]. A repeat radiograph is associated with increased radiation exposure, additional time and resources, workflow issues, diagnostic delays, and potential limitations and pitfalls in interpretation with persistent suboptimality.

Issues related to suboptimal radiography do not have simple solutions, as in the quote, "A problem well-stated is a problem half-solved", from John Dewey (1859-1952), an American philosopher, psychologist, and educational reformer. Although suboptimal radiography is often related to errors in its acquisition, not all causes leading up to suboptimal CXRs stem from inadequate technologist training or a lack of attention to detail. Often, and especially for portable CXRs in acutely sick patients on life support or with severely debilitating conditions, there is little a technologist or high-end acquisition technology can do to obtain optimal CXRs. However, in a world besotted with technological innovations, where solutions often search for problems or amplify some issues to emerge as saviors, it is critical to clearly define the magnitude of the problem of suboptimal CXRs before justifying conventional mitigating steps or proposing cutting-edge remedies with AI.

With the ever-increasing use of imaging [10], there is a need for improved quality control. Quality control is vital for all three main types of radiography: conventional/film radiography, computed radiography (CR), and digital radiography (DR). Conventional/film radiography has several limitations, including limited options for dose reduction, a fixed non-linear gray-scale response, incompatibility with PACS (Picture Archiving and Communication System), and environmental and storage issues [11].
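Returning briefly to the economic impact quantified above, a back-of-envelope calculation shows the scale involved. The sketch below combines the figures quoted in this section (a $420 average CXR cost and CXRs as roughly 40% of 3.6 billion annual imaging studies) with an assumed reject rate within the 4-15% range reported below; the reject rate and the assumption that every rejected CXR is repeated at full cost are illustrative, and applying the US average cost globally is a rough simplification.

```python
# Back-of-envelope cost of repeated CXRs, using figures quoted in the text
# plus an assumed reject rate. Assumes every rejected CXR is repeated at
# full cost, and applies the US average cost globally: a rough simplification.
annual_imaging_studies = 3.6e9   # worldwide studies per year (from text)
cxr_fraction = 0.40              # CXR share of all studies (from text)
cost_per_cxr_usd = 420           # US national average (from text)
assumed_reject_rate = 0.08       # assumption, within the 4-15% reported range

annual_cxrs = annual_imaging_studies * cxr_fraction
repeated_cxrs = annual_cxrs * assumed_reject_rate
repeat_cost = repeated_cxrs * cost_per_cxr_usd
print(f"{repeated_cxrs:.2e} repeated CXRs -> ~${repeat_cost:.2e} per year")
# ~1.15e8 repeats -> ~$4.84e10 per year under these assumptions
```

Even if the true figure is an order of magnitude lower, the exercise illustrates why reducing the reject rate is economically, and not only clinically, motivated.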
Although CR is less expensive than DR and offers multiple detector cassette sizes, it can produce poor-quality radiographs and is labor-intensive. Overexposure that would lead to suboptimal or rejected conventional radiographs can be missed with CR and DR due to the ability to correct the window level and width on the viewing workstation. DR results in higher-quality radiographs and offers opportunities to enhance or manipulate radiographs after acquisition to reduce exposure-related issues. However, image manipulation with DR cannot fix problems beyond radiographic exposure, such as patient positioning, low lung volumes, obscuring body parts or artifacts, clipped lung anatomy, and inadequate collimation. Thus, all radiography technologies are vulnerable to suboptimal quality and, therefore, require surveillance and quality control measures. Various causes of suboptimal CXRs and reject rates are summarized in Table 1.

A 2015 study from Tschauner et al. reported that only 4% of CXRs fulfilled all criteria for optimal pediatric CXRs [12]. The study evaluated the quality of pediatric radiographs against the European guidelines, with a primary focus on optimal collimation of CXRs, since optimal field size is vital in reducing radiation dose. The authors reported that only 49% of radiographs were performed at the peak of inspiration and 76% of examinations were without rotation or tilting. From a review of 800 CXRs, Okeji et al. [13] subcategorized CXR quality based on patient details, anatomical markers, anatomic coverage, full inspiration, artifacts, position of the scapula, radiographic exposure, blurring, rotation, and darkroom processing faults. Only 17% of CXRs met the optimal quality criteria, with inadequate collimation being the most common cause of suboptimal CXRs (83%, n = 664/800 CXRs) [14].

Several publications have reported on the reject rate for radiographs [12-21]. The reject rate refers to suboptimal radiographs that are rejected or discarded, often requiring repeat radiographs to obtain a diagnostic-quality radiograph. For CXRs, prior research reported reject rates varying between 4% and 15% (Table 2) [14-21]. Jabbari et al. evaluated 5695 radiographs in Iran and reported an 11% repeat or reject rate. Problems related to exposure (over- and under-exposure) were the commonest cause of rejection. Other causes of suboptimality included positioning faults, patient motion, and processing faults leading to artifacts or exposure-related issues. Pelvis and upper limb radiographs had the highest and lowest repeat rates of 14% and 4%, respectively [14]. A similar study from Namibia reported errors in patient positioning as the major cause of rejection, followed by issues related to under- or over-exposure [15]. The overall departmental reject rate was 8%, with a 10% reject rate for CXRs. The repeat rates of 16% (mammography), 13% (skull), 10% (cervical spine), and 8.3% (thoracic spine) were higher than the overall average [15]. Foos et al. performed a study in a university and a community hospital setting to analyze the reject rate for CR examinations [16]. CXRs were the most frequently performed examinations and had reject rates of 9% and 8.8% at the university and community hospitals, respectively. The reasons for rejection presented in their study included clipped anatomy, positioning errors, patient motion, artifacts, clipped markers, incorrect markers, and low and high exposure index.
Shoulder, hip, and spine radiographs had reject rates of 9-11%, 10%, and 8-11%, respectively [16]. Jones et al. reviewed 66,063 radiographs from one year using an automated recording system [17]. Default reasons for technologists to select when rejecting radiographs were positioning issues, wrong patient identification number, exposure errors, test images, and artifacts. A blank field was also provided for technologists to enter a free-text cause for rejecting radiographs if necessary. A total of 6002 radiographs were rejected across multiple modalities over the duration of the study. They reported a reject rate of 3% for portable and 30% for decubitus CXRs. The reject rates for pelvis, shoulder, humerus, cervical, thoracic, and lumbar spine radiographs were 19%, 14%, 13%, 12-25%, 11-27%, and 10-16%, respectively. Positioning errors accounted for 77% of the rejections, while 10% of rejected radiographs were from exposure-related issues. Their study also highlights the importance of an automated radiation exposure system to address the issues related to the reject rate [17]. Sadiq et al. conducted a reject-repeat analysis of plain radiographs in Nigeria [18]. The 37% reject rate for CXRs was significantly greater than the overall reject rate of 29%. Under- and over-exposure accounted for 36% and 24% of rejections, while clipped anatomy, excessive patient rotation, and artifacts contributed to 22%, 5%, and 4% of rejections, respectively. Compared to CXRs, postnasal space, paranasal sinus, and pelvis radiographs had higher reject rates of 58%, 43%, and 67%, respectively. Ali et al. conducted a study during the COVID-19 pandemic to evaluate the reject rate [20]. They reported an overall reject rate of 17%. The causes of rejection, in order of frequency, were positioning, artifacts, motion, collimation, labeling, exposure errors, and machine/detector faults. The 23% reject rate for CXRs was 15% higher than the overall average. In their study, skull radiographs (45%) had the highest reject rate, followed by pelvis (35%), abdomen (28%), and neck (21%). These studies highlight the prevalence and causes of issues related to the quality of radiographs. The high variation between the studies on reject rates could be related to the subjective rejection of radiographs by the technologists at different sites.
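The reject analyses summarized above all reduce to simple tabulations over a rejection log: divide rejected exposures by total exposures, overall and per cause. The Python sketch below shows how such rates might be computed; the log format and field names are hypothetical and do not correspond to any vendor's reject-analysis database.

```python
from collections import Counter

# Hypothetical rejection log: one record per rejected exposure.
# Field names are illustrative; real reject-analysis databases differ by vendor.
rejection_log = [
    {"exam": "CXR", "cause": "positioning"},
    {"exam": "CXR", "cause": "overexposure"},
    {"exam": "pelvis", "cause": "motion"},
    {"exam": "CXR", "cause": "clipped anatomy"},
]
total_exposures = {"CXR": 50, "pelvis": 10}  # assumed acquisition counts

# Per-examination reject rate: rejected / total acquired.
rejected_by_exam = Counter(r["exam"] for r in rejection_log)
for exam, total in total_exposures.items():
    print(f"{exam}: reject rate {rejected_by_exam[exam] / total:.1%}")

# Cause ranking, mirroring the cause-frequency tables in the cited studies.
cause_counts = Counter(r["cause"] for r in rejection_log)
print("Causes, most frequent first:", cause_counts.most_common())
```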
For example, in a critically ill or unstable patient, clipped lung apices or overlying anatomy can limit the evaluation of pneumothorax, apical pneumonia, or lesions. Likewise, an underexposed image or one with low lung volumes can limit the evaluation of the lung bases and the position of lines and tubes. Excessive patient rotation can affect the evaluation of lung, hilar, and cardiomediastinal abnormalities. Artifacts can mimic lesions, triggering additional diagnostic tests or repeat radiographs and causing patient anxiety. Suboptimal radiographs can also lead to false-positive or false-negative interpretations of CXR findings. Beyond the adverse impact of suboptimal CXRs on diagnostic interpretation, reacquisition can delay patient care, which is especially important for urgent or critical findings. They can negatively affect patient workflow and cause patient inconvenience, especially in outpatient settings where patient recall might be necessary to repeat CXRs. The latter can happen with film/conventional radiography and CR, where images are not available for immediate viewing. With DR, technologists have immediate access to radiographs and can verify optimality and reacquire before letting the patient leave. Reacquisition also increases the technologists' workload. Given the profound clinical importance of CXRs, a lower frequency of suboptimal CXRs is desirable but a challenging goal. Yet, once a problem is stated, perhaps perseverance can bring success, in the words of Amelia Earhart (1897-1937), the first female aviator to fly solo across the Atlantic: "The most difficult thing is the decision to act, the rest is merely tenacity". Despite extensive guidelines on CXR image quality [6,7], mitigation of suboptimality and reject rates remains problematic. Quality control and improvement are challenging. For one, they are labor-intensive, time-consuming, and often require manual review of radiographs. Although DR is conducive to immediate mitigation with rejection and reacquisition, additional radiographs still entail additional radiation exposure to patients. With a quick image quality review and rejection analysis with DR, there are minimal delays and workflow issues, but such an option is tedious for conventional radiography and CR. Another benefit of the DR system pertains to post-acquisition image enhancement and manipulation to salvage some suboptimal radiographs. A focus on efficiency and productivity often requires technologists to maximize patient throughput and relegate quality control measures to the back seat. Usually, the rejected radiographs are not archived and only become statistics for monitoring databases and information. While these statistics are valuable tools for audit and surveillance, they represent a lost opportunity to "show and tell" or "see and remember". Rejected radiographs should instead be treated as opportunities to learn what to avoid and improve among the causes of recurrent problems resulting in suboptimal or rejected radiographs. Therefore, retention of rejected radiographs in some form is a valuable educational resource for preventing the recurrence of some suboptimal radiographs [22]. Beyond documentation of a radiation event, suboptimal and rejected radiographs can provide information on the cause and need for rejection and, more importantly, whether the repeat radiograph mitigated the issue with the rejected radiographs.
As the technologists acquiring the radiographs are usually tagged to the image, they can receive personalized feedback on the errors while avoiding punitive actions [23]. Individual technologists must not be held accountable for poor quality, as the cause is usually multifactorial. For example, it may not be possible to avoid underexposed suboptimal CXRs in morbidly obese patients or CXRs with clipped lung bases in severely hyperinflated lungs. Positive reinforcement with rewards can motivate and inspire radiographers to put additional effort and attention into optimizing radiographic acquisition. It is essential to record the reject rates for both portable and fixed radiography equipment. In addition, the conventional mitigation strategy of auditing and review is necessary to understand the scope and impact of suboptimal CXRs. Coupled with continuous learning and feedback on quality issues with radiography, these can help mitigate suboptimal radiography but require additional quality assurance staffing, as in our institution. Mitigation: The New Direction While surveillance and education can reduce suboptimality and reject rates, mitigation might benefit from new thinking given the multifactorial causes of suboptimality. Perhaps Albert Einstein (1879-1955) was correct when he stated, "We cannot solve our problems with the same level of thinking that created them". So, is AI the new level of thinking or mitigation for suboptimal CXRs and other radiographic examinations? AI is ushering in a new revolution in medicine, and medical imaging is at the forefront of AI applications due to its massive digital footprint. For example, in radiography, and specifically for CXRs, several AI algorithms triage and detect radiographic findings [1][2][3][4][24]. There is little doubt that some causes of suboptimality are not always solvable, such as exposure issues in an extremely large patient or low lung volumes in critically ill, unconscious patients. For others, such as clipped anatomy, overlapping structures, or artifacts, AI can help. AI algorithms from some commercial entities (such as Qure.ai, Annalise, and Carestream) also target qualitative aspects of CXRs. For example, Carestream's fixed radiography units use AI before and after image acquisition. For positioning the patient, they utilize two RGBD cameras (Red, Green, Blue, and Depth) to collect patient information and transfer it to an AI-based pose-detection algorithm and classifier. The information on fixed DR units helps automatically adjust the Bucky height to the patient, assisting radiographers. This smart positioning system communicates essential aspects of an ideal CXR to the radiographers, such as patient contact with the Bucky, center alignment, patient orientation, tilt, and hand position. Such information can help radiographers avoid patient positioning errors [25]. Another AI-based feature (Smart Noise Cancellation, Carestream), based on a deep convolutional neural network trained to predict input image noise, can result in a 2 to 4× noise reduction without loss of sharpness of anatomical structures [26]. With noise cancellation, users can reduce radiation dose by up to two-fold, which is especially relevant for neonates and small children. AI algorithms can also determine the patient size and adjust or adapt automatic exposure control settings on some fixed radiography units to ensure adequate quality.
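As a simple illustration of post-acquisition exposure screening, the check below flags candidate exposure outliers from gray-level statistics. The thresholds, the 8-bit pixel assumption, and the function name are arbitrary choices for this sketch, not a validated exposure index or any vendor's algorithm.

```python
import numpy as np

def flag_gray_level_outlier(pixels: np.ndarray,
                            low: float = 60.0,
                            high: float = 190.0) -> bool:
    """Return True if the mean gray level falls outside an assumed
    acceptable band, marking the image for human review.

    Whether a dark or bright image maps to under- or over-exposure
    depends on the display convention, so this toy check stays
    direction-agnostic; thresholds are illustrative only.
    """
    mean_gray = float(pixels.mean())
    return not (low <= mean_gray <= high)

# Toy usage with a synthetic 8-bit "radiograph":
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(2048, 2048), dtype=np.uint8)
print(flag_gray_level_outlier(img.astype(np.float32)))
```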
Another use of cameras and AI on radiography units involves the recognition of shoulder joints to determine the correct collimation field size and settings. While reducing the radiation dose to body regions beyond the chest, this AI-based smart collimation feature saves the radiographer's time and decreases the subjectivity of manual collimation adjustment [25]. Furthermore, post-acquisition noise reduction filters can help improve the quality of CXRs, as reported in several studies [27,28]. Fukui et al. reported the potential for up to 72% radiation dose reduction for portable DR with the use of noise reduction software to improve the image quality of low-radiation-dose CXRs [27]. Many radiography vendors (such as AGFA, Fujifilm, GE, and Siemens) also offer options such as auto-positioning for CXRs using AI integration and cameras. For example, the Siemens YSIO X.pree X-ray system deploys an AI-integrated 3D camera for automatic body-part detection and collimation adjustment in less than 0.5 s [29]. Two AI vendors have introduced algorithms that analyze some causes of suboptimal CXRs. The Annalise AI algorithm [30] for CXRs evaluates patient rotation, cervical flexion, underinflation (low lung volumes), under- or over-exposure, and clipped or obscured anatomy. A similar AI algorithm from Qure.ai assesses incompletely imaged CXRs and specifies the excluded anatomy (such as the left lung, left apex, or left costophrenic angle), patient rotation, under- or over-exposure, and incomplete inspiration (low lung volumes) [31]. We have developed a suite of home-grown AI algorithms to assess different causes of suboptimal CXRs on the COGNEX Vision Pro Deep Learning platform, which allows non-programmers to build AI models [32]. Our models identify clipped anatomy (such as apices and lung bases), over- and underexposed CXRs, patient rotation, anatomy obscured by the chin or arms projecting over the chest, and low lung volumes due to inadequate inspiration, as shown in Figure 2. We intend to deploy these AI algorithms to perform a post-acquisition image quality audit of CXRs. Such audits will help track suboptimality and develop case-based continuous learning for radiographers. The trajectory of AI applications in CXRs suggests an ongoing and expanding suite of AI applications using camera-mounted, AI-enabled radiography units to automate positioning, centering, rotational, and collimation tasks. Such systems can help reduce errors relative to legacy radiography units. In addition, post-acquisition, AI algorithms can evaluate CXRs and prompt radiographers to repeat radiographs as needed. Although best integrated into the radiography units, such image quality assessment AI algorithms can help conduct retrospective audits for suboptimal CXRs and identify the scope and magnitude of suboptimal CXRs.
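At their core, the quality-assessment algorithms described above are multi-label image classifiers: one radiograph can simultaneously show rotation, clipped anatomy, and low lung volumes. The PyTorch sketch below shows one plausible structure for such a model; the architecture, label set, and training step are assumptions for illustration and do not reproduce our COGNEX-based models or any vendor's algorithm.

```python
import torch
import torch.nn as nn

# Illustrative label set drawn from the suboptimality causes discussed above.
QUALITY_LABELS = ["clipped_apices", "clipped_bases", "under_exposed",
                  "over_exposed", "rotation", "obscured_anatomy",
                  "low_lung_volume"]

class CxrQualityNet(nn.Module):
    """Minimal multi-label CNN; a real system would use a pretrained backbone."""
    def __init__(self, n_labels: int = len(QUALITY_LABELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling to a 32-d descriptor
        )
        self.head = nn.Linear(32, n_labels)  # one logit per quality defect

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = CxrQualityNet()
loss_fn = nn.BCEWithLogitsLoss()        # independent per-label probabilities
x = torch.randn(4, 1, 256, 256)         # batch of synthetic grayscale "CXRs"
y = torch.randint(0, 2, (4, len(QUALITY_LABELS))).float()
loss = loss_fn(model(x), y)
loss.backward()                         # one illustrative training step
print(f"toy loss: {loss.item():.3f}")
```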
While AI algorithms can help avoid and identify causes of suboptimal CXRs, radiographers' participation is critical. At the time of preparing this manuscript, there were no publications on AI use in suboptimal CXRs. In addition, several questions remain unanswered on the accuracy and performance of these AI applications and algorithms, such as on portable CXRs and in the presence of complex patient anatomy. We hope that our review will trigger further research and verify the robustness and generalizability of available AI solutions. The ultimate question for the future is whether the trajectory of scientific developments besides AI will bring completely autonomous robotic radiographic units to remove human errors in radiography. While such developments might reduce human errors, portable radiography, particularly in acutely sick patients, is challenging beyond manual and technical issues. Such challenges are likely to continue due to complex patient geometry, anthropometry, and sometimes an expanding array of advanced life-support equipment that evolves in parallel with the science that helps mitigate the existing issues. Conclusion In summary, a substantial proportion of CXRs are suboptimal and require reacquisition. However, the reacquisition of rejected CXRs involves additional radiation exposure, workflow issues, and delays in patient care. While awareness, audit, and continuous education represent vital strategies to mitigate the high frequency of suboptimal CXRs, automation with AI-integrated cameras and AI-enabled algorithms will likely help improve the quality of CXRs. Funding: This research received no external funding. Institutional Review Board Statement: Not applicable. Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Swearword Translation in Steve Jobs: A Communicative/Semantic Perspective Swearwords are commonly used in daily communications, but how to translate swearwords appropriately has received relatively little attention. This research explores swearword translation from English into Chinese based on Peter Newmark's theory of communicative and semantic translation through a case study of the book Steve Jobs. Through detailed analyses, it is proved that translating swearwords should be reader-oriented so that the translations can produce an equivalent effect on the target readers. It is also found that many elements should be considered in swearword translation, such as the character of the swearer, the context, and the source and target languages and cultures. Introduction Translation, as the essential tool in the communication between different languages and cultures, should cover every aspect of life, whether noble or low, decent or vulgar. However, academic studies have so far seldom touched upon the translation of dialogue, especially its low or vulgar aspects such as swearwords. Swearword translation is a dilemma for translators, for swearwords are often regarded as indecent and inappropriate for readers, especially teenage and child readers. Whether to translate them and how to translate them remain controversial. The lack of sophisticated or comprehensive studies of swearword translation hinders people's understanding of swearwords in different languages and cultures, and intercultural communication at large. In view of this situation, this research conducts a case study of swearword translation in the popular book Steve Jobs from the perspective of Peter Newmark's communicative and semantic translation theory, hoping to provide readers with some knowledge of whether and how to translate swearwords from English into Chinese. Literature Review The Oxford Dictionary defines SWEARWORD as "rude or offensive language used especially when angry". Swearwords date back to a long time ago, when human beings began to express feelings or emotions through language. However, they are often regarded as embarrassing because they are likely to bring about negative responses on the part of the listeners. There have been many protests against and bans on the use of swearwords in public places, such as on TV, in broadcasting, or in movies, yet swearing is an indispensable part of everybody's life. "No people have ever abandoned its habits of swearing merely because the State… forbad it" (Wajnryb, 2004, p. 5). Swearing serves as a relief mechanism whereby excess energy is allowed to escape without doing anyone serious injury, while doing the swearer some good. With the liberalization of society as well as of literature, swearwords are frequently used in literary works. However, scholars' views on such a trend vary. Some argue against the popular use of swearwords in literature because literature, as processed and regulated language, reflects the values and attitudes of the author, and thus authors should take up the social responsibility to make sure that their works do not cast a negative effect on readers, especially on teenage or child readers (Huang, 2009). If swearwords are to be used, they should be used appropriately; otherwise, they would lower the value of such works and, more importantly, pollute the cultural environment (Ma, 1996).
Some tactics can be applied to deal with swearwords, such as replacing them with symbols like "X" or "……" in Chinese, which represent the swearwords in a more indirect or decent way (Wu, 1987). Others insist that swearing is a way to relieve one's emotions or feelings, a look into which allows the person's emotions or feelings, like anger, surprise, or frustration, to be understood. Sometimes, swearwords do not just reflect what a person is but, more significantly, what a society or a culture is. They serve as an epitome of people's life as well as spirit (Chen, 2011). As strongly emotional words, swearwords play an important role in literary works. Whether said by the narrator, as in the case of The Catcher in the Rye, or by a character, as in the case of Steve Jobs, swearwords create a full personality of the swearer. Through these words, readers can have a better or more complete idea of the personality of the character, which can further the theme of the literary work. Though swearwords play such a significant part, discussions of swearword translation generally fall into two major groups. Some argue that it is better to translate swearwords literally in order to preserve the original flavor and cultural concepts, which will in turn help promote the communication between two languages and cultures, as target readers can, through reading the translations, know more about the swearwords and the cultural indications behind them, and ultimately gain a deeper understanding of the source language and culture (Gu, 2010). Others hold the opinion that swearwords should be translated in a way that reproduces their communicative effects on the target readers. The translator should try to make the translations as natural as possible to achieve functional equivalence (Golan, 2006). However, previous research was mainly conducted on swearwords as a whole or specifically on swearwords in movie subtitles. No research has ever investigated the translation of swearwords in biographies. Moreover, there has been a lack of well-grounded theoretical foundations, and the categorizations of swearwords have not had a clear-cut line. The specific translation approach to be applied to a specific type of swearword is yet to be found. Theoretical Framework Whenever there is translation, there are heated debates over whether to translate literally or freely, or whether to adopt a reader-oriented or author-oriented approach. The gap between emphasis on the source and on the target language was finally narrowed by the appearance of communicative and semantic translation theory. In Approaches to Translation, Peter Newmark (2001) introduced the concepts of communicative and semantic translation. Communicative translation "attempts to produce on its readers an effect as close as possible to that obtained on the readers of the original", while semantic translation "attempts to render, as closely as the semantic and syntactic structures of the second language allow, the exact contextual meaning of the original" (Newmark, 2001, p. 39). These two methods are widely different. Communicative translation focuses on target readers, who want to read the translated work without any difficulty or obscurity and thus expect the translation to be like a native work in language as well as in culture, while semantic translation places more emphasis on the original.
Generally, communicative translation tends to be "smoother, simpler, clearer, more direct, more conventional", while semantic translation is likely to be "more complex, more awkward, more detailed, more concentrated" (Newmark, 2001, p. 38). In light of the communicative approach, the translator has the freedom "to correct or improve the logic; to replace clumsy with elegant, or at least functional, syntactic structures; to remove obscurities; to eliminate repetition and tautology; to exclude the less likely interpretations of an ambiguity; to modify and clarify jargon" or to "normalize bizarreries of idiolect" (Newmark, 2001, p. 42). In contrast, in semantic translation, the translator tries to preserve the original flavor and tone. Peter Newmark related his translation methods to three text functions: expressive, informative, and vocative. The expressive function focuses on the mind of the speaker, the writer, the originator of the utterance; the informative function on the external situation, the facts of a topic, reality outside language, including reported ideas or theories; and the vocative function on the readership, i.e., the addressee (Hoffman, 1996, p. 140). In fact, every single text must perform the three functions together but will have one function in domination. According to Newmark, semantic translation guarantees the expressive function, while communicative translation caters to the informative and vocative functions (Newmark, 2001, p. 47). Swearword Translation in Steve Jobs In biographies, the expressive function stands out. But which text function do swearwords emphasize: expressive, informative, or vocative? Which approach should be applied to swearword translation in biographies: semantic translation or communicative translation? This section presents a detailed analysis of the translation of swearwords from English to Chinese in the biography Steve Jobs to find answers to the above questions. Introduction to Steve Jobs Steve Jobs is the authorized biography of Steve Jobs, designer, inventor, and chief executive officer of Apple Inc., written at the request of Jobs by the acclaimed biographer Walter Isaacson, a former executive at CNN and Time who has written best-selling biographies of Benjamin Franklin and Albert Einstein. Based on more than forty interviews with Jobs conducted over two years, in addition to interviews with more than one hundred family members, friends, adversaries, competitors, and colleagues, Isaacson was given "exclusive and unprecedented" access to Jobs's life. Jobs was said to have encouraged the people interviewed to speak honestly. He asked for no control over the book's content other than its cover, and waived the right to read it before it was published. Originally planned for release on March 6, 2012, the release date was moved forward to October 24, 2011 due to his death on October 5, 2011. The Chinese translation was done by Yu Qian, Guan Yanqi, Wei Qun, Zhao Mengmeng and Tang Song from mainland China. Swearwords in Steve Jobs Swearwords, as emotional intensifiers, show the feelings or emotions of the speakers or swearers. In Steve Jobs, there are a lot of swearwords, the majority of which were said by Jobs himself. When he felt upset, annoyed, dissatisfied, depressed, or angry, he tended to swear. These swearwords also reveal his character as a straightforward and emotional perfectionist. A summary is given below of the sources, distribution, and functions of swearwords in this book.
Swearword Translation in Steve Jobs Halliday (2007) claimed that different language structures perform different functions, such as the ideational meta-function, interpersonal meta-function, and textual meta-function, and thus cast different effects on target readers, which in turn require different translation methods. Accordingly, this research categorizes the swearwords in Steve Jobs into four types: the independent pattern, reference pattern, insertion pattern, and replacement pattern, each with different sentence structures, functions, and effects, and explains the appropriate translation method for each category. Independent Pattern The independent pattern refers to swearwords that make up an independent or separate part of a sentence by themselves, such as "Bullshit" or "Fuck", often with an exclamation mark at the end. [Table 2 fragment, Example 4: swearword "Fuck you!"; subject: organs or sexuality; swearer: Lasseter; context: he found out that he was cheated by his friend, which trapped the company in trouble; translation: "去你妈的!"拉塞特愤怒地说道] The swearwords in Table 2 are all in the independent pattern. They are interjections in each sentence, which, if deleted, would bring no change to the original sentence structure. Among the four categories, independent-patterned swearwords constitute the simplest unit but cast the most powerful effect. When people have the strongest emotions, they tend to say the least, but the least speech will in return produce the strongest effect. Such strong emotions are not confined to negative feelings like anger or annoyance but can also apply to positive ones like surprise or excitement. In Table 2, the first two examples, "Holy shit" and "Holy Christ", are exclaimed with positive sentiments. For example, Example 1 was sworn by both Steve and Woz when they finally found the journal after great effort. In fact, in terms of character, compared to the straightforward, emotional or temperamental, rebellious and aggressive Steve Jobs, Woz is quite reserved and obedient. This "Holy shit" completely shows how excited they felt in that situation, as well as their pursuit or even desire of a perfect technical feat, as the journal they found could answer their doubts and finally led to their later success with the Blue Box. In terms of subject, "Holy" is related to religion or God, and "shit" to excrement. "Shit" is equivalent to "放屁" or "吃屎" in semantic meaning, but the latter are often used to express negative emotions in Chinese, such as disagreement or despisal, while "Holy shit" in this situation is regarded as an exclamation, an expression of surprise. If "shit" is translated semantically, target readers of the translated swearwords, coming from a different culture, will react differently and thus have a different idea of the matter being described and the people involved. Therefore, this swearword should be handled with the communicative approach, which is what the translator did by using "天哪". In Example 4, Lasseter was normally quite polite, decent, calm, and reserved, without the habit of swearing. However, "Fuck you" is quite a strong swearword in English. With such strong swearing from such a decent man, readers can imagine how outraged Lasseter was when he found himself cheated. In terms of subject, "Fuck you" is related to sexuality. The translation "去你妈的" provides target readers with an experience similar to that of the original readers of "fuck you". Both render the strong emotion of anger, and the translation in this sense has achieved the closest effect to that of the original text.
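To make the four-pattern taxonomy concrete, the toy Python sketch below tags an English swearing utterance with a candidate pattern using crude surface heuristics. The keyword lists and rules are invented for illustration only and are far simpler than the Halliday-style syntactic analysis the study actually performs.

```python
SWEARWORDS = {"shit", "fuck", "fucking", "bullshit", "damn",
              "goddamn", "hell", "asshole"}

def guess_pattern(utterance: str) -> str:
    """Rough surface heuristic for the four swearword patterns.

    Real classification needs syntactic context (clause analysis);
    these rules only illustrate the taxonomy's intuition.
    """
    tokens = [t.strip(",.!?") for t in utterance.lower().split()]
    if not any(t in SWEARWORDS for t in tokens):
        return "no swearword"
    if len(tokens) <= 3 and utterance.rstrip().endswith("!"):
        return "independent (stand-alone interjection)"
    if any(t in {"damn", "goddamn", "fucking", "hell"} for t in tokens):
        return "insertion (intensifying adverb/adjective)"
    if {"like", "is", "are", "was", "looks"} & set(tokens):
        return "reference (metaphor/simile vehicle)"
    return "replacement (swearword substitutes for a content word)"

for u in ["Holy shit!", "That design looks like shit.",
          "pretty damn smart", "You don't deserve shit."]:
    print(f"{u!r:40} -> {guess_pattern(u)}")
```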
Reference Pattern The reference pattern, also known as the metaphor/simile pattern, uses swearwords (like "shit", "shithead", or "bozo") to refer to something or somebody as an insult or humiliation. In Halliday's theory, this is a relational clause with an attributive or equative pattern (Halliday, 2007, p. 185). Below is a detailed analysis of the reference-patterned swearwords in Steve Jobs. [Table 3 fragments: Example: "Let's stop this bullshit!"; subject: excrement; swearer: Jobs, scolding an engineer about the display of word-processing programs; translation: 别说这狗屁玩意儿! Example 8: People were either "enlightened" or "an asshole." Their work was either "the best" or "totally shitty."; subject: organs + excrement; swearer: Jobs, referring to different people; translation: 人要么就是"受到过启示的",要么就是"饭桶";人的工作成果要么就是"最棒的",要么就是"完全的垃圾"] In this category, the swearwords are mostly nouns functioning as objects and serving as vehicles in metaphors, such as those in Examples 5, 7, and 8 of Table 3. In Example 6, the swearword "shit" itself serves as the vehicle in a simile. Among the four categories, the reference pattern ranks second in degree of swearing. In Example 5, "fucking dickless assholes" is what Jobs said when scolding the staff for their mistake. It is related to organs or sexuality in subject and is thus very strong and direct. However, its translated version, "没有生殖器的混蛋", is quite disappointing. Though accompanied by a footnote saying "原文是 fucking dickless assholes, 缩写就是 FDA", this translated version still sounds awkward, as no one would swear like that in Chinese. In Example 6, "That design looks like shit" is Jobs's comment on the design of his colleague. This "shit", referring to the design, is a strong insult. In terms of subject, it is related to excrement, which has its Chinese equivalent "狗屎". The translated version "一坨狗屎" works well, as it produces both a semantic and a communicative equivalent, preserving the original meaning of excrement as well as its communicative effect as an insult or criticism. In fact, reference-patterned swearwords do not just express the emotions of the swearers but, more so, describe their ideas about the reference objects. As a result, they are more objective statements, whose original meaning had better be preserved, making semantic translation the preferred strategy. Communicative translation is used only when there is no equivalent in the target language and culture. Insertion Pattern The insertion pattern means that a swearword serves as an insertion in a sentence. In an insertion pattern, the swearword is usually an adverb or adjective acting as an intensifier of the adjective or noun it describes. For example, in "pretty damn smart", the swearword "damn" is an intensifier of the adjective "smart" that makes it stronger in emotion. Because of this, this pattern produces quite a mild swearing effect, far weaker than the previous two categories. [Table 4 fragment: Example: "What the hell is he doing in any of this conversation?"; subject: religion or god; swearer: Amelio, angry about Jobs getting into the discussion about his position in the company; translation: 他到底为什么会参与这样的讨论] One common feature of these swearwords is that the focus of the sentence is not on them but on the adjectives or nouns they describe. As a result, semantic translation seems unsuitable. For example, it is quite difficult to translate "fucking" in Example 10 with the semantic approach. In terms of subject, "fucking" belongs to sexuality, which translated into Chinese would be "操". However, it makes no sense to combine such a Chinese swearword with the rest of the sentence.
Moreover, considering Friedland's neutral attitude here as well as the translation difficulty, the only method is perhaps to lower the swearing degree or even remove it. Otherwise, strange sentences result, like "该死的有体臭的嬉皮士" in Example 11, the product of a semantic translation that tries to preserve both sentence structure and meaning. In this example, "Goddamn" belongs to the religious subject, with no swearword equivalent in Chinese, so in translation there is a need to change it to another subject. "该死的" would be a good choice; however, the sentence structure sounds unnatural to Chinese ears. In translating insertion-patterned swearwords, it is difficult to maintain the original meaning or structure, making semantic translation inappropriate or even impossible. In this case, solutions can be found by lowering or removing the swearing or reforming the original structure. Thus, communicative translation is the preferred strategy. Replacement Pattern The replacement pattern means that the swearword replaces another word in a phrase or sentence. That is to say, the original sentence or phrase is not swearing, and the swearword added to replace a certain word is just an intensifier of the tone rather than performing its semantic meaning. For example, "shit" in "you don't deserve shit" does not really carry its dictionary meaning, excrement, but is just an intensifier. Similar to the insertion pattern, the replacement pattern produces a very weak swearing effect, so a removal or lowering of the swearing can be considered. If replacement-patterned swearwords are translated semantically, the sentences will make no sense. For instance, in Example 13, if "shit" in "You don't deserve shit" were translated semantically, it would become "你不配得到狗屎", which would be confusing or even incomprehensible to Chinese readers. Also, in Example 15, "I don't give a shit about Apple", what on earth would the semantic translation "不给苹果一堆狗屎" mean? Another example is "fuck" in Example 15. "Fuck" denotes a sexual activity; if such a semantic meaning were translated, Chinese readers would be shocked to read "而且还会和我带进苹果的那些人都发生性关系". Therefore, "fuck", here translated as "干掉", is a communicative approach. It serves as a pun, which preserves the sexual element and, at the same time, the substantial meaning of "beat" or "fire". Generally, replacement-patterned swearwords do not perform their semantic meanings. If the semantic approach is adopted on such texts, the translation will be unreadable or incomprehensible. Therefore, translators should figure out what these words really mean, try to analyze the emotions of the swearers, and then add independent swearwords to express such emotions. In other words, communicative translation is more appropriate for this category. Conclusion To conclude, in swearword translation, many elements, such as the character of the swearer, the context, and the language habits of both cultures, should be taken into account before the translator decides which translation method to apply. Based on Peter Newmark's communicative and semantic approach to translation, among the four categories of swearwords in Steve Jobs, independent-patterned, insertion-patterned, and replacement-patterned swearwords should generally be translated with the communicative approach, while reference-patterned ones should be translated predominantly via the semantic approach. Most people argue that the translation of biography should be author-oriented rather than reader-oriented.
However, this research proves that reader-oriented translation is a better approach. The aim of a biography is to let readers know what the character has experienced in his life and what aspects of the character's personality such experiences have revealed. In this sense, it is not just about information but also about feelings, i.e., how readers feel about this character. As a result, the speech of the character should produce an equivalent effect on the target readers. Only in this way will the target readers feel the same towards this character as the original readers do. Swearwords, as the kind of speech with the strongest emotions, should also be handled in the same manner.
Molecular Mechanisms of Obesity-Linked Cardiac Dysfunction: An Up-Date on Current Knowledge Obesity is defined as excessive body fat accumulation, and worldwide obesity has nearly tripled since 1975. Excess free fatty acids (FFAs) and triglycerides in obese individuals promote ectopic lipid accumulation in the liver, skeletal muscle tissue, and heart, among others, inducing insulin resistance, hypertension, metabolic syndrome, type 2 diabetes (T2D), atherosclerosis, and cardiovascular disease (CVD). These diseases are promoted by visceral white adipocyte tissue (WAT) dysfunction through an increase in pro-inflammatory adipokines, oxidative stress, activation of the renin-angiotensin-aldosterone system (RAAS), and adverse changes in the gut microbiome. In the heart, obesity and T2D induce changes in substrate utilization, tissue metabolism, oxidative stress, and inflammation, leading to myocardial fibrosis and ultimately cardiac dysfunction. Peroxisome proliferator-activated receptors (PPARs), which are involved in the regulation of carbohydrate and lipid metabolism, also improve insulin sensitivity, triglyceride levels, inflammation, and oxidative stress. The purpose of this review is to provide an update on the molecular mechanisms involved in obesity-linked CVD pathophysiology, considering pro-inflammatory cytokines, adipokines, and hormones, as well as the role of oxidative stress, inflammation, and PPARs. In addition, cell lines and animal models, biomarkers, gut microbiota dysbiosis, epigenetic modifications, and current therapeutic treatments in CVD associated with obesity are outlined in this paper. Introduction The obesity epidemic has spread globally in the past four decades. Nowadays, more than a third of the world population is obese or overweight [1,2]. Obesity prevalence in children and young adolescents aged 5-19 years increased from 4% in 1975 to 18% in 2016 [1]. In the United States, according to data reported in 2015 to 2016, the prevalence of obesity was 39.8% in adults, 20.6% in adolescents (12 to 19 years), 18.4% in children 6 to 11 years of age, and 13.9% in children 2 to 5 years of age [3]. In 2019, an estimated 38.2 million children under the age of 5 years were overweight or obese [4]. Excess white adipocyte tissue (WAT), mainly visceral accumulation, is highly associated with dyslipidemia, systemic insulin resistance, hypertension, metabolic syndrome, obstructive sleep apnea, type 2 diabetes (T2D), atherosclerosis, and cardiovascular disease (CVD) [2,5]. Globally, CVD is the main cause of death; an estimated 17.9 million people died in 2016, representing 31% of all global deaths. Of these deaths, 85% were related to heart attack and stroke, and one-third occurred prematurely in people under 70 years of age. The heart can allocate fat in three compartments: epicardial adipose tissue (EAT), paracardial adipose tissue (PAT), and pericardial fat [5]. In CVD development, perivascular adipose tissue is important; it includes EAT adjoining the coronary arteries and fatty deposits around the aorta and the medium and small arteries. Moreover, the expression of various vasoconstrictors such as resistin, Ang II, and chemerin is increased in perivascular adipocytes [9]. The amount of EAT is strongly correlated with visceral obesity [9].
Local negative effects of EAT on the myocardium include compression, local delivery of free fatty acids (FFAs) and cardioactive hormones, and the release of pro-inflammatory adipokines, resulting in maladaptive cardiac morphologic changes and dysfunction. Furthermore, an increase in EAT also causes coronary calcification, atheromatous plaque formation, and coronary artery disease (CAD) [5,28]. Ng et al. reported that increased EAT volume index and insulin resistance were independently associated with increased myocardial fat content and interstitial myocardial fibrosis. Augmented EAT volume was also associated with a diminution in LV global longitudinal strain [29]. EAT thickness, or PAT volume, was associated with low high-density lipoprotein (HDL) levels, an increase in fasting glucose, higher C-reactive protein (CRP), and other cardiovascular risk characteristics. It is important to note that most of these correlations disappeared after adjusting for VAT mass [28]. PAT mass significantly correlates with hypertension, elevated triglycerides, and decreased HDL, but after VAT adjustment, this correlation disappeared. However, epidemiological studies showed an association between PAT volume and calcium deposition in coronary arteries, even after adjustment for VAT [28]. Therefore, excess body fat content is involved in the pathogenesis of CAD and CVD. Effects of Adipokines in CVD Adiponectin, an adipokine mainly secreted from WAT, is diminished in obesity due to both decreased adiponectin secretion and reduced receptor expression [30]. Furthermore, in obese patients affected with CAD, production of adiponectin by EAT is decreased [9]. Adiponectin is inversely correlated with cardiovascular risk factors such as hypertension, atherosclerosis, dyslipidemia, and hyperglycemia and is a potential therapeutic target for diastolic dysfunction. In the heart, adiponectin protects against and prevents myocardial hypertrophy, cardiac fibrosis, atherosclerosis, inflammation, nitrative and oxidative stress, and angiogenesis [14]. Additionally, adiponectin antagonizes the actions of endogenous vasoconstrictors, including the activity of renal sympathetic nerves, and induces natriuresis by inhibiting the secretion of aldosterone. Adiponectin enhances the capacity of the heart to sustain pressure or volume overload and protects the heart against ischemic injury [31]. On the other hand, leptin is an adipokine that regulates appetite and body fat mass, mostly through the central nervous system. Adipocyte release of leptin is directly correlated with fat deposition. The heart expresses high amounts of leptin receptors, and cardiomyocytes can release leptin. In murine models, mutations in leptin or its receptor lead to altered metabolism in cardiomyocytes, cardiac steatosis, and cardiac dysfunction [31]. Obese patients often have elevated levels of leptin and are resistant to its actions on the central nervous system to inhibit food intake. In obesity, leptin-mediated increases in aldosterone promote sodium retention, increase cardiac filling pressures, exacerbate remodeling, and accelerate the progression of HF. The interaction of leptin with Ang II and mineralocorticoid receptors facilitates the inflammatory process and can cause cardiac hypertrophy and fibrosis. Furthermore, both increased leptin and diminished adiponectin signaling likely contribute to obesity-related HFpEF [31]. In mice fed an HFD, autotaxin accumulation was associated with cardiac dysfunction.
On the other hand, autotaxin blockade protected obese mice against structural cardiac disorders, hypertrophy, and LV dysfunction [32]. Neprilysin is not considered an adipokine; however, it is expressed on the surface of mature adipocytes, and people with obesity have elevated levels of neprilysin. This molecule degrades endogenous natriuretic peptides, which inhibit renal sodium reabsorption, suppress aldosterone secretion from the adrenal gland, and inhibit inflammation and fibrosis [31]. In obese subjects with HFpEF, soluble neprilysin levels are high; its inhibition, on the contrary, reduces ventricular overload and improves LA overfilling in these patients [31]. Therefore, adipokines play an important role in protecting against cardiac dysfunction. The Role of Oxidative Stress in CVD Pathogenesis There are data connecting obesity, oxidative stress, and an increase in HF. Hyperglycemia and insulin resistance increase reactive oxygen species (ROS) production and promote inefficiency of the antioxidant systems in obese rats [33]. In young patients with obesity, this condition promotes disturbed mitochondrial function, ROS release, and cell death [34]. Moreover, cardiac steatosis generates several lipotoxic intermediates such as acylcarnitine, diacylglycerol (DAG), and ceramides. It has been reported that DAG and ceramides generate ROS, and that ceramides increase ROS production through disruption of the mitochondrial electron transport chain, thereby inducing apoptosis and insulin resistance. DAG induces insulin resistance via protein kinase C (PKC) signaling, which inhibits insulin receptor substrate 1 (IRS-1) phosphorylation. Furthermore, ROS increase some oxidative stress markers such as 8-hydroxy-2-deoxyguanosine (8-OHdG) and protein carbonyl, and possibly reduce cell viability [35,36]. In the heart, cardiac myocytes, endothelial cells, and neutrophils are sources of ROS through NADPH oxidase overactivity, driving myocardial remodeling, including contractile dysfunction and structural alterations [37,38]. ROS cause these cardiac alterations by the following mechanisms: (1) ROS activate a broad variety of hypertrophy signaling kinases and transcription factors, such as the tyrosine kinase Src, which regulates smooth muscle function through the control of actin-cytoskeleton dynamics [39]; the GTP-binding protein Rac, associated with hypertrophy and smooth muscle cell proliferation, endothelial dysfunction, hypertension, and atherosclerosis [40]; mitogen-activated protein kinase (MAPK) and c-Jun N-terminal kinase (JNK), related to cell growth, differentiation, development, the cell cycle, survival, and cell death; and the nuclear factor-kappa B (NF-κB) pathway, related to pro-inflammatory gene transcription [41,42]; (2) ROS promote apoptosis through modulation of apoptosis signal-regulating kinase 1 (ASK1); (3) ROS cause DNA strand breaks via activation of the nuclear enzyme poly(ADP-ribose) polymerase-1 (PARP-1), which regulates cell survival and death; (4) ROS can activate matrix metalloproteinases (MMPs), which are increased in the failing heart; and (5) ROS directly affect contractile function by modifying proteins implicated in excitation-contraction coupling, such as the sarco/endoplasmic reticulum Ca2+ ATPase (SERCA) [37,41], thus leading to cardiac dysfunction.
Leptin is a mediator of cardiac alterations in obesity; specifically, it can exert pro-fibrotic and pro-oxidant effects through activation of the PI3K-Akt signaling pathway and, subsequently, activation of TGF-β and connective tissue growth factor (CTGF) [43,44]. Studies carried out in rats fed HFDs have shown that leptin locally leads to the heart alterations associated with obesity through induction of collagen production, which is mediated by oxidative stress and by the mTOR signaling pathway [44]. Other evidence of the relationship between oxidative damage and CVD development involves nuclear factor erythroid-2 related factor (Nrf2), an important regulator of redox signaling that acts as a transcriptional activator of antioxidant response element (ARE)-responsive genes such as heme oxygenase-1 (HO-1), glutathione-S-transferase (GST), glutathione peroxidase (GPx), NAD(P)H quinone oxidoreductase 1 (NQO1), superoxide dismutase (SOD), catalase (CAT), and glutathione reductase (GR), to name a few [45]. Some studies have demonstrated that Nrf2-knockout mice develop cardiac hypertrophy, while activation of this nuclear factor by specific pharmacological activators such as epigallocatechin 3-gallate is effective in inducing the expression and activation of Nrf2 in the adipose tissue of obese mice, improving lipidemic control and decreasing the oxidative process, which could improve cardiovascular function [45,46]. Additionally, oxidative stress causes cardiac hypertrophy via oxidation of cysteines in class II histone deacetylases, which are master negative regulators of hypertrophy [47]. This evidence suggests that the oxidative process plays an important role in the pathogenesis of CVD, and antioxidant therapy may be a suitable option for the treatment of this disease. When cardiac damage occurs, as seen in obesity-related stretching or diabetes-related glycosylation, fibroblasts differentiate into myofibroblasts and acquire pro-fibrotic and pro-inflammatory properties [2,51]. Myofibroblasts can be activated by advanced glycation end products (AGEs), with or without TGF-β, and this response involves the AGE receptor (RAGE) and extracellular signal-regulated kinase (ERK)1/2 activation [52]. Activated cardiac myofibroblasts produce pro-inflammatory cytokines (interleukin-1 (IL-1), IL-6, and TGF-β) and vasoactive proteins (Ang II, endothelin-1 (ET-1), atrial natriuretic peptide (ANP), and B-type natriuretic peptide (BNP)), and respond to noradrenaline, ischemia, reperfusion, and mechanical stimuli [52]. IL-6 has fibrogenic actions, and its pro-fibrogenic response is related to STAT3 stimulation, which leads to collagen production by cardiac fibroblasts, or to TGF-β stimulation [53]. In vitro studies show that IL-11, another member of the IL-6 family of cytokines, plays a critical role in the pathogenesis of fibrosis, and its inhibition alters the activation of fibroblasts induced by TGF-β [54]. With respect to TNFα, it can induce fibrosis by acting on cardiac fibroblasts [55,56]. Additionally, monocytes and macrophages have an important role in the production of pro-inflammatory mediators, such as cytokines and pro-fibrogenic growth factors [57]. In the early inflammation stage, monocytes with pro-inflammatory, phagocytic, and proteolytic properties are recruited, and there is expression of CCR2 chemokine receptors [58]. On the other hand, TGF-β stimulates the activation of myofibroblasts by up-regulating α-SMA expression [2].
Our group demonstrated that, in the ventricular tissue of mice subjected to a model of nonalcoholic steatohepatitis (NASH) induced by a high-fat and high-sugar diet, up-regulation of α-SMA and Col I and III, among other mRNAs, takes place. In addition, cardiac hypertrophy and fibrosis were found [59]. Activation of the RAAS in WAT leads to insulin resistance through activation of the mTOR/S6K1 signaling pathway, in addition to increasing oxidative stress [5]. mTOR induces the activation of serum- and glucocorticoid-regulated kinase 1 (SGK1) and the epithelial sodium channel, inducing fibrosis in adipose, cardiovascular, and renal tissue. These pathophysiological processes decrease endothelial nitric oxide synthase (eNOS) activation and nitric oxide (NO) bioavailability in association with increased cardiovascular stiffness and impaired relaxation [5]. Excess aldosterone is closely associated with systemic inflammation, endothelial dysfunction, arterial stiffness, hypertension, and cardiac hypertrophy [60]. In summary, pro-inflammatory adipokines secreted by EAT and WAT modify the normal electromechanical properties of atrial tissue and promote left atrial (LA) enlargement and cardiac remodeling (characterized by fat accumulation, fibrotic infiltration, and hypertrophy), increasing the risk of atrial fibrillation (AF), the most common form of arrhythmia [2,5,28]. Meanwhile, TGF-β, leptin, and Ang II are potent stimulators of collagen synthesis, thus causing cardiac, pericardial, and vascular fibrosis. Together, these alterations in the ECM cause abnormalities in cardiac contraction, relaxation, and conduction, leading to HF [2,31]. The use of pharmacological molecules with the ability to modulate the signaling pathways discussed may be an important strategy for the treatment of CVD. Left Ventricular Hypertrophy Left ventricular hypertrophy (LVH), defined as an increase in LV mass, develops in obesity-generated insulin resistance [30]. LVH occurs in two ways, especially in severe obesity: (1) concentric hypertrophy, which is caused by chronic pressure overload and leads to decreased LV volume and augmented wall thickness, and (2) eccentric hypertrophy, which originates from volume overload and generates dilation and thinning of the heart wall [28,61]. Cardiomyocyte hypertrophy is a consequence of FFA accumulation and lipotoxicity. Furthermore, fat invasion impairs cardiac contractility and restricts the dilating capacity of the left ventricle [30]. Our group reported cardiomyocyte hypertrophy in mice with NASH induced by a high-fat/high-carbohydrate diet [59]. Glucotoxicity affects cardiomyocytes, and impaired insulin regulation raises Ang II, leading to myocardial hypertrophy, fibrosis, and apoptosis [30]. Hyperinsulinemia also activates the sympathetic nervous system, which promotes myocardial dysfunction [30]. Decreased adiponectin activity contributes to LVH through insulin resistance, but also through the loss of adiponectin's effects in reducing inflammation, preventing endothelial cell adhesion, and decreasing foam cell accumulation in the heart. Meanwhile, hyperleptinemia promotes unfavorable cardiac sequelae, including elevated ROS in the heart, cardiomyocyte apoptosis, and direct induction of cardiac hypertrophy [30]. All members of the natriuretic peptide family, namely ANP (also known as atrial natriuretic factor, ANF), BNP, and C-type natriuretic peptide (CNP), have the ability to affect the cardiovascular and endocrine systems through their actions on diuresis, natriuresis, and vasorelaxation, as well as aldosterone and renin inhibition.
Under hypertrophic conditions, ANP and BNP inhibit myocardial hypertrophy [61]. Data from a meta-analysis of 22 echocardiographic studies showed a relationship between obesity and LVH. In addition, a cardiac magnetic resonance study suggested a predominant concentric hypertrophic pattern in obese men and both concentric and eccentric hypertrophic patterns in obese women [15]. Additionally, LVH and diastolic dysfunction are present in obese normotensive children. In adults, LVH is linked with ventricular arrhythmias and HF, conferring a four-fold risk of CVD morbidity and mortality [30]. Hemodynamic Alterations Obesity, particularly visceral adiposity, can be linked with three different phenotypes of HF: (1) HF with a reduced ejection fraction, (2) HF with preserved ejection fraction (HFpEF), and (3) high-output HF. All these phenotypes are characterized by high secretion of aldosterone and sodium retention [31]. In the first type of HF, obesity is commonly associated with a mild decrease in systolic function. In HFpEF, systolic function is preserved, but the distensibility of the heart is impaired due to inflammation or fibrosis, so sodium retention and plasma volume expansion induce cardiac overfilling rather than cardiac dilatation. Myocardial, pericardial, and vascular fibrosis increase ventricular and aortic stiffness, explaining why cardiac chambers are only modestly enlarged in elderly people with obesity and HF. High-output HF occurs when the heart is able to undergo significant ventricular enlargement. Sodium retention associated with obesity and plasma expansion can lead to marked cardiac dilation with normal systolic function. The heart can accommodate and expel the large volume of blood it receives; this results in a high-output state with higher cardiac filling pressures [31]. Additionally, alterations of right heart hemodynamics, particularly elevation of the pulmonary artery and right atrial pressures, can be found in extremely obese subjects. However, these findings are not typical in the asymptomatic stage [28]. Diastolic Dysfunction Diastolic dysfunction arises from alterations in ventricular relaxation, distensibility, or filling [14]. Abnormalities of LV diastolic performance related to metabolic diseases are characterized by delayed relaxation, with elevated LV filling pressure being less common. Nevertheless, the use of conventional Doppler to evaluate LV filling and LA enlargement may be somewhat problematic in overweight subjects, in whom the effects of increased loading can impede appropriate interpretation of results [28]. Several studies have reported mild diastolic dysfunction in obese subjects, involving different echocardiographic measures such as prolonged LV relaxation time, an augmented E/e ratio, and a lower E/A ratio, suggestive of diastolic filling alterations and increased filling pressures. In addition, the prevalence of diastolic dysfunction increases with the severity of obesity [15]. Systolic Dysfunction LV systolic function, assessed as LV ejection fraction by standard echocardiography, is normal or supranormal in obese individuals. Other studies using novel echocardiographic techniques showed subclinical systolic contractile abnormalities in tissue velocity and deformation in obese individuals without coronary or structural heart disease. Furthermore, these obese subjects showed reduced spectral pulsed-wave systolic velocity, as well as decreased regional and global strain, but the LV ejection fraction remained in the normal range [15].
Several studies have shown slightly decreased LV systolic function in obese and diabetic rats, in transgenic and obese mice with HFD-induced insulin resistance, and in transgenic mice with cardiac steatosis. However, in sheep with obesity induced by a high-calorie diet, LV systolic function, verified by LV ejection fraction, was not altered [15]. The effects of dyslipidemia and hyperglycemia on the cardiac tissue of obese individuals are illustrated in Figure 1.

Peroxisome Proliferator-Activated Receptors, Key Modulators of the Cardiac Fibrosis Process

Peroxisome proliferators are molecules with pleiotropic functions, such as an increase in the number of peroxisomes, β-oxidation, and hypolipidemia; all these effects are regulated by the PPARs, which in cardiac tissue are expressed in endothelial cells, vascular smooth muscle cells, and macrophages [62]. In the heart, the main role of PPARs concerns β-oxidation and mitochondrial bioenergetics specifically, which makes them a promising therapeutic target for cardiac disease treatment. Several studies have reported the biological roles of PPARs in CVDs, including cardiac hypertrophy and HF [62,63]. The family of PPARs is mainly composed of three isoforms, PPARα, PPARβ/δ, and PPARγ, each with a specific tissue distribution pattern. PPARα is expressed in tissues with a high oxidative capacity and energy consumption, such as the heart and liver. PPARγ is expressed in adipose tissue or in some conditions of liver damage; finally, PPARβ/δ is more ubiquitous, being expressed in the heart, skeletal muscle, and intestine [64,65]. PPARs can be activated by many endogenous ligands, such as long-chain fatty acids and eicosanoids, which bind with different affinities to these receptors [66]. Moreover, many synthetic ligands have been designed for the different isoforms of PPARs to postulate them as therapeutic targets in the treatment of various chronic degenerative diseases [67]. Gene transcription regulated by PPARs is carried out when agonists are coupled to the ligand-binding domain of each PPAR, inducing heterodimerization with other members of the nuclear receptor superfamily, such as the retinoid X receptor, which binds to a sequence of repeats known as the PPAR response element (PPRE) [68]. In cardiac tissue, PPARs have several functions beyond their characteristic roles; these functions include extracellular matrix remodeling, oxidative stress, and inflammation. In this regard, there is strong evidence that PPARα activation is necessary to prevent cellular oxidative damage; therefore, chronic inactivation of the PPARα signaling pathway may upset the balance between oxidant products and antioxidant defenses, allowing cardiac damage [19]. In Pparγ knockout mice, it was demonstrated that this nuclear factor plays an important role in cardiomyocytes and has the ability to prevent myocardial ischemia-reperfusion damage by modulating NF-κB function and, subsequently, the inflammatory response [69]. The role of PPARα in the myocardium has been elucidated in Pparα-/- knockout mice, which show reduced cardiac function [70]; this response is associated with structural defects in mitochondria and, consequently, an increase in oxidative damage [71]. PPARα agonists were developed to treat dyslipidemia, for example, fibric acid derivatives and fibrates, which retard the development of atherosclerosis in ApoE-/- and LDLR-/- mice [72]. The drug fenofibrate exerts some PPARα-dependent and independent actions in microvascular endothelial cells, reducing ET-1 expression [73].
Furthermore, there is evidence that fenofibrate is more effective in patients with high triglyceride levels and low HDL-cholesterol; however, the mechanisms have not been entirely elucidated. Other drugs, such as gemfibrozil, reduced cardiovascular events, including coronary heart disease, myocardial infarction, and stroke, in T2D patients in a clinical study [72]. In addition, PPARα agonists such as clofibrate and bezafibrate, as well as synthetic ligands of PPARγ, which include the thiazolidinedione drug class (rosiglitazone and pioglitazone), have been shown to be effective options for the treatment of CVD associated with metabolic diseases. However, they have several side effects that limit their safe use [72]. It is important to mention that new PPAR agonists with fewer side effects are currently being developed and could be an alternative treatment option for CVD. Recently, we demonstrated that prolonged-release pirfenidone (PR-PFD) is an agonist of PPARα [74], and PR-PFD reduces cardiac fibrosis in a mouse NASH model [59]. Therefore, the mechanisms of action of PPARs are versatile, with therapeutic potential to treat CVD and other metabolic diseases. In summary, the activation of PPARα prevents oxidative damage, while the activation of PPARγ modulates the inflammatory response through NF-κB. The challenge will be to design therapeutic strategies based on activation of PPARs, but with minimal side effects. Figure 2 illustrates the ways by which oxidative stress, inflammation, and the role of PPARs participate in obesity-related CVD development.

Epigenetics of Obesity-Linked Cardiac Dysfunction

Multiple epigenetic processes, including DNA methylation, histone modification, and the expression of non-coding RNA molecules, affect gene expression, influencing the health and adaptability of the organism. Epigenetic changes are heritable and can be maternally and paternally transmitted to the offspring [75]. Hence, an unhealthy lifestyle influences not only our epigenome but those of our descendants. However, environmental exposure and lifestyle can also define epigenetic patterns throughout life. Epigenetic modifications are reversible, differ among cell types, and can potentially lead to disease susceptibility by producing long-term changes in gene transcription [76].
Epigenetic modifications are potent modulators of gene transcription in the vasculature and might significantly contribute to the development of obesity-induced endothelial dysfunction, altering transcriptional networks implicated in redox homeostasis, mitochondrial function, vascular inflammation, and perivascular fat homeostasis [77]. Obesity-related vascular dysfunction is characterized by increased collagen deposition within the vascular wall, inflammatory infiltrate, perivascular fat accumulation, and progressive arterial thickening; ultimately, there is augmented arterial stiffness in large vessels and a reduced lumen ratio in vessels. Obesity causes functional, morphological, and metabolic cardiac abnormalities, leading to HF or AF [20,70]. Epigenetic mechanisms implicated in the development of metabolic cardiomyopathy are described in Figure 3.

Figure 3. [...] fat mass and obesity-associated protein (FTO) and brain-derived neurotrophic factor (BDNF) gene methylation, as well as a serum TNF-α increase. These events up-regulate DNMT1 and, in consequence, lead to DNA hypermethylation. Specifically, increased methylation of the Pitx2c and sarco/endoplasmic reticulum Ca2+ ATPase (SERCA) promoters provokes a decrease in their expression, contributing to heart failure pathophysiology. HDAC4 is a master negative regulator of cardiac hypertrophy; it is found oxidized, triggering de-repression of pro-hypertrophy genes such as myocyte enhancer factor 2 (MEF2) and serum response factor (SRF). These molecular mechanisms lead to atrial fibrillation and heart failure. HDACs: histone deacetylases; HDAC4: histone deacetylase 4; DNMT1: DNA methyltransferase 1; Pitx2c: paired-like homeodomain transcription factor 2, isoform c.

DNA Methylation Changes in Obesity-Related CVD

DNA methylation is the covalent attachment of a methyl group to the C5 position of cytosine, usually in CpG-rich regions. The methyl groups may physically block the binding of transcription factors to DNA, or they can act as a binding site for transcriptional repressors, such as histone deacetylases. DNA methyltransferase 1 (DNMT1) is responsible for the recognition of hemimethylated dsDNA following mitosis and for the methylation of the daughter strand. DNMT3a and DNMT3b are responsible for de novo methylation during embryogenesis, establishing new methylation patterns specific to each cell type [71]. The ten-eleven translocation (Tet) enzymes can remove the methyl group from DNA [78]. Perturbation of the redox-sensitive hypoxia-inducible transcription factor (HIF) signaling has been found to be pivotal in the regulation of weight gain. Hypoxia-inducible factor 3 subunit alpha (HIF3A) showed pronounced methylation in adipose tissue and blood cells associated with BMI [79]. Also, in the umbilical cords of infants, three CpGs in the first intron of the HIF3A gene showed higher methylation that was associated with adiposity and greater infant weight [80]. Fat mass and obesity-associated protein (FTO) is an alpha-ketoglutarate-dependent dioxygenase that acts as a regulator of fat mass, adipogenesis, and energy homeostasis. Mansego et al. showed that methylation levels of FTO and brain-derived neurotrophic factor (BDNF) were associated with body weight and body weight gain [81]. On the other hand, Kao et al. showed that TNF-α, a pro-inflammatory cytokine involved in heart diseases and obesity, directly enhances cardiac methylation through the up-regulation of DNMT1 [82]. Moreover, Ang II may increase the expression of DNA methyltransferase in arteries, and the inhibition of DNA methyltransferase could be important to avoid vascular wall thickening [83]. Correspondingly, an Ang II-treated cardiomyocyte cell line exhibited reduced expression of Pitx2c protein associated with a 20% enhanced expression of DNMT1 protein. Pitx2c controls the growth of pacemaker cells in the left atrium and regulates cardiac sodium flow, conduction velocity, and resting membrane potential in cardiomyocytes. HF increases cardiac DNA methylation in the Pitx2c promoter, downregulating Pitx2c by 50% [84]. Another mechanism involved is related to calcium regulation. SERCA dysfunction causes HF and increases the occurrence of cardiac arrhythmia. SERCA plays a critical role in calcium re-uptake after calcium-induced calcium release, and the SERCA2a promoter is enriched with CpG islands. In TNF-α-treated cardiomyocytes, methylation of the ATP2A2 gene promoter is increased threefold, with decreased expression of the protein, impairing calcium regulation [82]. Hypermethylation thus contributes to the pathophysiology of HF. Activation of the RAS, enhanced oxidative stress, and increased circulating TNF-α in HF can predispose to DNA hypermethylation. Both animal and human studies have found a higher occurrence of DNA methylation in HF cardiomyocytes compared to normal hearts [85,86]. Movassagh et al. identified three angiogenesis-related genes (AMOTL2, ARHGAP24, and CD31) that were differentially methylated in human HF [87].
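Methylation differences like the threefold ATP2A2 promoter increase described above are typically quantified as beta values. As a minimal illustrative sketch, assuming the common array convention beta = M / (M + U + offset), where M and U are methylated and unmethylated probe intensities and the offset (often 100) stabilizes low-intensity probes, a simple group comparison might look like the following; all intensities are invented, and the probe is labeled ATP2A2-like only for the example.

from statistics import mean

def beta_value(methylated, unmethylated, offset=100):
    """Beta value in [0, 1): fraction of methylated signal at a CpG probe.
    The offset stabilizes the ratio when both intensities are low."""
    return methylated / (methylated + unmethylated + offset)

# Invented intensities for a promoter CpG (an ATP2A2-like probe)
control = [beta_value(m, u) for m, u in [(900, 7000), (1100, 8200), (800, 6900)]]
tnf_treated = [beta_value(m, u) for m, u in [(3000, 6000), (2800, 5600), (3200, 6400)]]

fold_change = mean(tnf_treated) / mean(control)
print(f"mean beta control={mean(control):.2f}, treated={mean(tnf_treated):.2f}, "
      f"fold change={fold_change:.1f}x")

With these invented numbers the treated group shows roughly three times the promoter methylation of the controls, mirroring the magnitude reported for TNF-α-treated cardiomyocytes above.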
Role of Histone Modifications in Obesity-Linked Cardiomyopathy

Histone modifications control the accessibility of nucleosomes for transcription and influence the binding capacity of other proteins to histones through changes in local hydrophobicity. Histones are subject to methylation, acetylation, phosphorylation, ADP-ribosylation, ubiquitination, and SUMOylation, among other modifications. Acetylation neutralizes the positive charge of lysine residues in histones, weakening the bond with negatively charged DNA and releasing chromatin for gene transcription. Histone acetyltransferases (HATs) catalyze histone acetylation, a reaction reversed by histone deacetylases (HDACs). Histone methylation, by contrast, plays a role in gene expression that can induce opposite transcriptional patterns depending on which lysine residue is methylated. For example, methylation of lysine 4 on histone 3 (H3K4) is an established marker for gene transcription, while methylation of H3K9 is known to be a repressive marker [88]. The mechanisms whereby HDACs modulate cardiac function are complex; some HDACs display antihypertrophic properties, whereas others exhibit pro-hypertrophic effects. For example, loss of function of HDAC5 or HDAC9 has been associated with binding and silencing of myocyte enhancer factor 2C (MEF2C), leading to a higher susceptibility to cardiac hypertrophy and cardiac failure [89]. In contrast, HDAC4 (a master negative regulator of cardiac hypertrophy) was found to repress the activities of MEF2 and serum response factor (SRF) under physiological conditions, but in cardiac hypertrophy it becomes oxidized, causing it to shuttle out of the nucleus and allow de-repression of pro-hypertrophy genes [47]. In experimental studies, HDAC inhibitors benefit the heart by suppressing oxidative stress and inflammation, inhibiting MAP-kinase signaling, and enhancing the clearance of protein aggregates and autophagic flux [90]. One mechanistic insight is that HDACs are part of a chromatin repressor complex that inhibits transcription of myosin heavy-chain-associated RNA transcripts (Mhrt), a long noncoding RNA that protects the heart against pathological hypertrophy [91]. Additionally, modulation of inflammation by HDACs has important implications for cardiac diseases. In an experimental study, drug-induced HDAC inhibition attenuated cardiac hypertrophy and fibrosis and improved cardiac function [92]. Besides, experimental deletion of HDAC9 resulted in an atheroprotective effect due to increased accumulation of total acetylated H3 and H3K9 at the promoters of ATP-binding cassette transporter A1 (ABCA1), ATP-binding cassette subfamily G member 1 (ABCG1), and PPARγ in macrophages [93]. In a model of spontaneously hypertensive rats, HDAC4 demonstrated pro-inflammatory effects that mediate the further development of hypertension [94]. In summary, methylation occurs at certain specific gene regions or specific histone positions and affects the transcription and expression of critical regulatory genes, which play key roles in arterial endothelial cell dysfunction, redox imbalance, cardiac fat accumulation, and inflammation in obesity-linked cardiomyopathy.

Gut Microbiota of Obesity Associated with CVD

The term microbiota refers to the assemblage of microorganisms living in a specific environment. The human microbiota is composed of bacteria, viruses, fungi, and other single-cell organisms colonizing different anatomic areas [95].
Around 80% of the human microbiota in healthy adults is represented by Firmicutes and Bacteroidetes, with Actinobacteria, Proteobacteria, Verrucomicrobia, and Fusobacteria among the other phyla [96]. Scientific evidence supports that changes in the microbiota can promote diseases, including metabolic diseases such as obesity, lipid disorders, T2D, and CVD; the mechanisms include insulin resistance, inflammation, and vascular and metabolic impairment [23,96]. Increased BMI is associated with a higher risk for coronary diseases, suggesting a link between the gut and the heart [97]. The interplay between diet, gut microbiota, and host energy metabolism is linked to short-chain fatty acids (SCFAs), the end products of bacterial metabolism of undigested dietary polysaccharides: acetate and propionate are produced by Bacteroidetes, and butyrate by Firmicutes [23,98]. Metabolic functions of the main SCFAs are displayed in Figure 4. Patients with obesity present an overgrowth of Lactobacillus, Escherichia coli, and Faecalibacterium, among other bacteria. This "obese microbiota" shows an increased ability to extract calories from the diet [99]. Differences between the microbiota of obese and healthy individuals have been described, linking lower numbers of Bacteroidetes and an abundance of Firmicutes with obesity [100]; however, this observation remains controversial [99]. Moreover, obese patients present less microbial diversity compared to lean subjects [96,101]. Additionally, obesogenic diets are poor in complex carbohydrates, mainly found in vegetables and fruits. This lack of dietary fiber leaves gut bacteria without a substrate to produce the end products of fermentation (SCFAs). Growing evidence has shown that a reduction in the levels of gut butyrate generates local inflammation and foam cell formation, contributing to gut barrier disruption and favoring bacterial translocation, including mobilization of lipopolysaccharides (LPS), trimethylamine N-oxide (TMAO), and phenylacetylglutamine (PAGln) [102]. As described in Figure 5, LPS and TMAO in the general circulation induce systemic inflammation, leading to macrophage activation and favoring the formation of atherosclerotic plaques [103,104].

Along with SCFAs, TMAO is a key molecule derived from bacterial metabolism that plays an important role in the development of CVD. TMAO is generated from dietary choline, L-carnitine, and betaine, which are metabolized by bacteria to produce trimethylamine (TMA), further converted to TMAO in the liver. High levels of circulating TMAO induce the activation of NF-κB, increasing the expression of genes with pro-inflammatory effects and thereby increasing oxidative stress. TMAO also contributes to platelet hyperreactivity and thrombosis [105]. In addition, TMAO has been associated with coronary artery disease, prolongation of the hypertensive effect of Ang II, and poor prognosis in chronic and acute HF [106]. The extensive role of TMAO in CVD is reviewed in the article by Yang et al. [105]. Additionally, butyrate can contribute to the prevention of CVD by increasing the expression of ABCA1 in macrophages, in consequence promoting apoA-I-mediated cholesterol efflux and increasing PPARγ levels [107]. Butyrate has also been shown to regulate reverse cholesterol transport by stimulating the secretion of apoA-IV-containing lipoproteins. Some butyrate-producing bacteria are Roseburia intestinalis, Butyrivibrio crossotus, and Faecalibacterium prausnitzii, which are almost depleted in patients with atherosclerotic CVD [102,108], whereas in healthy individuals these bacteria are present in high amounts [109,110]. Another mechanism by which the microbiota is involved in CVD development is its effect on arterial hypertension. Evidence shows that gut dysbiosis induces and maintains high blood pressure through the effects of SCFAs, systemic inflammation, and vasoactive metabolites, including serotonin, dopamine, norepinephrine, p-cresol sulfate, indoxyl sulfate, and TMAO [96,111]. Yang et al. [24] evaluated the effect of dysbiosis in a rat model of hypertension and in a small cohort of hypertensive patients. They observed a significant decrease in microbiota diversity and abundance in rats with hypertension, accompanied by an increase in the Firmicutes/Bacteroidetes ratio and low levels of acetate- and butyrate-producing bacteria. In the small cohort of patients with hypertension, they described a similar dysbiosis. After patients received oral minocycline, the microbiota balance was reestablished and the Firmicutes/Bacteroidetes ratio was reduced [112]. Some studies support a causal role of microbiota imbalance in hypertension; dysbiotic fecal samples transferred from hypertensive patients into normotensive mice resulted in elevated blood pressure in the rodent recipients [112].
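The dysbiosis readouts cited above, a shifted Firmicutes/Bacteroidetes ratio and reduced diversity, are straightforward to compute from phylum-level relative abundances. The sketch below uses the standard Shannon index, H = -sum(p_i * ln p_i); the abundance profiles are invented for illustration and are not taken from the cited studies.

import math

def shannon_diversity(abundances):
    """Shannon index H = -sum(p * ln p) over non-zero relative abundances."""
    total = sum(abundances.values())
    return -sum((v / total) * math.log(v / total)
                for v in abundances.values() if v > 0)

def fb_ratio(abundances):
    """Firmicutes/Bacteroidetes ratio from phylum-level relative abundances."""
    return abundances["Firmicutes"] / abundances["Bacteroidetes"]

# Invented phylum-level profiles (fractions summing to 1)
normotensive = {"Firmicutes": 0.45, "Bacteroidetes": 0.40,
                "Actinobacteria": 0.08, "Proteobacteria": 0.05, "Other": 0.02}
hypertensive = {"Firmicutes": 0.62, "Bacteroidetes": 0.25,
                "Actinobacteria": 0.06, "Proteobacteria": 0.06, "Other": 0.01}

for label, profile in [("normotensive", normotensive), ("hypertensive", hypertensive)]:
    print(f"{label}: F/B = {fb_ratio(profile):.2f}, H = {shannon_diversity(profile):.2f}")

On these invented profiles the hypertensive case shows both a higher Firmicutes/Bacteroidetes ratio and a lower Shannon index, the qualitative pattern reported by Yang et al. in hypertensive rats and patients.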
Also, SCFAs interact with host cells via G protein-coupled receptors (GPCRs), including Gpr41 and Olfr78: activation of Gpr41 leads to hypotension, whereas Olfr78 activation increases blood pressure [112]. These results demonstrate that high blood pressure is associated with gut microbiota changes. Additionally, the interaction between host and microbiota impacts proteins related to epithelial function, lipid metabolism, and central nervous system functions. Zhernakova et al. [113] evaluated the association between the gut microbiome and plasma concentrations of 92 CVD-related proteins in patients (n = 1500); the results showed a microbial association for 41 proteins. Genetic and microbial factors collectively explained approximately 76% of the inter-individual variation, providing succinct evidence for a microbial role in CVD. Other bacterial metabolites, such as succinate, can cause cardiac hypertrophy in murine models, and succinate levels are increased in patients with hypertrophic cardiomyopathy [96]. Troseid et al. [114] described the association of TMAO with disease severity and survival in patients with chronic HF; the results showed elevated plasma levels of TMAO in patients (n = 155) with chronic HF compared with the control group. Approximately 50% of the patients with the highest levels of TMAO died or received a heart transplant during 5.2 years of follow-up [115]. The field of microbiota and its relation to disease is growing rapidly. The richness of the microbiome and its metabolites will yield more scientific evidence of their role in the development of obesity-related CVD and of their possible manipulation to treat it. Table 1 summarizes the bacteria associated with CVD pathophysiology.

Table 1. Bacteria associated with CVD pathophysiology (type of bacteria: description).
Phascolarctobacterium, Proteus mirabilis, and Veillonellaceae: propionate/acetate producers with a positive correlation between obesity and their augmented abundance in HFD-fed rats [116].
Roseburia intestinalis and Faecalibacterium prausnitzii: reduced in diabetic patients [117].
Prevotellaceae and Archaea: H2-producing and H2-utilizing bacteria; accelerated fermentation, increased SCFA production, and high energy uptake [118].
Escherichia coli, Klebsiella spp., Enterobacter aerogenes, Ruminococcus gnavus, and Eggerthella lenta: abundant in patients with atherosclerotic CVD compared with healthy subjects [119].
Campylobacter, Shigella, Salmonella, and Yersinia enterocolitica: pathogenic bacteria colonizing the gut of patients with chronic HF [120].
Escherichia/Shigella: more abundant in decompensated than in compensated HF patients [121].
Turicibacter, Roseburia, Lachnospira, and Romboutsia: positive relation with high levels of serum triglycerides, cholesterol, and low-density lipoprotein, and negative relation with serum high-density lipoprotein in HFD-fed rats [121].
Akkermansia muciniphila: decreased amounts in obese and diabetic patients vs. healthy individuals; a higher quantity is associated with improvement in cardiac metabolic parameters in obesity [122].
Lactobacillales, Bacteroides, and Prevotella: higher amounts of Lactobacillales and lower levels of Bacteroides and Prevotella have been observed in patients with coronary artery disease [123].
Prevotella: abundant in patients with hypertension compared with healthy controls [124].
Lactobacillales, Collinsella, Enterobacteriaceae, and Streptococcus spp.: levels altered in patients with atherosclerotic CVD [125].
Veillonella spp. and Streptococcus spp.: present in atherosclerotic plaques in human patients; correlated with their abundance in the oral cavity [126].
HFD: high-fat diet, SCFAs: short-chain fatty acids, CVD: cardiovascular disease, HF: heart failure.

Biomarkers Proposed for Cardiovascular Diseases

Biomarkers are traditionally classified according to their intended use as screening, diagnostic, or prognostic biomarkers. Precision, high sensitivity, and high specificity are fundamental characteristics of an ideal biomarker. In 2009, the American Heart Association defined the criteria for the evaluation of new biomarkers for clinical use [21]. Table 2 lists currently used and proposed biomarkers for the diagnosis of CVD.

Table 2. Biomarkers for CVD (biomarker: function; disease associated).
TnI and TnT: troponins regulate the calcium-mediated interaction between actin and myosin and are thus related to myocardial contractility. TnI and TnT are currently used as necrosis markers because their serum levels may be predictive of cardiovascular death in subjects with myocardial infarction and HF; they are also proposed as biomarkers for diabetic cardiomyopathy [21,127].
sST2: related to inflammatory and immune processes. sST2 is a cardiac biomarker cleared by the FDA for prognosis and diagnosis of chronic HF [21,127].
FABP4: plays an essential role in the development of insulin resistance and atherosclerosis. FABP4 is a potent biomarker of FA metabolic alterations and CVD [128].
FABP3: transports FAs from the plasma membrane to mitochondria for β-oxidation. FABP3 might be a suitable diagnostic tool in systolic dysfunction and in hypertrophic and dilated cardiomyopathy, including HF [127].
PIIINP: an indicator of extracellular matrix turnover. PIIINP is proposed as a marker of early LV dysfunction in patients with insulin resistance, as well as in patients with HFpEF [21,127].
Galectin-3: locally secreted by activated macrophages and fibroblasts, with a pro-fibrotic action. Galectin-3 has been proposed as a good prognostic biomarker of LV systolic dysfunction and HF in diabetic patients, as well as an indicator of cardiac tissue remodeling and fibrosis [21,127].
Adiponectin: a cardioprotective agent, proposed as a biomarker of HF [20].
NGAL: functions as an inflammatory regulator of the innate immune system. NGAL is a promising diagnostic biomarker of CVD (atherosclerosis, acute coronary syndrome, stable coronary artery disease, and HF) [129].
IGFBP-7: a modulator of insulin receptor activity and signaling. IGFBP-7 is a promising biomarker of collagen deposition, fibrosis, and cardiac hypertrophy in diabetes, as well as of diastolic dysfunction and HF [127].
ANP and BNP: natriuretic peptides A and B, HF: heart failure, HFmrEF: mid-range ejection fraction, HFpEF: preserved ejection fraction, TnI: troponin I, TnT: troponin T, sST2: soluble suppression of tumorigenesis 2, FDA: Food and Drug Administration, FABP4: fatty acid-binding protein 4, FABP3: fatty acid-binding protein 3, FAs: fatty acids, PIIINP: pro-collagen type III aminopeptide, LV: left ventricle, NGAL: neutrophil gelatinase-associated lipocalin, CVD: cardiovascular disease, IGFBP-7: insulin-like growth factor binding protein-7.

Cell Lines and Animal Models Used to Study Cardiac Alterations Associated with Obesity

Several cell lines and animal models have been used to study obesity. Similar to humans, obesity in certain animal models is associated with co-morbidities such as systemic insulin resistance, diabetes, and hypertension [20].
Table 3 summarizes the most common cell lines and animal models used to study lipotoxic cardiomyopathy. In one of these models, males and females showed augmented heart weight, interstitial and perivascular fibrosis, cardiac lipid accumulation, and increased oxidative stress [143]. Bama miniature pigs have metabolic similarities to humans, including a lack of brown fat and proportional organ sizes and cardiovascular systems; fed for 23 months with a high-fat, high-sucrose diet (37% sucrose, 53% control diet, and 10% pork lard), these pigs developed symptoms of metabolic syndrome and showed cardiac steatosis and hypertrophy, with increased insulin levels and heart weight [127]. Healthy mongrel dogs fed for six weeks a standard diet supplemented with 6 g/kg of rendered pork fat (21,025 kJ/day; 27% carbohydrate, 19% protein, and 53% fat) showed, in males, increased fasting insulin and markedly reduced insulin sensitivity, including a reduction in left ventricular function [144]. AC16: cardiomyocyte cell line, HL-1: cardiac muscle cell line, H9C2: rat cardiomyoblast cell line, C57BL/6J: commonly called Black 6 mouse, T2D: type 2 diabetes, HFpEF: preserved ejection fraction, HFD: high-fat diet, ZDF: Zucker diabetic fatty, RMH-B: standard rat diet chow, LV: left ventricle.

Renin-Angiotensin-Aldosterone Inhibitors

Direct renin inhibition may be a promising antifibrotic therapy. The oral renin inhibitor aliskiren was reported to affect collagen metabolism in cardiac fibroblasts and to prevent myocardial collagen deposition in a non-hypertrophic mouse model of myocardial fibrosis, suggesting that aliskiren might be an effective therapy in HFpEF [145]. Aliskiren was approved by the Food and Drug Administration (FDA) in 2007 as an antihypertensive. In addition, aliskiren might have renoprotective effects, independent of its blood-pressure-lowering effects, in individuals with hypertension, T2D, and nephropathy [146]. Angiotensin-converting enzyme inhibitors (ACEIs) such as lisinopril, enalapril, and captopril prevent the conversion of inactive Ang I into active Ang II and have been used effectively in the treatment of several human diseases, including hypertension, congestive HF, coronary artery diseases, and diabetic nephropathy [147-150]. For example, lisinopril can regress myocardial fibrosis and improve LV diastolic function, while enalapril antagonizes the activation of the TGF-β signaling pathway [147,149]. Aldosterone plays a key role in the regulation of blood pressure and plasma sodium levels, promoting sodium retention in the renal tubules. In animal models, interstitial and perivascular fibrosis of the heart is caused by chronic administration of aldosterone and high salt intake. Treatment with spironolactone, an aldosterone antagonist, has been shown to prevent the increase in total and interstitial collagen in the heart [151,152].

Nutraceutics and Supplements

Recent data have shown the usefulness of a nutraceutical supplementation approach for the prophylaxis and treatment of heart disease. In particular, a large amount of clinical data shows the protective effect of polyunsaturated fatty acids in CVD. Therefore, linoleic acid (LA, 18:2, omega-6) and linolenic acid (ALA, 18:3, omega-3), called essential fatty acids, must be included in the diet [153]. ALA in combination with pirfenidone has shown an enhanced antioxidant effect [154]. Epigallocatechin-3-gallate (EGCG) is the most abundant and powerful catechin in green tea.
In rats, EGCG inhibits cardiac fibroblast proliferation and improves cardiac hypertrophy via inhibition of oxidative stress. EGCG decreased collagen synthesis and fibronectin expression in rat cardiac fibroblasts stimulated by Ang II. Moreover, it markedly ameliorated the excessive expression of CTGF and cardiac fibrosis via blockage of the NF-κB signaling pathway under hypertrophic stimulation. However, a high dose of EGCG results in cardiac collagen synthesis and aggravates cardiac fibrosis in mice [155]. Quercetin, the most widely distributed flavonoid, is abundant in red onions, citrus fruits, and grains, among other foods. In rats fed a Western diet supplemented with quercetin, cardiac remodeling was prevented by inhibition of the NF-κB signaling pathway and by promotion of Nrf-2 and its downstream molecules. Luteolin is a flavone abundant in thyme, onion, broccoli, and cauliflower. It inhibits cardiac fibroblast proliferation through a reduction in oxidative stress in vitro. Luteolin blocks NOX2 and NOX4 in cardiac hypertrophy, thereby decreasing the phosphorylation of JNK and the expression of TGF-β1, and reduces cardiac fibrosis [155]. Apigenin is another flavonoid; it modulates the activity of PPARγ and glucose/lipid metabolism. Apigenin also attenuates myocardial injury induced by isoproterenol through the regulation of Pparγ in diabetic rats, and it mitigates cardiac remodeling by inhibiting oxidative stress, the NF-κB pathway, and apoptosis, and by reducing cardiac fibrosis in streptozotocin-induced diabetic cardiomyopathy [155]. Isoflavones, such as genistein and daidzein, are found in soybeans and have beneficial antifibrotic effects on cardiac remodeling. Genistein inhibits TGF-β1-induced proliferation, as well as collagen production and myofibroblast transformation. Anthocyanins such as malvidin-3-glucoside, delphinidin-3-glucoside, cyanidin-3-glucoside, petunidin-3-glucoside, and peonidin-3-glucoside extracted from grape skins have protective effects against ischemia/reperfusion injury. The flavanone hesperidin, present in citrus peels, showed beneficial cardiovascular effects in animal models due to its antioxidative and antiapoptotic properties; it increased Nrf-2 mRNA expression, protecting the hearts of aged rats. Naringenin has been shown to possess protective effects on lipid metabolism. In H2O2-treated cardiomyoblasts, naringenin treatment decreased stress-induced apoptotic cell death and lipid peroxidation and increased reduced glutathione [155].

Histone Deacetylase (HDAC) Inhibitors

Several studies have demonstrated that HDACs are dysregulated in cardiac fibrosis, and many preclinical reports have shown the important role of HDAC inhibitors in the treatment of cardiac fibrosis. Valproic acid (VPA) attenuated cardiac hypertrophy and fibrosis through acetylation of the mineralocorticoid receptor in spontaneously hypertensive rats [156] and prevented right ventricular hypertrophy. It was also reported to reduce Ang II-induced pericyte-myofibroblast transdifferentiation and cardiac fibrosis through HDAC4-dependent phosphorylation of ERK [156]. MPT0E014, a pan-HDAC inhibitor (HDACI), downregulated TGF-β and the Ang II type I receptor (At1r) in isoproterenol-induced dilated cardiomyopathy, whereas trichostatin A (TSA) induced myocardial repair and prevented cardiac remodeling through c-kit signaling. TSA was also found to reverse atrial fibrosis and reduce the incidence of arrhythmia without affecting the level of Ang II. Lyu et al.
demonstrated that the Class I HDAC inhibitor CI-994 reduced atrial fibrillation and fibrosis [156]. Mocetinostat, a selective Class I HDACI, inhibits the up-regulation of HDAC1 and HDAC2 in an animal model of congestive HF, reversing the myofibroblast phenotype, increasing apoptosis, reducing fibrosis, and improving cardiac function, potentially by blocking Akt signaling and inducing cell cycle arrest via p21/p53 [157]. In addition, treatment of cardiac fibroblasts from failing hearts with mocetinostat decreased Col III, fibronectin, TIMP1, and IL-6-mediated STAT3 signaling [158]. Mocetinostat was also able to decrease Ang II-induced fibrosis [158], and to attenuate IL-6/STAT3 signaling and decrease interstitial fibrosis and scar size in ventricular tissue in an animal model of HF [156]. MGCD0103 is another selective Class I HDACI, which inhibits Ang II-induced cardiac fibrosis by controlling the differentiation of bone marrow-derived fibrocytes. Inhibition of Class I HDACs with an apicidin derivative was found to prevent cardiac hypertrophy and failure in preclinical studies. Inhibition of HDAC6 by tubacin decreased TGF-β1-induced myofibroblast markers and reduced cardiac fibrosis [156]. The specific HDAC3 inhibitor RGFP966 prevented diabetic cardiomyopathy, reducing cardiac dysfunction, hypertrophy, and fibrosis and avoiding the elevation of phosphorylated ERK1/2 (an initiator of cardiac hypertrophy) in the diabetic hearts of OVE26 mice [159].

Antioxidative Stress Therapies

Oxidative stress and fibrosis are involved in cardiac remodeling and failure. Allopurinol was shown to decrease myocardial oxidative stress and improve diastolic dysfunction in Ang II-induced hypertensive mice. Furthermore, allopurinol prevented cardiac fibrosis through modulation of the TGF-β1/SMAD signaling pathway [160]. Curcumin administration was able to suppress the deposition of Col I and Col III in the heart tissue of diabetic rats, accompanied by a reduction in TGF-β1 production, suppression of TβRII levels and SMAD2/3 phosphorylation, and increased SMAD7 expression. Similar effects were found in human cardiac fibroblasts exposed to high glucose [162]. Dysfunction of cardiac mitochondria is a hallmark of HF and causes oxidative stress. Therefore, special emphasis has been placed on vitamin E (α-tocopherol), vitamin C, and coenzyme Q10, which were found to have antioxidant activity in experimental models and in patients with HF. However, in some clinical trials antioxidants have shown disappointing results, with the exceptions of vitamin C and coenzyme Q10 [163].

Transforming Growth Factor-β Inhibitors

Pirfenidone and tranilast are two clinically approved drugs with effects on inflammation and other fibrotic pathways. Furthermore, both drugs inhibit TGF-β signaling and have recently garnered interest as potential treatments for cardiac fibrosis [164]. Pirfenidone inhibited TGF-β1 expression and the pro-fibrotic effects of TGF-β signaling, decreasing the expression of Smad-7, TIMP-1, PAI-1, Col I, Col III, and Col IV [165]. In the short term, pirfenidone and spironolactone treatment reversed cardiac as well as renal fibrosis and reduced the increased diastolic stiffness, without normalizing cardiac contractility or renal function, in streptozotocin-diabetic rats [166]. In addition, pirfenidone prevented myocardial steatosis and fibrosis in a mouse model of nonalcoholic steatohepatitis (NASH) by increasing PPARα, PPARγ, ACOX1, and CPT1A protein levels and decreasing Timp1, Col I, and Col III mRNA levels [59].
Based on the premise that pirfenidone might be a promising agent for the treatment of CVD, the PIROUETTE (Pirfenidone in patients with HF and preserved left Ventricular Ejection fraction) trial was designed as a randomized, double-blind, placebo-controlled phase II trial to evaluate the efficacy and safety of 52 weeks of treatment with pirfenidone in patients with chronic HFpEF [167]. Regarding tranilast, it suppresses TGF-β expression and activity, inducing downregulation of collagen production in fibroblasts. In multiple animal models of cardiomyopathy, including experimental diabetes in rats, tranilast was reported to reduce myocardial fibrosis [164]. For instance, in streptozotocin-induced diabetic (mRen-2)27 rats, tranilast attenuated cardiac matrix deposition by reducing phospho-Smad2 levels [168]. In a similar model, tranilast improved LV systolic and diastolic function without affecting SMAD phosphorylation but attenuated TGF-β-induced p44/42 MAPK phosphorylation [168]. The antifibrotic effects of tranilast were associated with an inhibition of TGF-β signaling and suppression of the infiltration of inflammatory cells, including monocytes and macrophages. Furthermore, mRNA levels of TGF-β1, plasminogen activator inhibitor 1 (PAI-1), MCP-1, IL-6, and pro-collagens were decreased, as were myocardial fibrosis and collagen accumulation, in deoxycorticosterone acetate/salt hypertensive rats receiving tranilast. Similar results were found in renovascular hypertensive rats and hypertensive (mRen-2)27 rats. Interestingly, in these studies the inhibition of cardiac fibrosis by tranilast was independent of changes in blood pressure, suggesting a direct effect on cardiac fibrosis, with potential for HF treatment [168].

Conclusions

In this review, we have described the pathophysiologic mechanisms involved in obesity-related CVD, as well as the role of PPARs, epigenetic modifications, and the associated gut microbiota dysbiosis. It has become clear that hyperglycemia, excess FFAs, and elevated triglyceride levels promote WAT dysfunction, leading to an altered expression of pro-inflammatory cytokines, adipokines, and hormones, which activate pathological processes such as oxidative stress and inflammation in WAT and cardiac tissue. In obesity, the renin-angiotensin-aldosterone system (RAAS) is activated, amplifying inflammation and structural remodeling and thus inducing cardiac and vascular damage, as well as other structural alterations leading to cardiac dysfunction. This review also provides information on biomarkers currently in use and those proposed as diagnostic biomarkers of CVD, together with the in vitro and animal models commonly used in CVD research. In addition, therapeutic treatments for CVD were examined. This review was therefore conceived to provide an update of the knowledge related to CVD associated with obesity, to improve understanding of the main pathological mechanisms involved, and to summarize the potential therapeutic strategies available.
2021-03-29T05:15:33.471Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "49aa9300c4bea9deed73358fde30a3308cc7e2db", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/10/3/629/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "49aa9300c4bea9deed73358fde30a3308cc7e2db", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
81914916
pes2o/s2orc
v3-fos-license
Updating traumatic optic neuropathy

Traumatic optic neuropathy (TON) is the affectation of visual function secondary to damage caused by a direct or indirect traumatic mechanism acting on the optic nerve. It occurs in approximately 0.5 to 5% of closed head injuries and in 2.5% of patients with maxillofacial trauma and mid-face fractures. The types of TON are direct, anterior indirect, posterior indirect, and chiasmal. This work aims to offer an update on traumatic optic neuropathy. We searched international databases such as PubMed, ClinicalTrial, Ebsco, Hinari, and others, and found 32 articles, which were used in this review. We used the following keywords: traumatic optic neuropathy, optic nerve, trauma, visual loss, visual disease. Seventy percent of the articles correspond to the last five years. This review was written using Microsoft Office Word 2016 on an Asus laptop running Windows 10. We compiled diverse therapeutic options, based principally on axonal regeneration, developed by researchers during the last decade. The present review article provides an update regarding potential strategies for axonal regeneration and optic nerve repair, focusing on the research of many investigators around the world. Nowadays, therapeutic options have advanced in many fields, but more research must still be done to find a definitive solution for traumatic optic neuropathy in the near future.

INTRODUCTION

Visual diseases are a real health problem with high repercussions for individuals, families, and society. In the world there are approximately 285 million people with visual disability; 39 million are blind and 246 million have low vision (WHO, 2014). Visual loss can have different etiologies. Worldwide, uncorrected refractive errors are the most important cause of visual disability, but in low- and middle-income countries cataracts are still the principal cause of blindness (WHO, 2014). Not infrequently, visual loss is due to optic nerve damage or optic neuropathy (Fuentes et al., 2014), such as ischemic, demyelinating, hereditary, tumoral, toxic-nutritional, inflammatory, glaucomatous, and traumatic neuropathies. This research concentrates on traumatic optic neuropathy (TON). This entity is defined by the American Academy of Ophthalmology as the partial or total affectation of visual function secondary to damage caused by a direct or indirect traumatic mechanism acting on the optic nerve (American Academy of Ophthalmology, 2014-2015). For more than half a century, authors from different parts of the world (Mariotti, 1952; Hsu, 1952; Lazorthes and Anduze, 1952) have been discussing this entity, and it continues to be a concern for researchers today. For that reason, our objective is to offer an update on traumatic optic neuropathy.
Every year, 500 000 patients are reported with unilateral traumatic blindness (Fuentes et al., 2016). Traumatic optic neuropathy occurs in approximately 0.5 to 5% of closed head injuries and in 2.5% of patients with maxillofacial trauma and midface fractures (Chan, 2007). A recent study (Singman et al., 2016) asserts that in populations in England, for both adults and children, the overall incidence of traumatic optic neuropathy is approximately 1 per million; notably, approximately 80% of the patients were males, and the majority of cases suffered relatively minor head injuries with neither orbital nor skull fracture. This suggests that indirect traumatic optic neuropathy may be more common than direct traumatic optic neuropathy.

MATERIALS AND METHODS

A search was made in international databases such as PubMed, ClinicalTrial, Ebsco, Hinari, Scielo, and Cochrane, and 32 articles, principally in Spanish and English, were found and used in this review article. The following keywords were used: traumatic optic neuropathy, optic nerve, trauma, visual loss, visual disease. Seventy percent of the articles correspond to the last five years, showing a high level of currency. This review was written using Microsoft Office Word 2016 on an Asus laptop running Windows 10.

RESULTS AND DISCUSSION

After searching and reading many articles, the following classification of the different types of traumatic optic neuropathy, still valid today, was found.

Direct

A penetrating object causes direct injury to the optic nerve by complete or partial transection of the nerve or contusion of the nerve; hemorrhage or a foreign body may compress the optic nerve; the initial, variable level of vision often worsens. Orbital hemorrhage may cause orbital compartment syndrome. An enlarged optic nerve sheath may be seen on CT scan.

Anterior indirect

Sudden rotation or anterior displacement of the globe by an object causes injury to the anterior segment of the optic nerve, often at the lamina cribrosa. This is a rare type of traumatic optic neuropathy. Findings include peripapillary vitreous hemorrhage; partial or complete optic nerve head avulsion; and papilledema, venous congestion, central retinal artery occlusion, and retinal edema.

Posterior indirect

Frontal or midfacial trauma, or trauma that may appear minor.

Chiasmal

Severe closed head injuries or an abrupt traction on the globe may cause chiasmal injury, with variable visual field defects. Central visual acuity may be normal. Anosmia, diabetes insipidus or other endocrinopathies, skull base fractures, and other neurological deficits may be present.

Production mechanism

Two basic mechanisms of TON are understood. Saxena et al. (2014) explain: direct mechanical injury to the optic nerve causes a tear or interruption of the nerve, which has a worse prognosis. Indirect injury is a closed injury causing reactive edema in the nerve sheath, which can compromise the vascular and neurotrophic supply of the ganglion cells by compressing the nerve in the tightly packed optic canal. In both processes, there is retrograde degeneration of the ganglion cells, which are irreversibly lost.
Based on 174 postmortem examinations by Crompton (Crompton, 1970) of patients who died after closed head trauma, optic nerve dural sheath hemorrhages were found in 83% of patients. Interstitial optic nerve hemorrhages occurred in 36% of these patients, two-thirds of whom had the hemorrhage within the optic canal; tears and ischemic lesions occurred in 44% of patients; in 81% these involved the intracanalicular optic nerve, and in 54% they affected the intracranial optic nerves (Crompton, 1970).

The nerve and its sheath are tightly fixed to the bony canal within a confined space. In indirect TON cases, optic nerve injury results from shearing forces applied to the fibers or to the vessels supplying the nerve. Cadaveric skull studies (Saxena et al., 2014) have demonstrated that forces applied to the frontal bone or malar eminences are concentrated and transferred to the optic canal. As the dural sheath is tightly adhered to the periosteum inside the optic canal, this force is transferred to the nerve. Such injury leads to ischemic injury to the optic nerve axons within the optic canal, followed by optic nerve swelling. This increases the intraluminal pressure of the canal, further exacerbating axonal degeneration and compromising the vascular blood supply (Saxena et al., 2014). Bilateral traumatic optic neuropathy is less frequent, but there is a case reported by Allon et al. (2014) in which they explain that there was no contact between the hematomas and the nerve and no direct compression on the nerve; the injury was facial, similar to previous reports of TON.

Other authors also state that primary TON involves overstretching, rupture, contusion, and distortion of the optic nerve. This type of injury always leads to immediate blindness. Secondary TON may involve nerve edema, both within the confines of the bony optic canal and within the optic sheath (He et al., 2015). Apoptosis is programmed cell death involving active cellular processes through final common pathways. Injured retinal ganglion cells release extracellular glutamate, which induces excitotoxicity (Vorwerk et al., 2004). High glutamate concentrations activate N-methyl-D-aspartate (NMDA) receptors that allow entry of excessive calcium into the cell. It has been shown that optic nerve crush leads to an increase in extracellular vitreal glutamate, but the steps by which axotomy induces excitotoxic damage to ganglion cells are still being studied (Chan, 2007).

Besides ischemia, inflammation contributes to further neural damage. Mediators of inflammation are released that attract polymorphonuclear leukocytes and macrophages (Chan, 2007). Within the first 2 days after injury, polymorphonuclear leukocytes predominate and cause immediate tissue damage. They are then replaced by macrophages at about 7 days after injury. These macrophages are thought to contribute to delayed tissue damage, as in delayed posttraumatic demyelination (Kanellopoulos et al., 2000). Macrophages release glia-promoting factors. This astroglial response after spinal cord injury may inhibit axonal regeneration processes. Inhibition of macrophage responses has been shown to decrease reactive gliosis, as shown in spinal cord injury studies (Schuettauf et al., 2000).
Diagnosis

The diagnosis can be confirmed clinically by paying attention to the signs and symptoms referred by patients. They may report reduced visual acuity, color vision, and/or visual field, as well as a relative afferent pupillary defect. Direct ophthalmoscopy of the nerve is also expected to appear normal, though optic atrophy or pallor is expected to develop. Automated visual field testing should be offered; however, the vision of subjects may be too poor to glean useful results. In most cases, testing with visual evoked potentials (VEP) is not needed to establish the diagnosis. However, in questionable cases, VEP may provide confirmatory data. VEP may also have predictive value; patients with better responses on VEP may be more likely to regain some or all of their vision (Singman et al., 2016).

Given all these signs and symptoms, traumatic optic neuropathy can easily be confused with other entities that cause damage to the optic nerve. For that reason, it is necessary to know the differential diagnosis with other diseases. Ischemic optic neuropathy refers to infarction of any portion of the optic nerve from the chiasm to the optic nerve head. Clinically, it is divided into anterior and posterior forms by the presence or absence of swelling of the optic nerve head, respectively; findings in unilateral ischemic optic neuropathy include a relative afferent pupillary defect and demonstrable visual field loss (Patel and Margo, 2017).

Optic neuritis presents a great variety of visual field defects, contrast sensitivity alterations, and edema of the disc (Kanski, 2016). Therefore, the relative afferent pupillary defect can be very important in establishing the difference from other optic neuropathies; pupillary constriction speed, constriction radius, and latency are more affected in optic neuritis than in ischemic optic neuropathy (Yoo et al., 2017). In posterior optic neuritis, direct ophthalmoscopy of the nerve is normal, as in initial TON. Another disease with optic nerve damage is Cuban epidemic optic neuropathy, but this has a typical fundoscopic picture, with bilateral loss of the retinal nerve fiber layer in a bow-tie pattern in the disc-macular bundle and sectorial temporal disc pallor (Fuentes, 2011). This entity appears as an epidemic (Fuentes, 2016), which is not the case for traumatic optic neuropathy, which has the antecedent of trauma. Visual recovery is difficult to predict in patients with severe trauma, especially in those in whom visual acuity could not be obtained due to various factors, because the initial visual acuity is the strongest predictor of visual recovery (Bodanapally et al., 2015). Lee et al. (2016) found significant thinning of the entire retina, retinal nerve fiber layer, and ganglion cell layer plus inner plexiform layer (GCIPL) in TON eyes, and a remarkable reduction of GCIPL in early-phase TON. They also demonstrated a correlation between morphological changes in the retinal layers and visual functions, including visual field defect and P100 latency and amplitude. Therefore, analyzing each retinal layer using SD-OCT is helpful in understanding TON pathophysiology and assessing optic nerve function.

About treatment

The basic concept is to decompress the nerve, either by decreasing the edema with steroid therapy or by creating more space through surgical decompression. Nowadays, however, therapeutic options are moving toward axonal regeneration. Saxena et al.
(2014) assert that the controversy in the therapy of TON primarily stems from two facts. Firstly, the literature lacks a well-executed randomized controlled clinical trial, due both to the relative difficulty in recruiting adequate numbers and to the highly heterogeneous presentation of such patients. The second reason is the unpredictable yet frequent incidence of spontaneous recovery.

Corticosteroids have also been offered to patients for TON. A Cochrane review from 2013 (Yu-Wai-Man and Griffiths, 2013) found one double-masked, placebo-controlled, randomized study in which high-dose IV corticosteroids were offered within 1 week of the injury causing ITON; there was no significant benefit over observation. Moreover, treatment of TON with steroids is strictly contraindicated in cases where severe head trauma accompanies the ocular damage. In cases without head trauma, steroid treatment may be applied, although its benefit is questionable (Allon et al., 2014). The International Optic Nerve Trauma Study (Levin et al., 1999) similarly concluded that neither steroid therapy nor decompression showed clear benefits.

According to He et al. (2016), surgical decompression of the optic canal is currently the main approach to traumatic optic neuropathy in neurosurgery in China. Theoretically, optic nerve decompression reduces intracanalicular pressure and allows the removal of any impinging bony fragment, assisting in the reestablishment of nerve function. They apply different surgical techniques (endoscopic, transorbital, and transcranial approaches). They also note that endoscopic optic nerve decompression is generally believed to offer several advantages over other surgical approaches: it requires no external incision, there is no orbital retraction during the procedure, and the endoscope can provide an optimal visual field.

A pilot study exploring the efficacy of intravenous erythropoietin has been published (Entezari et al., 2014). The drug was administered within 2 to 3 weeks of onset, and the treated cohort demonstrated improved best corrected visual acuity. Notably, the rationale for this treatment was that erythropoietin may provide neuroprotection and support axonal growth.

We also found a study (Morgan-Warren, 2013) that concluded: laboratory studies are unlocking multiple extrinsic and intrinsic factors, and the candidate signaling pathways responsible for retinal ganglion cell (RGC) death and axon regeneration failure in the adult visual system, that could be targeted for clinical treatment. The phosphoinositide-3-kinase and serine/threonine kinase (PI3K/Akt) pathway, which mediates axon growth and protein synthesis through glycogen synthase kinase (GSK3b) and mammalian target of rapamycin (mTOR) signaling, respectively, is a promising candidate pathway; altering the balance of signaling in mTOR and linked pathways could promote RGC survival and axon regeneration.

The use of mesenchymal stem cells in cell therapy in regenerative medicine has great potential, particularly in the treatment of nerve injury. Umbilical cord blood reportedly contains stem cells, which have been widely used as a hematopoietic source and may have therapeutic potential for neurological impairment (Chung et al., 2016). We also found another article (Jiang et al., 2013) that supports the use of umbilical cord blood stem cells for traumatic optic neuropathy; but, for the moment, the experimentation is only in animals.
We learned of the first confirmatory, large-sample, double-blind, randomized, multi-center clinical trial to establish the efficacy and safety of repetitive transorbital alternating current stimulation (rtACS) in patients with vision impairments caused by optic nerve damage (Gall et al., 2016). This trial is a new and important advance in the search for the best treatment for TON.

Researchers continuously discuss axon regeneration, and we found another article (Li et al., 2017), published last year, that provides an overview of potential strategies for axonal regeneration of RGCs and optic nerve repair; it focuses on the role of cytokines and their downstream signaling pathways involved in the intrinsic growth program and the inhibitory environment, together with axon guidance cues for correct axon guidance. A more complete understanding of the factors limiting axonal regeneration will provide a rational basis that contributes to developing improved treatments for optic nerve regeneration.

Still other treatments are in varying stages of development, such as acupuncture (Huang and Qian, 2008), drugs (Chien et al., 2015; Bei, 2017), brimonidine (Lindsey et al., 2015), and other therapeutic options (Henrich-Noack, 2013; Kyung et al., 2015; Dun and Parkinson, 2017). The mechanism of the protective effect needs further in-depth study; however, all these findings are encouraging and open the possibility that further treatments and research may become achievable in the future.

Conclusion

Traumatic optic neuropathy is a condition that can cause visual loss; therefore, we considered it important to write this review article as an update on the most important aspects of TON. The present review provides an update on potential strategies for axonal regeneration and optic nerve repair, focusing on the research of many investigators around the world. Nowadays, therapeutic options have advanced in many fields, but more research must be done to find a definitive solution for traumatic optic neuropathy in the near future.
2019-03-18T14:04:46.515Z
2018-11-30T00:00:00.000
{ "year": 2018, "sha1": "df3014edbb9a12ed278291b32a87d004717dae55", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/IJMMS/article-full-text-pdf/B9E5EF959265.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "df3014edbb9a12ed278291b32a87d004717dae55", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250095350
pes2o/s2orc
v3-fos-license
Regular expressions for tree-width 2 graphs

We propose a syntax of regular expressions which describes languages of tree-width 2 graphs. We show that these languages correspond exactly to the languages of tree-width 2 graphs definable in counting monadic second-order logic (CMSO).

2012 ACM Subject Classification: Mathematics of computing → Discrete mathematics

Introduction

Regular word languages form a robust class of languages. One of the witnesses for this robustness is the variety of equivalent formalisms defining them. They can be described by finite automata, by monadic second-order (MSO) formulas, by regular expressions, or by finite monoids [3, 6, 10]. Each of these formalisms has some advantages, depending on the context where it is used. For example, MSO is close to natural language, regular expressions define regular languages via their closure properties, automata have good algorithmic properties and can be used as actual algorithms to decide membership in a language, etc. Similarly, regular tree languages have equivalent formalisms, for various kinds of trees [11, 13, 9].

We will here further generalize the structures considered, by moving to graphs of bounded tree-width. Intuitively, they can be thought of as "graphs which resemble trees". In this framework, we already know that counting MSO (CMSO), an extension of MSO with counting predicates, and recognizability by algebra are equivalent [1, 2], yielding a notion that could be called "regular languages of graphs of tree-width k". Engelfriet [7] also proposes a regular expression formalism matching this class, but by his own admission, these expressions closely mimic the behavior of CMSO. The main feature missing in Engelfriet's regular expressions is a mechanism for iteration, which is the central operator of regular expressions for words: the Kleene star.

In this paper, we propose a syntax of regular expressions for languages of tree-width 2 graphs that follows more closely the spirit of regular expressions on words, using Kleene-like iterations. This constitutes a first step towards the long-term objective of obtaining such expressions for languages of graphs of tree-width k. We believe the case of tree-width 2 is already a significant step in itself. Graphs of tree-width 2 form a robust class of graphs with several interesting characterizations. One of them is the characterization via the forbidden minor K4, the complete graph with four vertices. By the Robertson-Seymour theorem [12], it is known that for every k ∈ N, the class of tree-width k graphs is characterized by a finite set of excluded minors. However, this result is not constructive, and only the forbidden minors for k ≤ 3 are known.
Tree-width 2 graphs

▶ Definition 1 (Graphs). A graph G is a tuple (V, E1, E2, s, t, l1, l2, ι, o), where V is a set of vertices; E1 and E2 are two disjoint sets of unary and binary edges; s : E1 ⊎ E2 → V and t : E2 → V are source and target functions specifying the source and the target of each edge; l1 : E1 → Σ1 and l2 : E2 → Σ2 are labeling functions indicating the label of each edge; ι is the input vertex and o is the output vertex. The vertices ι and o are the interface vertices of G. All the vertices of G which are not interface vertices are called inner vertices. The interface of G is the pair (ι, o) if ι ≠ o, or the vertex ι otherwise. An a-edge is an edge labeled by the letter a. We say that G is unary if ι = o, and binary otherwise. The interface of a binary edge e is (s(e), t(e)); the interface of a unary edge e is s(e). An interface in G is a list of vertices of length 1 or 2. A graph is empty if it has no edges and all its vertices are interface vertices.

▶ Remark 2. What we call here a graph is what is usually called a hypergraph (because of the unary edges) with interface. We depict graphs with unlabeled ingoing and outgoing arrows to denote the input and the output, respectively.

▶ Definition 3 (Paths). A path p of G is a non-repeating list (v0, e1, v1, . . ., en, vn) where vi is a vertex of G and ei is an edge of G, such that the interface of ei is either (vi−1, vi) or (vi, vi−1), for every i ∈ [1, n]. The path p is directed if the interface of ei is (vi−1, vi) for every i ∈ [1, n]. The vertex v0 is the input of p, vn is its output, and (v0, vn) its interface. The path p is safe if it does not contain an interface vertex of G as an inner vertex.

▶ Example 4. Here are some examples of graphs. The c-edge in the graph G is a unary edge.

▶ Definition 5 (tw2 graphs). Consider the signature σ containing the binary operations • and ∥, the unary operations • and dom, and the nullary operations 1 and ⊤. We define tw2 terms as the terms generated by the signature σ and the alphabet Σ. We define the graph of a term t, G(t), by induction on t, interpreting the operations of the syntax as follows: in the picture above, we represent the graph G by an arrow from its input to its output. For example, the graph dom(G) is obtained from G by relocating the output to the input. We usually write tu instead of t•u and give priorities to the symbols of σ so that ab ∥ c parses to (a•b) ∥ c. We define the set of tw2 graphs as the graphs of the terms above. The graphs of a and a ∥ 1, where a ∈ Σ, are called atomic.

We will sometimes identify terms with the graphs they generate. For example, we may say that (a ∥ b) is binary or connected to say that its graph is so.

▶ Example 6. Below, from left to right, two tw2 graphs and a graph which is not tw2.

▶ Definition 8 (Graph languages). Sets of graphs are called graph languages. A graph language is unary or binary if all its graphs have this arity.

Counting monadic second-order logic

We introduce CMSO, the counting monadic second-order logic, which is used to describe graph languages.

▶ Definition 9 (The logic CMSO). Let V be the relational signature which contains two binary symbols source and target, two unary symbols input and output, and a unary symbol a for each (unary or binary) letter a ∈ Σ. Let X1 be a countable set of first-order variables and X2 a countable set of set variables. The formulas of CMSO are defined as follows: where r is an n-ary symbol of V; x1, . . ., xn, x ∈ X1; X ∈ X2; and k, m ∈ N.
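Before turning to the semantics of these formulas, the graph algebra of Definition 5 can be made concrete with a small executable model. The sketch below is our illustration, not the paper's formalism: it assumes the usual reading of the constants (1 as the single-vertex graph with input = output, ⊤ as the edgeless graph on two interface vertices), treats the first unary operation, rendered ambiguously in the extracted text, as converse (interface swap), and assumes both arguments of ∥ have the same arity; dom relocates the output to the input, as stated above.

```python
# A minimal executable model of the tw2 graph algebra of Definition 5.
# Assumptions (ours): 1 = single vertex with input = output; top = two
# isolated interface vertices; "converse" swaps input and output.
from dataclasses import dataclass
from itertools import count

_fresh = count()  # a global counter keeps independently built graphs disjoint

@dataclass(frozen=True)
class Graph:
    vertices: frozenset
    edges: tuple          # (label, source, target); target is None for unary edges
    inp: int
    out: int

def _identify(g, pairs):
    """Quotient g by identifying vertex b with vertex a for each (a, b)."""
    ren = dict((b, a) for a, b in pairs)
    f = lambda v: ren.get(v, v)
    return Graph(frozenset(f(v) for v in g.vertices),
                 tuple((l, f(s), None if t is None else f(t)) for l, s, t in g.edges),
                 f(g.inp), f(g.out))

def atom(a):
    """Graph of a single binary a-edge from input to output."""
    u, v = next(_fresh), next(_fresh)
    return Graph(frozenset({u, v}), ((a, u, v),), u, v)

def series(g, h):
    """g . h : glue the output of g to the input of h."""
    merged = Graph(g.vertices | h.vertices, g.edges + h.edges, g.inp, h.out)
    return _identify(merged, [(g.out, h.inp)])

def parallel(g, h):
    """g || h : glue the two interfaces together (g and h of the same arity)."""
    merged = Graph(g.vertices | h.vertices, g.edges + h.edges, g.inp, g.out)
    return _identify(merged, [(g.inp, h.inp), (g.out, h.out)])

def dom(g):
    """Relocate the output to the input, as described in Definition 5."""
    return Graph(g.vertices, g.edges, g.inp, g.inp)

def converse(g):
    """Swap input and output (our reading of the remaining unary operation)."""
    return Graph(g.vertices, g.edges, g.out, g.inp)

def unit():
    v = next(_fresh)
    return Graph(frozenset({v}), (), v, v)

def top():
    u, v = next(_fresh), next(_fresh)
    return Graph(frozenset({u, v}), (), u, v)

# The parsing example from the text: ab || c parses to (a . b) || c
G = parallel(series(atom("a"), atom("b")), atom("c"))
```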
Free and bound variables are defined as usual. A sentence is a formula without free variables. We use the usual syntactic sugar, for example φ ⇒ ψ as a shortcut for ¬φ ∨ ψ.

We define the semantics of CMSO formulas. To handle free variables, CMSO formulas are interpreted over pointed graphs.

▶ Definition 10 (Semantics of CMSO). Let G be a graph and Γ be a set of variables. An interpretation of Γ in G is a function mapping each first-order variable of Γ to an edge or vertex of G, and each set variable to a set of edges and vertices of G. A pointed graph is a pair ⟨G, I⟩ where G is a graph and I is an interpretation of a set of variables Γ in G. If Γ is empty, we denote it simply as G.

Let φ be a CMSO formula whose free variables are Γ, and let ⟨G, I⟩ be a pointed graph such that I is an interpretation of Γ. We define the satisfiability relation ⟨G, I⟩ |= φ as usual, by induction on the formula φ. Here is an example of the semantics of some CMSO formulas: a(e) : e is an a-edge.

If φ is a sentence, we define L(φ), the graph language of φ, as L(φ) := {G | G |= φ}.

▶ Definition 11 (CMSO definability). A graph language is CMSO definable if it is the graph language of a CMSO sentence.

▶ Example 12. The language of graphs having an a-edge from the input to the output is definable in CMSO, by the following formula for instance: Note that the graphs of this language may not be tw2 graphs.

▶ Example 13. The set of tw2 graphs is a CMSO definable language. Indeed, tw2 graphs are those graphs which exclude K4, the complete graph with four vertices, as a minor. The set of graphs which exclude a fixed set of minors can easily be defined in CMSO [5]. The set of tw2 graphs having an a-edge from the input to the output is definable in CMSO, by the conjunction of the formula φ of Ex. 12 and the formula defining tw2 graphs.

We state below a localization result, which allows us to transform a CMSO sentence into another one which talks only about a part of the original graph.

▶ Proposition 14. Let φ be a CMSO sentence, x, y ∈ X1 and X ∈ X2. There is a CMSO formula φ|(x,X,y) such that, for every graph G and interpretation I : (x → s, X → H, y → t) such that (s, H, t) is a subgraph of G, we have:

There is another presentation of the syntax of CMSO, where we remove first-order variables and the formulas including them, and add the following formulas: X ⊆ Y and r(X1, . . ., Xn), where r is an n-ary symbol of V. The formula X ⊆ Y is interpreted as "X is a subset of Y" and r(X1, . . ., Xn) as "for each i, Xi is a singleton containing xi, and r(x1, . . ., xn) holds". This presentation is more convenient in proofs by induction, as there are fewer cases to analyze.

Recognizability

We can specify languages of graphs by means of σ-algebras, generalizing to graphs the notion of recognizability by monoids. A σ-algebra A is the collection of a set D, called its domain, and, for each n-ary operation o of σ, a function oA : Dⁿ → D. A homomorphism h : A → B between two σ-algebras A and B is a function from the domain of A to the domain of B which preserves the operations of σ. Note that the set of tw2 graphs, where the operations of σ are interpreted as in Def. 5, forms a σ-algebra, which we denote by Gtw2. We say that a language L of tw2 graphs is recognizable if there is a σ-algebra A with finite domain, a homomorphism h : Gtw2 → A and a subset P of the domain of A such that L = h−1(P).

Operations on graph languages

The operations of σ can be lifted from graphs to graph languages in the natural way. We say that an operation on graph languages is CMSO compatible if, whenever its arguments are CMSO definable, then so is its result.

▶ Proposition 18. Union and the operations of σ are CMSO compatible.

We define two additional operations: substitution and iteration.

▶ Definition 19 (Substitution and iteration). Let x be a letter, let L and M be tw2 graph languages, and let G be a tw2 graph. We define the set of graphs G[L/x] by induction on G as follows: where o is an n-ary operation of σ. We define M[L/x] as:

We define similarly the simultaneous substitution M[⃗L/⃗x], where ⃗L and ⃗x are respectively a list of tw2 graph languages and a list of letters of the same length.
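At the level of terms, the substitution of Definition 19 amounts to replacing every x-leaf independently by a term of L and applying the operations of σ pointwise, then taking the union over all terms of M. The sketch below uses our own term encoding (nested tuples); it is an illustration, not the paper's notation.

```python
# Term-level sketch of G[L/x] and M[L/x] from Definition 19: each x-leaf is
# replaced independently by every term of L; operations apply pointwise.
from itertools import product

# a term is either a letter (str) or a tuple (op, child_1, ..., child_n)
def substitute(term, x, L):
    """Return the set of terms obtained by substituting terms of L for x."""
    if isinstance(term, str):
        return set(L) if term == x else {term}
    op, *children = term
    choices = [substitute(c, x, L) for c in children]
    return {(op, *combo) for combo in product(*choices)}

def substitute_language(M, x, L):
    """M[L/x] as the union of G[L/x] over all terms G of M."""
    return set().union(*(substitute(t, x, L) for t in M))

# example: substituting {b, c} for x in (a . x) || x yields the four terms
# (a . b) || b, (a . b) || c, (a . c) || b, (a . c) || c
terms = substitute(("par", ("seq", "a", "x"), "x"), "x", {"b", "c"})
```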
For every n ≥ 1, we define the language Ln,x and the iteration µx.L as follows:

▶ Remark 20. Substitution and iteration are not CMSO compatible in general. For instance, the iteration of the CMSO language {axb}, which is the set {aⁿxbⁿ | n ∈ N}, is not CMSO definable. However, under a guard condition that we introduce later, we recover CMSO compatibility.

We finally consider two restricted forms of iteration, called Kleene and parallel iteration.

▶ Definition 21 (Kleene and parallel iteration). We define the Kleene iteration L⁺ and the parallel iteration L∥ of a language L as follows, where x is a letter not appearing in L: L⁺ := µx.(L ∪ L•x) and L∥ := µx.(L ∪ (L ∥ x)).

A graph language is (of type) series, parallel, domain or test if all its graphs have this type.

Pure graphs and modules

There is a canonical way to decompose pure graphs of type series, parallel and test.

▶ Proposition 25 ([4]). Let G be a pure graph. Then G has the following shape:

Here is a picture illustrating this proposition.

▶ Definition 26 (Contexts). Let S be a set of special (unary and binary) letters, and let n ≥ 1. An n-context is a graph such that n of its edges, called holes, are numbered from 1 to n and labeled by n distinct special letters. We call 1-contexts simply contexts. Let C be an n-context whose holes are h1, . . ., hn and let H1, . . ., Hn be graphs such that hi and Hi have the same arity, for all i ∈ [1, n]. We define C[H1, . . ., Hn] as the graph obtained from the disjoint union of C and H1, . . ., Hn by removing the holes of C, identifying, for every i ∈ [1, n], the input of hi with the input of Hi and the output of hi with the output of Hi, and letting its interface be that of C.

▶ Definition 27 (Islands and modules). An island of a graph G is a graph H such that there is a context C satisfying G = C[H]. A module is an island which is pure. Two islands (or modules) of a graph are parallel if they have the same interface. Since modules are pure, we can speak of series, parallel, domain and test modules of a graph.

The following picture illustrates a unary and a binary island of a graph.

▶ Remark 28. Our notion of modules is different from the one usually used in graph theory, more precisely in the setting of modular decompositions.

▶ Remark 29. The parallel composition of two islands of a graph G with the same interface is also an island of G with the same interface. Similarly, the parallel composition of two modules of a graph G with the same interface is also a module of G with the same interface. This justifies the following definition.

▶ Definition 30 (Maximal islands and modules). Let G be a graph and I an interface in G. The maximal island at I is the parallel composition of all the islands of G whose interface is I; we denote it by max-islandG(I). The maximal module at I is the parallel composition of all the modules of G whose interface is I; we denote it by max-moduleG(I).

▶ Remark 31. The maximal module at a given interface does not always exist.

▶ Proposition 32. Being series, parallel, domain, test, an island, a module, a maximal island, or a maximal module are CMSO definable properties.

Regular expressions for tw2 graphs

Regular expressions for word and multiset graphs

▶ Definition 33 (Word and multiset alphabets). Let Σw be the set of terms whose graphs have the following form, where a, b ∈ Σ2 and c ∈ Σ1: Let Σm be the set of terms whose graphs have the following form, where a ∈ Σ2 and b ∈ Σ1: Word graphs are the graphs generated from those of Σw by series composition, and multiset graphs are the graphs generated from those of Σm by parallel composition.

▶ Example 34. Below, from left to right, a word graph and two multiset graphs.

▶ Definition 35 (Word and multiset expressions). Word expressions are defined as follows: Multiset pre-expressions are defined as follows: Multiset expressions are those pre-expressions where each sub-term appearing under a parallel iteration is built using a single element a ∈ Σm (all the other operations are allowed). The graph language of an expression is defined as usual.

▶ Remark 36. To see why the condition on multiset regular expressions is useful, consider the expression e = (a ∥ b). The language of its parallel iteration is the set of multiset graphs which have the same number of a-edges and b-edges, and this is not a CMSO definable language.

Context-free expressions

▶ Definition 37 (Context-free expressions). We define context-free expressions as the set of terms generated by the following syntax: where ew and em are respectively word and multiset regular expressions. We define the language of a context-free expression e, denoted L(e), by induction on e, interpreting the operations of the syntax as described in Sec. 2.4.

Regular expressions for tw2 graphs will be defined as a restriction of context-free expressions, where substitution and iteration are allowed only under a guard condition that we shall explain in the following.

The guard condition

▶ Definition 38 (Guarded letters). Let G be a graph and x a letter. We say that:
x is s-guarded in G if x is binary and every x-labeled edge of G is parallel to a module;
x is p-guarded in G if x is binary and no x-labeled edge of G is parallel to a module;
x is d-guarded in G if x is unary and every x-labeled edge of G is parallel to a module;
x is t-guarded in G if x is unary and no x-labeled edge of G is parallel to a module.

Let τ ∈ {s, p, d, t} be a type and L a graph language. We say that x is τ-guarded in L if it is τ-guarded in every graph of L.

▶ Definition 39 (Guard condition). Let x be a letter, M a tw2 graph language and L a pure language of type τ. The substitution M[L/x] is guarded if x is τ-guarded in M; likewise, the iteration µx.L is guarded if x is τ-guarded in L. We say that the iteration µx.L is of type τ if L is of type τ.

▶ Definition 40 (Regular expressions). A regular expression is a context-free expression where every substitution and iteration is guarded. A language of graphs is regular if it is the language of some regular expression.

▶ Remark 41. When L is test and x is a unary letter, then µx.L is always guarded.

▶ Proposition 42. We can decide if a context-free expression is regular.

▶ Remark 43. Be aware that Prop. 42 is about deciding a syntactic property of e, namely that the iterations and substitutions are guarded. However, the problem of determining if a context-free expression defines a CMSO language is undecidable. This apparent contradiction comes from the fact that some context-free expressions which are not guarded define CMSO languages, as we shall see in the upcoming examples.

Examples

▶ Example 44. The iteration µx.axb is not guarded. Indeed, the language of axb is series, as it contains a single series graph G. However, the letter x is not s-guarded in G, because it is not parallel to any module of G. The graphs of this iteration look like this:

▶ Example 45. The iteration µx.a(x ∥ c)b is guarded. Indeed, the language of a(x ∥ c)b is series; actually, it contains a single graph G, depicted below left, which is series. The letter x is s-guarded in G, because it is parallel to a module, namely the c-edge. The graphs of this iteration look like this:

Note the similarity between the graph language of µx.axb and that of µx.a(x ∥ c)b: the former is obtained by forgetting the c-edges of the latter. Yet the latter is CMSO definable, while the former is not. In the case of µx.a(x ∥ c)b, the c-edges will guide a CMSO formula to relate the a-edges and the b-edges of the same iteration depth. This is the main intuition behind the guard condition for series languages.

▶ Example 46. The iteration µx.(axa ∥ axa) is guarded. Indeed, the language of (axa ∥ axa) is parallel, as it contains a unique graph G (the left graph below) which is parallel. The letter x is p-guarded because all the occurrences of x are not parallel to any module of G. Note that the graphs of this expression have the following shape: they all start with a binary tree whose edges are labeled by a, and end with the mirror image of this tree, while the corresponding leaves are connected by an x-edge. Those trees are colored in red below. At first glance, this expression does not seem to be CMSO definable, as it seems that we need to test whether a graph starts and ends with the same tree. We will see, however, that the language of this expression, like those of all regular expressions, is CMSO definable.

The guard condition is not "perfect", in the sense that some non-guarded context-free expressions might generate CMSO definable languages, as shown in the following example.

▶ Remark 48. Intuitively, the guard condition allows only those graphs where series and parallel operations alternate. This is why we add the word and multiset expressions: to allow graphs where we can iterate only series or only parallel operations, respectively.

Main result

The main result of this paper is the following theorem:

▶ Theorem 49. Let L be a language of tw2 graphs. We have: L is regular ⇔ L is CMSO definable ⇔ L is recognizable.

Thanks to Thm.
17, CMSO definability implies recognizability. We show that regularity implies CMSO definability in Sec. 5 and that recognizability implies regularity in Sec. 6.

Companion relations

▶ Definition 50 (Companion relation). Let G be a graph. Two paths of G are orthogonal if they do not share any edge, and whenever they share a vertex, it is necessarily an interface vertex of one of them. A set of paths is a set of orthogonal paths if its paths are pairwise orthogonal.

A relation R on the vertices of G is a companion relation if there is a set of orthogonal paths P such that (v, w) ∈ R iff (v, w) is the interface of a path p ∈ P. We say that p is a witness for (v, w), and that P is a witness for the relation R.

▶ Example 51. The relation indicated by the green dotted arrows below is a companion relation. This is not the case for the one indicated by the red dotted arrows.

We introduce CMSOr, an extension of CMSO where quantification over companion relations is possible.

▶ Definition 52 (The logic CMSOr). Let Xr be a set of relation variables, whose elements are denoted R, S, . . .. The formulas of CMSOr are of the following form:

As for CMSO, we need to define the semantics of a formula over pointed graphs to handle free variables.

▶ Definition 53 (Semantics of CMSOr). Let G be a graph and Γ be a set of variables. An interpretation of Γ is as usual, but here every relation variable is mapped to a binary relation on the vertices of G. We define the satisfiability relation ⟨G, I⟩ |= φ as usual, by induction on the formula φ. The only new cases are the quantification ∃R, which is interpreted as "there exists a companion relation R on the vertices of the graph", and the formulas (x, y) ∈ R, which are interpreted as "the pair of vertices (x, y) is in R".

The logic CMSOr has the same expressive power as CMSO

To guess a companion relation in CMSO, we show how to encode a set of orthogonal paths by a collection of sets called a footprint.

▶ Definition 54 (Frontier edges of a path). Let p = (v0, e1, v1, . . ., en, vn) be a path. If n > 1, we call e1 the opening edge of p and en its closing edge. If n = 1, we call e1 its single edge. Opening, closing and single edges are called the frontier edges of p; the other edges are called its inner edges.

▶ Definition 55 (Footprint). A footprint in a graph G is the following collection of data: a partition of the vertices of G into non-path and path vertices, a partition of edges into non-path and path edges, a partition of path edges into frontier and inner edges, a partition of frontier edges into opening, closing and single edges, and a partition of path edges into direct and inverse edges.

The partition of path edges into direct and inverse ones provides them with a new orientation: they keep their original orientation if they are direct edges, or get reversed (we swap the source and target) if they are inverse edges.

Let F be a footprint. A path p is encoded by F if its edges and vertices are path edges and path vertices of F, and if its inner, frontier, opening, closing and single edges are edges of the corresponding sets in F. Moreover, p must form a directed path with the new orientation dictated by F.

▶ Example 56. We represent below a footprint in the left graph of Ex.
51. Non-path edges and vertices are grey, path vertices are black, opening edges are green, closing edges are yellow, single edges are pink, and all the other inner edges are black. For path edges, we display the new orientation induced by the footprint instead of the original one. The set of paths encoded by this footprint is a witness that the green relation of Ex. 51 is a companion relation.

▶ Proposition 57. Let G be a graph and P a set of orthogonal paths of G. There is a footprint F such that P is the set of paths encoded by F.

▶ Corollary 58. If a language is CMSOr definable then it is CMSO definable.

Regular implies CMSO definable

▶ Theorem 59. If a language is regular, then it is CMSO definable.

To prove Thm. 59, we proceed by induction on regular expressions. The cases of word and multiset regular expressions follow from the similar result for words and commutative words. The cases of union and the operations of the signature σ follow from Prop. 18. We are left with the cases of substitution and iteration; the rest of this section is dedicated to proving the following proposition.

▶ Proposition 60. Let x be a letter and L and M be languages of tw2 graphs. We have:

We handle the case of iteration, the case of substitution being similar. We show first that the iteration of a CMSO definable language, without any guard condition, is definable in an extension of CMSO where we are allowed to quantify existentially over sets of subgraphs of the input graph, which we call CMSOd. This logic is obviously strictly more expressive than CMSO, because it amounts to quantifying over sets of sets. Based on this, we show that the guarded iteration of a CMSO definable language is definable in CMSOr, the extension of CMSO with companion relations defined in the previous section. This concludes the proof, the logic CMSOr being equivalent to CMSO.

Decompositions

When a graph is in the iteration µx.L of some language L, it is possible to structure it into a tree-shaped decomposition, such that each part of this decomposition "comes from L". In the following, we define such decompositions.

▶ Definition 61 (Independent graphs). Let G be a graph and H, K be subgraphs of G. We say that H and K are independent if they do not share any edge, and whenever they share a vertex, it is necessarily an interface vertex of both H and K.

▶ Definition 62 (Decompositions). A decomposition of G is a set D of modules of G such that G ∈ D and, for every pair of graphs in D, they are either independent or one is a module of the other. We call the graphs of a decomposition its components. We call the interfaces of D the set of interfaces of its components.

Let H and K be components of a decomposition D. We say that H is a child of K if H is a module of K and there is no component C of D, distinct from H and K, such that H is a module of C and C is a module of K.

▶ Definition 63 (Body of a component). Let G be a graph, D a decomposition of G and C a component of D. The body of C is the subgraph of G whose vertices are those of C minus the inner vertices of its children, and whose edges are those of C minus those of its children. The x-body of C is the graph whose interface is the interface of C, whose vertices are the vertices of the body of C, and whose edges are the edges of the body of C plus, for each child F of C, an x-edge whose interface is the interface of F. We denote it by x-bodyD(C).

▶ Definition 64 (L-decompositions). Let L be a graph language. An L-decomposition of a graph G is a decomposition of G such that the x-body of each of its components is in L.

▶ Remark 65. The body of a component is a subgraph of G, but its x-body is not a subgraph of G in general, because of the added x-edges.

▶ Proposition 66. Let L be a graph language. We have:

The logic CMSOd

Let φ be a CMSO formula defining a graph language L.
Using Prop. 66, we can express that a graph G is in the iteration µx.L by guessing a decomposition D of G and ensuring that the x-body of each component satisfies φ. But guessing a set of subgraphs is not expressible in CMSO. This is why we introduce CMSOd, an extension of CMSO where this is allowed.

▶ Definition 67 (CMSOd logic). Let Xd be a set of graph set variables, whose elements are denoted X, Y, . . .. The formulas of CMSOd are of the following form:

Free and bound variables are defined as usual. As for CMSO, we need to define the semantics of a formula over pointed graphs to handle free variables.

▶ Definition 68 (Semantics of CMSOd). Let G be a graph and Γ be a set of variables. An interpretation of Γ is a function mapping every first-order variable of Γ to an edge or vertex of G, every set variable to a set of edges and vertices of G, and every graph set variable to a set of subgraphs of G. We define the satisfiability relation ⟨G, I⟩ |= φ as usual, by induction on φ. The only new cases compared to CMSO are the quantification ∃X, which is interpreted as "there exists a set of subgraphs X", and the formulas (s, Z, t) ∈ X, which are interpreted as "the graph whose input is s, whose output is t and whose set of edges and vertices is Z is an element of X".

▶ Proposition 69. There is a CMSOd formula decomp(X), without graph set quantification, such that for every graph G and every set of subgraphs D of G, we have:

Iteration is expressible in CMSOd

Given a CMSO formula φ, we construct a formula φ̂ having X as unique free variable, which expresses the fact that the x-body of the head of the decomposition X satisfies φ. To construct φ̂, we need the following definition.

▶ Definition 70 (Complete sets). Let D be a decomposition of a graph G. Let H be a set of edges and vertices of G. We say that H is complete if, whenever it contains an edge or an inner vertex of a child C of G (seen as a component of D), then it contains all the edges and inner vertices of C.

Let K be a set of edges and vertices of the x-body of G. We denote by completeD(K) the set of edges and vertices of G obtained from K by replacing every x-edge coming from a child C of G by the set of edges and inner vertices of C.

▶ Remark 71. Note that if H is complete, there is a set S such that H = completeD(S).

Here is a picture illustrating complete sets. The green part is the body of G and the purple modules are its children. The yellow sets are complete, but the pink one is not.

▶ Proposition 72. The following formulas are CMSOd definable:
childX(Y): Y is the set of edges and inner vertices of a child of the input graph w.r.t. the decomposition X.
is-completeX(Y): Y is complete w.r.t. X.
body-edgeX(Y): Y is a singleton containing an edge from the body of the input graph w.r.t. X.
sourceX(Y, Z): childX(Z) holds and Y is a singleton containing the input of the corresponding child.
targetX(Y, Z): the same as above, where input should be replaced by output.
choiceX(Y, Z): Z contains all the body elements of Y and, for every child contained in Y, Z contains exactly one element of this child.

We construct the formula φ̂ by induction on the structure of φ. We suppose that φ is built using the syntax of CMSO where only set variables are allowed.
▶ Definition 73. Let φ be a CMSO formula whose free variables are Γ. We define the CMSOd formula φ̂, whose free variables are Γ ∪ {X}, by induction as follows:

Transfer results are results of this form: to check that a transformation f(G) of a structure G satisfies a formula φ, construct a formula f−1(φ) that G should satisfy. The proposition below is a transfer result, where the transformation is the x-body.

▶ Proposition 74. Given a CMSO sentence φ, there is a CMSOd formula φ̂ having X as unique free variable, such that for every graph G and every decomposition D of G whose components are non-empty:

The formula φ̂ expresses the fact that the x-body of the head of a decomposition satisfies φ. Using this formula and the localization construction of Prop. 14, we construct a formula µx.φ saying that the x-bodies of all the components of a decomposition satisfy φ.

▶ Definition 75. If φ is a CMSO formula, we let µx.φ be the following CMSOd formula:

The following proposition says that the language of µx.φ is the iteration of that of φ.

▶ Proposition 76. If φ is a CMSO formula defining a language of non-empty graphs, then:

Guarded iteration of CMSO languages is CMSOr

The idea here is that when the iteration µx.L is guarded, L-decompositions can be encoded by sets of edges and vertices and by companion relations.

The case of test languages

Let µx.L be a guarded iteration of type test, G ∈ µx.L and D an L-decomposition of G. Suppose that G is the left graph below, and that the red vertices are the interfaces of D (recall that test graphs are unary, hence all the components of a decomposition of G are unary). We claim that, thanks to the guard condition, this information is enough to reconstruct the whole decomposition D. More precisely, we claim that the components of D are exactly the maximal modules of G whose interfaces are the red vertices, as depicted above.

▶ Definition 78. Let G be a graph and S be a set of vertices of G. We define Dt(S) as the set of maximal modules of G whose type is test and whose interfaces belong to S.

▶ Proposition 79. Let µx.L be a guarded iteration of type test. We have:

Proof. (⇒) Follows from Prop. 66. To prove (⇐), we define the property Pn as follows:

We prove, by induction on n, that Pn is valid for every n ≥ 1, and this is enough to conclude.

When n = 1, take S to be the singleton containing the interface of G. We have that Dt(S) = {G} and, since G ∈ L, Dt(S) is an L-decomposition of G.

Let G ∈ Ln+1,x. By definition, there is a k-context H and graphs G1, . . ., Gk such that:

Thanks to the guard condition, there is no module of H parallel to a hole of H. For every i ∈ [1, k], let Si be the set of vertices provided by the induction hypothesis applied to the graph Gi. Here is a picture illustrating these notations:

The set of subgraphs D defined below is an L-decomposition of G.

To conclude, we only need to find a set of vertices S of G such that Dt(S) = D. Let S := S1 ∪ · · · ∪ Sk ∪ {ι}, where ι is the interface of G. Let us show that Dt(S) = D. This is a consequence of the following lemma:

▶ Lemma 80. Let C be a context, K a graph and I an interface in K of the same arity as the hole of C. Suppose that the hole of C is not parallel to any module. We have: max-moduleC[K](I) = max-moduleK(I). ◀

▶ Theorem 81. Suppose that µx.L is a guarded iteration of type test. We have: L is CMSO definable ⇒ µx.L is CMSO definable.

Let φ be a CMSO formula whose language is L. We transform the CMSOd formula µx.φ of Def. 75, whose language is µx.L, into a CMSO formula µxg.φ of the same language. The formula µxg.φ is obtained by replacing the quantification ∃X by the set quantification ∃S, and by replacing every sub-formula of µx.φ of the form (s, Z, t) ∈ X by this formula:

The last part of this formula is expressible in CMSO thanks to Prop. 32. The language of µxg.φ is the set of graphs for which we can find an L-decomposition encoded by a set of vertices S, and this is precisely the language µx.L thanks to Prop. 79.
◀

The case of domain languages

Let µx.L be a guarded iteration of type domain, G ∈ µx.L and D an L-decomposition of G. Contrary to the test case, the interfaces of D are not enough to reconstruct D. Indeed, in this case, a component of D whose interface is v is not necessarily the maximal module at v, but some domain module of interface v, among possibly many others. A way to determine whether a domain module is in the decomposition is to check whether it contains an interface of the decomposition. This works only for the components which are not the leaves of the decomposition. This is why we need to say explicitly which domain modules are the leaves. We do so by coloring the edges of the latter.

In the following, we show that a set of vertices of a graph (representing the interfaces of a decomposition), together with a coloring of this graph (indicating which modules are leaves), is enough to recover the decomposition.

▶ Definition 82 (Coloring, active modules). A coloring of a graph G is a set of its edges, called leaf edges. A module of G is active if it contains a leaf edge.

▶ Definition 83 (Dd(S, col)). Let G be a graph, S a set of vertices and col a coloring of G. We let Dd(S, col) be the set of active modules of G of type d whose interfaces belong to S.

▶ Proposition 84. Let µx.L : d be a guarded iteration. We have: G ∈ µx.L ⇔ ∃S, col. S is a set of vertices and col a coloring of G such that Dd(S, col) is an L-decomposition of G.

Proof. Similar to the proof of Prop. 79. ◀

As in the previous section, we use Prop. 84 to get the following theorem.

▶ Theorem 85. Suppose that µx.L : d is a guarded iteration. We have: L is CMSO definable ⇒ µx.L is CMSO definable.

The case of parallel languages

The case of guarded iterations of type parallel is similar to the test case. Let µx.L be a guarded iteration of type parallel, G ∈ µx.L and D an L-decomposition of G. We show that the set of interfaces I of D is enough to recover the whole decomposition D, because its components are the maximal modules of G whose interfaces belong to I. However, in this case, I is no longer a set of vertices, but a set of pairs of vertices, that is, a relation on the vertices of G. We will show that this relation is necessarily a companion relation. Using this result and the fact that CMSO and CMSOr have the same expressive power, we prove that the iteration is CMSO definable.

▶ Definition 86 (Dp(R)). Let G be a graph and R a relation on the vertices of G. We define Dp(R) as the set of maximal modules of G whose type is parallel and whose interfaces belong to R.

▶ Proposition 87. Let µx.L be a guarded iteration of type parallel. We have:

▶ Proposition 88. Let µx.L be an iteration of type parallel and let G be a graph. The interfaces of every L-decomposition of G form a companion relation.

Proof. We prove by induction on n ≥ 1 that the interfaces of every L-decomposition of depth n of some graph G form a companion relation, witnessed by a set of paths P, such that the interface of G is witnessed by two parallel paths of P.

When n = 1, the decomposition D is reduced to the graph G. Since G is parallel, it has two parallel paths whose interface is the interface of G. Take P to be these two paths.

Suppose that D is a decomposition of depth n + 1. Hence it is of the form:

where Di is an L-decomposition of depth at most n of a graph Gi, for every i ∈ [1, k]. Let Pi be the set of paths provided by the induction hypothesis for Di, and let pi, qi be the two paths witnessing the interface of Gi, for i ∈ [1, k].
We set H := x-bodyD(G). Since H is parallel, it has two parallel paths p and q whose interface is the interface of H. We transform the paths p and q of H into paths p′ and q′ of G as follows: p′ and q′ are obtained from p and q respectively by replacing every x-edge e of H which is substituted by some Gi with the corresponding path pi (for p′) or qi (for q′). Let P be the following set of paths:

The set P is orthogonal and witnesses the interfaces of D. Moreover, the interface of G is witnessed by two parallel paths of P, namely p′ and q′. This concludes the proof. ◀

▶ Theorem 89. Suppose that µx.L is a guarded iteration of type parallel. We have: L is CMSO definable ⇒ µx.L is CMSO definable.

The case of series languages

Let µx.L be a guarded iteration of type series, G a graph in µx.L and D an L-decomposition of G whose set of interfaces is I. As in the domain case, the set I is not enough to reconstruct the decomposition D, and we need a coloring of the graph to determine which modules are the leaves of the decomposition D. We show also that the set of interfaces I is a companion relation, which will be enough to conclude.

▶ Definition 90 (Ds(R, col)). Let G be a graph, R a relation on the vertices of G and col a coloring of G. We let Ds(R, col) be the set of active modules of G of type series whose interfaces belong to R.

▶ Proposition 91. Let µx.L be a guarded iteration of type series. We have: G ∈ µx.L ⇔ ∃R, col. R is a relation on the vertices of G, col is a coloring of G and Ds(R, col) is an L-decomposition of G.

▶ Proposition 92. Let µx.L be a guarded iteration of type series and let G be a graph. The interfaces of every L-decomposition of G form a companion relation.

▶ Theorem 93. Suppose that µx.L is a guarded iteration of type series. We have: L is CMSO definable ⇒ µx.L is CMSO definable.

Recognizable implies regular

▶ Theorem 94. If a language of tw2 graphs is recognizable, then it is regular.

We proceed gradually, by showing that this result holds for domain-free graphs, then for domain graphs, then for tw2 graphs.

▶ Definition 95 (Domain-free). A graph is domain-free if all its domain modules are atomic.

To give an example of how these proofs work, suppose that we have the following property:

▶ Proposition 96. If a language of domain-free graphs is recognizable, then it is regular.

Using Prop. 96, let us show the following property:

▶ Proposition 97. If a language of domain graphs is recognizable, then it is regular.

Proof. Let L be a language of domain graphs, A an algebra whose domain is D, h : Gtw2(Σ) → A a homomorphism and F ⊆ D such that h−1(F) = L. Let us show that Lv, the set of graphs over Σ whose image is v, is regular for every v ∈ D.

We associate every v ∈ D with a new letter xv and let Γ := {xv | v ∈ D}. If Q ⊆ D, we denote by XQ the letters of Γ corresponding to these elements. We extend the homomorphism h to tw2 graphs over the alphabet Σ ∪ Γ by letting h(xv) = v for every xv ∈ Γ.

Let v ∈ D, Q ⊆ D and X ⊆ Γ. We define the set of graphs L^{Q,X}_v as follows. We let G be in this set if and only if: G is a domain graph over the alphabet Σ ∪ X, the image of G is v, and the images of the strict domain modules of G belong to Q.

Let us show that L^{Q,X}_v is regular when XQ ∩ X = ∅. We proceed by induction on the size of Q.

Suppose that Q = ∅. This case is based on the following lemma, obtained by case analysis on the graph G.

▶ Lemma 98. Let G be a domain graph whose domain modules, distinct from G itself, are all atomic. There is a domain-free graph H such that G = dom(H).

For every w ∈ D, let M^X_w be the set of domain-free graphs over the alphabet Σ ∪ X whose image is w. By Lem. 98, we have the following equation:

which concludes the base case, thanks to Prop. 96. To handle the inductive case, we notice the following equality:

Conclusion

We are interested in studying the complexity-theoretic properties of our expressions: for instance, understanding the complexity of deciding whether an expression is guarded, and the costs of translations between the different formalisms (expressions, CMSO, algebra). This can help us get a better grasp of what role these expressions can play, and of the fine interplay between these formalisms. As stated in the introduction, this work on tree-width 2 graphs is meant to constitute a first step towards the case of tree-width k.
2022-06-29T13:05:55.857Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "0e666ed746f7b6da0fb5d576faf3736b83c942be", "oa_license": "CCBY", "oa_url": "https://hal.science/hal-03753626/document", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "165da5cd7edb7de693e4b779a75f568a5636357a", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
266805388
pes2o/s2orc
v3-fos-license
Long-term surveillance of invasive pneumococcal disease: The impact of 10-valent pneumococcal conjugate vaccine in the metropolitan region of Salvador, Brazil

Background: In 2010, Brazil introduced the ten-valent pneumococcal conjugate vaccine (PCV10) in the national infant immunization program. Limited data on the long-term impact of PCV10 are available from lower-middle-income settings. We examined invasive pneumococcal disease (IPD) in Salvador, Bahia, over 11 years.

Methods: Prospective laboratory-based surveillance for IPD was carried out in 9 hospitals in the metropolitan region of Salvador from 2008 to 2018. IPD was defined as Streptococcus pneumoniae cultured from a normally sterile site. Serotype was determined by multiplex polymerase chain reaction and/or Quellung reaction. Incidence rates per 100,000 inhabitants were calculated for overall, vaccine-type, and non-vaccine-type IPD using census data as the denominator. Incidence rate ratios (IRRs) were calculated to compare rates during the early (2010–2012), intermediate (2013–2015), and late (2016–2018) post-PCV10 periods in comparison to the pre-PCV10 period (2008–2009).

Results: Pre-PCV10, overall IPD incidence among all ages was 2.48/100,000. After PCV10 introduction, incidence initially increased (early post-PCV10 IRR 3.80, 95% CI 1.18–1.99) and then declined to 0.38/100,000 late post-PCV10 (IRR 0.15; 95% CI 0.09–0.26). The greatest reductions in the late post-PCV10 period were observed in children aged ≤2 years, with no cases (IRR not calculated), and those aged ≥60 years (IRR 0.11, 95% CI 0.03–0.48). Late post-PCV10, significant reductions were observed for both PCV10 serotypes (IRR 0.02; 95% CI 0.0–0.15) and non-PCV10 serotypes (IRR 0.27; 95% CI 0.14–0.53). Non-PCV10 serotypes 15B, 12F, 3, 17F, and 19A became predominant late post-PCV10, without a significant increase in serotype-specific IPD incidence compared to pre-PCV10.

Conclusion: Significant declines in IPD, including among adults not eligible for vaccination, suggest direct and indirect protection up to nine years after PCV10 introduction, without evidence of significant replacement disease. Continued surveillance is needed to monitor changes in non-vaccine serotypes and inform decisions about introducing higher-valent PCVs.

Introduction

Streptococcus pneumoniae frequently colonizes the human nasopharynx and may spread in the respiratory tract, causing non-invasive pneumococcal disease, or through the bloodstream to other sites, leading to invasive pneumococcal disease (IPD) such as meningitis, septicemia, or bacteremic pneumonia [1]. In Brazil, IPD represents an important public health problem, mainly affecting children and the elderly [2]. In the period before the introduction of the 10-valent pneumococcal vaccine, 2007–2009, the average incidence rate of pneumococcal meningitis among children under five years of age was 2.3 cases/100,000 inhabitants [3].
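For readers who wish to reproduce the rate comparisons described in the Methods above, the calculation has a standard form. The sketch below is a generic illustration with made-up case counts and person-year denominators, not the study's data, using the usual Poisson log-normal approximation for the 95% confidence interval.

```python
# Generic sketch of the incidence and incidence-rate-ratio (IRR) calculations.
# The counts and denominators below are illustrative placeholders, not the
# study's data; only the rate-per-100,000 convention follows the paper.
import math

def incidence_per_100k(cases, person_years):
    """Incidence rate per 100,000 person-years."""
    return 100_000 * cases / person_years

def irr_with_ci(cases_post, py_post, cases_pre, py_pre, z=1.96):
    """IRR of a post period versus the pre period, with a 95% CI
    based on the log-normal approximation for Poisson counts."""
    irr = (cases_post / py_post) / (cases_pre / py_pre)
    se_log = math.sqrt(1 / cases_post + 1 / cases_pre)
    return irr, irr * math.exp(-z * se_log), irr * math.exp(z * se_log)

# hypothetical illustration: 40 cases over 10.5M person-years post-PCV10
# versus 180 cases over 7.2M person-years pre-PCV10
irr, lo, hi = irr_with_ci(40, 10_500_000, 180, 7_200_000)
print(f"IRR {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # IRR 0.15 (95% CI 0.11-0.21)
```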
PCVs are now in wide use worldwide, including in most low-income countries [9]. Studies in various countries have consistently demonstrated a dramatic decline in vaccine-type IPD following PCV introduction, through both direct and indirect effects [10-12]. This reduction was partially offset in some countries using 7-valent PCV by increases in IPD due to serotype 19A (which is included in PCV13 and in all newly formulated PCVs) [13]. Continuous monitoring of PCV impact on vaccine-type and non-vaccine-type IPD is important for quantifying the benefits of PCV and informing vaccine policy decisions. Limited data are available on the long-term impact of PCV10 in the context of low- or middle-income countries.

The Brazilian government introduced PCV10 in March 2010 [14,15]. The initial schedule was three primary doses at ages 2, 4, and 6 months plus a booster dose at 12 to 15 months of age [15]. In addition, catch-up vaccination was offered for children between 7 and 11 months of age (two doses) plus a booster at 12-15 months of age; children between 12 and 23 months were offered a single catch-up dose and no booster. Based on evidence of vaccine impact and effectiveness from countries using 2 primary doses plus 1 booster, the schedule was changed in 2016 to 2 primary doses at 2 and 4 months and a booster dose at age 12-18 months [16].

Following PCV10 introduction in Brazil, declines in vaccine-type IPD incidence ranging from 41.3% to 87.4% were observed in young children [17-19], as well as considerable indirect effects, with a 50.4% decline in vaccine-type IPD among persons aged ≥65 years [19]. Nevertheless, the latest published data on PCV10 impact in Brazil cover a period of only up to 5 years following the introduction of PCV10 [17-20]. Longer-term data on PCV10 impact and currently circulating strains are needed to guide decisions about newly formulated and higher-valent PCVs.

Study setting

The metropolitan region of Salvador is a large urban setting located in northeastern Brazil; it encompasses 13 municipalities, with a population of 3.5-3.9 million inhabitants [21] and a stagnant poverty rate of around 30% [22]. The health facilities in the metropolitan region of Salvador are organized into 12 health districts with 13 Emergency Care Units (UPA) and 30 hospitals (including private and public facilities). Estimated coverage of three or more PCV10 doses among children aged 6-48 months ranged from 66.4% in 2011 to 82.9% in 2018 [23].
Pneumococcal Disease Surveillance

Laboratory-based surveillance for IPD was conducted at the Couto Maia Institute (ICOM), the Paediatric Centre Professor Hosannah de Oliveira (CPPHO) and the Cerebrospinal Fluid Laboratory (SINPEL). ICOM has been the main reference hospital for infectious diseases in the metropolitan region of Salvador since 1996 [24], and CPPHO is the pediatric unit of the Federal University of Bahia Hospital. Cultures of sterile samples collected from patients admitted to these facilities are processed at on-site microbiology laboratories. In addition, SINPEL performs cerebrospinal fluid (CSF) culture for patients admitted to seven hospitals in the metropolitan region of Salvador. Together, these nine hospitals represent more than 90% of the total IPD cases reported to the State Secretary of Health (unpublished data). IPD cases were defined by identification of pneumococcus from a normally sterile site (CSF, blood, or pleural fluid) in a patient admitted to any of the participating hospitals. Cultures were ordered at the discretion of the attending physician; standardized clinical case definitions were not used. The study team reviewed laboratory records 5 days a week to identify new culture isolations of S. pneumoniae among hospitalized patients. At ICOM, a standardized data entry form was used to extract demographic and clinical information from the medical records of IPD cases.

Pneumococcal isolates from all sites were sent to the Laboratory of Pathology and Molecular Biology at the Gonçalo Moniz Institute, Oswaldo Cruz Foundation (LPBM-IGM/FIOCRUZ) for confirmation and capsular serotyping. Confirmation of S. pneumoniae was performed using standard bacteriological techniques, including Gram stain, colony morphology on agar media with 5% sheep blood, optochin susceptibility (5 μg Oxoid disks), and bile solubility [25]. Serotyping was performed using a previously described 8-reaction sequential multiplex PCR method [26]. Isolates with negative or unresolved PCR serotyping results were subjected to Quellung reaction testing for capsular type definition at the U.S. Centers for Disease Control and Prevention (CDC).

Ethical approval

This study was approved by the National Committee for Ethics in Research (CONEP), Brazilian Ministry of Health (no. 1.667.451), and by the Institutional Review Board of IGM-FIOCRUZ. This activity was reviewed by CDC and was conducted consistent with applicable federal law and CDC policy (see, e.g., 45 C.F.R. part 46; 21 C.F.R. part 56; 42 U.S.C. §241(d); 5 U.S.C. §552a; 44 U.S.C. §3501 et seq.). A waiver of informed consent was granted by the institutional review board because the research involves only minimal risk.

Results

Overall, 24 distinct pneumococcal serotypes were identified in the pre-PCV10 period, 30 in the early post-PCV10, 21 in the intermediate post-PCV10, and 9 in the late post-PCV10 periods. The most common PCV10 serotypes in the pre-PCV10 period were 14 (11/90, 12.2%), 23F (11/90, 12.2%), 6B (8/90, 8.8%), and 19F (8/90, 8.8%). In the late post-PCV10 period, the only PCV10 serotype identified was serotype 4, one case in an adult aged 36 years (supplementary Table 1). Non-PCV10 serotype 3 rose from 0.16 cases per 100,000 inhabitants to 0.25 per 100,000 inhabitants in the early post-PCV10 period, then declined to 0.13 and 0.05 in the intermediate and late post-PCV10 periods, respectively. No significant changes were observed in the serotype-specific incidence of IPD due to the most common non-PCV10 serotypes, with the exception of serotype 12F, which had a significant increase in the early post-PCV10 period but no significant difference compared to pre-PCV10 in the intermediate or late post-PCV10 periods (Table 2).
Discussion

Over 9 years following PCV10 introduction, we observed declines of 84.7% and 98.0% in overall IPD and PCV10-type IPD, respectively, in the metropolitan region of Salvador. The reduction was seen in all age groups, including adults, which may reflect indirect vaccine effects. We also observed an unexpected decline in non-vaccine-type IPD, which could have resulted from changes in surveillance over time or the impact of other (non-vaccine) factors. In the post-PCV period, the distribution of serotypes was more diverse, with no evidence of replacement disease. Higher-valent newer PCVs, particularly PCV20, could offer better serotype coverage for the remaining burden of IPD. These data contribute to a growing body of evidence of long-term PCV10 impact on IPD in middle-income countries.

Several studies employing hospital-based surveillance or population-based data from other regions of Brazil have shown declines in IPD following PCV10 introduction [17-19,27]. However, these studies differ with respect to study population, case definitions, and period since PCV10 introduction. An early investigation conducted at the national level using time-series analysis aimed to predict trends in post-vaccination IPD rates. This analysis utilized data from the National Surveillance System for notifiable diseases (SINAN) from 2008 through 2013. The findings revealed a significant decrease of 41.3% in PCV10-type IPD following PCV10 administration, mostly in children aged 2-23 months [17]. A study conducted at the University Hospital of São Paulo (HU-USP), in the most populous city in the Americas, located in the southeast region of Brazil, reported that overall IPD hospitalization rates among children aged <2 years decreased from 20.30 to 3.97 cases/1,000 admissions, and PCV10-type IPD fell from 16.47 to 0.44 cases/1,000 admissions, within two years after PCV10 introduction [18]. In the state of Paraná, southern Brazil, early post-PCV10 surveillance data (2010-2011) indicated a decline in pneumococcal meningitis incidence due to PCV10 serotypes, especially in children aged ≤2 years (from 6.01 to 2.49 cases/100,000 inhabitants) [27]. A national laboratory-based surveillance study that examined IPD rates up to five years after PCV10 introduction in Brazil reported a decline in PCV10-type IPD of 83.4% in children aged 2 months through 4 years, and 53.4% in adults aged ≥65 years [19]. The present study, encompassing nine years post-PCV10 introduction, provides additional evidence of a sustained impact of PCV10 on IPD in Brazil, particularly among children in the first two years of life.
Our findings of reduced PCV10-type IPD incidence in older age groups after PCV10 introduction are consistent with other studies reporting indirect effects from PCV vaccination of infants [12,28]. A meta-analysis using data from 34 different countries found that, following PCV7 introduction, vaccine-type IPD in all age groups (including non-vaccine-eligible age groups) was reduced by about 90% within approximately nine years; based on available data, the model predicted a similar pattern of reduction in PCV13-unique serotypes following the transition from PCV7 to PCV13 [12]. However, data on indirect PCV effects from low- and middle-income countries are more limited. Our data covering a nine-year period following PCV10 introduction in Brazil suggest a robust population-level reduction in PCV10-type IPD. Most low- and middle-income countries do not currently recommend PCV for adults and instead rely on high coverage in the childhood vaccination program and indirect PCV effects to reduce vaccine-type disease in adults. Data from this study and other assessments of indirect effects can inform evidence-based decisions about adult pneumococcal vaccination.
As expected, an increased diversity of non-PCV10 serotypes was found after PCV introduction. Some of the non-PCV10 types identified in the present study, including 12F, 3, 6A, 8, 17F, and 15B, have been a common cause of IPD in other settings after PCV introduction [11,29]. Yet despite changes in serotype distribution, there was no significant increase in IPD due to non-PCV10 serotypes, including serotype 19A. IPD due to serotype 19A increased in many settings following PCV7 introduction [30]. PCV10 was thought to potentially offer cross-protection against 19A disease based on immunogenicity data [31]. Although some early studies suggested possible declines in 19A disease or effectiveness of PCV10 against 19A disease [32], the overall evidence has not shown a PCV10 impact on 19A disease [19,33,34]. Furthermore, evidence of increased prevalence of 19A carriage after introduction of PCV10 has been documented in different settings [35-38]. We observed a 76% reduction in IPD due to serotype 19A in the late post-PCV10 period compared to the pre-vaccine baseline; however, this reduction is similar to the reduction in all non-PCV10-type IPD (73%), suggesting that the observed declines in 19A disease are likely attributable to non-vaccine factors.
The study has several limitations. It does not account for other (non-PCV10) factors that could influence the burden of pneumococcal disease over time, such as changes in the prevalence of HIV, nutritional status, socioeconomic status, or access to healthcare (i.e., treatment of non-invasive pneumococcal infections before they become invasive). Changes in the sensitivity of the surveillance system also likely affected the observed IPD incidence.
During the early post-PCV10 period, laboratory surveillance was enhanced, as the State of Bahia was one of the sentinel points for evaluating PCV10 effectiveness [14]. More active surveillance likely led to an increased detection of IPD cases during the early period of PCV10 introduction (2010-2012), as reflected in the high IRRs for this period. During the subsequent period (2013-2018), surveillance was more passive and may reflect under-ascertainment of IPD, which could exaggerate the apparent impact of PCV10 during the intermediate and late post-PCV10 periods. It is possible that some patients were missed because they were treated in hospitals that were not part of the IPD surveillance network. In addition, since patient recruitment was based on CSF/blood culture results, ascertainment of IPD cases may have been affected by the previous use of antimicrobial agents. Another limitation is the predominance of meningitis cases (61.2%), reflecting an under-ascertainment of bacteremic pneumonia cases.
In conclusion, the study adds to a growing evidence base of the considerable health benefits resulting from the inclusion of PCV10 in the Brazilian immunization program. The incidence of PCV10-type IPD has fallen across all age groups, with no cases identified in children under two years of age in the late post-PCV10 period. The incidence of non-PCV10-type IPD also declined, suggesting that non-vaccine factors are contributing to a reduced IPD burden. Among the cases remaining in the late post-PCV10 period, newly formulated, higher-valent vaccines have the potential to prevent more IPD.
Table 2. Incidence of the most common non-PCV10 serotypes causing IPD cases in the metropolitan region of Salvador (Bahia, Brazil) according to PCV10 period.
Fig. 1. Number of IPD cases in the metropolitan region of Salvador according to vaccine period, stratified by vaccine composition.
Table 1. Incidence of IPD cases in the Salvador metropolitan region (Bahia, Brazil), from 2008 to 2018, stratified by age and PCV10 vaccine serotype.
Convergent validity of commonly used questions assessing physical activity and sedentary time in Swedish patients after myocardial infarction
Background: Guidelines recommend regular physical activity (PA) and decreased sedentary time (SED) for patients after myocardial infarction (MI). Therefore, valid self-assessment of PA is vital in clinical practice. The purpose of this study was to assess the convergent validity of commonly used PA and SED questions recommended by the National Board of Health and Welfare (NBHW) and the national SWEDEHEART registry, using accelerometers as the reference method, in patients after MI.
Methods: Data were obtained 2017-2021 among Swedish men and women (180 assessments). Participants answered five commonly used PA and SED questions (by NBHW and SWEDEHEART) and wore an accelerometer (Actigraph GT3X) for seven days. Convergent validity was assessed gradually by Kruskal-Wallis, Spearman's rho, weighted kappa, and ROC analyses. Misclassification was explored by chi-square analyses with Benjamini-Hochberg adjustment.
Results: The strongest correlation (r = 0.37) was found for the SED-GIH question (NBHW). For PA, no specific question stood out, with correlations of r = 0.31 (NBHW) and r = 0.24-0.30 (SWEDEHEART). For all questions (NBHW and SWEDEHEART), there was a high degree of misclassification (congruency 12-30%), affecting the agreement (0.09-0.32) between self-reported and accelerometer-assessed time. The SED-GIH, PA-index, and SWEDEHEART-VPA questions had the strongest sensitivity for identifying individuals with high SED (0.72) or low PA (0.77 and 0.75).
Conclusion: The studied PA and SED questions may provide an indication of PA and SED level among patients with MI in clinical practice and could be used to form a basis for further dialogue and assessment. Further development is needed, since practical assessment tools of PA and SED are desirable.
Supplementary Information: The online version contains supplementary material available at 10.1186/s13102-022-00509-y.
Introduction
Cardiovascular disease (CVD) is the major cause of death in Europe [1]. There is evidence that regular physical activity (PA), including exercise and everyday PA, as well as limited sedentary time (SED), lowers the risk of hospitalization and premature mortality [2-5]. International guidelines strongly recommend individualised PA and SED recommendations for patients with CVD [6,7]. Therefore, valid self-reported assessment is a vital component in clinical practice. Questionnaires are an easy and inexpensive way of describing previous PA behaviour, type of activity, and relative intensity [8]. However, questionnaires suffer from potential errors due to social desirability and the difficulty of accurately estimating duration and absolute intensity [8,9].
The Swedish National Board of Health and Welfare (NBHW) recommends two PA questions (the NBHW-PA questions) to assess levels of everyday PA, exercise, and a combination of these [10]. It also recommends a question regarding the amount of time spent sedentary (SED-GIH) [11]. A previous study explored the association of the NBHW PA and SED questions with health care utilisation and mortality among patients with CVD [3]. The results indicated that higher levels of everyday PA and lower time spent sedentary were associated with lower readmission rates. In addition, both higher exercise frequency and everyday PA were linked to a lower mortality risk [3].
The Swedish national quality registry for acute coronary and cardiac rehabilitation care, SWEDEHEART, contains questions based on the PA recommendations by Haskell et al. [12], focusing on moderate and/or vigorous intensities of PA (the SWEDEHEART-PA questions) [13]. Data based on the SWEDEHEART questions have linked high levels of both moderate and vigorous PA with a lower risk of rehospitalization and all-cause mortality in patients with myocardial infarction (MI) [2]. Thus, the current questions seem to have an acceptable predictive validity for identifying individuals with a higher risk of rehospitalization and all-cause mortality [2,3]. However, there is still a gap in knowledge regarding the convergent validity of the questions commonly used in clinical practice among individuals with atherosclerotic CVD, i.e., whether the questions can be used to estimate an individual's PA level. The objective of the present study was to assess the convergent validity of the PA and SED questions in patients after an MI using accelerometers as the reference method.
Methods
This was a single-center study performed at the Department of Cardiology at Danderyd Hospital, Stockholm, Sweden. Individuals were informed of the study and invited to participate by a nurse at the first routine follow-up visit, six to eight weeks after MI. Data were collected via questionnaires, accelerometers, diaries, and the national quality registry SWEDEHEART between October 2017 and May 2021. The participants answered a questionnaire concerning PA and SED both at six to eight weeks and at 10-12 months post MI. Immediately after the visits, an accelerometer and a diary were posted to the participant, with instructions on how to wear the device and to start the accelerometer assessment directly. After the measuring period, participants returned the accelerometer and the diary by prepaid postage to the Swedish School of Sport and Health Science. Covariates were collected from a sub-register of SWEDEHEART, the SEPHIA registry, which focuses on outpatient cardiac rehabilitation up to one year after MI at the responsible cardiac department [14]. The following variables were obtained: gender, age, occupation, smoking habits, body mass index (BMI), systolic and diastolic blood pressure, left ventricular ejection fraction (LVEF), and diabetes (ICD E10-E11).
Study participants
Individuals aged 18-80 years with newly diagnosed MI (ICD I21) registered in the SWEDEHEART registry were included in the study. Individuals who were physically disabled (wheelchair dependent) or had a reduced ability to answer the questions were excluded by not being asked to participate.
Physical activity and sedentary time questionnaire
Six commonly used PA and SED questions were included in the study (the NBHW-PA and SED-GIH questions and the SWEDEHEART-PA questions). The NBHW PA and SED questions use different time frames: the NBHW-PA questions focus on PA in an ordinary week, while the SED question focuses on an ordinary day. The questions on everyday PA and physical exercise formed a validated PA-index (3-19 points) of total PA level [10]. This was obtained by multiplying the category of exercise (one to six) by two, to account for a potentially higher intensity, and then adding the category of everyday PA (one to seven). The same index has been used in previous surveys to assess the approximate number of individuals who achieve ≥ 150 min of MVPA [10]. The cut-off was set at nine.
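A minimal sketch of the PA-index computation just described, assuming only what is stated above (exercise category 1-6 doubled, everyday PA category 1-7 added, cut-off at nine); the function names are ours, not from the NBHW instrument:

```python
# Sketch of the NBHW PA-index: 2 x exercise category (1-6) + everyday PA
# category (1-7), giving an index of 3-19; values >= 9 are taken to indicate
# roughly meeting the 150 min/week MVPA recommendation.
def pa_index(exercise_category: int, everyday_pa_category: int) -> int:
    if not (1 <= exercise_category <= 6 and 1 <= everyday_pa_category <= 7):
        raise ValueError("exercise must be 1-6, everyday PA must be 1-7")
    return 2 * exercise_category + everyday_pa_category

def meets_mvpa_recommendation(index: int, cutoff: int = 9) -> bool:
    return index >= cutoff

print(pa_index(1, 1), pa_index(6, 7))             # index bounds: 3 and 19
print(meets_mvpa_recommendation(pa_index(3, 4)))  # 10 -> True
```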
SWEDEHEART-PA questions
The PA questions in the SWEDEHEART registry focus on PA during the previous week and are listed below. The individuals can choose between 0-7 days/week [15].
• SWEDEHEART-MVPA: "Number of physical activity sessions of at least 30 min (two 15-min sessions can be combined into one 30-min session) in the last 7 days, with a minimal intensity of fast walking."
The following two questions separate moderate and vigorous intensity levels of PA.
• SWEDEHEART-MPA: "How many days in the last week did you do at least 30 min, total time (at least 10 min at a time), of physical activity that made you slightly out of breath and gave a slightly elevated heart rate?"
• SWEDEHEART-VPA: "How many days during the last week did you do some form of continuous vigorous physical activity/exercise (at least 20 min) that made you out of breath and gave you an elevated heart rate?"
Accelerometer data
An accelerometer provides information on body movements, which can be translated into data on PA using validation algorithms. Such translation gives reasonably valid data on the duration, frequency, and absolute intensity of PA and SED [16,17]. Accelerometers (Actigraph GT3X monitor, Pensacola, Florida, USA) were used as the convergent PA and SED assessment method. Participants were encouraged to wear the monitors on the right hip, 24 h a day for ten consecutive days [18], except during water-based activities. The participants kept a diary during the period they carried the accelerometer, noting the time they went to bed and woke up as well as time spent without the accelerometer. These periods (if ≥ 40 min) were excluded from the accelerometer analyses. When diary data on sleeping hours were missing, the time between 10:00 PM and 07:00 AM was considered night and excluded from the accelerometer analyses.
The accelerometers and data files were processed using the software Actilife, version 6.13.4 (ActiGraph LLC, Pensacola, FL, USA). Data were collected tri-axially using a sampling rate of 30 Hz and, after extraction, were down-sampled and saved as 60-s epochs, with activity intensity based on vector magnitude. The normal frequency filter was applied. Additionally, the Choi algorithm was used to identify non-wear time [19], defined as a minimum of 90 consecutive minutes with no movement, i.e., 0 counts per minute (cpm), with an allowance of a maximum of two minutes of movement with intensities up to 199 cpm. Cut-points were used to describe the daily PA behaviour, using the following components: time spent sedentary (SED, 0-199 cpm), in light physical activity (LIPA, 200-2689 cpm), in moderate PA (MPA, 2690-6166 cpm), and in vigorous physical activity (VPA, ≥ 6167 cpm) [20,21]. The mean daily time in SED, LIPA, MPA, and VPA was calculated as the sum of each variable over all valid days divided by the number of valid days. In addition, time in MPA and VPA was summed and termed moderate-to-vigorous PA (MVPA).
Statistics
To be included in the analyses, individuals had to answer the questionnaire and have valid accelerometer data for at least seven days, defined as ≥ 600 min of data per day after non-wear time had been excluded. The descriptive data are presented as medians with 25th and 75th percentiles (Q1-Q3), or as numbers and proportions. Before the Spearman and weighted kappa analyses, the continuous accelerometer data were divided into the same categories as the different answer options for the PA (everyday PA, exercise, number of days in MVPA, MPA, VPA) and SED questions.
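As a minimal sketch of the categorisation just described, the snippet below bins 60-s epochs (in cpm) with the stated cut-points and averages daily minutes over valid days. The toy data are fabricated, and non-wear and night-time epochs are assumed to have been removed already (via the Choi algorithm and the diary):

```python
# Classify 60-s accelerometer epochs into intensity categories and compute
# mean daily minutes over valid days (>= 600 min of wear time per day).
from statistics import mean

CUTPOINTS = [("SED", 0, 199), ("LIPA", 200, 2689),
             ("MPA", 2690, 6166), ("VPA", 6167, float("inf"))]

def classify(cpm: float) -> str:
    for name, lo, hi in CUTPOINTS:
        if lo <= cpm <= hi:
            return name
    raise ValueError(cpm)

def daily_minutes(day_epochs):
    """Minutes per intensity category for one day of 60-s epochs."""
    out = {name: 0 for name, *_ in CUTPOINTS}
    for cpm in day_epochs:
        out[classify(cpm)] += 1
    out["MVPA"] = out["MPA"] + out["VPA"]   # MPA + VPA summed, as in the text
    return out

def mean_over_valid_days(days, min_wear_minutes=600):
    valid = [daily_minutes(d) for d in days if len(d) >= min_wear_minutes]
    return {k: mean(day[k] for day in valid) for k in valid[0]} if valid else {}

# Toy example: two fabricated days of wear time.
day1 = [50] * 500 + [1000] * 80 + [3000] * 30
day2 = [120] * 520 + [800] * 70 + [7000] * 10
print(mean_over_valid_days([day1, day2]))
```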
For the PA-index, the average number of minutes in MVPA was kept as a continuous variable. Convergent validity was explored gradually:
• Kruskal-Wallis analyses with Bonferroni correction were performed to investigate the differences in median accelerometer-derived time between the different categories of the PA and SED questions.
• Multiple linear regressions were performed to control for repeated measurements among individuals with two assessments; the study id was used as a covariate.
• Correlations between the categorised accelerometer data and the PA and SED-GIH questions were calculated using Spearman's rho (r). The associations were interpreted as weak (r < 0.10), modest (r = 0.10-0.29), or moderate (r = 0.30-0.49).
• ROC analyses were used to assess the ability of the questions to identify individuals not meeting the PA [6,7] and SED [24] recommendations and risk assessments.
For the ROC analyses, data are presented as the area under the curve (AUC) with 95% confidence intervals. AUC ranges from 0 to 1, where a poor model has an AUC value of 0.5 and an excellent model a value of 1. Sensitivity and specificity analyses were used to identify the proportion of true-positive and true-negative answers to the PA and SED questions, based on the dichotomized accelerometer data. The point estimate (answer alternative) that generated the strongest combination of sensitivity and specificity was chosen. Lastly, calculations in Excel were performed to assess how well the PA and SED-GIH questions corresponded to the accelerometer-derived data: congruent, over-, or underestimating. Then, chi-square analyses with Benjamini-Hochberg adjustments were used to assess differences in congruence, under-, and overestimation by gender, age, LVEF, systolic blood pressure, diabetes, and BMI. All statistical analyses were performed using SPSS 27.0 software (IBM Corp., Armonk, NY, USA).
Results
A total of 123 individuals who answered the questionnaire and provided complete accelerometer data were included (Table 1). The median age was 67 years, with a majority being men with a preserved LVEF. A high proportion of the participants were regularly physically active. The accelerometer data showed that 77% achieved at least 150 min of MVPA during a week; however, 47% had ≥ 9.5 h of daily SED. Individuals excluded from the analyses (n = 65) did not differ significantly from those included with respect to gender but were significantly younger (median 58, IQR 16 years). Of the included individuals, 56 also provided data from a second assessment (10-12 months after MI), leading to 179 complete assessments included in the analyses. There was internal drop-out for the NBHW SED (n = 2), everyday PA (n = 3), and exercise (n = 2) questions.
Convergent validity
In the Kruskal-Wallis analyses, we noticed differences (p < 0.05) in accelerometer-collected time for different categories of the NBHW PA and SED questions (Fig. 1a). For everyday PA, a difference was found in MPA between the categories "30-59" and "> 300" minutes. Regarding the exercise question, we found differences in VPA between "no time" compared to "60-89" and "> 120" minutes, respectively, as well as a difference between the categories "1-29" and "> 120" minutes. The PA-index could detect differences in accelerometer-derived time in MVPA between individuals categorised as 5 vs 18 or 19. For SED, there were differences in accelerometer-assessed time for individuals self-reporting the two lowest categories compared to individuals reporting > 10 h. For the SWEDEHEART-PA questions, there were significant differences in accelerometer-assessed time for the SWEDEHEART-VPA and MVPA questions (Fig. 1b).
For the SWEDEHEART-VPA question, differences in time in VPA were seen between individuals categorised to zero and seven sessions. For MVPA, differences in time in MVPA were seen between categories two and seven. In the multiple linear regressions (Additional file 1), the study id was not a significant predictor of the relationships; therefore, all assessments (n = 179) were included in the analyses.
The correlation between the SED question and categorical accelerometer data was moderate (r = 0.37), but the agreement was weak (0.09). However, the ability to classify individuals sitting ≥ 9.5 h was good, with an acceptable level of both sensitivity (72%) and specificity (62%) (Table 2). For PA, the correlations between the PA questions and categorical accelerometer data were modest to moderate (r = 0.24-0.31), and no specific question had a significantly stronger correlation (p > 0.47). In general, the weighted kappa analyses showed poor to fair agreement between the answer categories and categorised accelerometer data (agreement 0.10-0.32), with the strongest agreement (fair) for the SWEDEHEART-MVPA and the NBHW everyday PA questions. Results from the ROC analyses are presented in Table 2, with graphical results in Additional file 2. For classifying individuals as achieving ≥ 150 min of MVPA, the NBHW PA-index had the best sensitivity (77%), while the NBHW exercise question had the best specificity (85%) for identifying individuals not fulfilling ≥ 75 min a week of VPA (Table 2). Correlation, agreement, and area under the curve for the two specific timepoints are presented in Additional file 3, showing random but not systematic differences.
Over- and under-reporting
Congruence between individuals' self-reported PA and SED levels and the accelerometer-measured data varied between 12 and 30% (Fig. 2a, b). In general, participants with a high self-rated PA level over-reported their PA to a larger degree. For SED, the majority (83%) under-reported their sedentary time; however, there were no differences in misclassification between the different SED categories. There were few other factors affecting the reporting pattern. Older individuals over-reported everyday PA to a higher degree than younger individuals (63% vs 44%). Men under-reported the number of PA sessions in MVPA (SWEDEHEART) compared to women (39% vs 18%) (Additional file 4). Lastly, older and overweight/obese individuals over-reported time in VPA (SWEDEHEART) compared to younger individuals (68% vs 60%) and those with a BMI < 25 kg/m² (71% vs 53%).
Discussion
The main finding of this study was that, for the PA and SED questions (by the NBHW and SWEDEHEART) frequently used in clinical practice, there was a high degree of misclassification (over- and under-reporting), with modest to moderate correlation and poor to fair agreement. Nonetheless, they may provide some broad indication of PA and SED levels, which could form the basis for further discussion and assessment on an individual level in clinical practice.
Sedentary question (SED-GIH)
The updated European guidelines on CVD prevention highlight the importance of decreasing SED; it is therefore important that the SED-GIH question has an acceptable level of convergent validity. The correlation with accelerometer-assessed SED (r = 0.37) was somewhat lower than in a study of a general population that showed a correlation between the SED-GIH question and total stationary time of r = 0.48 [11].
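To illustrate the agreement statistics reported above (Spearman's rho, weighted kappa, and ROC AUC), here is a minimal sketch using scipy and scikit-learn on fabricated paired categories; it is not the study's SPSS analysis:

```python
# Stepwise convergent-validity sketch: correlation, agreement, and ROC AUC
# between self-reported answer categories and categorised accelerometer data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score, roc_auc_score

self_report   = np.array([0, 1, 1, 2, 3, 3, 4, 2, 1, 0])  # answer categories
accelerometer = np.array([0, 0, 2, 2, 2, 3, 4, 1, 1, 1])  # categorised data

rho, p = spearmanr(self_report, accelerometer)
kappa = cohen_kappa_score(self_report, accelerometer, weights="linear")

# ROC: can the question identify people below an accelerometer-defined cut-off?
low_pa = (accelerometer <= 1).astype(int)  # reference label for "low PA"
score = -self_report                       # lower answers -> higher risk score
auc = roc_auc_score(low_pa, score)
print(f"rho={rho:.2f} (p={p:.3f}), weighted kappa={kappa:.2f}, AUC={auc:.2f}")
```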
The convergent validity of other SED questionnaires among individuals in cardiac rehabilitation has previously been explored in two small studies, where modest (r = 0.19-0.21) correlations with accelerometer-collected data were found [25,26]. The agreement between the SED-GIH question and accelerometer-assessed SED time was weak. This might be because eight out of ten participants underestimated their sedentary time, causing misclassification. The difficulty of estimating SED has been shown in other studies, with misclassifications of minus 85 min per day [27] and under-estimations of SED of approximately 70% [11]. To decrease the risk of misclassification, Gardner et al. suggested asking about time spent in seated activities, e.g., TV viewing or computer use [28], instead of asking about sitting time. Despite the under-estimation of sitting time, the ROC analyses showed a modest ability (0.69) to identify individuals with more than 9.5 h/day of accelerometer-assessed SED [24]. This is supported by Kallings et al. in a general population study (0.71) [11].
The NBHW and SWEDEHEART PA questions
For PA, the correlations of the NBHW and SWEDEHEART questions with accelerometer-derived data were modest to moderate. There were no significant differences in correlation between the NBHW-PA questions and the SWEDEHEART ones. The correlation of the NBHW PA-index with accelerometers as the criterion method showed results similar to a study in a general population, r = 0.27 (vs 0.31) for MVPA min/day [10]. To our knowledge, few studies have explored the convergent validity between self-rated and accelerometer-assessed PA for individuals with CVD. However, Biswas et al. found a moderate correlation (r = 0.49) between accelerometer-assessed and self-rated time in MVPA in this group [26].
For the PA questions, agreement with accelerometer-assessed time was poor to fair (0.10-0.32), with the strongest agreement for the SWEDEHEART-MVPA and the NBHW everyday PA questions. As in a previous study [27], there was a high degree of overestimation for all PA questions. Interestingly, there were small differences in misclassification between groups, with women, obese, and older participants over-reporting time and intensity of PA to a higher degree than men, individuals with a lower BMI, and younger participants. This might be because accelerometers do not consider the individual's physical capacity (relative intensity), unlike self-reported data. This is clinically important, as relative intensity is valuable for individualised PA recommendations. The ROC-curve analysis of the PA-index was in line with a study in a general population (0.66) [10]. In addition, the PA-index had the strongest sensitivity (0.77) for identifying individuals not achieving 150 min of accelerometer-collected MVPA, which indicates that the PA-index might be used to indicate low PA among patients with MI as well as in the general population. For identifying individuals not achieving three sessions per week of accelerometer-collected VPA, the sensitivity of the SWEDEHEART-VPA question was equivalent (0.75).
Strengths and limitations
The most important strength of this study is that it includes a large cohort of CVD patients, focusing on individuals with MI. It also includes a higher proportion of older individuals compared to previous studies [10,11]. Generalization of the results is possible, as the median age and proportion of women are similar to those of patients included in the SEPHIA registry [15].
However, the included individuals were older than those excluded, which may slightly affect the generalisability of the SWEDEHEART-VPA and everyday PA questions, since older individuals were classified as over-reporting to a higher degree. A limitation is the lack of consensus on cut-off values for accelerometer data in individuals with CVD. This is troublesome, and the time spent at different levels of PA intensity should therefore be interpreted with caution, especially in patients with reduced cardiorespiratory fitness, which leads to differences between absolute and relative intensity [18]. In this study, where a majority had a preserved LVEF, we used cut-off points in line with several previous studies on general populations [27] and patients in cardiac rehabilitation [18].
Accelerometers were used as the criterion method. However, accelerometer measurements do have disadvantages. For example, information about relative intensity and activities such as bicycling, strength training, and swimming, all common types of exercise within cardiac rehabilitation [6], is omitted. This might have led to the differences between self-rated and accelerometer-assessed PA levels. In addition, the accelerometer assesses stationary behaviour and not sitting time per se, which the SED-GIH question focuses on. This might have contributed to the high amount of under-reporting. The same limitation was apparent when comparing accelerometer-collected MPA with the everyday PA question, which focuses on activities performed in everyday life without specifying the intensity. Another limitation is that participants may have become more conscious of their PA behaviour during the study, affecting how they answered the questions and their PA behaviour during the measurement period. The participants answered the questionnaire prior to wearing the accelerometer; this may be a limitation, since the SWEDEHEART-PA questions focus on PA during the last week. However, several studies indicate that accelerometer data collected over seven days tend to be consistent between weeks (high reliability) [29-31]. Non-wear time is important to consider, since it affects PA and SED assessed by the accelerometer [32]; using a diary to register sleep and non-wear time attenuated this limitation, and these times were excluded before the analyses. Another strength is that all PA and SED questions consist of predetermined answer categories, which have been shown to increase validity compared to open-answer questions [10]. Exploring an individual's total PA behaviour is complex, and these questions have limited convergent validity and low precision. Therefore, a combination of both accelerometer-derived data and questionnaires can be recommended [8,9].
Conclusion
The present study is clinically important, as it focuses on SED and PA questions commonly used with patients with CVD, where regular PA and low SED both have a central role in cardiac rehabilitation. The convergent validity of the SED and PA questions is poor to moderate compared to accelerometer-assessed data. The SED-GIH question had the strongest correlation; however, for PA we could not identify any preferable question. In spite of the risk of misclassification when using questionnaires, the questions seem to be a practical and acceptable method that may provide some indication of PA level, which could form a basis for further dialogue and assessment.
^ Accelerometer data include physical activity at a moderate intensity for the same duration as the everyday PA question's different answer categories.
# Accelerometer data include physical activity at a vigorous intensity for the same duration as the exercise question's different answer categories.
* Differed (p < 0.05) from: (a) under-reported compared to 150-300 min and > 300 min; (b) under-reported more than 90-149 min; (c) more congruent than > 300 min; (d) more congruent than 30-60 min and 60-89 min; (e) over-reported more than 30-60 min; (f) over-reported more than all other categories.
(b) Congruence and misclassification for the SWEDEHEART questions compared to accelerometer-assessed time in physical activity. * Differed (p < 0.05) from: SWEDEHEART-VPA: (a) more congruent compared to 3-7 sessions; (b) over-reported more compared to 1-2 sessions; (c) over-reported more compared to 3 sessions; (d) over-reported more compared to 0-2 sessions; (e) more congruent compared to 1-4 sessions. SWEDEHEART-MVPA: (f) under-reported more compared to 0, 1, 3, and 4 sessions; (g) over-reported more compared to 1 session.
The value of using a deep learning image reconstruction algorithm of thinner slice thickness to balance the image noise and spatial resolution in low-dose abdominal CT
Background: Traditional reconstruction techniques have certain limitations in balancing image quality and reducing radiation dose. The deep learning image reconstruction (DLIR) algorithm opens the door to a new era of medical image reconstruction. The purpose of the study was to evaluate DLIR images at 1.25 mm thickness for balancing image noise and spatial resolution in low-dose abdominal computed tomography (CT), in comparison with the conventional adaptive statistical iterative reconstruction-V at 40% strength (ASIR-V40%) at 5 and 1.25 mm.
Methods: This retrospective study included 89 patients who underwent low-dose abdominal CT. Five sets of images were generated: ASIR-V40% at a 5 mm slice thickness and at 1.25 mm (high-resolution), and DLIR at 1.25 mm using 3 strengths: low (DLIR-L), medium (DLIR-M), and high (DLIR-H). Qualitative evaluation was performed for image noise, artifacts, and visualization of small structures, while quantitative evaluation was performed for standard deviation (SD), signal-to-noise ratio (SNR), and spatial resolution (defined as the edge rising slope).
Results: At 1.25 mm, DLIR-M and DLIR-H images had significantly lower noise (SD in fat: 14.29±3.37 and 9.65±3.44 HU, respectively), higher SNR for liver (3.70±0.78 and 5.64±1.20, respectively), and higher overall image quality (4.30±0.44 and 4.67±0.40, respectively) than the respective values in ASIR-V40% images (20.60±4.04 HU, 2.60±0.63, and 3.77±0.43; all P values <0.05). Compared with the 5 mm ASIR-V40% images, the 1.25 mm DLIR-H images had lower noise (SD: 9.65±3.44 vs. 13.63±10.03 HU), higher SNR (5.64±1.20 vs. 4.69±1.28), and higher overall image quality scores (4.67±0.40 vs. 3.94±0.46) (all P values <0.001). In addition, DLIR-L, DLIR-M, and DLIR-H images had significantly higher spatial resolution in terms of edge rising slope (59.66±21.46, 58.52±17.48, and 59.26±13.33, respectively, vs. 33.79±9.23) and significantly higher image quality scores for the visualization of fine structures (4.43±0.50, 4.41±0.49, and 4.38±0.49, respectively, vs. 2.62±0.49) than the 5 mm ASIR-V40% images.
Conclusions: The 1.25 mm DLIR-M and DLIR-H images had significantly reduced image noise and improved SNR and overall image quality compared to the 1.25 mm ASIR-V40% images, and they had significantly improved spatial resolution and visualization of fine structures compared to the 5 mm ASIR-V40% images. DLIR-H images further reduced image noise compared with the 5 mm ASIR-V40% images, and DLIR-H was the most effective technique at balancing image noise and spatial resolution in low-dose abdominal CT.
Introduction
The widespread use of computed tomography (CT) lies in its availability, speed, and diagnostic performance. Almost one-third of CT examinations involve the abdomen (1), and the related radiation exposure and potential carcinogenicity to this area have continued to attract research attention (2-4). Many of CT's benefits are overshadowed by concerns about radiation dose, especially for young patients and patients undergoing repeated CT scans (2,5). Therefore, achieving dose reduction while maintaining image quality is an area of intense research focus. Usually, this objective can be achieved by improving the technology of CT scanners and software (6).
Traditional reconstruction methods, such as filtered back projection (FBP), generate higher image noise at reduced radiation doses, leading to image-quality degradation. The improvement of computing power has given rise to iterative reconstruction (IR) technology, and many clinical studies have shown that IR algorithms can be applied to reduce radiation dose (7-10). However, the nonlinear and nonstationary properties of IR algorithms cause the spatial resolution to become dependent on contrast and radiation dose, and when applied at high levels, IR algorithms alter the texture of the images (11,12). The need to balance image noise, spatial resolution, and image texture limits the ability of IR algorithms to further reduce the radiation dose.
Faced with these limitations of IR, GE Healthcare (Waukesha, WI, USA) developed a deep learning image reconstruction (DLIR) algorithm (TrueFidelity) trained with high-quality FBP datasets of both patient and phantom scans to learn how to differentiate noise from signal. The design goal of the DLIR algorithm is to generate a reconstructed image that outperforms previous IR techniques in terms of image quality, dose performance, and reconstruction speed. The DLIR engine generates the output image from an input sinogram acquired at a low radiation dose using deep convolutional neural network (DCNN)-based models. During training, the DCNNs analyze the data and synthesize a reconstruction function, which is optimized through the learning process and extensive testing on validation datasets. DLIR was developed to achieve image quality similar to that of high-dose FBP, as FBP is the ideal image reconstruction technique in a high-dose, optimal-scan environment. DLIR images can be reconstructed in three modes: low (DLIR-L), medium (DLIR-M), and high (DLIR-H); the final output image is generated by varying the degree of noise included in the image for each mode. The introduction of L, M, and H is intended to provide a good balance between noise reduction and improved spatial resolution, with L favoring spatial resolution and H favoring noise reduction. Therefore, our study aimed to evaluate the ability of the DLIR algorithm (TrueFidelity) to balance image noise and spatial resolution in low-dose abdominal CT, compared with the conventional adaptive statistical iterative reconstruction-V (ASIR-V) algorithm.
Patient population
This retrospective study included 89 patients who underwent low-dose chest CT scans between May 2018 and August 2018. To ensure complete inclusion of the entire chest, the scan range of a chest CT normally contains a small portion of the upper abdomen; this portion of the scan data was used for the study, and the image analysis for this paper was therefore limited to the anatomy and blood supply of the upper abdomen. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by the ethics board of the First Affiliated Hospital of Xi'an Jiaotong University, and individual consent for this retrospective analysis was waived.
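GE's TrueFidelity engine is proprietary and, as described above, operates on projection (sinogram) data, so it cannot be reproduced here. Purely to illustrate the general idea of a DCNN learning to separate noise from signal, the following toy image-domain residual denoiser (PyTorch) is a sketch under our own assumptions; every layer size and name is invented:

```python
# Toy residual CNN denoiser: the network learns the noise component of a
# low-dose image, and subtracting it approximates a high-quality reference.
# This is NOT the TrueFidelity/DLIR pipeline; it is schematic only.
import torch
import torch.nn as nn

class ToyResidualDenoiser(nn.Module):
    def __init__(self, channels: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.net(x)  # predict the noise, then subtract it

# Training would minimise, e.g., MSE against high-dose reference images:
model = ToyResidualDenoiser()
low_dose = torch.randn(4, 1, 64, 64)    # fabricated batch
high_dose = torch.randn(4, 1, 64, 64)
loss = nn.functional.mse_loss(model(low_dose), high_dose)
loss.backward()
```

Residual learning (predicting the noise and subtracting it) is a common design choice in denoising networks, because the identity part of the mapping does not have to be learned.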
Data acquisition and image reconstruction
All patients underwent the low-dose chest CT scan on a 256-row multidetector CT system (Revolution CT, GE Healthcare). Patients were placed in the supine position, and scanning was performed in the craniocaudal direction in a single breath hold. The automatic tube current modulation technique (SmartmA) was used to automatically adjust the tube current to achieve consistent image quality across the study population. The scanning acquisition parameters were as follows: tube voltage, 120 kVp; tube current range, 10-130 mA; noise index, 16; rotation time, 0.5 seconds; detector collimation, 128×0.625 mm; helical pitch, 0.992:1; and scan field of view, 50 cm with the "Large Body" bowtie filter. Using a high noise index of 16 (compared with the normal 9 for routine abdominal CT applications) significantly reduced the required radiation dose. The scan data for the upper abdomen were reconstructed into a standard image set at 5 mm using ASIR-V with a 40% blending percentage between FBP and ASIR-V (ASIR-V40%), as in routine clinical use. Additionally, images at a thinner slice thickness of 1.25 mm, simulating higher spatial resolution but lower detected x-ray signal strength per detector cell, were generated using ASIR-V40% and DLIR at three reconstruction strength levels: DLIR-L, DLIR-M, and DLIR-H. All images were reconstructed using the "standard" reconstruction kernel, and the image sets were transferred to workstations for data measurement and image analysis.
Quantitative analysis
Quantitative image analysis was performed on an Advantage Workstation (AW, Version 4.7, GE Healthcare), blindly, by a radiologist who was board-certified in diagnostic radiology and had 5 years of experience in abdominal CT imaging. The CT attenuation value (CT value, in HU) and standard deviation (SD) of abdominal subcutaneous fat, hepatic parenchyma, paraspinal muscle, and the abdominal aorta were measured using a region of interest (ROI) with a diameter of 5 mm on the image section containing the trunk of the portal vein. The copy-and-paste functions were used for ROI placement so that the same areas of interest could be drawn in the same location on each reconstruction. All measurements were performed 3 times over 3 consecutive slices under identical display window width (400 HU) and window level (40 HU) settings, with the average values used for further analysis. ROIs were placed in uniform areas away from artifacts, as shown in Figure 1. Image quality is represented by image noise (SD) and signal-to-noise ratio (SNR; SNR = CT value/SD) (13,14).
To further quantitatively evaluate the spatial resolution of the different reconstruction algorithms, we also investigated the interface between fat and liver parenchyma, whose contrast provides an adequate CT attenuation difference for measuring spatial resolution. A line was drawn from the fat to the liver parenchyma, and all CT values along the line were measured, as shown in Figures 2,3. The CT values of all points on the line from fat to liver parenchyma were measured in ImageJ (US National Institutes of Health, Bethesda, MD, USA), with the line marked at the same position on all reconstructions. The CT values of the reconstructed images at the same position were imported into a Microsoft Excel table using ImageJ.
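A minimal sketch of the ROI-based noise and SNR measurement just described, on a 2-D slice of HU values; the circular-ROI helper, the coordinates, and the fabricated image are ours:

```python
# ROI mean (CT value), SD (noise), and SNR = CT value / SD on a CT slice.
import numpy as np

def roi_stats(image_hu: np.ndarray, center_rc, diameter_px: int):
    """Mean CT value (HU) and SD inside a circular ROI."""
    r0, c0 = center_rc
    rr, cc = np.ogrid[:image_hu.shape[0], :image_hu.shape[1]]
    mask = (rr - r0) ** 2 + (cc - c0) ** 2 <= (diameter_px / 2) ** 2
    vals = image_hu[mask]
    return float(vals.mean()), float(vals.std(ddof=1))

def snr(mean_hu: float, sd_hu: float) -> float:
    return mean_hu / sd_hu  # SNR = CT value / SD, as defined in the text

rng = np.random.default_rng(0)
slice_hu = rng.normal(60, 10, size=(512, 512))  # fabricated "liver" image
mean_hu, sd_hu = roi_stats(slice_hu, center_rc=(256, 256), diameter_px=10)
print(f"CT value {mean_hu:.1f} HU, noise {sd_hu:.1f} HU, SNR {snr(mean_hu, sd_hu):.2f}")
```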
The data points between the last dip and the first peak on the profile curve were fitted using the slope formula to calculate the edge rising slope, which was used to represent the spatial resolution of the image (Figure 3; Table 1).
Qualitative analysis
Image quality was assessed qualitatively in terms of image noise, artifacts, and visualization of small structures by 2 blinded readers who were board-certified in diagnostic radiology and had at least 10 years of experience in abdominal CT imaging. The 2 radiologists were unaware of the examination details and examined each case independently after receiving standardized instructions. The reconstructed axial images, with patient information removed, were displayed using a high-resolution monitor on a dual-monitor picture archiving and communication system (PACS) workstation (Figure 4; Table 2). The readers could adjust the image display window and level during the evaluation process. Qualitative image quality, including image noise, artifacts, edge sharpness of organs, and visualization of small structures (e.g., small blood vessels in the liver), was graded on a 5-point scale (15,16). Table 3 lists the detailed evaluation criteria. Finally, the overall image quality was averaged over the scores of the 3 different categories to generate the final score, with a higher score indicating better image quality.
Quantitative image quality analysis
As shown in Table 4, there was no significant difference in CT value among the 5 different reconstruction groups. Compared with the 1.25 mm ASIR-V40% images, DLIR-M and DLIR-H reduced image noise in fat and liver parenchyma, by 28.8% and 52.4% in paraspinal muscle, and by 27.3% and 51.3% in the abdominal aorta, respectively. Compared with the 5 mm ASIR-V40% images, the 1.25 mm DLIR-H reduced the image noise of fat by 29.2%, liver parenchyma by 16.7%, paraspinal muscle by 17.8%, and the abdominal aorta by 14.8%. In regard to SNR, the comparisons were largely consistent with those for noise. As shown in Table 1 and Figure 2, there was no statistically significant difference in the edge rising slope among the 3 DLIR groups. Compared with the 5 mm ASIR-V40% group, the edge rising slope of the DLIR-L, DLIR-M, and DLIR-H groups increased by 76.7%, 73.2%, and 75.4%, respectively. Compared with the 1.25 mm ASIR-V40% group, the edge rising slope of the DLIR-H group increased by 16%.
Qualitative image quality analysis
As shown in Table 2, the consistency between the observations of the 2 readers was very good (kappa values ranged from 0.81 to 1.00). In terms of total image quality, the numerical scores ranged from high to low from DLIR-H (1.25 mm) to ASIR-V40% (5 mm). There was no statistically significant difference among the 3 DLIR groups, while the DLIR-L, DLIR-M, and DLIR-H groups had better image visualization than the ASIR-V40% groups. Regarding image artifacts, there was no statistically significant difference between any of the groups.
Discussion
Our study compared the conventional ASIR-V40% images at the normal 5 mm slice thickness and the thin 1.25 mm thickness with the 3 strength levels of DLIR images at 1.25 mm slice thickness in low-dose abdominal CT. Our preliminary results showed that DLIR had strong advantages in balancing image noise and spatial resolution to provide better overall image quality under low-dose conditions. Compared with the standard ASIR-V40%, DLIR-M and DLIR-H significantly improved the quantitative and qualitative image quality of abdominal sections at the same thin image slice thickness of 1.25 mm.
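Returning to the edge-rising-slope measurement defined in the quantitative analysis above, the sketch below locates the last dip and the first peak on a fabricated fat-to-liver profile and fits a straight line to the points between them; the dip/peak heuristics are our own reading of the description:

```python
# Edge rising slope: fit a line to the profile points between the last dip
# (local minimum on the fat side) and the first peak (local maximum on the
# liver side); the fitted slope (HU per pixel) represents edge sharpness.
import numpy as np

def edge_rising_slope(profile_hu: np.ndarray) -> float:
    grad = np.diff(profile_hu)
    start = np.where(grad > 0)[0][0]        # last dip: where the rise begins
    after = np.where(grad[start:] <= 0)[0]  # first non-increasing step = peak
    end = start + (after[0] if after.size else len(grad) - start)
    xs = np.arange(start, end + 1)
    slope, _ = np.polyfit(xs, profile_hu[start:end + 1], 1)
    return float(slope)

# Fabricated fat (about -100 HU) to liver (about +60 HU) line profile.
profile = np.array([-100, -102, -101, -60, -10, 30, 58, 60, 59, 61], float)
print(f"edge rising slope: {edge_rising_slope(profile):.1f} HU/pixel")
```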
Compared with the commonly used ASIR-V40% at 5 mm, the 1.25 mm DLIR-H, at a quarter of the signal strength, still provided lower-noise images with significantly improved spatial resolution. Conventionally, there is an inverse relationship between image noise and spatial resolution, which has been noted as one of the limitations of many IR algorithms. The ASIR-V algorithm has the advantage of reducing noise (18), but it is difficult to balance image noise and spatial resolution when high-strength ASIR-V is used (15). Studies have confirmed the limitation of texture alteration in high-strength ASIR-V images and have shown that excessively low radiation dose levels increase the risk of an inaccurate diagnosis, such as in the detection of liver lesions (16,17). One study indicated that although the use of ASIR-V at 80% strength significantly reduced noise and improved the contrast-to-noise ratio (CNR), the images were so smooth that they had a negative impact on the diagnosis of lesions (19). On the other hand, DLIR is designed to maintain texture and spatial resolution while reducing noise, so it is expected to significantly impact the optimization of abdominal CT protocols. Clinical studies have demonstrated that deep learning-based image reconstruction methods have higher diagnostic accuracy compared to IR algorithms (20,21). Phantom studies have shown that DLIR improves the image quality of CT scans at medium or lower radiation doses (22). Benz et al. (23) found that, compared with ASIR-V, DLIR significantly reduces noise while providing excellent image quality and the same diagnostic accuracy. Akagi et al. (24) found that DLIR-H could clearly show small metastases on low-dose scans. Higaki et al. (22) conducted a prospective study of 59 patients who underwent standard-dose chest or abdominopelvic CT and found that all pulmonary nodules detected at the standard dose were also detected by DLIR at a low dose. Sun et al. (25,26) found that the DLIR algorithm could also significantly reduce the contrast agent dose and radiation dose in angiography.
The results of our current research showed that DLIR could reduce image noise and significantly improve objective and subjective image quality without excessive smoothing. DLIR-H reduced image noise by more than 50% compared with ASIR-V40% for the 1.25 mm slice thickness images while showing better boundaries, with a higher edge rising slope (P=0.02) and a close-to-significant higher subjective score for the visualization of small structures (P=0.058). DLIR-H thus demonstrated a strong ability to balance high spatial resolution and low image noise. Liver CT has a stringent requirement for high low-contrast resolution. Traditionally, thick-slice images (i.e., 5 mm) have been used in liver CT to achieve low-noise images without requiring an excessive radiation dose. With DLIR, this balanced requirement could be easily achieved. In fact, our results indicated that DLIR-H generated lower-noise images at 1.25 mm slice thickness than did ASIR-V40% at 5 mm slice thickness, with significantly improved spatial resolution in terms of both the quantitative measurement of edge rising slope and the qualitative score for the visualization of small structures (both P values <0.001). Furthermore, we believe that the DLIR algorithm may also be used on the datasets of the newly introduced photon-counting detector CT (27), which might improve image quality even more. This would enable scanning at lower doses and may even be applicable to a wide variety of pathologies.
This study has some limitations. First, we mainly focused on the subjective and objective evaluation of image quality but did not evaluate the impact of DLIR on the detection and diagnosis of specific lesions; a dedicated contrast-enhanced liver CT is needed for such an evaluation. Second, this study was limited by its small sample size, and further studies with larger sample sizes are needed to validate our results.
In summary, our study indicates that DLIR, especially at the highest strength, has a significantly greater ability to reduce image noise and to balance noise, noise texture, and spatial resolution than does the conventional ASIR-V algorithm. DLIR may therefore allow a significant reduction of the radiation dose while preserving the exploratory and diagnostic capability of abdominal CT.
Poor quality of sleep and musculoskeletal pains among highly trained and elite athletes in Senegal
Background: Previous studies have reported that poor sleep quality (PSQ) is associated with musculoskeletal pains (MSP) and poor physical performance in athletes.
Objective: The current study aimed at determining PSQ and its associations with MSP in sub-Saharan athletes.
Methods: A cross-sectional study was conducted among 205 highly trained and 115 elite athletes (age: 25 ± 2 years; body mass index: 22.8 ± 0.9 kg/m²) in Dakar, Senegal, during a competitive season, in a variety of sport disciplines including athletics, basketball, football, rugby, wrestling, and tennis. Quality of sleep and MSP were assessed using the French version of the Pittsburgh Sleep Quality Index (PSQI) and the French version of the Nordic questionnaire, respectively. Pain in body joints during a week was defined as seven-day MSP (MSP-7d), and PSQ as a PSQI score > 5.
Results: In the overall sample, 27.8% (95%CI: 23.2-32.9) suffered from PSQ, with 33.7% (95%CI: 24.7-44.0) in basketball and 24.7% (95%CI: 16.9-34.6) in football. According to athletic status and gender, PSQ was more prevalent among highly trained athletes (66.3%; 95%CI: 55.9-75.3) and men (69.7%; 95%CI: 59.5-78.7). Among athletes with PSQ, 43.8% (95%CI: 33.9-54.2) suffered from MSP-7d, with 36.6% (95%CI: 23.7-42.9) among highly trained athletes and 28.1% among women. Considering body regions, the hips/thighs (14.6%; 95%CI: 8.74-23.4) and upper back (13.5%; 95%CI: 7.88-21.1) were the most affected. Basketball players were more affected by MSP (MSP-7d = 38.5%; 95%CI: 24.9-54.1), particularly at the wrists/hands (MSP-7d = 44.4%; 95%CI: 18.9-73.3; P = 0.04). Based on athletic status, MSP-7d at the neck was higher among highly trained athletes (100%; 95%CI: 56.1-100; p = 0.04). PSQ was associated with basketball (OR: 3.062, 95%CI: 1.130-8.300, p = 0.02) compared to athletics. PSQ and MSP-7d were associated at the wrists/hands (OR: 3.352, 95%CI: 1.235-9.099, p = 0.01) and at the upper back (OR: 5.820, 95%CI: 2.096-16.161, p = 0.0007).
Conclusion: These results indicate that PSQ is considerable among Senegalese athletes and is associated with MSP during a week. Hence, we recommend looking for strategies that optimize sleep quality in order to reduce pain and improve health.
Background
Sport, both in developed and developing countries, requires physiological, psychological, and nutritional follow-up of its major actors, the athletes, for health preservation and the optimization of biomechanical performance [1,2]. This follow-up during training sessions or competitions also allows for assessing performance, profiling athletic talent, identifying the ability to compete, and identifying weaknesses and factors determining physical performance [3]. The follow-up of the athlete emphasizes the notion of recovery through sleep, which has been amply studied [4,5].
Sleep is a physiological condition characterized by a reversible behavioral state, with changes in the level of consciousness and responsiveness to stimuli. Sleep is a fundamental requirement for health and recovery and is actively involved in homeostatic processes that revitalize and restore the main physiological, metabolic, and psychological functions. This condition is necessary for physiological mechanisms essential to life, such as energy restoration, neural plasticity, and secretion of growth hormone [6]. Persistent PSQ has a cumulative long-term negative effect on human health outcomes, resulting in increased susceptibility to infections [7] and non-communicable diseases such as hypertension, dyslipidemia, cardiovascular disease, weight-related issues, metabolic syndrome, type 2 diabetes mellitus, and colorectal cancer [8]. Sleep-deprivation studies report significant associations with decreased cognitive function [9] and decreased mood [10].
Sleep disturbance is known to exert detrimental effects on the physiological and physical performance of athletes and to deteriorate health [11,12]. PSQ has been associated with musculoskeletal pains (MSP) and reported to aggravate them and cause injuries [13]. MSP are frequently observed in athletes and can often be the cause of poor performance and even loss of competition. MSP are a common problem and important warning signals of overuse injury in athletes [14,15]. Injuries in athletes most of the time require significant recovery periods with physiotherapy or surgical intervention [16,17]. The prevention of MSP and injuries constitutes one of the most worrying problems in sports medicine because they result in high economic costs, withdrawal of athletes from training and competition, and impairment of their performance [18,19]. It is therefore appropriate to prevent the occurrence of MSP in athletes by promoting good sleep. Moreover, poor sleep and pain are two interdependent and bidirectional phenomena forming a vicious circle that can disrupt an athlete's performance and health: pain can interrupt or disrupt sleep, just as changes in sleep patterns can influence pain perception [13,20].
In most developing countries of sub-Saharan Africa, sport is a unifying tool requiring appropriate scientific interventions for performance optimization through better follow-up of the physiological adaptations related to the restorative aspect of sleep. However, little information is available on MSP in African athletes. Few studies have been published on MSP [21,22] and, to our knowledge, none has coupled sleep quality with MSP. Hence, the purpose of the present study was to evaluate sleep quality in highly trained and elite athletes in Senegal and to investigate possible associations between sleep quality and MSP.
Study design and participants
This was a cross-sectional, prospective, and analytical study conducted in Dakar, Senegal, over four months. After the purpose of the study and its potential benefits for the health and performance of the athletes were explained to the administrative and technical staff and authorizations were obtained, participants were recruited from professional and amateur clubs in football, basketball, rugby, tennis, athletics, and traditional wrestling. Athletes were interviewed by the investigators, who collected the data by completing the questionnaires. The athletic status (highly trained or elite) of participants was characterized according to the classification of McKay et al.
[23]. We excluded athletes who took caffeine or energy drinks in the evening and those recovering from musculoskeletal trauma or undergoing rehabilitation or physiotherapy interventions.
Sampling
The sample size was determined using Lorentz's formula [N = p(1−p)z²/d², where N is the minimum sample size, p the prevalence, z = 1.96 for 95% confidence, and d = 0.05], with the prevalence of PSQ of 85.71% reported over a month among Canadian athletes [24]. The minimal sample size thus found was 188 participants (a worked check is sketched below). A total of 320 athletes were finally included.
Ethics
After the aim and specific objectives of the study were explained to the administrative staff of the athletes' clubs and the coaches, administrative authorization was obtained, and the purpose of the research was also explained to the athletes. Thereafter, written and signed informed consent was obtained from each participant, with the possibility of withdrawing from the study at any time. The national ethics committee of Cheikh Anta Diop University of Dakar, Senegal, approved the study (015/2021/CER/UCAD), which was conducted in accordance with the recommendations of the Declaration of Helsinki revised in 1989.
Data collection
Socio-professional information and anthropometric measures
A structured questionnaire was elaborated to collect personal information on the athletes, such as age, gender, injury history, athletic status, sport discipline, and number of training sessions per week. Weight was measured using an electronic scale (Tanita BC-532, Tokyo, Japan) and height with a rod graduated to the nearest centimeter. Body mass index (BMI) was calculated using Quetelet's formula: BMI (kg·m⁻²) = weight (kg) / height² (m²).
Quality of sleep
A French version of the Pittsburgh Sleep Quality Index (PSQI) was used to assess sleep quality [25]. This questionnaire assesses an individual's quality of sleep over a month. The PSQI consists of 19 items covering the following seven components of sleep quality: sleep onset latency, sleep duration, efficiency, quality, disturbances, medication, and daytime dysfunction. Each component of the PSQI is scored between 0 and 3, and the sum of the seven components yields a global score of sleep quality ranging from 0 to 21 points; a high score (PSQI score > 5) is an indication of PSQ.
Musculoskeletal pains
A structured, validated French version of the Nordic questionnaire adapted to an adolescent population was used to determine MSP prevalence [26]. This questionnaire determines the occurrence of MSP in nine body regions (neck, shoulders, elbows, wrists/hands, upper back, lower back, hips/thighs, knees, ankles/feet) during a week. For each body region, the parameters evaluated were:
- presence or absence of aches, pains, or discomfort during the last seven days,
- poor performance due to joint pains,
- absence from a training session or a competition during the last seven days for reasons related to pain in one or several body regions,
- presence or absence of a history of trauma in one or several body regions during a training session or competition.
Therefore, a week/7-day MSP (MSP-7d) was defined as the seven-day prevalence of MSP. Pain related to previously injured regions was not considered as MSP.
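As a worked check of the sample-size formula and the PSQI scoring rule described in this section (the PSQI component answers below are fabricated):

```python
# Lorentz sample-size formula and PSQI global-score classification.
import math

def lorentz_min_sample_size(p: float, z: float = 1.96, d: float = 0.05) -> float:
    return p * (1 - p) * z**2 / d**2

n = lorentz_min_sample_size(0.8571)   # p = 85.71% prevalence cited from [24]
print(round(n, 1))                    # ~188.2 -> the paper reports 188

def psqi_global_score(components: list[int]) -> int:
    """Sum of the seven PSQI component scores (each 0-3); range 0-21."""
    assert len(components) == 7 and all(0 <= c <= 3 for c in components)
    return sum(components)

score = psqi_global_score([1, 2, 1, 0, 1, 0, 2])   # fabricated answers
print(score, "PSQ" if score > 5 else "good sleep quality")   # 7 -> PSQ
```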
Statistical analysis

The data were entered into Microsoft Excel 2016. The analysis was conducted using StatView 5.0 (SAS Institute, Inc., Chicago, USA) software. The Kolmogorov-Smirnov test verified the normality of the data. In the descriptive stage of the analysis, the mean and standard deviation of numerical variables were calculated, as well as the absolute and relative frequencies of categorical variables; the lower and upper limits of the 95% confidence intervals for the proportions of MSP-7d and PSQ were determined [27]. The unpaired Student's t-test was used to compare unpaired quantitative variables. Besides, the chi-square test was conducted to verify differences between proportions and also to examine associations between nominal variables. Logistic regression models were performed to identify factors associated with MSP and PSQ. Associations were then quantified by computing odds ratios adjusted for gender, age, and number of training sessions per week, with 95% confidence intervals (95% CI). The statistical significance threshold was set at P < 0.05.

Discussion

The present study aimed at determining the prevalence of PSQ in athletes and its association with MSP over a one-week period. The prevalence of poor sleep reported among athletes was 27.8%. This prevalence is similar to that reported by Gomes et al. [28] among adolescent amateur athletes in five sport disciplines (volleyball, handball, basketball, swimming, and judo). Some studies have highlighted that PSQ is generally high in athletes; for instance, Samuels [24] observed a strikingly high prevalence of poor sleep of 85.7% in Canadian athletes. A study by Juliff et al. [29] in a sample of 283 elite Australian athletes reported an important PSQ prevalence of 64.0%, and Silva et al. [30], in a study among 146 athletes from the Brazilian Olympic team, found 53% with sleep complaints. In a recent study of elite and sub-elite athletes from a wide variety of sports, Madigan et al. [31] reported important rates of poor sleep of 64% and 65%, respectively. All these results suggest that PSQ is a crucial problem among athletes, requiring sleep education for good health and best performance.

According to athletic status, no statistical difference in poor sleep between amateur and professional athletes was noticed. This result is in accordance with that reported by Madigan et al. [31], who found no difference in poor sleep between elite and sub-elite athletes. However, a recent study by Penttilä et al. [32] noticed an important proportion of poor sleep in professional athletes compared to amateurs. Given the physiological and psychological complexity of sleep, poor sleep quality might be expected to be greater in professional athletes due to their important mental stress and physical demands. Moreover, it is well established that training load, found to be significantly higher in professional athletes compared to amateurs in the present study, constitutes an important determinant of differences in sleep deficiency [10,33,34].

According to sport discipline, poor sleep was more prevalent in basketball. This result is similar to that observed by Franco et al. [35] in a study on sleep quality in athletes from four sports (athletics, boxing, basketball, and crossfit), in which poor sleep was found mostly in athletes practicing crossfit and basketball.
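To make the association measure described under Statistical analysis concrete, here is a minimal sketch (ours) of a crude odds ratio with a Woolf-method 95% confidence interval from a 2 × 2 table; the counts below are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of an odds ratio with a 95% CI (Woolf method).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# e.g., MSP-7d (yes/no) among poor vs good sleepers (made-up counts):
print(odds_ratio_ci(39, 50, 60, 171))  # OR ~2.22, 95% CI ~(1.33, 3.71)
```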
With respect to gender, no significant difference in poor sleep quality was noticed. This result is in accordance with that of Juliff et al. [29], who did not find a gender difference in sleep disorders among Australian athletes. Gender differences in sleep disorders remain controversial. Brand et al. [36], in a pilot study on self-reported sleep quantity and sleep-related personality traits in adolescents, found that female adolescents were at greater risk of suffering from poor sleep; similarly, Schaal et al. [37] observed the highest rate of sleep disorders in women. Franco et al. [35], who analyzed sleep in 83 athletes in individual and collective sports, found that the majority of bad sleepers were women. Nevertheless, Silva et al. [30], in their clinical evaluation, revealed that men reported more sleep complaints than women among elite athletes. Previous studies argued that women's sleep patterns are subject to disturbance because they are determined by physiological factors like hormonal variation, social roles, and interactions [36][37][38].

Among athletes with poor sleep, 43.8% were suffering from MSP during the last seven days, without significant difference according to gender or athletic status. MSP-7d were most frequent in the hips/thighs, followed by the upper back and wrists/hands. Studies assessing sleep quality and the occurrence of MSP in athletes are scarce. However, de Souza Bleyer et al. [39], investigating associations between sleep quality and the occurrence of musculoskeletal complaints among elite athletes in Brazil, found MSP during the last seven days in six body regions, excepting the neck, shoulders, and hips/thighs, in athletes with poor sleep. Moreover, this lack of difference in pain between genders does not corroborate the study of Shan et al. [40], which suggests that, beyond the fact that poor sleep quality should be predominant in women, pain should be greater in females because of their lower pain perception threshold compared with males. Also, anatomical characteristics of the female body have been implicated as enhancing the development of pain in some body regions such as the back, therefore leading to a higher prevalence in women than in men [41]. In addition, pain in women may be related to their lower muscle mass and bone density, which may lead to destabilization of the body and therefore insufficient compensation for high loads [42].

In the present study, among athletes with poor sleep during the last 7 days, high MSP-7d was noticed in basketball. Also, poor sleep was associated with sport discipline: basketball players with poor sleep were at risk of suffering from MSP in a week compared to athletics. In a recent study on MSP in amateur and professional athletes, Malam et al. [22] found a high rate of MSP-7d in basketball players. Beyond their PSQ, which can justify a high prevalence of pain, basketball places high physiological and biomechanical demands on aerobic and anaerobic capacities along with the integration of physical characteristics. Also, frequent jumping, landing, and changes in direction make up much of the physical load of competitive games, which therefore exposes basketball players to high levels of eccentric muscle contractions and joint solicitations that can be causes of pain [43].
Many researchers have established a link between sleep quality and pain, thus describing a vicious circle [13,44,45]. Based on physiological justifications, poor sleep is accompanied by an increased sensitivity to noxious stimuli and a decrease in endogenous pain inhibition [46][47][48]. Other possible neurophysiological mechanisms connecting sleep to pain focus on ghrelin [49,50].

According to Guneli & Ates [50], beyond the control of food intake, ghrelin, secreted primarily by the stomach, binds to its hypothalamic receptor at the arcuate nucleus. It has been shown that sleep is related to ghrelin [51], whose levels rise at night [52,53] and rise primarily in response to acute sleep disturbance [54]. Ghrelin then directly activates neuropeptide Y and indirectly inhibits proopiomelanocortin neurons in the hypothalamus. Activated neuropeptide Y is known to modulate nociception in some regions of the central nervous system, inducing spinal antinociception and regulating pain in the brain. In addition, the proopiomelanocortin derivative β-endorphin is known as an important endogenous key component of the antinociceptive system. The antinociceptive effect is mediated by opioid receptors and modulated by nitric oxide [55]. Ghrelin, by increasing nitric oxide synthesis levels, may also improve the antinociceptive effects of endogenous opioids, showing its interaction with central opioid mechanisms. In addition to this analgesic activity, ghrelin has been indicated to be a powerful anti-inflammatory intermediate that inhibits pro-inflammatory cytokines such as IL-1β, IL-6, and TNF-α, which cause pain and other symptoms [56,57].

Conclusion

This study provides evidence of the reality of PSQ in Senegalese athletes, in whom musculoskeletal pains were found. MSP-7d in athletes with PSQ was higher in highly trained athletes and in women. The most affected body regions were the hips/thighs and upper back. Thus, the optimization of the biomechanical performance and health of athletes is linked to good physiological recovery. Recovery is linked to good-quality sleep, which will reduce the occurrence of MSP. Therefore, it is imperative for sports coaches to emphasize good sleep education in athletes.

Fig. 2 MSP in athletes with PSQ according to gender and athletic status. MSP-7d: prevalence of musculoskeletal pains during the last seven days.

Table 1 Gender, socio-demographic, and anthropometric characterizations. PSQI: Pittsburgh Sleep Quality Index. P-value α: gender comparison; P-value β: athletic status comparison.

Table 2 Prevalence of MSP in participants with bad sleep quality according to body region, sport discipline, and athletic status. MSP-7d: prevalence of musculoskeletal pains during the last seven days; NA: not available. P-value α: comparison of MSP-7d according to body regions by sport discipline; P-value β: comparison of MSP-7d according to athletic status by body region.

Table 3 Risk factors of poor sleep. MSP-7d: prevalence of musculoskeletal pains during the last seven days; OR: odds ratio; CI: confidence interval.
2024-02-23T14:01:41.754Z
2024-02-22T00:00:00.000
{ "year": 2024, "sha1": "aaa7ae0e1654133cf1dff31969e41f702ad29152", "oa_license": "CCBY", "oa_url": "https://bmcsportsscimedrehabil.biomedcentral.com/counter/pdf/10.1186/s13102-023-00705-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7e88230a4c561e03bfbac6216aa908b5b6075cba", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55206865
pes2o/s2orc
v3-fos-license
Influence of Geostress Orientation on Fracture Response of Deep Underground Cavity Subjected to Dynamic Loading

A deep underground cavity is often damaged under the combined actions of high excavation-induced local stresses and dynamic loading. The fracturing zone and failure type are closely related to the initial geostress state. To investigate the influence of geostress orientation on the fracture behaviour of an underground cavity subjected to dynamic loading, an implicit to explicit sequential solution method was performed in the numerical code to realize the calculation of geostress initialization and dynamic loading on the deep underground cavity. The results indicate that when the geostress orientation is heterotropic to the roadway's floor face (e.g., 30° or 60°), high stress and strain energy concentrations are present in the corner and the spandrel of the roadway, where V-shaped rock failure occurs with the release of massive energy in a very short time. When the geostress orientation is orthogonal to the roadway (e.g., 0° or 90°), the tangential stress and strain energy are distributed symmetrically around the cavity. In this regard, the stored strain energy is released slowly under the dynamic loading, resulting mainly in parallel fracture along the roadway's profile. Therefore, to minimize the damage extent of the surrounding rock, it is of great concern to design the best excavation location and direction of a new-opened roadway based on measured in situ geostress data.

Introduction

With the increase of excavation depth in underground rock engineering, numerous unconventional rock failures such as rock spalling [1][2][3], the zonal disintegration phenomenon [4], and rockburst [5][6][7][8] have often been observed. One essential precursor for inducing these failures is high geostress, which accumulates gradually in the surrounding rock during the excavation process. When an underground roadway is situated in highly stressed rock mass, its mechanical behavior and failure type are influenced by various factors, both internal and external. The former include the rock mass's properties, the roadway's type and size, and the spatial orientation of the roadway with respect to the geostresses, while the external factors that contribute to rock failure include the geostress state, the excavation method, and external dynamic disturbance [5,9].

Many studies have been devoted to investigating the rock failure mechanism of underground roadways, especially for the case when the roadway is situated in a high geostress environment. Li et al. [10] investigated the influences of various in situ stresses, unloading rates, and paths on the dynamic effects of a circular opening, indicating that the increase of in situ stresses can greatly exacerbate the opening's dynamic effects. Lu et al. [11] also investigated the dynamic response of a roadway due to the transient release of initial geostress. While the excavation unloading process induces damage to an underground cavity, dynamic loading is also a non-negligible cause of rock degradation or even failure [12,13]. Zhu et al. [12] employed the RFPA code to study the dynamic failure process of an underground opening under static geostresses and dynamic loading. Tao et al.
[13] examined the multiple fracture zones around highly stressed cavities, and the results illustrated that the dynamic load and static stress gradient are two critical factors inducing multiple fracture. The previous researches indicated that the fracturing behavior and damage zone of underground roadways are generally different under various geostresses.

Meanwhile, underground rocks are always subject to complex stress fields, including gravity stress and tectonic stress [10]. When excavating in complex stress fields, the new-opened cavity is often situated in a setting for which no former engineering experience can be referred to, due to the various vertical and horizontal stresses and their various spatial orientations. While some reports have been published focusing on the plastic zone and damage extent of underground openings under various lateral pressure coefficients [14], few address the influence of various geostress directions on the fracturing behavior of underground roadways. Pioneer work including this consideration can be found in [15], which investigated the effect of principal stress orientation on tunnel stability but did not involve the external factor of dynamic disturbance.

In deep underground excavation, due to the combined actions of geostresses and dynamic loadings, the surrounding rock around the work face is usually induced to fail with the ejection of rock pieces. The rock failure process is widely accepted as a typical dynamic instability and energy release process. This paper explores the dynamic failure response of a deep underground roadway subjected to dynamic disturbance under various orientations of geostress around the roadway. The lateral stress and strain energy distribution at the roadway's periphery are examined. Additionally, the strain energy releasing mechanism of the surrounding rock under dynamic loading is investigated, including its releasing magnitude and time. Finally, the influence of geostress orientation on the damage zone and fracturing pattern of the cavity is discussed.

Numerical Simulator Descriptions

The finite element method (FEM) program Ansys/Ls-dyna is widely used in nonlinear dynamic calculations to simulate sheet metal forming, bird strike, material failures, and so on. It is quite suitable for simulating rock failure involving large deformation and nonlinear dynamic loading. In this study, Ansys/Ls-dyna was employed to examine the dynamic behaviors of the underground opening subjected to dynamic loading.

Modelling Layout.
A deep-buried roadway with a straight-wall-top-arch cross section was considered in this case. The specific geometries and loading conditions of the numerical model are shown in Figure 1. Assuming that the mechanical characteristics in every cross section along the roadway axis are the same, the numerical model was treated as a plane strain problem. Therefore, a single-layer meshed model was established by employing a three-dimensional eight-node solid element type. Using three-dimensional solid elements to solve the plane strain problem is beneficial for improving the accuracy of the simulated results and reducing the computation time, as well as making the calculation process easier to converge. The calculation model was first prestressed by the principal stresses, that is, the maximum principal stress σ1 and the minimum principal stress σ3. In this case, a principal stress state with σ1 of 25 MPa and σ3 of 10 MPa was applied on the boundary faces. Then, a triangular stress wave (Figure 2) was applied at the element component 11.5 m away from the roof. The total action time of the stress wave is 4.0 × 10⁻⁴ s, of which the rising time is 1.0 × 10⁻⁴ s, and the peak stress is 40 MPa. All the faces are defined as nonreflecting boundaries to exclude the reflected stress waves that may be generated at the model boundaries.

Implicit to Explicit Sequential Solution Method.

The dynamic response of a deep-buried opening involves two loadings, static geostress and dynamic disturbance, which constitutes a coupled static and dynamic loading problem. The problem should be divided into two steps for numerical calculation: the static stress state of the overstressed opening is solved first, and then the dynamic loading process is computed on the basis of the static results. For this problem, the code provides a good solution, the implicit to explicit sequential solution method. First, the implicit module is used to calculate the initial static geostress state of the opening. The strains, displacements, and stresses obtained from the implicit calculation are imported into the explicit module, which is accomplished by creating a database file that updates the geometry and the stress history of the explicit elements so that they match the implicit static solution [16,17]. The flow chart for the implicit to explicit sequential solution process is presented in Figure 3.

Material Model for Rock

3.1. The Brittle Damage Model. Many material models have been developed to simulate the damage or fracture process of rock or rock-like materials, such as the Johnson-Holmquist model, the continuous surface cap model (CSCM), and the brittle damage model, all available in Ls-dyna. These models are designed for special purposes, taking into account erosion, strain rate effects, cracking, and so forth. The Johnson-Holmquist model is advantageous for considering compression damage but does not treat tensile damage as extensively [17]. The CSCM includes an isotropic constitutive equation with yield and hardening surfaces and is widely used to simulate the dynamic characteristics of rock failure [17,18], but it is very difficult to determine its input parameters.
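As an illustration of the dynamic input described above, the following minimal Python sketch (ours) evaluates the triangular stress pulse: a linear rise to the 40 MPa peak at 1.0 × 10⁻⁴ s, followed by a decay to zero at the total duration of 4.0 × 10⁻⁴ s. The paper specifies only the rise time, total duration, and peak; the linear falling limb is our assumption.

```python
# Minimal sketch (ours) of the triangular stress pulse used as the
# dynamic disturbance; the linear decay after the peak is an assumption.
def stress_wave(t: float, peak=40.0e6, t_rise=1.0e-4, t_total=4.0e-4) -> float:
    """Return the applied stress in Pa at time t (seconds)."""
    if t < 0.0 or t > t_total:
        return 0.0
    if t <= t_rise:
        return peak * t / t_rise                      # rising limb
    return peak * (t_total - t) / (t_total - t_rise)  # assumed falling limb

print(stress_wave(1.0e-4))  # 40 MPa at the peak
print(stress_wave(2.5e-4))  # halfway down the falling limb -> 20 MPa
```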
In this paper, the brittle damage model was employed to simulate the dynamic fracturing characteristics of the roadway under high geostress and dynamic loading. The model is designed primarily for concrete, though it can be applied to a wide variety of brittle materials [16], and is especially suitable for simulating rock fracture under tensile force [19]. It also contains a minimal set of material constants, which can be determined from standard tests [20]. More detailed descriptions of this material model can be found in [16,21].

Determination of the Parameters for the Brittle Damage Model. To validate the capability of the brittle damage model for simulating the fracturing behavior of rock, uniaxial compression tests (UCT) were conducted and compared with the results from numerical simulation. The specimens for the experiments were extracted from the local surrounding rock buried at 500 m depth in the Linglong gold mine, where a violent bulking rockburst occurred in January 2013 [22]. The UCT were carried out on a servo-controlled testing system SANS-CHT4605.

Then, a meshed cylinder entity with the same size as the rock sample was built in the code. Uniaxial compression simulations were conducted with the boundary conditions and the loading scheme the same as the experimental conditions. The stress-strain relation curves from simulation and experiment were compared once the numerical calculation was finished. Differences between the two curves were narrowed step by step by modifying the input parameters during the repeated simulation process. The experimental stress-strain curve was compared with the simulated results, as shown in Figure 4. It is noticed that the simulated stress-strain curve is basically close to the experimental curve, especially for the post-peak stage. Note that the peak strain of the simulations is less than the experimental peak strain, mainly because the initial microcracks within the rock are not taken into consideration in the code. Although there are differences in local strain values, the peak stress and overall strain are highly close, indicating that the material model is suitable for further simulation studies. The corresponding input parameters are listed in Table 1.

Numerical Simulations and Results

The validated rock material was used in the numerical code to investigate the energy and failure characteristics of the cavity in this section. First, the distributions of lateral stress and strain energy density were calculated when the underground roadway was excavated. Then, the strain energy evolution of the surrounding rock due to dynamic loading was analyzed, and the fracturing behaviors of the opening were simulated under different geostress states.

Stress Redistribution in Geostressed Environment.
When the underground roadway is created, the geostresses near the cavity work face change: the tangential stress of the surrounding rock increases, while the radial stress decreases gradually to zero at the opening's surface along the radial direction from the outside rock mass. The tangential stress distribution at the periphery of the opening differs under different initial stress states. Since the closed-form solution of tangential stress for a tunnel with a straight-wall-top-arch cross section is not straightforward to derive, the tangential stress around the roadway was calculated and plotted in the numerical program. Figure 5 presents the tangential stress distribution of the roadway under four different initial stress states, namely when the orientation angle of the maximum principal stress with respect to the roadway's floor face is 0°, 30°, 60°, and 90°, respectively.

It can be seen from Figure 5 that compressive and tensile stress zones are alternately distributed around the opening. Particularly, when the orientation angle is 0° or 90°, the tangential stress distribution is symmetrical. High compressive stress zones are shown in the places that are parallel to the maximum principal stress, while lower compressive stress zones or even tensile stress zones are observed in the places that are vertical to the maximum principal stress. In addition, obvious compressive stress concentration appears around the corner of the roadway for all four cases.

SED Distribution in Static Geostressed State. Based on energy theory, the failure process of rock is actually a process of strain energy accumulation, dissipation, and release inside the rock. The excavation-induced stresses and the strain energy inside the surrounding rock keep changing continuously as the work face of the cavity moves forward. Some places in the rock mass accumulate energy, while others release energy. The rock will be induced to break once a certain failure criterion is satisfied. To express the energy characteristics of deep surrounding rock, the strain energy density (SED) is used in this paper, which is written as

SED = [σ1² + σ2² + σ3² − 2ν(σ1σ2 + σ2σ3 + σ3σ1)] / (2E),

where σ1, σ2, and σ3 are the maximum, intermediate, and minimum principal stresses, respectively, E is the elastic modulus, and ν is Poisson's ratio. The SED distributions of the roadway under the four cases of geostress orientation angles are shown in Figure 6.

From Figure 6, it can be seen that the SED distribution is generally similar to the distribution of the compressive stress zone; that is, a highly stressed zone promises a high SED concentration region. The maximum values of SED for the four cases are obviously various. The maximum magnitudes reach 87.5 kJ/m³ and 83.6 kJ/m³ when the orientation angle is 0° and 90°, respectively. When the angle equals 30° or 60°, the SED at the corner is greatest, reaching 178.2 kJ/m³ and 177.4 kJ/m³, respectively. This is because when the direction of the maximum principal stress is parallel or vertical to the floor face (e.g., 0° or 90°), the induced stress state of the roadway is symmetrical, which is beneficial for moderate deformation of the surrounding rock. In this way, the strain energy distribution around the roadway is alleviated and balanced.
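The SED formula above is easy to evaluate directly; the following small sketch (ours) does so, with illustrative elastic constants that are not the values from the paper's Table 1.

```python
# A small sketch (ours) of the strain energy density formula:
# SED = [s1^2 + s2^2 + s3^2 - 2*nu*(s1*s2 + s2*s3 + s3*s1)] / (2E).
def strain_energy_density(s1, s2, s3, E, nu):
    """Principal stresses and E in Pa; returns SED in J/m^3."""
    return (s1**2 + s2**2 + s3**2
            - 2.0 * nu * (s1 * s2 + s2 * s3 + s3 * s1)) / (2.0 * E)

# e.g., the far-field stresses used in the model (25 MPa and 10 MPa)
# with illustrative E = 50 GPa and nu = 0.25:
print(strain_energy_density(25e6, 10e6, 10e6, 50e9, 0.25))  # ~5250 J/m^3
```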
However, when the direction of the maximum principal stress is heterotropic to the floor face (e.g., 30° or 60°), the stress state is extremely imbalanced, resulting in high stress concentration near the corner and spandrel of the roadway. In this regard, the surrounding rock masses accumulate a large amount of strain energy, which is disadvantageous for rock stability after excavation.

SED Dissipation due to Dynamic Loading.

To further examine the strain energy releasing response of the opening subjected to dynamic disturbance, four special elements located at the roof, left sidewall, right sidewall, and floor, as marked in Figure 2, were chosen to trace the SED change law. Figure 7 presents the SED evolution curves for the four locations under different geostress orientations.

When the orientation angle is 0°, the strain energy accumulates at the roof of the roadway at a very high level. The stored strain energy declines sharply to zero in just 1.16 × 10⁻³ s, indicating that the roof surrounding rock fails fiercely due to the dynamic loading, whereas little influence on the change of strain energy is observed at the opening's sidewalls and floor under the action of the dynamic stress wave.

When the angle equals 30°, it is noticed that the SED at the roof is much greater than at other locations of the opening and is fully released within 1.48 × 10⁻³ s, indicating that a violent dynamic failure occurs in the rock mass. But there is little change in SED at the sidewalls and floor in this case.

For the case of 60°, the strain energy distribution of the roadway is obviously different. Under the dynamic disturbance, the SED of the right sidewall experiences a long period of fluctuation and eventually declines to a lower level. The SED at other places, such as the left sidewall and the roof, decreases slowly to zero over a long releasing period of 4.17 × 10⁻³ s. When the angle reaches 90°, the excavation-induced stresses are symmetrically distributed around the roadway. The change law of the strain energy at the left sidewall and the right sidewall is generally the same, reducing from 14.13 kJ/m³ to 0 after 4.81 × 10⁻³ s. On the contrary, little strain energy is accumulated in the roof and floor, where spalling failure is induced subjected to the dynamic disturbance.

In terms of the energy releasing time, it can be observed that the strain energy releasing time for the roof when the angle is 0° or 30° is much shorter than that when the angle is 90°. When the geostress orientation angle is 0° or 30°, the roadway's roof exhibits obviously high stress concentration, gathering considerable strain energy there. Under the direct impact of dynamic loading from the top rock mass, the strain energy within the roof is released sharply, indicating that a fierce rockburst happens in this occasion. Differently, it takes a longer period of time for the SED to be released at the sidewalls when the angle is 90°. In this regard, the high strain energy concentrates mainly at the sidewalls, where less impact acts upon the surrounding rock when subjected to the dynamic loading, so the energy is released relatively more slowly.

Fracturing Zone and Failure Type.

To understand how the geostress orientation influences the fracture zone around the opening under dynamic loading, further simulations were extended in the numerical code. The plastic region and fracture pattern are presented in Figure 8, where the fringe levels with various colors indicate the magnitudes of plastic damage. The fracturing zone is equivalently represented by deleting the failed elements automatically.
From Figure 8, the damage zone and extent of the surrounding rock mass show distinct differences under different orientation angles of the principal stress with the floor face. In general, the fracturing zone mainly develops in the surrounding rock that is parallel to the maximum principal stress. This is because the failure place was previously subject to high compressive stress, and it is quite prone to collapse under the superposed action of dynamic disturbance, releasing a large amount of strain energy as well.

When the angle is 30° or 60°, the damage zone and extent are much more severe, presenting V-shaped breakouts in places of high accumulated energy. In this case, the excavation-induced stresses distributed around the surroundings are nonuniform, resulting in high stress concentration in the corner and spandrel of the roadway. The highly stressed surrounding rock will be largely fractured when experiencing dynamic loading. When the principal stress direction is vertical or parallel to the floor face, that is, when the angle is 0° or 90°, the failure of the surrounding rock is characterized as parallel fracture along the roadway's profile.

Conclusions

An implicit to explicit sequential solution method was employed in Ansys/Ls-dyna to realize the calculation of dynamic loading on a highly stressed underground roadway. A validated material model was used to explore the fracture response of the deep-buried roadway due to static geostress and dynamic loading, especially to investigate how the geostress orientation influences the strain energy release evolution and failure pattern of the underground roadway subjected to dynamic loading. The numerical results indicate that when the geostress orientation angle is 30° or 60°, tangential stress and strain energy concentrate mainly in the corner and the spandrel, where a V-shaped breakout with the release of massive energy is observed when experiencing dynamic loading. When the geostress orientation is 0° or 90°, the stress and strain energy are distributed symmetrically around the roadway, leading to parallel fracture along the roadway's profile with a slower release of strain energy. This study can contribute to the optimal selection of the excavation location and direction of a new-opened cavity according to the in situ geostress state, as well as to mitigating the fracture and damage extent of the surrounding rock when a proper support strategy is carried out.

Figure 1: Numerical model with specific geometries and loading conditions.
Figure 2: Triangle stress wave used as dynamic disturbance.
Figure 3: Flow chart for the implicit to explicit sequential solution process.
Figure 4: Comparison of the experimental stress-strain relation with the numerical result.
Figure 6: SED distributions under four different orientation angles of the maximum principal stress with the floor face (unit: kJ/m³).
Figure 7: SED evolution curves for the four locations under various geostress orientations.
Figure 8: The plastic region and fracture pattern of the roadway under various geostress orientations.
Table 1: Input parameters for the rock model.
2018-12-11T13:03:31.336Z
2015-08-19T00:00:00.000
{ "year": 2015, "sha1": "e229e7043f4b208c42755bb48b2f64ba747550d5", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/sv/2015/575879.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e229e7043f4b208c42755bb48b2f64ba747550d5", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Mathematics" ] }
250416765
pes2o/s2orc
v3-fos-license
The Utility of Virtual Reality in Orthopedic Surgical Training

OBJECTIVE: To examine the efficacy of virtual reality (VR) to prepare surgical trainees for a pediatric orthopedic surgery procedure: pinning of a slipped capital femoral epiphysis (SCFE). DESIGN: Participants were randomly assigned to a standard, study guide (SG) group or to a VR training group. All participants were provided a technique video and SG; the VR group additionally trained via an Osso VR surgical trainer (ossovr.com) with real-time feedback and coaching from an attending pediatric orthopedic surgeon. Following training, participants performed a SCFE guidewire placement on a SawBones model embedded in a soft-tissue envelope (SawBones model 1161). Participants were asked to achieve "ideal placement" based on the training provided. Participants were evaluated on time, number of pin "in-and-outs," penetration of the articular surface, angle between the pin and the physis, distance from pin tip to subchondral bone, and distance from the center-center point of the epiphysis. SETTING: Orthopedic Institute for Children, Los Angeles, CA. PARTICIPANTS: Twenty fourth-year medical students and first- and second-year orthopedic residents without experience with the SCFE procedure. RESULTS: Twenty participants were randomized to SG (n = 10) or VR (n = 10). Average time to final pin placement was 19% shorter in the VR group (706 vs 573 seconds, p = 0.26). When compared to SG, the VR group had, on average, 70% fewer pin in-and-outs (1.7 vs 0.5, p = 0.28), 50% fewer articular surface penetrations (0.4 vs 0.2, p = 0.36), and an 18% smaller distance from pin tip to subchondral bone on the lateral view (7.1 vs 5.8 mm, p = 0.42). Moreover, the VR group had a lower average angle deviation between the pin and a line perpendicular to the physis on the coronal view (4.9° vs 2.5°, p < 0.05). CONCLUSIONS: VR training is potentially more effective than traditional preparatory methods. This pilot study suggests that VR training may be a viable surgical training tool, which may alleviate constraints of time, money, and safety concerns, with resultant broad applicability for surgical education.

INTRODUCTION

Advances in surgical techniques, work-hour restrictions, and increased time spent on non-clinical administrative tasks have placed increased demands on surgical trainees. 1,2 Moreover, a greater emphasis on patient safety, along with higher expectations for surgical outcomes and a litigious medical practice atmosphere, has placed increased pressure on attending surgeons to be mindful of trainees' potential missteps. 3,4 The length of surgical residency, however, has remained fixed, leaving surgical trainees and resident educators in search of safer, more efficient modalities for learning and practicing surgical techniques. 5,6 Virtual reality (VR) simulation has emerged as a promising alternative. VR simulation provides access to unlimited, safe technical repetitions, enabling the acquisition of surgical skills in a flexible, low-stakes training environment. Training with VR simulators has previously been shown to improve surgeon performance with good transferability to the operating room. 7 VR surgical training is specifically useful for learning the steps of complex orthopedic procedures requiring multiple tools. 6 However, it is not clear from the current literature whether VR training is similarly useful in acquiring the basics of accurate pin/screw placement requisite for many orthopedic procedures.
The importance of accurate percutaneous pin placement is especially evident in pinning a slipped capital femoral epiphysis (SCFE). Optimal screw placement is the key to preventing further slippage and hardware-related complications, such as screw cutout or intra-articular penetration. [8][9][10] To successfully pin a SCFE, surgeons must be facile with the use of fluoroscopy and have a keen awareness of spatial relationships and of how projected fluoroscopic views correspond to 3-dimensional space, to enable proper placement of internal fixation and reduce ionizing radiation exposure to the patient, themselves, and the operating room staff. VR simulation is seemingly well-suited to train this skill, as programmed VR environments can closely replicate the experience of using 2-dimensional fluoroscopic images to visualize the 3-dimensional placement of internal fixation devices. Theoretically, the benefits realized utilizing VR training to familiarize a trainee with the steps of a more complex procedure may extend to more fundamental skill acquisition, with broad implications for multiple orthopedic subspecialties and procedures. Further, VR simulation has the benefit of being able to provide standardized, objective grading of operative parameters, including accuracy of surgical technique, economy of motion, and procedural time, making it well-suited for basic skill acquisition.

Therefore, the aim of this study was to compare the performance of novice surgical trainees who trained utilizing VR simulation to those who did not have access to VR training and only studied a standard technique guide (SG) prior to in situ pinning of a SawBones model for SCFE. We hypothesize that adding VR simulation training in preparation for SCFE in situ pinning will improve performance, with broad implications for teaching all other orthopedic procedures.

METHODS

After receiving Institutional Review Board approval, fourth-year medical students and first- and second-year orthopaedic residents without prior experience performing a SCFE procedure were recruited and randomly assigned to either the SG or VR group. All participants completed informed written consents. All components of the VR system were provided by Osso VR (http://www.ossovr.com/).

Study Design

All participants received a sealed envelope randomly assigning them to either the SG or VR group. All participants received a grading rubric in addition to a written technique guide specific to SCFE pinning and a 6-minute video demonstrating the technical details of a SCFE pinning utilizing a SawBones model. These materials simulate resources readily available to surgical learners preoperatively. Participants in the SG group had 2 hours to review the surgical technique guide and the demonstration video, which included step-by-step instructions and illustrations of the procedure. SG participants did not have access to the VR trainer, nor did they receive any coaching/training with pediatric orthopedic staff prior to SawBones testing. The participants randomized to VR were allowed to review the study guide and demonstration video prior to VR training. Thereafter, these participants completed brief tutorials on how to properly use the Osso VR system. The Osso VR system (Osso VR, Sacramento, California) is composed of an Oculus Rift virtual headset that attaches over the eyes and two Oculus wrist and touch motion controllers that communicate electronically with the headset.
The wrist and touch controllers relay vibrations and forces back to the user in the form of haptics, providing tactile as well as visual feedback. The VR system did not undergo any changes or upgrades throughout the duration of the study. VR participants also completed a specific tutorial on the utilization of the Osso VR SCFE-specific module (SCFE Focused with C-Arm Beta). In the VR environment, participants initially performed an in situ pinning of a SCFE utilizing the system's training mode, with written instructions and prompts for each step of the procedure provided by the system. This training mode provides immediate feedback to the trainee, grading their pinning accuracy as a product of the angle and depth of final pin placement in both sagittal and coronal views. Grades from A to F (from excellent to poor) were provided. The participants were given up to 2 hours of unrestricted access to the training environment. After the VR participants successfully achieved 3 grade "A" pin placements, they were transitioned into a testing mode with real-time feedback and coaching from an attending pediatric orthopedic surgeon (RT, MS). VR training was completed once subjects successfully completed two additional SCFE pinnings deemed acceptable by the coaching attending surgeon.

After completion of studying the study guide/technique video (SG and VR groups), and following VR training (VR group), participants were escorted to a separate test room, where they were tasked with placing a guidewire in a SawBones SCFE model utilizing fluoroscopic guidance. The surgical model consisted of a realistic artificial proximal femur with a moderate SCFE (SawBones model 1161), embedded in a ballistic gel thigh model to simulate the soft tissue envelope of the hip. The artificial bones were radio-opaque to allow fluoroscopic guidance of pin placement (Fig. A1). The "surgical incision" was pre-made in the soft tissue envelope prior to the exercise, as the goal of this study was to focus on pin placement rather than surgical dissection and approach. The same soft tissue envelope was used for each participant, enabling standardization of the surgical incision. A new SawBones femur was used for each study participant. All participants received instruction on the proper use of drill instruments from an attending surgeon prior to testing. Subjects had up to 20 minutes to perform the procedure and were given an unlimited number of pinning attempts and unrestricted use of fluoroscopy to complete the task. The procedure was timed. The time started once the participant indicated that they were ready to start and ended once the participant indicated that they were satisfied with the final position of the pin or once the 20-minute time limit was reached. Each participant was video recorded for later in-depth analysis.

Evaluation of SCFE Procedure

Data collection was performed at the time of the exercise and later confirmed with the video recordings. Data collectors were blinded to the training modality given to the participant, and all data were free of personal identifiers.
Data collection included the following: total time-to-pin-placement, number of "in-and-outs" (pin in and out of bone), number of fluoroscopy images taken, whether or not the pin penetrated the articular surface of the femoral head (assessed both radiographically and via the physical model), angle between the pin and the physis, and pin-tip location within the femoral head, measured as the distance from the tip of the pin to the center-center point (ideal pin location) and the pin distance from the subchondral bone (in mm). Scoring rubrics based on these criteria were provided to all participants prior to testing (Table A1). Mann-Whitney-Wilcoxon tests were utilized to compare continuous variables, with statistical significance set at p < 0.05. All statistical analysis was conducted using GraphPad Prism Statistics/Data Analysis software (GraphPad Software, Inc., La Jolla, CA).

RESULTS

With regards to pin placement, results were variable. In the coronal plane, the VR group had a lower average angle deviation between the pin and a line perpendicular to the physis (SG: 4.90°, VR: 2.55°, p < 0.05; Table 2). However, the SG group demonstrated a lower average angle deviation between the pin and a line perpendicular to the physis in the sagittal plane (SG: 4.95°, VR: 5.70°, p = 0.43; Table 2), although this result did not reach statistical significance. Analysis of pin location with respect to the center-center position revealed that the SG group was closer in both the coronal (SG: 4.86 mm, VR: 6.51 mm, p = 0.46; Table 3) and sagittal (SG: 8.22 mm, VR: 8.81 mm, p = 0.46; Table 3) planes. As for pin location with respect to distance from the subchondral bone, on average, the SG group was closer than the VR group in the coronal plane (SG: 5.83 mm, VR: 7.23 mm, p = 0.46; Table 4), but the VR group was closer in the sagittal plane (SG: 7.14 mm, VR: 5.79 mm, p = 0.42; Table 4). In a subgroup analysis of the VR participants, we found that time-to-complete-VR-training correlated with an overall global surgical score (a score calculated to incorporate results from all tested parameters). Individuals who successfully completed the VR module faster achieved a higher global score compared to individuals who were slow to complete the VR module (R² = 0.05) (Fig. 2).

DISCUSSION

The purpose of this study was to evaluate whether VR surgical training better prepares surgical trainees for common operative procedures compared to traditional methods of preparation (written surgical technique guide and demonstration video). Our findings indicate that VR training trended toward improved skill acquisition and application in preparation for SCFE pinning, with implications for improved acquisition of general orthopaedic skills (i.e., use of fluoroscopy, spatial awareness). Although limited by the number of subjects, the current study suggests that VR training appears to be more effective than traditional preparatory methods with respect to achieving a shorter procedure time, decreasing the number of "in-and-out" events, decreasing the number of violations of the joint space, and achieving better overall pin placement. Further, given the correlation between VR training time and outcomes, VR training and testing may help identify residents' baseline surgical skill level and provide opportunities for additional, individualized VR training for those trainees whose baseline skill set is lower than their peers'.
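As an illustration of the Mann-Whitney-Wilcoxon comparison described in the methods, the sketch below (ours) tests two invented samples of coronal pin-physis angle deviations whose means echo the reported 4.90° and 2.55°; the individual values are hypothetical, not the study's data.

```python
# Hypothetical sketch of the group comparison: a two-sided Mann-Whitney U
# test on made-up per-participant coronal angle deviations (degrees).
from scipy.stats import mannwhitneyu

sg_angles = [6.1, 4.2, 5.5, 3.8, 7.0, 4.4, 5.2, 3.9, 4.6, 4.3]  # mean 4.90
vr_angles = [2.0, 3.1, 2.8, 1.9, 3.4, 2.2, 2.6, 2.1, 2.9, 2.5]  # mean 2.55
stat, p = mannwhitneyu(sg_angles, vr_angles, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # clearly separated samples give p < 0.05
```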
In this light, VR surgical trainers may prove beneficial as a much-needed objective assessment tool that can measure trainee progression, allowing for standardization of training and testing and ensuring that all surgical trainees graduate with a similarly tuned skill set. Moreover, VR provides a safe, interactive learning environment that serves as a necessary adjuvant to the more traditional passive lecture-based learning, which remains common in surgical training programs. Active learning methods, including VR simulation, improve the acquisition of knowledge, increase motivation for self-directed learning, and also increase the transfer of skills to clinical practice. [11][12][13] Previous research demonstrates VR simulation's success in training bronchoscopy, laparoscopic skills, and operating room performance. [14][15][16] Specifically as VR relates to orthopaedics, Blumstein et al. demonstrated the utility of VR training in a tibial nail SawBones model. These authors found that, compared to those without VR training, individuals who trained utilizing VR simulation prior to tibial nailing completed a significantly higher number of steps correctly, completed the module more quickly, and had greater knowledge of the instruments. 6 However, that study primarily analyzed steps completed, demonstrating the utility of VR training for familiarizing oneself with the steps of a procedure. By contrast, the present study demonstrates the utility of VR training for technical skill acquisition and finesse. Similarly, Lohre et al. demonstrated the superiority of VR simulation for technical skill acquisition and implant placement. 17 Our study corroborates these findings and expands them to include findings supporting the use of VR training for tactile feedback and spatial awareness with the use of fluoroscopy.

Additionally, our study demonstrated the viability of combining VR training with live surgical coaching, which allowed our coaches to individualize guidance, tailoring each trainee's experience to their skills and perceived deficits. And while we incorporated live coaches, available software allows for remote coaching/feedback, greatly expanding the potential use of VR training globally. Moreover, as stated previously, VR training may identify learners who would benefit from more pre-OR training. Coupling coaching with VR training would allow for this training to be completed in a safe, cost-effective environment, without OR time constraints, risks to patients, or the expenses associated with SawBones and cadavers. 18 This combination may serve to "level the playing field" in surgical training, ensuring a baseline level of competence, safety, and familiarity with the procedure prior to entering the operating room. Such assurances may allow for earlier entrustment and autonomy in the OR for more trainees. In fact, increased entrustment, equitable advancement of autonomy, and a resultant acceleration in surgical skill acquisition have been previously tied to surgical coaching. 19,20 Coupling coaching with VR simulation allows for a cost- and time-effective modality by which resident educators may incorporate coaching into successful surgical training programs.

Utilizing VR training and testing for SCFE pinning and similar procedures and concepts may help identify which residents are ready for surgical autonomy. In this way, VR may decrease the impact of the implicit bias inherent in choosing which trainee(s) are capable of independence.
Such implicit biases are more likely to occur with female and underrepresented residents compared to their white male counterparts. [21][22][23] Such biases can lead to slower progression toward autonomy, less surgical exposure, and decreased trainee confidence. In fact, a recent systematic review found that while there were no differences in performance or skills between men and women at any level of training in medicine, women rated themselves lower in perceived clinical skills, performance, confidence in procedures, identification with the role of doctor, interpersonal/communication skills, and preparedness for leadership positions. 24 In this light, VR training systems with objective assessment capability can serve as an invaluable tool allowing faculty to evaluate resident performance without bias and to grant graduated autonomy when objectively appropriate.

LIMITATIONS AND CONCLUSION

Despite the promising results, this study had several limitations, including the inclusion of fourth-year medical students as opposed to solely first- and second-year orthopedic surgery residents. However, the level of surgical training of second-year residents, first-year residents, and fourth-year medical students is similar considering the timing of our study (July-September). Second-year residents were newly transitioning to orthopedics from their intern-year positions, which primarily consisted of administrative and patient management duties. Moreover, the lack of surgical experience amongst the more junior participants was advantageous in that it allowed for an accurate measure of the differential between typical preparation and VR training, unencumbered by previous experience. Further, the interpretation of the significance of the study results was limited by our small sample size. Although the findings in the VR group were promising, they did not reach statistical significance due to the variability in baseline skills and performance amongst participants. Given the limited sample size, the pin-physeal angle was the only statistically significant improvement noted in the VR group. While the argument could be made that this is not clinically relevant, as there is emerging evidence that screw angle does not necessarily affect stability, 25 the accepted standard for screw placement remains perpendicular to the physis. 26 As such, residents were instructed on perpendicular placement, and individuals who completed VR training demonstrated significantly increased accuracy, a highly transferable skill. That being said, future experiments will include multiple institutions to obtain a larger sample size. Longitudinal studies evaluating the effectiveness of utilizing VR for individualized training plans would also be of benefit.

Overall, our pilot study demonstrates that VR training is a potentially effective novel surgical training tool, which may alleviate the constraints of time, money, and safety concerns on surgical education. While further investigation with a larger number of subjects is warranted, VR appears to be a promising, efficacious, and efficient educational tool that warrants consideration as an adjunct to traditional orthopedic resident education.

Fig. 2: Correlation of time spent on VR training and aggregate score. Individuals who successfully completed the VR module faster achieved a higher global score compared to individuals who were slow to complete the VR module (R² = 0.05).
2022-07-11T15:05:05.022Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "60f7643448b4cadb6977e13017d867e7f9eb7b54", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jsurg.2022.06.007", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4aece8ea564a90f54a75970cff736ca9d16a105b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56428407
pes2o/s2orc
v3-fos-license
Image Retrieval Based on Fractal Dictionary Parameters

Content-based image retrieval is a branch of computer vision. It is important for the efficient management of a visual database. In most cases, image retrieval is based on image compression. In this paper, we use a fractal dictionary to encode images. Based on this technique, we propose a set of statistical indices for efficient image retrieval. Experimental results on a database of 416 texture images indicate that the proposed method provides a competitive retrieval rate compared to existing methods.

Introduction

With the popularity of the computer and the rapid development of multimedia technology, image information is growing rapidly. How to effectively manage these resources has become a focus of study for many scholars. Early image retrieval technology was based on text, which relied on manual work and could hardly meet users' needs. Subsequently, the concept of content-based image retrieval (CBIR) was proposed. A new era has come in which image management systems can analyze images and extract features automatically. Nowadays, CBIR is used in many techniques, including fractal-based image retrieval.

Fractal image coding is based on approximating an image by an attractor of a set of affine transformations [3]. To a certain extent, fractal codes reflect spatial relationships between regions, which can describe image content. Sloan [4] first proposed CBIR based on fractal codes. Zhang et al. [5] presented an approach to texture-based image retrieval that determines image similarity on the basis of matching fractal codes. Pi et al. [3] proposed four statistical indices utilizing histograms of range block means and contrast scaling parameters. Huang et al. [2] used a new statistical method based on kernel density estimation.

All of the aforementioned retrieval indices are based on techniques which are similar to traditional fractal coding and are generated by image self-similarity. In this paper, we propose a retrieval method that regards a shared dictionary as a medium and uses the similarity between images and the dictionary as retrieval indices. All the data obtained in the experiments reflect the differences between the query image and the dictionary. The remainder of the paper is organized as follows. Section 2 introduces fractal image coding based on the fractal dictionary. The proposed indices and retrieval method are described in Section 3. Experimental results are reported in Section 4, which is followed by the conclusion.

Fractal Image Coding Based on the M-J Set

Images can be viewed as vectors and can be encoded by a set of transforms. Usually, the transform can be generated by the collage theorem [6]. In this theorem, a suitable transform is constructed as a "collage," and the "collage error" is represented as the distance between the collage and the image. In traditional fractal encoding, devised by Jacquin, the image is tiled by "range blocks," each of which is mapped from one of the "domain blocks," as depicted in Figure 1; the combined mappings constitute a transform on the image as a whole [6].

Encoding an image needs a suitable transform minimizing the collage error for each range block, which requires recording the block mapping parameters, such as the contrast scaling s and the luminance offset o. These parameters are applied in (1) for image reconstruction. In (1), f_n is defined as the image at the nth iteration; f_0, the initial image, can be an arbitrary image with the same size as the encoding image:

f_{n+1} = W(f_n), n = 0, 1, 2, . . . , (1)

where W is the union of the block-wise affine mappings determined by the recorded parameters.
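A minimal sketch (ours, in illustrative notation, not the authors' code) of the iterative decoding in (1): starting from an arbitrary f_0, each range block is repeatedly rebuilt from its matched domain block with the recorded contrast scaling and luminance offset; the toy fractal code at the bottom is invented for the demo.

```python
# Minimal sketch (ours) of fractal decoding by iterating f_{n+1} = W(f_n).
import numpy as np

def decode(codes, shape, n_iter=10, block=4):
    """codes maps a range-block corner (r, c) to (dr, dc, s, o): the corner
    of a 2*block x 2*block domain block, contrast scaling s and offset o."""
    img = np.zeros(shape)                          # f_0: arbitrary start image
    for _ in range(n_iter):                        # f_{n+1} = W(f_n)
        new = np.empty_like(img)
        for (r, c), (dr, dc, s, o) in codes.items():
            dom = img[dr:dr + 2*block, dc:dc + 2*block]
            # shrink the 8x8 domain block to 4x4 by 2x2 averaging
            shrunk = dom.reshape(block, 2, block, 2).mean(axis=(1, 3))
            new[r:r + block, c:c + block] = s * shrunk + o
        img = new
    return img

# toy 8x8 image tiled by four 4x4 range blocks, all mapped from the
# top-left 8x8 domain block with different scalings/offsets:
codes = {(0, 0): (0, 0, 0.5, 10), (0, 4): (0, 0, 0.3, 40),
         (4, 0): (0, 0, 0.5, 80), (4, 4): (0, 0, 0.2, 120)}
print(decode(codes, (8, 8)).round(1))
```

Because every |s| < 1, the iteration is contractive and converges to the same attractor regardless of the starting image, which is why f_0 can be arbitrary.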
Block Truncation Coding. Delp and Mitchell [7] presented a block truncation coding (BTC) scheme for image compression. It uses a two-level nonparametric quantizer that adapts to local properties of the image. In this scheme, an original image is first divided into nonoverlapping square blocks of size n × n. For each pixel x in a block, if its value is not less than the block mean x_ave, the point is marked as 1, otherwise as 0, forming a two-value matrix consisting of 1s and 0s. The equation is defined as follows:

x = 1, if x ≥ x_ave; x = 0, if x < x_ave. (2)

Then the matrix is reshaped into a 1 × n² row vector to generate a binary sequence. Finally, the corresponding decimal, defined as the BTC value, is recorded.

Figure 2 shows a block after an original image is divided. Its x_ave is 241.875. There are 9 nodes not less than x_ave. The two-value matrix is shown in Figure 3. In the matrix, nine nodes are noted as 1 and the others as 0. The binary sequence is 1010 1100 1111 1000, so the BTC value is 44272.

Fractal Dictionary Based on the M-J Set. In fractal decoding, the initial image and the final image have no direct relationship. Therefore, we compress enough domain blocks into a file as a dictionary. An image is encoded by finding best-matching domain blocks in the dictionary and decoded, like the traditional decoding process, by affine transforms on these best-matching domain blocks.

The Mandelbrot set (abbreviated as M set) and Julia set (abbreviated as J set) are classical sets in fractal study. They both contain abundant information. Each point in the M set corresponds to different parameters for J set construction. The structures of J sets have self-similarity and infinity, which are rich enough to represent an image. We use them for a dictionary [8]. The process is as follows.

Step 1. Choose parameters for a J set. We use the boundary points of a standard M set as generation parameters for the J set.

Step 2. Generate the J sets. We use the above parameters to generate J sets. According to the time-escape algorithm [9], points in J sets are represented by the escape time k, which must satisfy the following equation:

z_{k+1} = z_k² + c, |z_k| ≤ 2, k < Max_Iterative, (3)

where Max_Iterative is the max escape time.

Step 3. Quantize the image of the J set. The values of the pixels, assigned as the escape time, are relatively small. It is better to multiply them by an expansion number as follows, so that the pixel values spread equally between 0 and 255:

v = t × H, (4)

where t is the escape time and H is the expansion number.

Step 4. Classify the domain blocks. The J set from Step 3 is divided into n × n nonoverlapping blocks. We calculate the BTC value of each block and use it as a classifier. If a block is the same as one in the BTC queue, or if the queue already has v blocks, we ignore this block. Otherwise, we compute the collage error between the candidate block and each one in the queue. If the collage error is less than a threshold, the block is ignored; otherwise, it is added to the queue.

Step 5. Output the dictionary. All the blocks are written into a file in ascending BTC order. We call this file the dictionary.

Like traditional fractal encoding, an original image is first divided into nonoverlapping range blocks of size n × n. For each range block, after its BTC value is calculated, we search for a best-matching block with the smallest norm in the corresponding BTC queue. Finally, we get the parameters of each range block: one BTC value (btc_i), a matching block number (n_i), a contrast scaling parameter (s_i), a luminance offset (o_i), and an affine transformation (Γ_i).
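The BTC computation in (2) is compact enough to sketch directly; the following Python function (ours) thresholds a block against its mean and packs the bitmap row by row into the decimal BTC value, using a made-up 4 × 4 block for the demo.

```python
# A small sketch (ours) of the BTC classifier: threshold against the block
# mean, then pack the bitmap, row by row, into one integer (the BTC value).
import numpy as np

def btc_value(block: np.ndarray) -> int:
    """block: n x n array of gray levels; returns the decimal BTC value."""
    bits = (block >= block.mean()).astype(int).ravel()  # eq. (2), row-major
    value = 0
    for b in bits:
        value = (value << 1) | int(b)                   # binary -> decimal
    return value

demo = np.array([[250, 100, 255,  90],      # made-up 4x4 block (mean ~171.8)
                 [252,  80, 249,  70],
                 [251, 248, 253, 247],
                 [254,  60,  50,  40]])
print(btc_value(demo))  # bitmap 1010 1010 1111 1000 -> 43768
```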
In the decoding process, we use (btc_i, n_i) as an index to locate the best-matching block. The original image can be decoded as follows:

r_i = s_i · d_i + o_i · U,   (5)

where d_i is the domain block in the dictionary after affine transformation.

The Proposed Indices

It has been demonstrated in the literature that a grayscale histogram (color histogram) provides good indexing and retrieval performance while being computationally inexpensive [10]. However, it is still a coarse feature when applied in image retrieval systems. From the existing methods, we know that the histogram of fractal coding parameters is effective for image retrieval [2]. In this paper, we use the following indices in the retrieval system.

Dictionary of Collage Error (DE). The collage error is the real-valued distance between a range block and its best-matching domain block. The smaller it is, the closer the decoded image is to the original image. Consider

E_i = ‖r_i − (s_i · d_i + o_i · U)‖.   (6)

In (6), U is a matrix whose elements are all ones. As (6) shows, the collage error also measures the distance between the original image and the dictionary, so the distribution of collage error can be used as a parameter to classify texture images. We quantize the collage error to an integer interval (K): it is rounded to the nearest integer when it is smaller than K, and clipped to K when it is bigger than K. In this paper, K is 13. Figure 5 shows that similar texture images share almost the same distributions, which differ from those of different texture images. Hence, the distance between same-texture images is smaller than that between different ones.

Dictionary of BTC (DB). The domain blocks in the dictionary are classified by BTC value as categories, so the BTC value can also be treated as an index when we search for a best-matching block. An image thus has a characteristic BTC distribution (DB), which can serve as a scale in the image retrieval system. In this paper, DB is a quantized value ranging from 0 to 15. We calculate the DB of the images in Figure 4. Figure 6 shows that the DBs of four similar images, Figures 6(a)-6(d), are distributed similarly, while those of Figures 6(e)-6(h) are not. Based on the above observation, we choose DB as an image index. However, experimental results prove that it is only a coarse parameter, which will be discussed in Section 4.

Joint of Dictionary of BTC and S (JDBS). Schouten and De Zeeuw [11] have proved that the contrast scaling parameters (s) in fractal coding can be used in retrieving images. But s alone is still a rough feature, just as DE and DB are. Combining BTC with s, we present a 2D joint histogram whose structure is expressed in Figure 7. Note that num_{i,j} is the number of range blocks with s = s_i and BTC = btc_j. The resulting JDBS is shown in Figure 8. Figure 8 shows that the peaks of same-texture images lie roughly near each other, while those of different texture images lie visibly far apart. We believe that JDBS is more precise than the above indices in retrieving images.

Similarity Measurement. To measure the similarity between two images, we choose the χ² statistic as the distance metric, which is expressed as follows:

d(u, v) = Σ_{i=1}^{n} (u_i − v_i)² / (u_i + v_i),   (7)

where {u_1, ..., u_n} and {v_1, ..., v_n} are our proposed indices of the query image and a candidate image, respectively. The distances corresponding to DE, DB, and JDBS are listed in Table 1. The query image is image (a) in Figure 4. Comparing DB and JDBS, the DB distance between similar textured images varies on the order of 0.1, the JDBS distance varies on the order of 0.001, and the distance between dissimilar texture images changes irregularly. Note that JDBS is more accurate.

The Operation Process.
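A minimal sketch of the χ² comparison in (7), assuming the indices are stored as NumPy vectors; the small epsilon guarding against empty histogram bins is an implementation assumption, not part of the paper's formula, and all names are illustrative.

```python
import numpy as np

def chi2_distance(u, v, eps=1e-12):
    """Chi-square distance of Eq. (7) between two index vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.sum((u - v) ** 2 / (u + v + eps)))

def rank_candidates(query_index, candidates):
    """Sort candidate images by ascending distance to the query;
    each candidate is assumed to carry its precomputed index."""
    return sorted(candidates, key=lambda c: chi2_distance(query_index, c["index"]))
```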
The whole operation process includes three parts: encoding the image, extracting statistical features, and comparing their feature distances as in (7). The pseudocode is listed in Figure 9.

Performance Evaluation

In this section, we present the performance of the proposed indices and compare our method with the methods in the literature. The image retrieval system is shown in Figure 10. The test database is composed of 26 512 × 512 grayscale Brodatz texture images [12], and each image is separated into 16 subimages of size 128 × 128. Each subimage is encoded based on the M-J set fractal dictionary. A query image is randomly selected from the test database. All sixteen retrieved subimages are selected based on the smallest-distance criterion. In this paper, we use the indices' length to evaluate the computational complexity.

Unfortunately, all the proposed indices have some inherent flaws. Although JDBS does better than DB, its vector length is longer than DB's, so it incurs more computational complexity than DB. In order to reduce the complexity, we can divide a retrieving index into several parts; for instance, we can replace JDBS with the combination DS + DB, so that a list of candidate images is selected by matching DS + DB. We can also combine two indices: letting DE and DS form the 2D joint statistic JDSE enhances the performance.

4.1. Average Retrieval Rate. Usually, the average retrieval rate (ARR), defined as follows, can evaluate a technique's performance:

ARR = (1/Z) Σ_{z=1}^{Z} (N_z / F) × 100%,   (8)

Note that F denotes the number of retrieved images, N_z denotes the number of correctly retrieved images at the z-th test, and Z is the number of subimages in the test database. In this case, F = 16 and Z = 416. All experiments shown in Table 2 were conducted on a Core(TM) i5 (2.40 GHz) PC. All data shown in Table 2 were acquired from these experiments.

Compared to HM and KM, the DB technique has a better performance. The average retrieval rate of DB (for 16 vector length) is 60.86%, while HM and KM provide 42.01% and 32.84% average retrieval rates at similar vector lengths. If DB is quantized to 32, its performance goes up by 6.19%. At the same time, the performance of DS + DB goes up to a 70.12% average retrieval rate, which is 9.18% higher than that of HS + HM and 31.97% higher than that of KS + KM, while their vector lengths are all around 21.

Compared with DS + DE + DB, the JDSE + DB technique works well in average retrieval rate. When DB is at 32 length, the total length of JDSE + DB is 84, which is not much longer than the 68 vector length of JDSE + DB (with DB at 16) and JHSE + HM. What is more, due to the simple operations involved, the vector length does not greatly impact the computing complexity when the length is not too long. The time consumed in retrieving is less than 0.3 s in the experiments, so the computing complexity is tolerable. When JDSE + DB is at 68 and 84, the average retrieval rates reach 78.67% and 79.18%, respectively.
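A minimal sketch of the ARR computation in (8); `correct_counts` (one N_z per query test) and the default F = 16 mirror the setup above, and the names are illustrative assumptions.

```python
def average_retrieval_rate(correct_counts, F=16):
    """Eq. (8): mean fraction of correctly retrieved images over all
    Z query tests, as a percentage. `correct_counts` holds the N_z."""
    Z = len(correct_counts)
    return 100.0 * sum(n / F for n in correct_counts) / Z
```

With Z = 416 tests and 16 relevant subimages per query, an ARR of 79.18% means that on average roughly 12.7 of the 16 retrieved subimages are correct.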
Precision against Recall Curve. The precision against recall curve is another method of evaluating retrieval performance. The higher both precision and recall are, the better the retrieval performance. They are defined as

precision = |retrieved ∩ relevant| / |retrieved|, recall = |retrieved ∩ relevant| / |relevant|,

where retrieved is the set of retrieved images for a query and relevant is the set of relevant images for the query image [13] (in this case, |relevant| = 16). Based on the top 2, 4, 6, 8, 10, 12, 14, and 16 results, we calculate the average precision and average recall of both HS + HE + HM and DS + DE + DB at 33 vector length. The average precision (and recall) of DS + DE + DB is obviously higher than that of HS + HE + HM when the vector lengths are the same (see Figure 11). Besides, the slope of the DS + DE + DB curve varies only slightly, which implies that DS + DE + DB has the better performance.

To some extent, the curve varies greatly with the quantization levels of DB (see Figure 12). However, when the quantization level is in excess of 16, the curve changes only slightly, while the computation becomes more complex. Figure 13 shows that the average retrieval rates change significantly as the quantization levels of DB increase. When the level reaches 16, the curve gets to its peak. Beyond 16, the distribution of DB becomes too detailed, which makes DE + DS + DB lose its statistical character when DE is at thirteen vector length and DS is at four vector length, so the curve falls and the average retrieval rates decrease. That is to say, when the vector length is 16, performance and complexity achieve a balance.

Conclusions

In this paper, we have proposed a set of indices based on M-J fractal dictionary encoding. The M-J fractal dictionary is a shared file composed of blocks of Julia sets stored in ascending BTC order. We showed that the DE, DB, and JDBS indices are close for similar texture images and different for different texture images. Subsequently, we calculated the average retrieval rate, average precision, and average recall and compared them with previous methods. We discussed further minimizing the vector length of DS + DE + DB without a large loss of retrieval rate and gave the optimal length of the feature vector. Experimental results on a database of 416 texture images showed that the proposed indices provide better performance than the previous methods. In particular, JDSE + DB provided a 79.18% average retrieval rate at the maximum, and its computational complexity was tolerable. In addition, JDSE + DB and DS + DE + DB not only had low computational complexity but also provided competitive retrieval rates compared to existing methods.
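As a companion to the evaluation metrics above, a minimal sketch of the top-k precision/recall computation, assuming each 128 × 128 subimage carries a parent-image label so relevance can be checked; all names are illustrative assumptions.

```python
def precision_recall_at_k(ranked_labels, query_label, k, n_relevant=16):
    """Precision and recall over the top-k ranked results; a result
    counts as relevant when it comes from the same parent texture
    image as the query (16 subimages per parent)."""
    hits = sum(1 for lbl in ranked_labels[:k] if lbl == query_label)
    return hits / k, hits / n_relevant
```

Sweeping k over 2, 4, ..., 16 and averaging over all queries yields points of the precision against recall curve, as plotted in Figures 11 and 12.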
Figure 1: One mapping from a domain block to a range block.
Figure 2: A sample block of the divided original image.
Figure 3: Bit matrix of the block in Figure 2.
Figure 4: Eight texture images used as examples. It can be observed that the corresponding indices are roughly close for similar texture images and different for different texture images. However, d((a),(c)) is bigger than d((a),(f)) and d((a),(h)) on DE; in fact, (a) and (c) share a similar texture, while (a), (f), and (h) are different texture images. This causes an unexpected result: images (f) and (h) are retrieved while image (c) is lost when the query image is (a). Hence, DE is only a coarse index.
Figure 5: Dictionary of collage error corresponding to the eight texture images in Figure 4.
Figure 6: Dictionary of BTC corresponding to the eight texture images in Figure 4; BTC is quantized to 16.
Figure 7: 2D joint structure of BTC and s, where BTC and s are quantized to 16 and 4, respectively.
Figure 8: 2D joint histogram of BTC and s corresponding to the eight texture images in Figure 4.
Figure 9: The pseudocode of the whole operation process.
Figure 10: The image retrieving system based on the M-J set fractal dictionary.
Figure 11: Precision against recall curves of HS + HE + HM and DS + DE + DB.
Figure 12: Precision against recall curves of DS + DE + DB at different DB quantizations.
Table 1: Distance between the query image and candidate images.
Table 2: Average retrieval rate of different retrieval methods.
Excitation of self-localized spin-wave "bullets" by spin-polarized current in in-plane magnetized magnetic nano-contacts: a micromagnetic study

It was shown by micromagnetic simulation that a current-driven in-plane magnetized magnetic nano-contact, besides a quasi-linear propagating ("Slonczewski") spin wave mode, can also support a nonlinear self-localized spin wave "bullet" mode that exists in a much wider range of bias currents. The frequency of the "bullet" mode lies below the spectrum of linear propagating spin waves, which makes this mode evanescent and determines its spatial localization. The threshold current for the excitation of the self-localized "bullet" is substantially lower than for the linear propagating mode, but finite-amplitude initial perturbations of magnetization are necessary to generate a "bullet" in our numerical simulations, where thermal fluctuations are neglected. Consequently, in these simulations hysteretic switching between the propagating and localized spin wave modes is found when the bias current is varied.

INTRODUCTION

It was theoretically predicted 1-3 and experimentally observed 4-9 that persistent microwave magnetization precession can be excited in a thin ("free") layer of a magnetic layered structure by a direct current traversing the structure. The bias current passing through a magnetic layered structure becomes spin-polarized in the direction of magnetization of a thicker ("fixed") magnetic layer, and can then transfer this induced spin angular momentum to the magnetization of the thinner ("free") magnetic layer. For the proper direction of the bias current this spin-transfer mechanism creates an effective negative magnetic damping in the "free" magnetic layer, which, for sufficiently large current magnitude, can compensate the natural positive magnetic damping and lead to the excitation of microwave spin waves 3,10,11.

The analytical theory of spin wave excitation in magnetic nano-contacts by spin-polarized current, developed in the linear 3 and weakly nonlinear 12 approximations, showed the possibility of self-sustained excitation of two qualitatively different modes: the linear propagating "Slonczewski" mode 3 and the nonlinear evanescent "bullet" mode 12. The latter mode exists only in the in-plane magnetized case, has a substantially lower excitation threshold due to its self-localized character and, consequently, vanishing radiation losses, and is believed 12 to have been observed in experiments 7-9.

At the same time, full-scale micromagnetic simulations of magnetization dynamics in in-plane magnetized nano-contacts 13,14, done using the Landau-Lifshitz-Gilbert equation with the Slonczewski spin-transfer term, showed no self-sustained excited spin wave states for current densities below the threshold of excitation of the linear propagating "Slonczewski" spin wave mode. Thus, it still remains unclear whether the analytically predicted low-threshold bullet mode 12 is an artifact of the small-amplitude expansion of the full equations of motion for the magnetization done in Ref. 12, or a physical reality that can be observed in in-plane magnetized nano-contacts. In the latter case it is necessary to understand why the micromagnetic simulations 13,14 failed to reproduce this low-threshold localized spin wave mode.
It should be noted that a spin wave mode having properties similar to those of a self-localized spin wave "bullet" 12 was found in numerical simulations 14, but for current densities that were substantially larger than the instability threshold of the linear "Slonczewski" mode 3. This high-current spin wave mode has many attributes of the self-localized nonlinear "bullet" mode: a large precession angle, strong spatial localization, and a low frequency (below the ferromagnetic resonance (FMR) frequency of the "free" layer). However, from the numerical results presented in Ref. 14 it is not clear whether this mode is really the spin wave "bullet" 12 or a strongly nonlinear spin wave excitation of a qualitatively different type, related to the formation of vortex-antivortex pairs in a current-driven magnetic nano-contact 13,14.

The aim of our present paper is to verify the predictions of the analytical theory 12 about the existence of a low-threshold spin wave "bullet" mode using full-scale micromagnetic simulations of the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation. In contrast with the previous numerical studies 13,14, where the simulations of spin wave dynamics for each value of the bias current were performed starting from the equilibrium initial magnetization state, in our current work we first progressively increase the bias current from zero to a sufficiently large above-threshold value, and then progressively decrease this current back to zero. Using this method, we were able to observe in our simulations subcritically unstable 15 spin wave modes (i.e., modes which require a finite amplitude of spin wave fluctuations to be excited) even in the absence of thermal noise fluctuations.

Starting our simulations from a large magnitude of the bias current (which corresponds to a strongly nonlinear regime of magnetization oscillations), and gradually reducing the current magnitude, we demonstrated that the spin wave "bullet" mode 12 can, indeed, be supported at bias currents that are substantially lower than the threshold of excitation of the linear "Slonczewski" mode 3. At the same time, we also demonstrated that the spin wave "bullet" excitation is strongly subcritical (see Ref. 15 for details on subcritical instabilities) and, therefore, it is not possible to observe it when the bias current is increased starting from equilibrium initial conditions (in the absence of thermal noise), even up to relatively large magnitudes of the bias current. Only at current magnitudes substantially exceeding the threshold current of the propagating "Slonczewski" spin wave mode is the localized "bullet" mode, with frequency lower than the FMR frequency of the "free" layer, excited, in full agreement with the results of previous simulations 13,14. Thus, the co-existence of two spin wave modes (the propagating "Slonczewski" mode and the localized "bullet" mode) with different critical currents and different instability scenarios (linear and subcritical) leads to the hysteretic behavior of a magnetic nano-contact when the bias current passing through it is varied.

FORMULATION OF THE PROBLEM

We studied current-induced spin wave dynamics in a magnetic multi-layered system consisting of a thick magnetic "pinned" layer (PL, see Fig. 1) that serves as a spin polarizer, a thin nonmagnetic spacer, and a thin magnetic "free" layer (FL). The thickness of the PL is assumed to be large enough to prevent any dynamics in this layer.
The bias magnetic field H is applied in the plane of the structure along the axis z. The bias current I traversing the multi-layered structure is applied within a circular nano-contact area of radius R_c (see Fig. 1). The dynamics of the magnetization M = M(t, r) of the "free" magnetic layer under the action of the spin-polarized current is described by the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation:

∂M/∂t = γ [H_eff × M] + (α_G/M_0)(1 + q_1 ξ) [M × ∂M/∂t] + (σI/M_0) f(r/R_c) [M × [M × p]],   (1)

where γ is the gyromagnetic ratio and H_eff is the effective magnetic field calculated as a variational derivative of the magnetic energy W of the system, which includes magnetostatic, exchange, and Zeeman contributions.

The second term on the right-hand side of Eq. (1) is the phenomenological magnetic damping torque written in the nonlinear form 16 that is similar, but not identical, to the traditional Gilbert form used in the previous simulations 13,14, and M_0 = |M| is the saturation magnetization of the "free" layer. Since the spin-torque mechanism of spin wave excitation is very efficient, it can lead to rather large magnetization precession angles very soon above the excitation threshold (see e.g. Refs. 7-9), so the effective damping parameter is taken in the nonlinear form

α(ξ) = α_G (1 + q_1 ξ),   (2)

where α_G is the dimensionless Gilbert damping constant, q_1 is a dimensionless phenomenological nonlinear damping parameter of the order of unity, and ξ is the dimensionless variable characterizing the level of nonlinearity of the magnetization precession (see Ref. 16 for details):

ξ = |∂M/∂t|² / (ω_M M_0)²,   (3)

where ω_M = γ M_0. To determine how important the role of nonlinear damping is in the current-induced spin wave excitation, we used in our simulations two different values of the parameter q_1: q_1 = 0, which gives the classical Gilbert damping model, and q_1 = 3, which corresponds to a moderate degree of damping nonlinearity.

The last term on the right-hand side of Eq. (1) is the Slonczewski spin-transfer torque 1,3, which is proportional to the bias current I. The function f(r/R_c) characterizes the distribution of current across the nano-contact area. In the simplest case of a uniform current density distribution, f(r/R_c) = 1 if r < R_c and f(r/R_c) = 0 otherwise. In Eq. (1) the proportionality coefficient σ is determined by the spin-polarization efficiency ε and is given by the expression 3,10

σ = ε g μ_B / (2 e M_0 S d),   (4)

where ε is the dimensionless spin-polarization efficiency defined in Refs. 1 and 3, g is the Landé factor, μ_B is the Bohr magneton, e is the absolute value of the electron charge, d is the FL thickness, and S = πR_c² is the cross-sectional area of the nano-contact. In Eq. (1) the unit vector p defining the spin-polarization direction is parallel to the direction e_z of the in-plane external magnetic field.

In our calculations we made several simplifying assumptions. First of all, we neglected the constant current-induced (Oersted) magnetic field and the magnetostatic coupling between the two ferromagnetic layers (FL and PL) of the nano-contact, as we do not believe that, in the presence of a sufficiently large constant bias magnetic field, these effects can qualitatively change the structure of the spin wave modes excited in a nano-contact by a spin-polarized current. Second, we assumed that the magnetocrystalline anisotropy of the "free" layer is negligibly small. To reduce the computation time we also neglected the random fluctuations arising from thermal noise. Our further investigations have shown that, although these fluctuations do not change the structure of the spin wave modes that can be excited in a nano-contact, they might play an important role in the process of excitation of a particular spin wave mode in a laboratory experiment.
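As a rough illustration of how an equation of the type (1) is integrated in time, here is a minimal macrospin sketch: a single normalized moment, an explicit Euler step, and the small-damping Landau-Lifshitz form of the damping term. The function, the parameter names, the torque sign conventions, and the simple renormalization are illustrative assumptions, not the micromagnetic solver actually used in the paper.

```python
import numpy as np

def llgs_step(m, h_eff, p, gamma, alpha, sigma_I, dt):
    """One explicit Euler step for a macrospin LLGS equation
    (Landau-Lifshitz form, small-alpha approximation); all vectors
    are unit-normalized, fields in units consistent with gamma."""
    prec = -gamma * np.cross(m, h_eff)                        # precession
    damp = -gamma * alpha * np.cross(m, np.cross(m, h_eff))   # damping
    stt  = sigma_I * np.cross(m, np.cross(m, p))              # spin-transfer torque
    m_new = m + dt * (prec + damp + stt)
    return m_new / np.linalg.norm(m_new)                      # keep |m| = 1
```

In a full micromagnetic simulation the same step is applied to every computational cell, with h_eff assembled from the exchange, magnetostatic, and Zeeman fields and the torque restricted to cells under the contact through f(r/R_c).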
In our simulations we used a set of material parameters that is typical for experiments with current-induced spin wave excitations in in-plane magnetized nano-contacts with a permalloy free layer 8. According to the linear theory 3, the threshold current for the excitation of the propagating mode can be written as

I_th^L = (D k_L² + Γ)/σ,   (5)

where D is the spin wave dispersion coefficient, determined mostly by the exchange interaction, and Γ is the linear spin wave damping rate, proportional to the Gilbert damping constant α_G. In the case of an in-plane magnetized magnetic FL a linear propagating mode with the threshold current (5) can also exist, with the parameters Γ and D having the form given in Ref. 12, and with the linear spin wave spectrum

ω_k = ω_FMR + D k²,   (6)

where ω_FMR = √(ω_H (ω_H + ω_M)) (with ω_H = γH) is the ferromagnetic resonance (FMR) frequency in the "free" layer. For a typical nano-contact radius of the order of several tens of nanometers, the main contribution to the linear threshold current (5) comes from the first term, describing radiation losses. According to the linear theory 3, the propagating spin wave mode excited at the threshold is a cylindrical spin wave with the wave vector k_L = 1.2/R_c and a frequency that is higher than the FMR frequency of the "free" layer. Due to its propagating character, the linear cylindrical spin wave mode excited at the threshold is relatively weakly localized near the excitation region (the current-carrying nano-contact). Thus, in the limit of small damping the squared amplitude (proportional to the mode power) A² = (M_0 − M_z)/2M_0 of the linear "Slonczewski" mode decays with the radial distance r as

A²(r) ∝ 1/r.   (7)

The nonlinear analysis 12 of spin wave excitations in the in-plane magnetized nano-contact geometry revealed a qualitatively different picture. It was shown in Ref. 12 that the competition between the nonlinearity and the exchange-related dispersion leads to the formation of a stationary two-dimensional self-localized non-propagating spin wave "bullet" mode whose frequency is shifted by the nonlinearity below the spectrum of linear spin wave modes, i.e., below the FMR frequency of the FL. This nonlinear mode has an evanescent character with vanishing radiation losses, which leads to a substantial decrease of its threshold current in comparison with the linear propagating "Slonczewski" mode. In contrast with the linear mode, the "bullet" mode is strongly localized and its squared amplitude decays with the distance r much faster than in the case of the linear propagating mode (7):

A²(r) ∝ exp(−2|k_B| r).   (8)

Here k_B is the imaginary wave vector of the bullet mode, related to its frequency ω_B by a relation similar to Eq. (6):

ω_B = ω_FMR − D |k_B|².   (9)

Although the analytical theory 12 predicts this low-threshold self-localized mode, its excitation must be verified in full-scale simulations performed in a finite computational region of lateral size L. To absorb spin waves propagating away from the contact, we used a spatially dependent dissipation: in the inner part of the computational region (r < R*) the dissipation might be nonlinear 16, but is independent of the radial coordinate r, while close to the region boundary (R* < r < L/2) the dissipation increases linearly with the coordinate at the spatial rate c:

α(r, ξ) = α_G (1 + q_1 ξ), r < R*;   α(r, ξ) = α_G (1 + q_1 ξ) + c (r − R*), R* < r < L/2.   (10)

The parameters of the dissipation function (10) were chosen so as to make this boundary region effectively absorbing. It should be noted that reflections at the boundary of the computational region can also take place because of the inhomogeneous profile of the static internal magnetic field near these boundaries. To overcome this problem one usually uses either periodic boundary conditions 13,14,23,24 or open boundary conditions 17. For the geometry of our simulations we used a different approach.
Using the fact that our computational area is much larger than the typical wavelengths of the excited spin wave modes, and expecting that the magnetization distributions calculated in our simulations would be reasonably smooth, we assumed that the magnetization far away from the nano-contact area is aligned along the direction (e_z) of the external bias magnetic field. Thus, we assumed that at the actual boundaries of the computational region the variable magnetization is fixed and parallel to the direction of the bias magnetic field (z-axis):

M|_boundary = M_0 e_z.   (11)

Outside the gridded region, the magnetization is also constrained to lie along the bias field direction. The magnetostatic charges appearing at the ends of the calculation region were consequently discarded 25,26. It has been checked numerically that the above-described pinned boundary conditions, acting on both the exchange and magnetostatic fields, worked sufficiently well, i.e., a reasonably flat profile of the total effective field was obtained in the vicinity of the computational boundaries.

RESULTS AND DISCUSSION

In our numerical simulations we started from the initial equilibrium state M = M_0 e_z and progressively increased the value of the applied bias current (taken with the proper sign, corresponding to the case when electrons flow from the FL to the PL 3). We found that at the value I_th^L = 11 mA (which constitutes our numerical threshold current for the excitation of the linear Slonczewski-like mode 3) the initial uniform magnetization state loses its stability, and the system reaches a limit cycle representing microwave generation. This threshold value is the same for both models of dissipation (with q_1 = 3 and q_1 = 0), and is quite close to the theoretical value I_th^L = 11.5 mA of the threshold current of the linear Slonczewski mode calculated using Eq. (5). As can be seen in Fig. 2, just above this threshold the excited mode has a relatively small precession angle and a frequency above the FMR frequency, as expected for the quasi-linear propagating mode; a similar behavior of a nano-contact has been demonstrated in our previous work 17.

If the bias current is further increased in our simulation, an abrupt downward jump in the frequency of the excited mode is observed in both dissipation models (see Fig. 2(a) and (c)), even though the range of existence of the linear (high-frequency) mode is larger in the case of nonlinear damping 16. At the same time, the precession angle undergoes an analogous upward jump to values larger than 90 degrees (see Fig. 2(b) and (d)). These large values of the precession angle correspond to precession of the magnetization vector around the direction that is antiparallel to the external bias magnetic field. This effectively means a local reversal of the average magnetization vector in the area beneath the contact. The precession angles corresponding to both excited spin wave modes (the low-amplitude linear mode and the high-amplitude nonlinear mode) are shown schematically in the inset of Fig. 2(d).

As mentioned above, the high-amplitude low-frequency mode appearing suddenly at large bias currents (larger than the threshold for the excitation of the linear "Slonczewski" mode) was observed previously in numerical simulations 13,14. However, it was not clear from Refs. 13 and 14 whether this low-frequency mode is identical to the analytically predicted "bullet" mode 12 or represents another, more complicated type of high-amplitude nonlinear spin wave excitation. To check the nature of this high-amplitude low-frequency mode, we performed numerical simulations with decreasing bias current.
Starting from the stationary dynamic magnetization configuration that exists at a sufficiently large bias current, we progressively decreased the current and found that the high-amplitude low-frequency mode persists at currents well below the excitation threshold of the linear mode. Thus, in our numerical simulations we were able to demonstrate that the nonlinear spin wave mode can exist in an in-plane magnetized magnetic nano-contact at such low values of the bias current at which the linear propagating "Slonczewski" mode cannot be supported. This means that, with a very high probability, the nonlinear low-frequency mode observed in our simulations is the self-localized "bullet" mode which was predicted analytically in Ref. 12, but was not found in the previous numerical simulations 13,14. We also believe that this localized "bullet" mode has been observed in the laboratory experiments 7-9.

It also follows from our numerical results that in deterministic (without thermal noise) numerical simulations the spin wave "bullet" mode can only be excited by a hysteretic procedure, in which the bias current is first increased to a substantial supercritical value and then gradually decreased. In agreement with the analytical prediction 12, the frequency of the "bullet" mode lies below the FMR frequency of the "free" layer (see Fig. 2).

An additional property of the excited spin wave modes that can be successfully used for their identification is the spatial localization, which differs significantly for the linear propagating mode (see Eq. (7)) and the nonlinear self-localized "bullet" mode (Eq. (8)). The dependence of the squared amplitude A² on the distance r (taken along the z axis in Fig. 1) is shown in Fig. 3 for both the linear and "bullet" modes. The curves in Fig. 3 were calculated for the bias current I = 12 mA, at which both modes exist simultaneously (see Fig. 2). It is clear from Fig. 3 that the "bullet" mode is exponentially localized, and at a distance r ≈ 4R_c from the center of the nano-contact the bullet amplitude is three orders of magnitude lower than its value at the contact center. In comparison, the amplitude of the linear propagating spin wave mode at the same distance is two orders of magnitude larger than the "bullet" amplitude.

The analysis of the numerical data on the spatial localization of the excited spin wave modes allows us to confirm the analytical conclusion 12 about the evanescent character of the "bullet" mode. In Fig. 4 we show the numerically calculated profiles of the linear (dashed line) and "bullet" (solid line) modes on a logarithmic scale and, for comparison, the analytical profiles (dash-dotted lines) of the linear mode calculated from Eq. (7) (Fig. 4(a)) and of the "bullet" mode calculated from Eq. (8) (Fig. 4(b)). It is clear that the numerical simulations are in reasonably good agreement with the predictions of the analytical model 12, which describes the bullet mode as a strongly localized evanescent mode. The weak oscillations of the amplitude of the linear propagating mode observed in our numerical simulations (Fig. 4) are, most probably, related to the fact that the boundary conditions chosen in our simulations at the edges of the computational region were not ideally absorbing and resulted in a weak reflection of the linear propagating mode.

To further prove the evanescent character of the high-amplitude "bullet" mode, we determined (using Eq. (8)) the modulus of the "bullet" wave number |k_B| from the spatial profiles of the "bullet" mode numerically calculated for different values of the bias current, using a fitting procedure similar to the one shown in Fig. 4(b). Then, we plotted in Fig. 5 the "bullet" mode frequencies, numerically calculated for the two different dissipation models (see Fig. 2), as functions of the extracted values of |k_B|.
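A minimal sketch of such a log-linear fit for extracting |k_B| from a simulated profile, assuming A² is sampled at radii outside the contact region where Eq. (8) holds; the function and argument names are illustrative assumptions.

```python
import numpy as np

def fit_kb(r, A2):
    """Estimate |k_B| from a bullet profile A^2(r) ~ exp(-2|k_B| r)
    by a least-squares fit of log(A^2) against r (Eq. (8))."""
    slope, _ = np.polyfit(r, np.log(A2), 1)   # log A^2 = const - 2|k_B| r
    return -slope / 2.0
```

Repeating the fit for profiles obtained at different bias currents yields one (|k_B|, frequency) pair per current, which is how a plot like Fig. 5 can be assembled.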
It is clear that in the whole range of calculated "bullet" frequencies the spatial localization of the "bullet" mode follows the formula (8), where the modulus of the "bullet" wave number stays very close to its "evanescent" value |k_B| = √((ω_FMR − ω_B)/D) following from Eq. (9). Thus, it has been numerically proven that the high-amplitude spin wave "bullet" in an in-plane magnetized nano-contact, having its frequency nonlinearly shifted below the FMR frequency ω_FMR of the FL, has, indeed, an evanescent character, as predicted in the analytical calculation 12. We also note that the conclusion about the evanescent nature of the "bullet" mode does not depend significantly on the dissipation model used (nonlinear or Gilbert).

CONCLUSION

In conclusion, using full-scale micromagnetic simulations, we have numerically proven that a current-driven in-plane magnetized magnetic nano-contact can support at least two different types of microwave spin wave modes: the quasi-linear propagating "Slonczewski" mode 3 and the subcritically unstable 15 self-localized nonlinear spin wave "bullet" mode 12. We have shown that the "bullet" mode, having very large precession angles exceeding 90 degrees, can exist at bias currents that are substantially lower than the threshold of excitation of the linear "Slonczewski" mode (see Fig. 2) and, therefore, in real finite-temperature laboratory experiments 7-9, where thermal noise is present, the "bullet" mode is the mode that is excited first when the bias current is increased. In our zero-temperature numerical simulations, where the influence of thermal noise is excluded, the "bullet" mode can be excited only if we reduce the bias current starting from large supercritical values that significantly exceed the linear spin wave mode threshold. We have also proven that the high-amplitude "bullet" mode is an evanescent mode, whose spatial localization is directly related to the difference between the "bullet" frequency and the FMR frequency through the imaginary "bullet" wave number calculated from the numerical "bullet" profiles using Eq. (8) (as in Fig. 4(b)).

Figure 5: "Bullet" mode frequency versus the extracted wave number modulus |k_B|; symbols: frequencies calculated numerically (see Fig. 2) for the nonlinear damping model with q_1 = 3 (solid squares) and for the standard Gilbert damping q_1 = 0 (open circles); solid line: the "bullet" frequency analytically calculated from Eq. (9).
Rapid-Onset Obsessive-Compulsive Disorder With Hallucinations in a Post-seizure Four-Year-Old Male

Rapid-onset obsessive-compulsive disorder (OCD) has been classically described in the context of infectious and autoimmune stressors, most famously PANDAS (pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections) and then PANS (pediatric acute-onset neuropsychiatric syndrome). PANS itself, however, specifically excludes neurological and medical disorders, including seizures, from the diagnostic criteria. Changes in affect, such as depression/anxiety and new-onset psychosis, have been previously described in the post-seizure period but often self-resolve. To the best of our knowledge, neither rapid onset nor exacerbation of OCD has been previously reported in a post-seizure patient. We present the case of a four-year five-month-old male with a history of poor weight gain who presented to the emergency department for a seizure in the context of hypoglycemia. During the hospital course, and within one month following discharge, he developed a myriad of new behaviors, rituals, and even visual hallucinations. We propose that the seizure itself is a highly unique and likely neurophysiological stressor. We consider neurologically exacerbated OCD to be an area ripe for further investigation.

Introduction

Obsessive-compulsive disorder (OCD) is defined by the presence of obsessions, compulsions, or both [1]. In terms of prevalence, OCD accounts for 1-2% of the global population in both adult and pediatric populations [1,2]. OCD can be divided into early-onset and adult presentations [2]. Within early-onset disease, which makes up the majority of cases, the mean age of diagnosis was noted at 10.3 years [3]. Furthermore, OCD in younger children is not unheard of, with one study citing a mean age of 4.95 years at the time of diagnosis for younger children [4]. Indeed, prepubertal-onset cases account for up to 30% of all OCD presentations [2]. However, it must be remembered that OC (obsessive-compulsive) symptoms often precede a formal diagnosis by an average of two to five years [4,5]. In terms of heritability, early-onset OCD has a 10-fold association among first-degree relatives [2].

Onset and presentation are commonly gradual and can be attributable to various psychosocial stressors or trauma [2]. When the presentation is rapid, autoantibodies associated with GABHS (group A beta-hemolytic Streptococcus) and other autoimmune workups are routinely performed. In the absence of streptococcal antibodies, PANDAS (pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections) can be ruled out. Furthermore, in the absence of other neurological or medical disorders, a diagnosis of PANS (pediatric acute-onset neuropsychiatric syndrome) is reasonable [6].
In terms of symptoms, pediatric and adult populations can differ, with compulsions being more prominent only in early-onset OCD [2,7]. In general, symptoms can consist of obsessions, including intrusive thoughts, or compulsions, including ritualized behaviors, that have a distressing impact on the patient's daily life. In OCD in very young children, patients are often unable to clearly articulate their thoughts or explain their actions, which makes the diagnosis even more challenging [4]. Thus, parent/family recall becomes crucial. This may contribute to why compulsive rituals are more observable than obsessive thoughts or worries in pediatric patients. Additionally, the ability to articulate false perceptions is also extremely limited in very young patients [2]. Finally, family accommodation of the patient's worries and rituals can have a deleterious effect on the prognosis [7].

In this study, we discuss the case of a four-year five-month-old patient with an initial diagnosis of OCD and visual hallucinations one month following admission for hypoglycemic seizures. Other comorbidities, including tics, anxiety, and depression, became readily apparent over time. We follow him mainly through his first two years. At the time of writing this article, the patient is an adolescent, approximately 14 years old. Even so, his unique historical presentation may contribute to the scant existing literature on OCD in the context of neurological injury.

Case Presentation

At the time of presentation to the emergency department, the patient was a four-year five-month-old male with a chronic history of poor weight gain who had a seizure-like episode in the home lasting 30 minutes, notable for bilateral arm flexion, generalized stiffening, and vertical deviation of the eyes. Following this episode, he became somnolent and lethargic. In the emergency department, the patient was found to have a blood glucose of less than 20, an elevated white count of 18.24, and elevated urine ketones. Table 1 shows the in-patient labs. Throughout the admission, the patient was consistently afebrile, with no infectious symptoms or sick contacts; a 24-hour EEG was also within normal limits. Head CT was likewise unremarkable, as were brain MRIs in subsequent months. A diagnosis of hypoglycemic seizure in the context of inadequate food intake was made. Endocrine and pancreatic lab values were also normal. Following glucose resuscitation, scheduled glucose checks, and IV fluid resuscitation for eight hours, the patient was discharged after two days with a normal white count and glucose monitoring. Medications at discharge included acetaminophen for pain and cyproheptadine (started prior to admission) for appetite stimulation. Of note, the patient had a g-tube (gastrostomy tube) placed by gastroenterology within six months following the seizure for repeated inadequate caloric intake. Even up to the present, the patient continues to depend on the g-tube to supplement per-oral caloric intake. Referrals were made for neurology and psychiatry. Within one month, the patient was diagnosed with epilepsy and inattention spells by pediatric neurology. He was started on antiepileptic therapy, including levetiracetam titrated up to 10 mg/kg. However, the patient stopped levetiracetam after the first few months because of emerging rashes. After the first year, he was started on zonisamide alone, titrated up to 160 mg nightly. Repeat yearly EEGs were consistently negative, and no repeat seizures or spells were reported
after five seizure-like episodes and inattentive spells in the first year. The patient later stopped antiepileptic therapy five years after the initial seizure and has been seizure-free ever since. The patient presented for an initial psychiatric visit five weeks following his hospital admission for the seizure. The mother, who served as the primary historian, had compiled a detailed list of novel and distressing behaviors observed in her son in the months prior to hospitalization, during hospitalization, and in the period up to the first psychiatric visit. Table 2 lists these concerning behaviors. While distressing behaviors had been noted leading up to the seizure, their frequency quickly multiplied following the seizure. Of particular note, within 24 hours of his seizure, the patient started grouping his potato chips into piles that he would eat or discard. He exclaimed that he could see black dots and bugs in his food and would spit out certain foods while mentioning spiders.

Table 2: Chronology of new behaviors in the patient as observed by his mother.

By the time of the initial psychiatric visit, the patient was only consuming ramen noodles. He could not tolerate being interrupted during his speech without needing to start over. Furthermore, he would not permit others to touch him. He had also started petting his dog with 20 hand strokes each time, every 20-30 minutes, and could not bear to be interrupted. New thoughts included "Am I poisonous?" and seeing spiders, insects, and black dots in his food where others could not.

Although the parents declined anti-streptococcal antibody testing, all other autoantibodies, inflammatory markers, and physical exams were negative, making an autoimmune or post-infectious etiology less likely. The family further denied any history consistent with a sore throat or pharyngitis. Genetic studies were also performed, which were noteworthy for the patient being a heterozygous carrier of the L292X variant in the PEX7 gene. This mutation has been associated with pathogenicity when homozygous recessive [8].

With organic, autoimmune, and genetic etiologies having been ruled out, a diagnosis of OCD with poor insight was made. A diagnosis of visual hallucinations was also made based on the patient's history and distress. Neuropsychological evaluation concurred with a diagnosis of OCD and hallucinations while remaining equivocal for autism spectrum disorder.

The family was initially hesitant to begin psychotropic therapy but later agreed to start fluoxetine up to 20 mg daily, with limited benefit. Fluoxetine, in fact, had to be titrated down to 7 mg daily due to activating symptoms, including the patient becoming hyperverbal. Visual hallucinations abated over time but did intermittently recur, with new auditory and tactile hallucinations in subsequent years. The patient described hearing distressing, at times aggressive, voices but without specific commands. Tactile hallucinations included formication. Even so, his presentation was never suggestive of schizophrenia, lacking both affective blunting and characteristic negative symptoms. Risperidone was initially trialed up to 0.25 mg nightly, but the patient and family felt it had no effect, and it was discontinued. Aripiprazole was also trialed up to 3.25 mg daily but was later discontinued due to family observations of its lack of benefit. Quetiapine was later started at 100 mg daily; hallucinations decreased in frequency but did not completely resolve. The patient continues to take quetiapine.
Concurrent with his initial fluoxetine therapy, clonidine for anxiety was added, up to 0.15 mg daily. Outside of pharmacological management, the patient also started intensive CBT with an independent psychologist. As a result, the patient was further characterized as having OCD of incompletion, or "just right" OCD. This type of OCD refers to the overwhelming thoughts or worries that can occur when soothing rituals or compulsions cannot be completed [3]. He has since undergone multiple trials of CBT, all of mostly self-limited benefit. Current symptoms have sporadically been notable for intermittent passive ideation of non-suicidal self-harm. At this time, the patient is stable, with a robust support network in his family. Worrying thoughts about new people or new family members (a newborn niece) and concerns with contamination are intermittent, but family accommodation (the act of reassurance) helps to ease the patient's distress. Soothing rituals (Table 2) are largely still intact.

In retrospect, the mother had noticed subclinical distressing behaviors prior to his seizure (Table 2). These behaviors included ritualized actions, walking only on white tiles, fear of contamination, difficulty self-soothing, and requiring repeated reassurances from parents. However, she concurs that the patient's behaviors considerably worsened in the days and weeks following the seizure.

Birth, developmental, social and family history

The patient was born via induced vaginal delivery at 40 weeks to a mother of advanced maternal age, 42. Gestation was complicated by a knotted umbilical cord, a smaller stomach, and a calcified hepatic lesion.

Development was initially unremarkable, with milestones appropriately achieved through the first two years of life. Starting at two years, the patient became notable for speech delay, vocal tics, oro-motor dysfunction, and poor weight gain. The patient later received a gastric tube for nutritional supplementation, upon which he is still partially dependent, although he no longer meets the criteria for poor weight gain. The patient also continues to receive speech therapy; however, there is now less concern for oro-motor dysfunction, and the tics have largely subsided. As noted above, there was an initial concern for autism spectrum disorder, but this was not definitively supported by neuropsychological testing. Testing did indicate average development of verbal and nonverbal reasoning, language functioning, and attention; intact visual-spatial skills; visual-spatial integration ranging from average to impaired; and some sensory sensitivity.

At the time of the OCD diagnosis, the patient lived at home with two older siblings, both parents, and maternal grandparents as caretakers. He is currently an adolescent, approximately 14 years of age, homeschooled due to social anxiety concerns regarding in-person school, and continues to live at home with both parents and one older sibling. Outside of his family support system, he is limited socially, but he does periodically attend online classes to help with social interaction with similar-aged peers.

Both the patient's father and paternal grandfather have a history of epilepsy. The patient has two older siblings, both of whom have since been independently diagnosed with OCD. There is no other relevant psychiatric history in the family.
Discussion

One limitation in this case is that the patient never completed testing for group A β-hemolytic streptococcal (GABHS)-induced autoantibodies. However, other autoimmune workups were negative. These antibodies are part of the routine workup for rapid-onset OCD and would support a diagnosis of PANDAS. The patient and their family were unable to complete testing due to insurance coverage, personal reluctance, and because multiple specialists in psychiatry, psychology, and neurology from geographically separate health systems were being simultaneously consulted. That being so, this may not be problematic, because there are several reasons why the patient was less suspicious for PANDAS.

Not only did the family deny any historical symptoms consistent with a strep pharyngitis infection in the months preceding his OCD diagnosis, but PANDAS has an episodic course in which obsessive-compulsive (OC) symptoms relapse, remit, and are exacerbated by repeat GABHS infections [9-11]. Our patient (Table 2) had OC symptoms going back months and years before being formally diagnosed with OCD. OC symptoms never remitted following his seizure but rather increasingly worsened. Indeed, retrospective research has demonstrated that patients will experience OC symptoms for an average of five years before meeting OCD criteria [5,9].

With the patient clinically less suspicious for PANDAS, PANS would reasonably be considered next in the workup. Historically, PANDAS was revised to be included under PANS because of the significant number of OCD patients testing negative for PANDAS autoantibodies. However, according to the diagnostic criteria, PANS is specifically excluded when OC symptoms may be reasonably ascribed to another medical or neurological disorder [7,10]. Therefore, we must consider the temporal correlation between the neurological disorder consisting of the hypoglycemic seizure (later diagnosed as epilepsy) and the exacerbation of OC symptoms.

There have been several reports of OCD exacerbated by neurological disorders. Among these is a case in which an 11-year-old male previously diagnosed with OCD experienced a worsening of his symptoms following obstructive hydrocephalus secondary to glioma. Following intervention for the hydrocephalus, OC symptoms only temporarily improved [12]. The literature also includes several cases in which OCD was diagnosed following a traumatic brain injury with corroborating neuroimaging [13]. One case in particular was of a 12-year-old who was diagnosed with OCD and aggression, which resolved after two years [14].

Both cases highlight the neurophysiological underpinnings of OCD, namely the research implicating dysfunction in the cortico-striatal-thalamo-cortical (CSTC) loop [15]. In fact, dysfunction in the CSTC loop has been noted in pediatric populations even during the subclinical phase [5]. This is relevant to our patient because, in retrospect, he had many subclinical OC symptoms prior to the seizure, the diagnosis of epilepsy, and the OCD diagnosis.
There have long been reports correlating epilepsy with OCD, but this has mainly been with temporal lobe epilepsy (TLE). Up to 70% of patients with TLE have been diagnosed with OCD [16,17]. Although our patient was later diagnosed with epilepsy (note that he did have a supporting family history of epilepsy in his father and paternal grandfather) and did begin antiepileptic drug therapy, repeat EEGs were consistently negative, and his epilepsy could not be more precisely characterized. Furthermore, as mentioned above, the patient discontinued antiepileptic drug therapy after five years, as he was seizure-free between the second and fifth years following his diagnosis of epilepsy. However, his OCD persisted.

Returning to our patient's hypoglycemic seizure, behavioral changes following seizures have previously been described. However, these cases have mainly been in the acute post-seizure period, with depression and anxiety symptoms typically lasting two to five days [18]. Our patient was, in fact, later diagnosed with recurrent depression and anxiety, but this was likely independent of his seizures. However, our patient also presented with hallucinations, which were initially only visual but in subsequent years included sporadic auditory and tactile hallucinations. Post-seizure psychosis, similar to post-seizure affective symptoms, tends to be more often circumscribed [19].

Our patient has had intermittent hallucinations since first being diagnosed with OCD and has partially benefited from second-generation antipsychotic therapy, specifically quetiapine, while having no benefit from risperidone or aripiprazole. The fact remains that his OC symptoms predate any psychotic symptoms. Moreover, although his psychotic symptoms do impact his day-to-day life (i.e., causing him distress), he has always understood them not to be real; he has no flat affect, no delusions, and no disorganized thinking. Comorbid OCD and psychosis, according to one meta-analysis, have a prevalence of anywhere from 12-24% [20]. Therefore, OCD and psychosis can and do overlap in many patients.

Conclusions

Rapid-onset and early-onset obsessive-compulsive disorder have classically been described in the context of PANDAS/PANS. PANDAS antibody testing was, unfortunately, never completed in this patient. However, a diagnosis of PANDAS was less likely, as shown above. Although PANS allows for acute stressors other than GABHS, its criteria exclude neurological or medical disorders. Therefore, a diagnosis of rapid-onset OCD with hallucinations in the context of a discrete neurophysiological stressor, i.e., a seizure, represents a unique clinical presentation of pediatric obsessive-compulsive disorder. The uniqueness of this case is highlighted by the fact that, although comorbid OCD and epilepsy cases have been previously described in other literature, to the best of our knowledge this is the first observation wherein the acute stressor might reasonably be a seizure. It is our hope that this case report will contribute to a body of novel presentations of OCD with implications for the neurodevelopment of other pediatric patients with similar psychopathologies. It is also our hope that this case report will contribute to the ever-growing body of literature describing new psychiatric symptomatology in post-seizure patients.
Table 1: Relevant objective values during admission for new-onset seizure. Values in bold are abnormal. WBC: white blood cell; RBC: red blood cell; HGB: hemoglobin; HCT: hematocrit; MCV: mean corpuscular volume; MCH: mean corpuscular hemoglobin; MCHC: mean corpuscular hemoglobin concentration; RDW: red cell distribution width; MPV: mean platelet volume.
Contrasting Responses of Plastid Terminal Oxidase Activity Under Salt Stress in Two C4 Species With Different Salt Tolerance

The present study reveals contrasting responses of photosynthesis to salt stress in two C4 species: the glycophyte Setaria viridis (SV) and the halophyte Spartina alterniflora (SA). Specifically, the effect of short-term salt stress on photosynthetic CO2 uptake and electron transport was investigated in SV and its salt-tolerant close relative SA. In this experiment, plants were initially grown in soil and then exposed to salt stress under hydroponic conditions for two weeks. SV demonstrated a much higher susceptibility to salt stress than SA; while SV was incapable of surviving exposure to about 100 mM NaCl, SA can tolerate salt concentrations up to 550 mM with only a slight effect on photosynthetic CO2 uptake rates and electron transport chain conductance (gETC). Our results show an enhancement of P700 oxidation with increasing O2 concentration for SV following NaCl treatment, whereas for SA there was almost no change regardless of the oxygen concentration used. We also observed an activation of the cyclic NDH-dependent pathway in SV, by about 2.36 times, upon exposure to 50 mM NaCl for 12 days (d); in contrast, its activity in SA dropped by about 25% compared to the control without salt treatment. Using a PTOX inhibitor (n-PG) and an inhibitor of the Qo-binding site of Cyt b6/f (DBMIB), at two O2 levels (2 and 21%), to restrict electron flow towards PSI, we successfully revealed the presence of a possible PTOX activity under salt stress for SA but not for SV. Consistently, by q-PCR and western-blot analysis, we showed an increase in PTOX amount of about 3-4 times for SA under salt stress, but little or no increase for SV. Overall, this study provides strong proof of the existence of PTOX as an alternative electron pathway in a C4 species (SA), which might play more than a photoprotective role under salt stress.

INTRODUCTION

Soil salinity constitutes a major environmental scourge that adversely affects crop productivity and yield quality (Horie and Schroeder, 2004). Approximately one fifth of the world's cultivated area and about half of the world's irrigated lands are affected by the salinity constraint (Sairam and Tyagi, 2004). The mechanisms by which plants respond to and/or tolerate salt stress are under intensive study (Zhu, 2001; Munns and Tester, 2008). To survive and overcome salt stress, plants mostly respond and acclimate through complex mechanisms, including morphological, physiological, and biochemical strategies (Taji et al., 2004; Acosta-Motos et al., 2015), which serve to modulate ion homeostasis, the biosynthesis of compatible compounds, the sequestration of toxic ions, and reactive oxygen species (ROS) scavenging systems (Stepien and Klobus, 2005; Flowers and Colmer, 2008; Stepien and Johnson, 2009). In this study, we report that a protein involved in alternative electron transfer, PTOX, might be related to salt tolerance in C4 plants.

PTOX is a plastid-localized protein involved in the plastoquinol-oxygen oxido-reductase electron flow process. PTOX was discovered through the so-called immutans mutant of Arabidopsis, which shows a variegated leaf phenotype (Redei, 1963; Wetzel et al., 1994; Carol et al., 1999; Wu et al., 1999; Shahbazi et al., 2007). In chloroplasts, PTOX is situated at the stroma lamellae (SL), directly exposed to the stroma compartment (Lennon et al., 2003), and it is essential for plastid development and carotenoid biosynthesis in plants (Carol et al., 1999; Aluru et al., 2001). PTOX is also involved in photosynthetic electron transport (Okegawa et al., 2010; Trouillard et al., 2012), chlororespiration (Cournac et al., 2000), poising the chloroplast redox potential in the dark (Nawrocki et al., 2015), and in responses to abiotic stress (McDonald et al., 2011; Sun and Wen, 2011). It has been reported that plants grown under moderate light and non-stressful conditions exhibit low PTOX levels (only 1 PTOX per 100 PSII photosystems; Lennon et al., 2003); in contrast, high PTOX levels have been characterized in plants exposed to various abiotic stresses such as heat, high light and drought (Quiles, 2006), high soil salinity (Stepien and Johnson, 2009), cold treatment and high intensities of visible light (Ivanov et al., 2012), and UV light (Laureau et al., 2013).

In this study, in an effort to understand the potential mechanism by which the halophyte SA tolerates high salt stress, we show that, compared to the glycophyte species SV, SA under high salt stress (500 mM) showed increased expression of PTOX, which might have played a critical role in the maintenance of photosynthetic physiology and hence the high photosynthetic efficiency of this species under the salt constraint.

Plant Material

Seeds of SA were collected from San-San Lake in southeast Shanghai at the end of November in 2015 and 2016. The cleaned spikelets were stored in wet tissue (cloth) in sealed plastic at 4°C in the refrigerator. Mature SA seeds require two to three months of after-ripening wet storage in the cold (stratification) to break dormancy (Garbisch and McIninch, 1992), and they remain viable for about one year. Seeds of SA were rinsed several times with tap water, then transferred to Petri dishes and covered with water until germination. After germination, they were transferred into potted soil. When the young seedlings of SA were about 2 cm in length and had started greening, they were removed from the glass Petri dishes. Trays containing SA seedlings were kept indoors at a temperature between 25 and 27°C, under fluorescent light at a photosynthetic photon flux density (PPFD) of 80-120 µmol m−2 s−1 with a 16/8 h light/dark photoperiod. Two-month-old healthy plants with large expanded leaves were transferred to a hydroponic system for salt treatment. For SV, dry seeds were directly sown into wet potted soil, which was kept moist by spraying water daily until the seeds germinated. SV grew under the same photoperiod and temperature conditions as SA. Nutrients were added routinely to ensure healthy growing plants before transferring them to the hydroponic medium for salt stress treatment.

Salt Stress (NaCl) Treatment

The salt (NaCl) treatment was applied to the hydroponic solution. During the plant transfer, roots were washed adequately with tap water and then rinsed with deionized water. Four-week-old SV and 10-week-old SA plants were treated with 0, 50, 100, 250, 400, and 550 mM NaCl for up to 15 d. The composition of the hydroponic medium was as described by Hoagland and Arnon (1950).

Determination of Monovalent Cation (Na+ and K+) Content

Leaves were harvested and washed twice with deionized water. The leaf samples were then dried in the oven, first at around 105°C for 2 h and subsequently at 65-70°C for 72 h, and weighed to record their dry weights. Lyophilized leaves were ground to a powder using a pestle and mortar for mineral nutrient evaluation. The ground samples were thereafter dissolved in 10 ml of HNO3 (0.1 N) for 60 min at 95°C to extract the major cations.
PTOX is also involved in photosynthetic electron transport (Okegawa et al., 2010; Trouillard et al., 2012), chlororespiration (Cournac et al., 2000), poising of the chloroplast redox potential in the dark (Nawrocki et al., 2015), and responses to abiotic stress (McDonald et al., 2011; Sun and Wen, 2011). It has been reported that plants grown under moderate light and non-stressful conditions exhibit low PTOX levels (only 1 PTOX per 100 PSII photosystems; Lennon et al., 2003); in contrast, high PTOX levels have been characterized in plants exposed to various abiotic stresses such as heat, high light and drought (Quiles, 2006), high soil salinity (Stepien and Johnson, 2009), cold treatment and high intensities of visible light (Ivanov et al., 2012), and UV light (Laureau et al., 2013).

In this study, in an effort to understand the potential mechanism by which the halophyte SA tolerates high salt stress, we show that, compared to the glycophyte species SV, SA under high salt stress (500 mM) showed increased expression of PTOX, which might have played a critical role in the maintenance of photosynthetic physiology, and hence the high photosynthetic efficiency, of this species under salt constraint.

Plant Material

Seeds of SA were collected from San-San Lake in South East Shanghai city at the end of November in 2015 and 2016. The cleaned spikelets were stored in wet tissue (cloth) in sealed plastic at 4°C in the refrigerator. Mature SA seeds require two to three months of after-ripening in cold, wet storage (stratification) to break dormancy (Garbisch and McIninch, 1992), and they remain viable for about one year. Seeds of SA were rinsed several times with tap water, transferred to Petri dishes, and covered with water until they germinated. When the young seedlings of SA were about 2 cm in length and started greening, they were removed from the glass Petri dishes and transferred into potted soil. Trays containing SA seedlings were kept indoors at a temperature between 25 and 27°C, under fluorescent light at a photosynthetic photon flux density (PPFD) of 80-120 µmol m−2 s−1 with a photoperiod of 16/8 h light/dark. Two-month-old healthy plants with large expanded leaves were transferred to a hydroponic system for salt treatment. For SV, dry seeds were directly sown into wet potted soil, which was kept moist by spraying water daily until the seeds germinated. SV grew under the same photoperiod and temperature conditions as SA. Nutrients were added routinely to ensure healthy growth before the plants were transferred to hydroponic medium for salt stress treatment.

Salt Stress (NaCl) Treatment

Salt (NaCl) treatment was applied via the hydroponic solution. During the transfer, roots were washed thoroughly with tap water and then rinsed with deionized water. Four-week-old SV and 10-week-old SA plants were treated with 0, 50, 100, 250, 400, and 550 mM NaCl for up to 15 d. The composition of the hydroponic medium was as described by Hoagland and Arnon (1950).

Determination of Monovalent Cation (Na+ and K+) Content

Leaves were harvested and washed twice with deionized water. The leaf samples were then dried in the oven, first at around 105°C for 2 h and subsequently at 65-70°C for 72 h, and weighed to record their dry weights. Dried leaves were ground to powder using a pestle and mortar for mineral nutrient evaluation. The powdered samples were then digested in 10 ml of HNO3 (0.1 N) for 60 min at 95°C to extract the major cations.
The obtained solutions were subsequently filtered through Whatman filter paper, diluted with deionized water, and processed for Na+ and K+ determination. The cation (Na+ and K+) levels were determined with an atomic absorption spectrophotometer (PerkinElmer, PE AAS 900 F).

Chlorophyll (Chl) Content Measurements

The total Chl content was determined as previously described by Porra et al. (1989). Leaf segments (0.1 g) were first washed with distilled water and then kept in 1 ml acetone (80%) at 4°C for 12 d.

Assessment of Photosystem II (PSII) Parameters

PSII efficiency was assessed using the Chl a fluorescence induction (FI) technique. We used the multifunctional plant efficiency analyzer (M-PEA; Hansatech, King's Lynn, Norfolk, UK) for the evaluation of PSII parameters, as reported in detail by Essemine et al. (2017). Plants were dark-adapted for at least 1 h at 25°C before measurements. Then, healthy and fully expanded leaves were exposed to saturating orange-red (625 nm) actinic light (AL, 5,000 µmol m−2 s−1) provided by the LED for 1 s. The ratio of the variable fluorescence level Fv (Fm − F0) to the maximum fluorescence level Fm (Fv/Fm) was used to estimate the maximum PSII efficiency. Fm (P-level) is the maximum yield of Chl a fluorescence, and F0 (O-level) represents its minimum (the Chl a fluorescence intensity of a dark-adapted leaf under a measuring light of negligible actinic intensity). The Fv/F0 parameter reflects the functional reaction centers of PSII. All the parameters listed in Table S1 were calculated from the original OJIP curves based on the so-called JIP-test (Strasser et al., 2004). The OJIP curve represents the fast transient of Chl a fluorescence induction of a dark-adapted leaf following excitation with 1 s of saturating orange-red light (625 nm; 5,000 µmol m−2 s−1; Essemine et al., 2017).

Setting of PAM Together With Infrared Gas Analyzer to Control CO2 and O2 Supply

A special chamber was custom-designed and developed to enable precise control of the CO2 and O2 environments. This chamber was tightly mounted on the detector-emitter of the Dual-PAM-100 fluorimeter and connected through a hole to the Li-COR 6400 portable infrared gas analyzer (IRGA), which controlled the CO2 supply (390 or 2,000 µl L−1), and via another window to an oxygen source equipped with an oxymeter to adjust the flow of oxygen from the source to the chamber. Oxygen at different concentrations (e.g. 2 and 21%, as used in this study) was supplied by a gas distribution station (GDS). The setting for experiments using different levels of CO2 and O2 was as depicted in Figure 1 and the video in the Supplemental Data.

Evaluation of P700 Redox State in Leaves of SV and SA

In order to probe the photosynthetic electron flow through PSI during steady-state photosynthesis in vivo, we proceeded to determine the P700 redox state in the light by measuring the oxidation of P700 within the leaf as the absorbance change at 830 minus 875 nm, to avoid any contribution from plastocyanin (Pc) oxidation. P700 was oxidized to P700+ at different intensities of AL ranging from 0 to 1,804 µmol m−2 s−1 (ΔA), then re-reduced in the dark, and finally oxidized to a maximal level of P700+ under far-red illumination favoring PSI photochemistry (ΔAmax; Klughammer and Schreiber, 1994; Zygadlo et al., 2005; Klughammer and Schreiber, 2008).
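For reference, the fluorescence- and absorbance-derived quantities used in these measurements can be restated compactly; the expressions below simply consolidate the definitions already given in the text:

```latex
\[
F_v = F_m - F_0, \qquad
\frac{F_v}{F_m} = \frac{F_m - F_0}{F_m} \quad \text{(maximum PSII efficiency)}
\]
\[
\mathrm{P_{700}\ oxidation\ ratio} = \frac{\Delta A}{\Delta A_{\max}}
\]
```

where ΔA is the 830 minus 875 nm absorbance change at a given AL intensity and ΔAmax is the maximal change under far-red illumination.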
The light dependence of the P700 oxidation ratio (ΔA/ΔAmax; Klughammer and Schreiber, 1994; Zygadlo et al., 2005; DalCorso et al., 2008; Klughammer and Schreiber, 2008) was examined in SV and SA plants. The far-red light (FR) intensity used was 102 µmol m−2 s−1, and a 100-ms saturating pulse (SP) at a PPFD of 8,000 µmol m−2 s−1 was applied under background AL and FR.

Conductance of the Electron Transport Chain (gETC)

To estimate the conductance of the electron transfer chain (gETC), we used an experimental setting similar to that of the previous section, monitoring the redox state of PSI with slight modifications. The saturating pulse was given in darkness, simultaneously with the termination of the AL. Notably, a 100-ms SP at a PPFD of 8,000 µmol m−2 s−1 was applied, and the decay in absorbance was followed upon the transition from the 100-ms SP to darkness (Klughammer and Schreiber, 1994; Klughammer and Schreiber, 2008). This intensity was found to be saturating regardless of the condition used. Accordingly, the application of a flash induced a rapid rise in the absorbance (A) signal, with no decrease during the flash regime (100 ms; not shown). The absorbance decay curve under such conditions (control or salt) closely approximated first-order kinetics and fitted well with a mono-exponential decay, yielding a rate constant that is taken as the measure of the electron transfer chain conductance (Golding and Johnson, 2003; Stepien and Johnson, 2009).

In Situ Histochemical Localization of Reactive Oxygen Species (ROS)

To detect reactive oxygen species (ROS), histochemical staining of the samples with nitroblue tetrazolium (NBT) was performed following Zhang et al. (2010) with minor modifications. Detached leaves were first vacuum-infiltrated in the appropriate solution (with or without NBT). For superoxide free radical (O2•−) characterization, leaves were soaked in 6 mM NBT solution containing 50 mM sodium phosphate (pH 7.5) for 12 h in darkness. To detect hydrogen peroxide (H2O2), detached leaves were immersed in 5 mM 3,3'-diaminobenzidine (DAB) solution containing 10 mM MES (pH 3.8) for 12 h in darkness. After that, the adaxial surface of the leaf was subjected to moderately high light (500 µmol m−2 s−1) for 1 h. Dark-blue spots reveal the interaction between NBT and the generated O2•−, whereas brown spots on the leaf reflect the interaction between DAB and the formed hydrogen peroxide (H2O2) in the presence of peroxidase. Both reactions (DAB and NBT) were stopped by soaking the leaves in lactic acid:glycerol:ethanol (1:1:4 by vol). Chl was removed from the leaves before imaging by boiling the leaves in their respective solutions (NBT or DAB) for 2 min; the solutions were then discarded and the leaves were re-boiled in water two to three times (1 min each). The leaves were then incubated in alcohol (99.5%), as described by Zulfugarov et al. (2014), until complete removal of Chl. Afterwards, leaves devoid of Chl were preserved in 50% ethanol until photographed.

RNA Extraction, Purification and qRT-PCR Analysis

Eight candidate housekeeping genes (Kumar et al., 2013) were screened to select an appropriate reference gene for SA and SV. These eight genes have been reported for Setaria italica (foxtail millet) and represent different functional classes and gene families (Kumar et al., 2013).
These genes are: 18S rRNA (18S), elongation factor-1α (EF-1α), actin2 (Act2), alpha tubulin (Tub α), beta tubulin (Tub β), translation factor (TLF), RNA polymerase II (RNA POL II), and adenine phosphoribosyl transferase (APRT; Kumar et al., 2013). In a recent study on SA, tubulin was used as a housekeeping gene (Karan and Subudhi, 2012a). Based on the similarity index between the sequences of each housekeeping gene in SV and SA, we obtained the highest similarity index for alpha tubulin (Tub α), at around 85%. In this study, we therefore selected Tub α as the reference gene for qRT-PCR.

Total RNA was extracted from mature leaves using the PureLink RNA Mini Kit (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. The concentration of each RNA sample was measured using a NanoDrop 2000 spectrophotometer (NanoDrop Technologies). Leaves were sampled from both species and total RNA was extracted using the TRIzol Plus RNA Purification kit (Invitrogen Life Technologies, http://www.invitrogen.com). One microgram (1 µg) of total RNA was used to synthesize first-strand cDNA with the SuperScript VILO cDNA Synthesis Kit (Invitrogen Life Technologies, http://www.invitrogen.com). Quantitative real-time PCR (qRT-PCR) was performed using SYBR Green PCR Master Mix (Applied Biosystems, USA) with the first-strand cDNA as template on a Real-Time PCR System (ABI StepOnePlus, Applied Biosystems Inc., USA), with the following cycling parameters: 95°C for 10 s, 55°C for 20 s, and 72°C for 20 s. Primers for qRT-PCR were designed using Primer-BLAST on the National Center for Biotechnology Information website (NCBI; https://www.ncbi.nlm.nih.gov) and Oligo 7 software. The primers for PTOX and Tub α used for qPCR analysis are listed in Table S2. Relative expression of a gene against the housekeeping gene Tub α was calculated as 2^(−ΔΔCT) (ΔCT = CT, gene of interest − CT, Tub α), as described by Livak and Schmittgen (2001). Six complete biological and technical replicates were used for the analysis.

Detection of the PTOX Contribution to Electron Transport in SA

To determine the contribution of PTOX to the overall photosystem II (PSII) electron transfer, leaves of either untreated (control) or salt-treated SV and SA were vacuum-infiltrated with water, with 5 mM n-propyl gallate (n-PG, 3,4,5-trihydroxybenzoic acid n-propyl ester; Sigma), or with 50 µM DBMIB (2,5-dibromo-3-methyl-6-isopropyl-p-benzoquinone; Sigma). Stock solutions of n-PG were freshly prepared in ethanol and those of DBMIB in methanol.

Western-Blot Analysis

For immunoblot (western-blot) analysis, thylakoid membranes were isolated according to the protocol of Cerovic and Plesnicar (1984). Thylakoid proteins were extracted from thylakoid membranes using 125 mM Tris-HCl buffer, pH 6.8, 20% glycerol, 4% (w/v) SDS, 5% (v/v) β-mercaptoethanol, 0.1% (w/v) bromophenol blue. Protein concentration was determined with the Bio-Rad protein assay kit (Bio-Rad Laboratories). The protein in the electrophoresis gel was transferred to a nitrocellulose membrane as documented in Mudd et al. (2008). The specific antibodies raised against PTOX of both species (SA and SV) were designed by the company according to the sequence homology between the species, which was 63% (see the BLAST sequence alignment results in the Supplemental Data). For protein expression analysis, leaves from control and salt-treated plants were collected 12 d after initiating the salt treatment. We used SDS-PAGE to separate 29 µg of protein from the thylakoid membrane samples.
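Returning briefly to the qRT-PCR quantification described above, the following is a minimal sketch of the 2^(−ΔΔCT) computation of Livak and Schmittgen (2001); all CT values below are hypothetical placeholders, not measurements from this study:

```python
# Minimal sketch of the 2^(-ddCT) relative expression calculation
# (Livak and Schmittgen, 2001). CT values are hypothetical placeholders.

def relative_expression(ct_gene_treated, ct_ref_treated,
                        ct_gene_control, ct_ref_control):
    """Fold change of the target gene, normalized to the reference
    gene (here Tub alpha) and expressed relative to the control."""
    d_ct_treated = ct_gene_treated - ct_ref_treated   # dCT, salt-treated
    d_ct_control = ct_gene_control - ct_ref_control   # dCT, control
    dd_ct = d_ct_treated - d_ct_control               # ddCT
    return 2 ** (-dd_ct)

# Example with made-up CT values for PTOX vs. Tub alpha
fold = relative_expression(ct_gene_treated=24.1, ct_ref_treated=20.3,
                           ct_gene_control=26.0, ct_ref_control=20.2)
print(f"PTOX fold change vs. control: {fold:.2f}")  # -> 4.00
```

With these invented CT values the fold change comes out at 4.0, of the same order as the 3-4 fold PTOX induction reported for SA in this study.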
The protein on the electrophoresis gel was then transferred to a nitrocellulose membrane and used for immunodetection. Western-blot band size was quantified with TanonImage technology software. For gene expression analysis, leaves from control plants and from plants salt-treated for 12 d were collected and immediately frozen in liquid nitrogen. The samples were then used for RNA isolation and purification using the Invitrogen PureLink RNA Mini Kit.

Statistical Analysis

Statistical analysis was performed through one-way ANOVA using Tukey's test. The difference between control and salt-treated samples of SA and SV was analyzed. Differences in the physiological parameters, including ion (Na+ and K+) and total chlorophyll contents, in the biophysical parameters, encompassing ΦPSII, NPQ, ETRI, ETRII, gETC and the P700 oxidation ratio, and eventually in the PTOX expression level and its protein band size were all tested. Differences were labeled as strongly significant (***, p ≤0.001), very significant (**, p ≤0.01), significant (*, p ≤0.05), or not significant (ns, p ≥0.05).

Chlorophyll a Fluorescence Induction and JIP-Test

We used the JIP-test (Strasser et al., 2004) to unravel the impact of salt stress on most PSII parameters in both SV and SA. The results depicted in Figure 2 were derived from the fast phase of the Chl a fluorescence induction curve, i.e. the OJIP curve. The salt stress treatment experiments show that, for SV, even a moderate NaCl concentration (50 mM) increased the F0 level (data not shown) and the J-step of the OJIP induction curves (data not shown). However, SA showed no or only a slight difference in the OJIP induction curves at moderate (250 mM) and high (550 mM) NaCl concentrations compared to the control without NaCl. Therefore, the function of PSII was not affected in SA, whereas it was strongly inhibited and/or damaged in SV even at a relatively low NaCl concentration (50 mM). To study the effects of NaCl on PSII in these two species in detail, we evaluated the PSII parameters using the JIP-test (Figure 2, Table S1; Strasser et al., 2004). The JIP-test was evaluated for SV (A, B and C) and SA (D, E and F) exposed for 5 (A, D), 10 (B, E) and 15 d (C, F) to different NaCl concentrations. Herein, we observe that after 5 d of exposure to 100 mM NaCl, the PSII parameters showed an apparent change in SV (Figure 2A, green spider). The salt effect became more pronounced after 10 d of exposure to salt at either 50 or 100 mM (Figure 2B, red and green spiders). However, for SA, the deviation in the PSII parameters calculated with the JIP-test was much smaller and observable only at a high NaCl concentration (550 mM) after 10 and 15 d of exposure (Figures 2E, F, black spider). Therefore, PSII of SV was more sensitive to salt stress than that of SA.

Sodium and Potassium Sequestration in the Leaf Following NaCl Treatments

SV and SA plants were grown for 4 or 8 weeks, respectively, before their exposure to different salt concentrations. Exposure of SV to NaCl levels higher than 100 mM resulted in plant death before the end of the experiment, so higher NaCl concentrations were not used or considered for SV in this study. Subjecting SA to NaCl concentrations up to 550 mM did not result in considerable lethality (mortality) of the plants. The concentration of Na+ in salt-untreated (control) leaf tissue was substantially higher in SA than in SV (Figures 3A, B). This difference vanished after salt stress treatment, due to a quick accumulation of Na+ in the leaf of SV.
The accumulation of Na+ in SA leaves was much lower at exogenous NaCl levels between 0 and 100 mM. Na+ accumulation increased enormously in SV leaves throughout the experiment (Figure 3A), while the leaf Na+ level in SA increased less and more gradually, even at higher external NaCl concentrations (Figure 3B). The Na+ level estimated after 12 d of NaCl treatment in SA exposed to 400 and 550 mM NaCl was nearly similar to that of SV subjected to only 50 mM NaCl (Figures 3A, B). SV at 100 mM NaCl accumulated more Na+ in the leaf than SA did over the whole salt concentration range (100-550 mM). This is owing to the secretion of NaCl onto the leaf surface in SA. This secretion mechanism represents a second barrier of SA's defense against high NaCl concentrations, besides the sequestration of salt in the vacuole. An earlier study performed on the halophyte Aeluropus littoralis, a species that can tolerate up to 800 mM NaCl, showed that an increase in leaf epidermis layer thickness was mainly due to cell swelling following salt sequestration in the leaf (Barhoumi et al., 2007).

SA and SV also differed in their leaf K+ concentrations. The concentration of K+ in leaf tissue of plants watered with salt-free medium was lower by about 30% in SA leaves (Figures 3C, D). Following salt treatment, the K+ content of the leaf in SV decreased considerably, especially after 4 and 8 d of treatment at 100 mM NaCl. In SA, however, there was an initial increase in K+ with increasing NaCl concentration; after 12 d of treatment, the K+ concentration in the leaf gradually declined with increasing NaCl concentration (Figure 3D). This is also reflected by the K+/Na+ ratio (Figures 3E, F), which showed a dramatic decline in SV but only a slight one in SA, where it mostly remained stable over the time course, especially at NaCl concentrations higher than 250 mM (Figures 3E, F).

Chl Content in the Leaf and Non-Photochemical Quenching Decay Components: NPQf and NPQs

The total Chl content in untreated SA leaves was around 4.5 times higher than that in untreated SV leaves (Figures 4A, B). Subjecting SV plants to NaCl caused a gradual decrease in Chl content (Figure 4A); the total Chl concentration after 12 d of salt treatment with 50 and 100 mM NaCl declined by 42 and 58%, respectively. In contrast, treatment of SA with 50 and 100 mM NaCl did not result in a dramatic decline in total Chl content, except at NaCl concentrations higher than 250 mM; e.g., at 550 mM NaCl there was a ~20% decrease in total Chl content (Figure 4B).

In SV, NaCl treatment resulted in an increase of NPQ, while NPQ remained similar or only slightly increased in SA at all NaCl concentrations (Figures 4C, D). The NPQ increase in SV might result from a modulation of either protective high-energy-state quenching or photoinhibition, which differ in their relaxation kinetics after AL illumination (Maxwell and Johnson, 2000). Measurements of NPQ were taken after 16 d of exposure to 100 and 400 mM NaCl for SV and SA, respectively (Figures 4C, D). The NPQ recovery in the dark was measured to quantify the magnitude of each phase of NPQ dark decay. In SV, quantification of the relaxing phases of NPQ (fast and slow) showed that most of the quenching relaxed rapidly in the dark (NPQf), indicating that it was high-energy-state quenching (Figures 4C, D).
However, a part of the quenching was more sustained (NPQs), revealing the occurrence of photoinhibition in SV plants under high salt stress. Both components of NPQ increased following salt stress treatment (Figures 4C, D). The increase in total NPQ in SA was comparatively smaller and globally attributed to an increase in NPQf (a photoprotective process).

Electron Flow to Molecular Oxygen Under Salt in Both C4 Species

The electrons generated by H2O splitting can be used by alternative sinks, in addition to the common sink supporting NADPH generation. The most commonly known sinks are the reactions involving oxygen, including photorespiration and the Mehler reaction (Chen et al., 2004; Shirao et al., 2013). To assess the relevance of these pathways, electron flow was measured as a function of the oxygen level. SV and SA were subjected to AL at a range of irradiance levels (0 to 1,806 µmol m−2 s−1) in the presence of saturating CO2 (2,000 µl L−1) and either 21 or 2% O2. Regardless of the degree of NaCl treatment, the ETRII in both species reached its maximum at around 400 µmol m−2 s−1 (Figures 5A, B). Exposure of control leaves during measurement to a low oxygen level (2%) caused a decrease in ETRII at saturating irradiances. Measurements of the redox state of P700, the primary electron donor of PSI, revealed only slight effects of the different oxygen levels (2 and 21%) in salt-untreated (control) plants. With increasing irradiance, P700 gradually became more oxidized in both SV and SA (Figures 5C, D). Although the proportion of oxidized P700 (P700+) was insensitive to the oxygen concentration in salt-untreated plants, the conductance of the ETC (gETC; Figures 5E, F) declined, and ETRI followed the same trend, decreasing by the same amount (Figures 5G, H).

[Figure legend: One-month-old SV and two-month-old SA were exposed to salt for up to 2 weeks. Plants were subjected to 0, 50, and 100 mM NaCl for SV and 0, 100, 250, 400, and 550 mM NaCl for SA. Data represent the means of four to five replicates ± SE. Different letters above the bars indicate significant differences at P ≤0.05 among the treatments for the same species.]

SV exposed to 100 mM NaCl showed lower ETRII at high CO2 than the control plants (Figure 5A). As in the control, electron transport through PSII decreased slightly under low O2 (Figure 5A). Conversely, exposure to NaCl caused an increase in ETRII in SA compared to untreated plants; this increase in electron transfer through PSII was completely revoked under low O2 concentration (Figure 5B). The fraction of oxidized P700 (P700+) in salt-treated SV was significantly higher under low O2 (p ≤0.05; Figure 5C). This was accompanied by a negligible decline in gETC, resulting in a slight increase in the electron flow through PSI, potentially via cyclic flow through FQR (Figure 5G). Conversely, in SA, the PSI electron transport rate (ETRI), in the absence or presence of salt (250 mM), decreased at low O2 (Figure 5H). This was caused by an enhancement in P700 oxidation and a fall in gETC (Figures 5D-F).

Activity of NAD(P)H Dehydrogenase (NDH)-Dependent Cyclic Electron Flow in Both Species Under Salt Stress

The activity of the NDH-dependent cyclic pathway around PSI was assessed as the post-illumination rise (PIR) of F0 Chl fluorescence, monitored after switching off the AL (Essemine et al., 2016). The magnitudes of the PIR for SV and SA under both control and salt stress conditions are displayed in Figure 6.
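To illustrate how a PIR magnitude might be extracted from such a recording, the sketch below quantifies the transient F0 rise after the AL is switched off; the synthetic trace, array names, and analysis windows are assumptions for illustration only, not the instrument's actual output format:

```python
import numpy as np

# Sketch: quantify the post-illumination rise (PIR) of F0 from a
# fluorescence trace recorded after the actinic light (AL) goes off at
# t = 0. `t` (s) and `f` (arbitrary units) are hypothetical arrays; real
# Dual-PAM/M-PEA exports will differ in format and timing.
def pir_amplitude(t, f, baseline_window=(0.0, 2.0), rise_window=(2.0, 60.0)):
    """PIR amplitude = peak fluorescence in the post-illumination window
    minus the F0 baseline reached just after the AL is switched off."""
    base = (t >= baseline_window[0]) & (t < baseline_window[1])
    rise = (t >= rise_window[0]) & (t < rise_window[1])
    f0 = f[base].min()      # F0 level reached right after AL off
    peak = f[rise].max()    # maximum of the slow transient rise
    return peak - f0

# Synthetic example: fast decay to F0 plus a slow transient rise at ~15 s
t = np.linspace(0, 60, 601)
f = 0.2 + 0.5 * np.exp(-t / 0.5) + 0.05 * np.exp(-((t - 15) / 6) ** 2)
print(f"PIR amplitude (a.u.): {pir_amplitude(t, f):.3f}")
```

Comparing such amplitudes between control and salt-treated leaves is, in essence, how the relative NDH activities reported below are obtained.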
Under normal conditions, we observed more than two times higher NDH activity in SA than in SV (Figure 6). The results also show an increase of about 2.36 times in NDH activity in leaves of SV plants subjected to 50 mM NaCl for 12 d (Figure 6). However, SA plants exposed for the same period (12 d) to 250 mM NaCl exhibited a significant decrease of about 25% (p <0.05) in NDH activity compared with the control (Figure 6). This is very likely attributable to the activation of PTOX in SA under salt stress. Hence, PIR activity declines in SA in favor of PTOX activity. This reflects an efficient competition between these two pathways (PTOX and NDH) for the oxidation and re-reduction of the PQ pool, respectively. Eventually, the oxidation of the PQ pool by PTOX overcomes its re-reduction by the NDH cyclic pathway (Figures 6 and 8). Thus, PTOX may represent an alternative pathway to the cyclic and linear routes, protecting SA against intersystem over-reduction and minimizing or avoiding damage to both photosystems (PSI and PSII). Thereby, it may function as a safety valve for the photosynthetic transport chain. In this regard, our findings are in line with those of Ahmad et al. (2012), who showed a decrease in the PIR in tobacco overexpressing PTOX from Chlamydomonas reinhardtii (Cr-PTOX) compared to the wild type (WT) and demonstrated that the decrease in PIR is attributable to the enhanced PQ pool oxidation by the high level of PTOX protein in the over-expressing line.

[FIGURE 4 | The effect of salt on the leaf total chlorophyll content in SV (A) and SA (B). One-month-old SV and two-month-old SA were exposed to different NaCl levels as described in Figure 2. Leaves were collected 12 days after initiating salt treatment to determine chlorophyll concentration. Each chlorophyll data bar represents the mean of at least 10 replicates ± SE. Fast- and slow-relaxing components of NPQ (NPQf and NPQs) in leaves of SV (C) and SA (D) exposed to 0 and 100 (C) or 0 and 400 mM NaCl (D). Measurements were carried out 16 days after initiating salt treatment at 25°C in the presence of 390 µl L−1 CO2. Leaves were illuminated with 800 µmol m−2 s−1 AL. Each data bar represents the mean of at least six replicates ± SE. Different letters above the bars indicate significant differences at P ≤0.05 among the treatments for the same species.]

Plastid Terminal Oxidase (PTOX) as a Plastohydroquinone:Oxygen Oxidoreductase

The improved efficiency and/or additional turnover of PSII under salt treatment in SA in the presence of 21% oxygen, compared to either the control at 21% O2 or 250 mM NaCl at 2% O2, is very likely attributable to electron transfer directly to molecular oxygen (O2). Since the experiments were conducted under a saturating CO2 concentration of 2,000 µl L−1, we exclude a contribution of photorespiration to this effect. Usually, the photo-reduction of O2 may happen at the PSI acceptor side via the Mehler reaction; nevertheless, the lack of sensitivity of the PSI parameters to oxygen suggests that this is unlikely to be the reason, or at least not the only one. So here we tested the possibility that the putative quinone-oxygen oxidoreductase, the plastid terminal oxidase (PTOX) or IMMUTANS protein (Shahbazi et al., 2007; Heyno et al., 2009), might have played a role as well in SA. In SV, the PSII quantum yield (ΦPSII) was insensitive to n-PG, regardless of whether the plants had been exposed to salt stress or not (Figure 7A).
The same was observed for salt-untreated (control) SA. However, in SA subjected to 250 mM NaCl, ΦPSII was clearly sensitive to n-PG (Figure 7B). ΦPSII measured 12 d after initiating the NaCl treatment was reduced by about 32 and 45% in leaves infiltrated with 5 mM n-PG, in the presence of 21 and 2% O2, respectively (Figure 7B), thereby falling to the control level or even slightly lower (Figure 7B). Interestingly, at low O2 in salt-stressed plants, we observed a decrease in ΦPSII. This strongly suggests that molecular oxygen (O2) may act as a terminal electron acceptor through the oxidation of plastoquinol (PQH2). The effect of n-PG suggests a potential activity of PTOX, which is situated on the stromal side of the membrane, in SA, though this does not exclude a potential contribution of the Mehler reaction to electron transport.

To measure the electron flow to oxygen while excluding any contribution of the Mehler reaction, leaves were infiltrated with the cytochrome b6/f (Cytb6/f) inhibitor dibromothymoquinone, i.e. 2,5-dibromo-3-methyl-6-isopropylbenzoquinone (DBMIB), a specific inhibitor of the Qo-binding site (Malkin, 1981; Malkin, 1982; Rich et al., 1991; Schoepp et al., 1999). In SV, this almost completely abolished ΦPSII, and thereby the electron flow beyond Cytb6/f, regardless of the NaCl treatment (Figure 7C). In control SA leaves, DBMIB also strongly inhibited PSII, though a residual ΦPSII, and thus electron transfer, remained. Regardless of the O2 concentration used (2 or 21%), DBMIB dramatically decreased ΦPSII (Figure 7D). In salt-stressed SA leaves, DBMIB only partially inhibited ΦPSII; however, decreasing the O2 concentration (to 2%) resulted in a greater ΦPSII inhibition (Figure 7D). The extent of the ΦPSII that was insensitive to DBMIB but sensitive to the oxygen decrease (2%) was similar to that of the n-PG-sensitive ΦPSII in the same leaves (Figures 7B, D). The dramatic decline in ΦPSII in the presence of DBMIB at low O2 in salt-treated SA leaves (Figure 7D) might be explained as a double restriction of the electron flux beyond PSII: the first limitation is the blockage (or shortage) of electron flow towards PSI caused by DBMIB, and the second is tightly linked to the drop in the O2 level (2%).

Western-blot analyses of thylakoid membrane extracts of SA and SV using antibodies raised against Zea mays PTOX revealed the presence of a 35-kDa band in both species (Figure 8). For untreated plants, SA showed a higher protein abundance than SV. In the latter (SV), salt treatment resulted in a slight increase in PTOX abundance (Figure 8A and inset), though the expression level of the PTOX transcript decreased insignificantly (Figure 8B). In SA, treatment with 250 mM NaCl elevated PTOX abundance 3-4 times compared to the control (Figure 8A and inset). Similarly, the transcript abundance of PTOX was elevated under NaCl treatment by the same amount (Figures 8A, B).

Reactive Oxygen Species Generation Under Salt in C4 Species

Histochemical staining with nitroblue tetrazolium (NBT) showed the appearance of dark-blue spots on the edge of SA leaves exposed to 250 mM NaCl for 12 d (Figure 9B). This dark-blue staining reveals the interaction between NBT and the generated superoxide free radical (O2•−) following exposure to moderately high light (500 µmol m−2 s−1).
However, these dark-blue spots were spread all over the surface of SV leaves subjected to 50 mM salt for 12 d (Figure 9D), suggesting that salt treatment dramatically increased the production of O2•− in SV. Similarly, histochemical staining using diaminobenzidine (DAB) showed no visible brown spots on either control or salt-treated SA leaves (Figures 10A, B). In contrast, SV treated with only 50 mM NaCl for 12 d showed a widespread presence of brown spots on the leaf surface (Figures 10C, D).

Salt Stress Induced Up-Regulation of Electron Flow Through PTOX Activity in SA

There is a huge difference between SV and SA in their physiological response to salt. Here, we found that in SA, over time, either a stable or an increasing K+/Na+ ratio was observed (Figures 3E, F). This maintenance or increase of the K+/Na+ ratio is a major trait associated with salt tolerance (Shabala and Pottosin, 2014). Na+ tolerance is associated with the SOS1 antiporter localized to the root epidermis (Shi et al., 2002), and halophytes mostly exhibit higher SOS1 abundance (Oh et al., 2009). Therefore, exclusion of Na+ should also be a mechanism involved in salt tolerance in SA.

[FIGURE 6 | NDH-dependent CEF pathway assessed as the post-illumination F0 rise in plants grown on either salt-free medium (ctrl) or subjected to 50 or 250 mM NaCl for SV and SA, respectively, for 12 days. The post-illumination F0 rise was recorded in the dark after switching off 5 min of illumination with AL (325 µmol m−2 s−1). Each data bar represents the mean of at least 10 replicates taken on different leaves ± SE. The stars above the bars display the significance levels between control and salt treatment at P ≤0.05 (*) or P ≤0.001 (***).]

In addition to this known mechanism of salt tolerance, our data suggest that under salt, SA gained increased salt tolerance through increased electron flow through PTOX. Firstly, under normal growth conditions, i.e. when there was no salt stress, the NDH-dependent CEF activity was more than two times higher in SA than in SV (Figure 6). However, after NaCl treatment, the NDH activity was enhanced 2.36 times in SV but decreased by about 25% in SA, compared to their respective controls (Figure 6). After exposure to salt stress, the J-step of the OJIP curves was significantly enhanced in SV compared to SA (data not shown). An increase in the J-step is an indicator of a more reduced PQ pool and a more exacerbated accumulation of QA− (the reduced primary electron acceptor of PSII) under salt stress (Haldimann and Strasser, 1999). This leads to a strong PSII acceptor-side limitation and a high PQ pool over-reduction in SV compared to SA. Furthermore, we found that under salt stress the level of NPQ was similar between SA and SV, i.e. the incident light energy was not dissipated as heat to a greater extent in SA than in SV. There must therefore be a major electron sink accepting electrons in SA under salt stress.

Second, experiments using inhibitors suggest that PTOX is a major sink of electrons in SA under salt. To test this, we examined PSII photoinhibition following salt stress in the presence of n-PG (a PTOX inhibitor) or DBMIB (an inhibitor of the Qo-binding site of Cytb6f) at atmospheric CO2 (390 µl L−1) and in the presence of 2 or 21% O2 (Figure 7).
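Before turning to those results, note how such inhibitor data can be translated into a rough PTOX share of PSII electron flow; the estimate below assumes that the n-PG-sensitive fraction of ΦPSII reflects PTOX activity alone, and uses the ~32% drop already reported above for salt-treated SA at 21% O2 (Figure 7B):

```latex
\[
f_{\mathrm{PTOX}} \;\approx\;
\frac{\Phi_{\mathrm{PSII}}^{-\,\text{n-PG}} - \Phi_{\mathrm{PSII}}^{+\,\text{n-PG}}}
     {\Phi_{\mathrm{PSII}}^{-\,\text{n-PG}}}
\;\approx\; 0.32
\]
```

On this reading, roughly one third of PSII electron flow in salt-treated SA would pass through PTOX at 21% O2; this should be taken as an upper-bound estimate insofar as n-PG may have off-target effects.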
Our results revealed that restricting electron flow towards PTOX (with n-PG) had little effect on ΦPSII in SV (Figure 7A) but significantly decreased ΦPSII in SA under both conditions (control and salt), especially in the presence of low O2 (Figure 7B). This indicates that a proportion of the electrons from PSII is sensitive to both n-PG and O2 (13%, Figure 7B), providing evidence for an efficiently operating PTOX in SA, but not in SV, under salt stress. In fact, even under non-salt conditions, a proportion of the electrons from PSII flows into PTOX-driven reactions. Thirdly, using DBMIB, we observed that in SA, as compared to SV, PSII was less photoinhibited under high NaCl treatment, especially in the presence of 21% O2 (Figures 7C, D). This is possibly because under severe salt stress, electrons can be used to reduce O2 in SA through PTOX without passing through Cytb6f. Consistent with this possibility, we observed an enhancement of the primary PSII electron transfer rate under salt in the presence of 21% O2 and saturating CO2, 2,000 µl L−1 (Figure 5B). Under 2,000 µl L−1 CO2, the electron flux towards the photorespiration pathway is minimized, leading to a reduction and/or restriction of photorespiration as a major sink for reducing power. This provides further evidence that PTOX may function as a major electron sink in SA under salt stress. Furthermore, in line with this notion, the enhancement of the electron transfer rate was not observed under low O2 (2%) under salt stress (Figures 5A, B). Gene expression and western-blot analyses also showed that under salt stress, PTOX RNA and protein abundance increased in SA, but not in SV (Figure 8). Therefore, upon salt stress, SA shows a drastically increased electron flow into PTOX. Increases in PTOX levels have also been reported earlier in plants under stress, e.g. upon exposure of tomato to high light (Shahbazi et al., 2007) or of Thellungiella to salt stress (Stepien and Johnson, 2009).

PTOX as a Safety Valve in SA Under Salt Stress to Protect Photosystems From Over-Reduction

PTOX is an interfacial membrane protein (Berthold and Stenmark, 2003) attached to the stromal side of the thylakoid membrane (Lennon et al., 2003). PTOX is involved in carotenoid biosynthesis (Carol and Kuntz, 2001) and has been implicated in the oxidation of the plastoquinol pool, PQH2 (Joët et al., 2002). Similar to the increase of PTOX under salt conditions in SA, PTOX levels have been found to increase in higher plants subjected to abiotic stresses such as high temperature, high light and drought (Quiles, 2006; Díaz et al., 2007; Ibañez et al., 2010), low temperature and high light (Ivanov et al., 2012), salinity (Stepien and Johnson, 2009), and in alpine plants at low temperature and high UV exposure (Streb et al., 2005; Laureau et al., 2013), implying a generic role of PTOX under stress.

Data from this study provide new evidence for the protective role of PTOX under salt stress. The F0 of Chl a fluorescence (OJIP) was found to increase in SV but was unchanged or changed little in SA (data not shown). After exposure to salt stress, the J-step of the OJIP curves was significantly enhanced in SV compared to SA (data not shown). The increased J level is an indicator of an exacerbated PQ pool reduction and a pronounced accumulation of QA− (the reduced primary electron acceptor of PSII) under salt (Haldimann and Strasser, 1999).
This leads to a strong PSII acceptor-side limitation and a high PQ pool over-reduction in SV compared to SA. In this regard, similar results have been reported by Shahbazi et al. (2007), who showed a comparable effect of high-light treatment on the tomato ghost (gh) mutant, defective in PTOX, compared to the control cultivar San Marzano (SM) (Shahbazi et al., 2007). The data from this study, together with these earlier studies, suggest that PTOX can oxidize an over-reduced PQ pool and hence play a protective role.

As a reflection of this protective role, SA plants grew normally under moderate salt stress and even survived NaCl concentrations up to 550 mM without significant mortality. The Chl content of the leaves did not drop significantly, particularly at NaCl concentrations below 250 mM (Figure 4B), and both stomatal conductance (gs) and assimilation at atmospheric CO2 concentration (A) were maintained (Essemine et al., unpublished data). By comparison, SV was unable to survive NaCl levels higher than 100 mM for two weeks; even at NaCl concentrations of 100 mM or lower, the Chl content of SV dropped drastically, by about 42 and 58% after 12 d of exposure to 50 and 100 mM NaCl, respectively (Figure 4A), concurrent with a dramatic decline in both gs and A (Essemine et al., unpublished data).

The protective role is clearly shown by the changes in linear electron transfer rates under NaCl treatment. In SV under salt stress, we observed a decrease in the linear electron flow (LEF), as shown by the decrease in gETC at saturating CO2 (2,000 µl L−1) at either 21 or 2% O2 (Figure 5). Such a decrease is common among C3 species under stress, e.g. drought (Golding and Johnson, 2003), salt (Stepien and Johnson, 2009), and anaerobiosis (Haldimann and Strasser, 1999). In SA, in contrast, there was no apparent decrease in LEF under salt (Figure 5B), suggesting that photosystem II in SA was well protected under stress. Consistent with these differential capacities to protect the photosystems under salt, we observed a much higher accumulation of ROS in SV compared to SA, even though the salt concentration used to treat SV was 50 mM while that used to treat SA was 250 mM (Figures 9 and 10). The reactive oxygen species involved include superoxide (O2•−) and hydrogen peroxide, H2O2 (Fridovich, 1997). The severe damage of salt to the photosystems in SV is also reflected by a swelling of the chloroplast structure in SV after exposure to salt (Essemine et al., unpublished data).

[FIGURE 8 | Effect of salt treatment on PTOX protein expression (A) and PTOX gene expression level assessed by q-PCR analysis (B) in leaves of SV and SA subjected to 0 and 50 (for SV) or 0 and 250 mM NaCl (for SA). For more details on the sampling method, sample preparation, gel running and band analysis and quantification, please refer to Materials and Methods (Western-Blot Analysis section). The synthesized cDNA was used for the q-PCR analysis of PTOX. Data points represent the mean of around five replicates for western SDS-PAGE ± SE and six replicates for qRT-PCR. The inset of panel A shows typical bands from an original blot, loaded on an equal protein basis. Different letters above the bars indicate significant differences at P ≤0.05 among the treatments for the same species.]
Altogether, these data suggest that the higher PTOX activity under salt (Figure 8) may have contributed to the protection of chloroplast structure and function, as shown by the maintenance of photosynthetic linear electron transfer and chlorophyll content and the lower accumulation of ROS in leaves.

It is worth mentioning here that the protective function of PTOX has been studied earlier through transgenic approaches. However, the data obtained so far from transgenic experiments are still not conclusive. When PTOX from C. reinhardtii was transferred into tobacco (Ahmad et al., 2012), it resulted in growth retardation; furthermore, instead of inducing increased resistance to high light, it led to increased vulnerability to high light. The ortholog of PTOX in Arabidopsis has also been studied using both mutant and over-expression lines, which, however, did not provide proof of a role of PTOX in the modulation of PQ redox status (Rosso et al., 2006). In tobacco, however, overexpression of PTOX led to increased photoprotection under low light but increased vulnerability under high light, for which the authors suggested that PTOX can only provide sufficient photoprotection when the reactive oxygen species generated by PTOX can be effectively detoxified (Heyno et al., 2009). By contrast, increased susceptibility of plant growth to high light was not seen in tobacco over-expressing PTOX from Arabidopsis (Joët et al., 2002). In the high-mountain species Ranunculus glacialis, the rate of linear electron transfer far exceeds the rate of electron consumption for carbon assimilation under different temperature and light levels, especially under 21% O2 and high internal CO2 concentration (Ci), suggesting a major role of PTOX in photoprotection (Streb et al., 2005).

PTOX and NDH-Mediated Cyclic Electron Transfer

Under stress conditions, the cyclic electron transfer rate usually increases, as demonstrated for spinach (Breyton et al., 2006) and Arabidopsis (Shikanai, 2007; Strand et al., 2015). In contrast, here we show that in SA, which has a great capacity for channeling electrons to PTOX, the cyclic electron transfer rate decreased (Figure 6). This is clearly shown by the post-illumination F0 rise (PIR) signal, which was used here to assay NDH activity (Burrows et al., 1998). Using this method, we found a stark contrast between the two species (SA and SV) in the responses of NDH-dependent CEF and PTOX to salt stress. In SV, the strong stimulation of NDH-dependent CEF following salt stress (about 2.36-fold) was concurrent with a nearly stable PTOX level (Figures 6 and 8). However, in SA, we observed a decline in the NDH-dependent CEF (Figure 6) together with an increase in PTOX expression, which was up-regulated by up to four times compared to the control, as assessed by both RNA and protein abundance (Figure 8). Our finding of a negative relationship between PTOX and NDH-CEF is in line with a number of earlier reports. In this regard, Ahmad et al. (2012) reported a dramatic decline in NDH activity in tobacco expressing PTOX from a green alga (Cr-PTOX1). Furthermore, PTOX may efficiently compete with CEF for plastoquinol (PQH2) in CRTI-expressing (carotene desaturase) lines (Galzerano et al., 2014). Joët et al. (2002) also showed a decrease in the NDH-dependent CEF flux in tobacco transgenic lines expressing PTOX from Arabidopsis.
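The PTOX-NDH competition for the PQ pool discussed above can be made concrete with a deliberately simplified mass-balance sketch; the first-order rate constants below are illustrative assumptions, not quantities measured in this study:

```latex
\[
\frac{d[\mathrm{PQH_2}]}{dt}
= v_{\mathrm{PSII}} + k_{\mathrm{NDH}}\,[\mathrm{PQ}]
- k_{\mathrm{PTOX}}\,[\mathrm{PQH_2}] - v_{\mathrm{Cyt}\,b_6 f}
\]
```

At steady state, raising k_PTOX (more PTOX protein, as in salt-treated SA) lowers the reduced fraction [PQH2]/([PQH2] + [PQ]) of the pool, while raising k_NDH (as in salt-treated SV) increases it; in this toy picture, the decline of the PIR in SA simply reflects PQ pool oxidation by PTOX outcompeting its re-reduction by NDH.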
The activity of cyclic electron transfer is regulated by an array of mechanisms, including redox status (Breyton et al., 2006; Takahashi et al., 2013), H2O2 (Strand et al., 2015), metabolite levels (Livingston et al., 2010), Ca2+ signaling (Lascano et al., 2003; Terashima et al., 2012), and even phosphorylation of NDH components (Lascano et al., 2003). It is likely that NaCl induced differential changes in NDH and PTOX, though the mechanism remains largely unknown. It is possible that internal signals from the chloroplast, such as the redox status of the chloroplast electron transfer chain, particular compounds of photosynthetic carbon metabolism, or even H2O2, differentially regulate PTOX and NDH-CET. How PTOX and NDH-CET are differentially regulated under NaCl needs further elucidation.

It is worth mentioning here that SA has been used as a model halophyte grass species to study the adaptation of plants to salt stress and to mine salt stress-tolerance genes (Subudhi and Baisakh, 2011). Several earlier reports have demonstrated the utility of genes from this halophyte (SA) in the amelioration of crop salt resistance (Baisakh et al., 2012; Karan and Subudhi, 2012a; Karan and Subudhi, 2012b). Therefore, elucidating how PTOX and NDH-CET respond under NaCl to protect the photosystems and leaf functioning can help develop new strategies to protect photosystems under salt stress.

DATA AVAILABILITY STATEMENT

All datasets presented in this study are included in the article/Supplementary Material.
Surgical Management of Failed Roux-en-Y Gastric Bypass (RYGB) Reversal: A Case Study

With the growing obesity epidemic, surgeons are performing more bariatric surgeries, including Roux-en-Y gastric bypass (RYGB) reversals. Although studies have identified indications for RYGB reversal, little information is available about the long-term effects of the procedure. We wish to highlight a case with long-term complications of RYGB reversal and their subsequent management. We present a patient with multiple abdominal surgeries, including an RYGB reversal complicated by a stenosed gastrogastric anastomosis, which caused several gastrojejunostomy ulcerations as well as malnutrition secondary to intractable nausea and vomiting. A 51-year-old female with a complex surgical history, including a simple RYGB reversal in 2019, presented to the ER with complaints of abdominal pain, uncontrolled diarrhea, and an inability to tolerate food for six months. Workup revealed multiple marginal ulcers at the remnant jejunum attached to the gastric pouch and a stenosed gastrogastrostomy placed high along the cardia of the remnant stomach and pouch. This stenosis resulted in a nonfunctional, nondependent reversal that drained only when filled. Ultimately, a large gastrotomy was performed, and an endoscope was used to identify a small pinhole connection between the patient's pouch and the remnant stomach along the superomedial portion of the remnant stomach's fundus. The anvil of a 60 mm GIA black load stapler was guided through, and the stapler was fired twice across the stricture. After the stricture was completely crossed, the endoscope was passed through, confirming that it was widely patent. The postoperative course was uneventful, and the patient was discharged with total parenteral nutrition (TPN) on postoperative day 15; TPN was discontinued at her follow-up visit. She reported that she had been gaining weight and eating well. Long-term complications following RYGB reversal are not well discussed in the literature. This case offers insight into such complications, discusses the surgical technique used to fix them, and calls for further research on the topic to better inform surgeons and patients alike.

Introduction

Obesity is an epidemic that continues to affect medical treatment in all areas of healthcare. Bariatric surgery, in particular Roux-en-Y gastric bypass (RYGB), is one area that has experienced dramatic growth in the last decade, driven not only by the surging number of surgical candidates but also by the procedure's safety, low morbidity, and low mortality. While RYGB is a fairly safe and effective method for treating obesity, there have been instances in which patients decide to undergo reoperation for various reasons, including weight recurrence or other complications [1]. It is estimated that up to 25% of bariatric patients will require a second operation at some point, including reversal operations [2]. RYGB reversal is a second operation that bariatric patients may receive to resolve complications of RYGB, including gastrojejunostomy anastomotic stricture, marginal ulcerations, gastrogastric fistula, nutritional deficiencies, and weight gain [3]. The procedure often involves reverting a patient's anatomy to normal by removing the gastrojejunostomy and adding a new anastomosis between the gastric pouch and the remnant stomach.
Despite the growing number of RYGB reversals being performed in response to persistent complications, there is limited information available and thus no standardized guideline for determining whether RYGB reversal is appropriate for a given patient. The decision to reverse rather than revise or convert is highly individualized, and it is unclear whether this is patient-driven, surgeon-driven, or a combination of both [1]. Furthermore, most studies regarding RYGB reversal highlight the rationale behind the operation rather than its outcomes, making the decision on re-operative surgery even more difficult. This is especially true of patients who have had multiple abdominal surgeries, or at the very least an initial RYGB and subsequent revisional surgery, because they are more likely to have abdominal adhesions as well as intra- and/or post-operative complications. For these patients, careful consideration prior to surgery is necessary. Consequently, there is a need for a better understanding of complications after reversing an RYGB, and of the possible interventions for them, to better guide patients in deciding whether a reversal (rather than a revision, for instance) is appropriate for them.

The following case study illustrates complications after RYGB reversal leading to additional surgical revision. Although this case does not encompass all the problems of a failed RYGB reversal, it does reveal the occurrence of one such problem, in addition to showing the process of patient care and problem resolution as approached by bariatric surgeons. This article will be presented as a poster presentation at the 2023 SAGES Annual Meeting on March 29, 2023.

Case Presentation

A 51-year-old female presented to the emergency department with complaints of abdominal pain, chronic uncontrolled diarrhea, and an inability to tolerate food for the past six months. During those six months, she became TPN-dependent and was taking pantoprazole. Upon physical examination, her abdomen was soft, with tenderness in the epigastric area. The patient's surgical history was complex, consisting of an RYGB in 2003 (BMI: 48.9 kg/m2) and a partial colectomy with an ostomy for colon cancer, which was complicated by multiple re-do procedures. In 2019, she had several revisional operations that included an RYGB reversal to normal anatomy without modification to sleeve gastrectomy (BMI: 18.5 kg/m2), an ileostomy reversal, and an ileorectal anastomotic stricture status post dilation and stent placement. Her most recent operation prior to presentation was an abdominoplasty in March 2022.

Initial work-up (BMI: 29.0 kg/m2) included a CT scan of the abdomen with contrast, which revealed extensive postsurgical changes in the abdomen with diffuse small bowel dilation and narrowing at the ileorectal anastomosis. An initial esophagogastroduodenoscopy (EGD) performed by gastroenterology demonstrated a large anastomotic ulceration at the remnant jejunum of the gastric pouch and a patent, but difficult to access, reversal into the remnant stomach. In addition, a sigmoidoscopy was performed to rule out a distal stricture given the patient's complaint of diarrhea. No obstruction was evident, and it was suspected that the patient's chronic diarrhea was functional in nature given her reconstructions. Due to the patient's ongoing symptoms, a repeat endoscopy was performed by surgery, which showed ulcers at the patient's remnant jejunum at the gastric pouch (Figure 1).
Additionally, the gastrogastrostomy was not patent, with only a small cuff of small bowel anastomosed to the pouch. Attempts to access the remnant stomach through the gastrogastrostomy were unsuccessful. Post-procedure, an upper gastrointestinal series was performed to evaluate the functionality of the patient's gastrogastrostomy (Figure 2). This revealed that the previous gastric pouch needed to fill completely before overflowing into the patient's remnant stomach through her reversal. The patient's gastrogastrostomy had been placed high along the cardia of the remnant stomach and pouch and was therefore neither dependent nor particularly functional (Figure 3A). Based on these findings, the patient's gastrojejunostomy ulceration was thought to be due to stasis in the gastric pouch, whereas her nausea and vomiting were due to an inability of her gastric pouch to drain easily into the remnant stomach. Given these findings and her complex history, we discussed operative intervention at length with the patient, including an attempt to open the stenosis of her gastrogastrostomy RYGB reversal to improve drainage of her pouch.

At surgery, the left upper quadrant of the abdomen was initially explored laparoscopically. However, ongoing attempts to mobilize the abdomen with the laparoscope were only partially successful, and the decision was made to convert to an open procedure. The abdomen was entered through the patient's previous laparotomy incision. Due to severe adhesive disease, dissection was difficult. Once it appeared that the remnant stomach had been identified, a laparoscopic port was placed into the remnant stomach, and it was insufflated. Examination revealed the patient's anatomy, allowing for visualization of the closed gastrojejunostomy, the pylorus, and the duodenum. However, her gastric pouch, remnant jejunum, and gastrogastrostomy remained unidentified. Because the connection between the remnant stomach and gastric pouch could not be identified, a large gastrotomy was performed on the posterior aspect of the patient's remnant stomach. An endoscope was used to discover a small pinhole connection between the patient's pouch and remnant stomach along the superomedial portion of the remnant stomach's fundus, at approximately the 11 o'clock position. With this direct visualization, a silk suture was tied around the endoscope and pulled through to the area of concern. The remnant jejunum of the gastric pouch continued to be difficult to visualize and was not resected at this time. The anvil of a 60-mm GIA black load stapler was passed through the stricture, following the endoscope as a guide. Two fires of the 60-mm GIA black load stapler were used to divide the stricture (Figure 3B). After the stricture was completely crossed, the endoscope was passed through this area to confirm that it was widely patent. The gastrotomy was then closed with multiple fires of 60-mm GIA purple loads. A leak test was performed, and no leak was identified. Once the stomach was closed, the endoscope was passed again through the area of stricture, verifying that it was much wider than before. The patient was then closed and taken to recovery. Blood loss for this operation was approximately 100 cc, and the operative time was four hours.

The patient's hospital course was complicated by persistent abdominal pain and episodes of emesis and nausea, attributable either to residual ulcers at the remnant jejunum of the gastric pouch or to her recent ileus complication.
She was discharged home on post-operative day 15 (June 4, 2022) with TPN. Although the patient returned to the hospital with complaints of abdominal pain and wound dehiscence, these resolved soon after incision and drainage of the wound, and she was discharged again. At her follow-up visit on June 27, 2022, the patient's TPN was discontinued. She has been gaining weight and eating well since.

Discussion

With the increase in RYGB procedures each year due to the ongoing obesity epidemic, it is inevitable that some patients will experience failures or complications of the operation and, eventually, require a second operation. Often, re-operative surgery consists of a reversal, revision, or conversion of the problematic bariatric operation. The decision to reverse, rather than revise or convert, has become steadily more common and is motivated by the concern that a revision would also prove problematic or that complications would persist [1,4-6]. However, there are no standardized guidelines for deciding on an RYGB reversal. Decisions are largely based on the indications for reversal, which include marginal ulcers, malnutrition, anatomic complications, and functional complications such as chronic unexplained pain [1-3,6-14], as well as on the expected outcomes of the procedure. It is evident that once a reversal to normal anatomy is performed, symptoms do resolve for some, but not all, patients [2,15]. However, whether this absence of complications is maintained must also be taken into consideration. Currently, there is a paucity of information on late complications (>30 days post-operatively) following RYGB reversal. The lack of outcome data on RYGB reversal to normal anatomy appears to be mostly attributable to the limited follow-up of the procedure, ranging from none to 41 months [15]. At present, most studies have discussed early complications (<30 days post-operatively), which include gastrogastric anastomotic leak, sepsis, and bleeding. The largest single-institution study to date reported a significant early complication rate of 29% [16]. Very few studies noted late complications, and those that did mostly commented on the resolution of the pre-operative complications that initially led to the reversal [15,17]. Vilallonga et al. reported the incidence of other long-term complications, citing new-onset gastro-esophageal reflux in three out of 20 (15%) patients and chronic diarrhea in one out of 20 (5%), closely resembling the post-RYGB reversal complications of the patient in this case study. Furthermore, the challenging nature of the procedure itself, with a limited field of vision from multiple intestinal adhesions and the abnormal anatomy that many of these patients present with, can give rise to other perioperative complications, which may subsequently evolve into long-term complications. Overall, it is apparent that reversal carries an elevated risk of post-operative complications [2,18]. The case report presented here therefore provides a longer-than-usual glimpse into complications following RYGB reversal, as the patient presented to the hospital two and a half years after her reversal. The patient in this case study had presented with the pre-RYGB reversal complication of recurrent anastomotic ulceration.
At present, the pathogenesis of these recurrent marginal ulcers remains unclear, and hypotheses range from mechanical causes (pouch size, surgical technique) to ischemia, the presence of foreign bodies at the anastomosis (such as staples or sutures), and Helicobacter pylori infection post-surgery [3,7,10]. Because the patient's recalcitrant ulcers were not resolved by medical therapy, she had clear indications for an RYGB reversal, as delineated in multiple studies [1,2,12,16]. Unfortunately, her pre-operative symptoms did not improve after the reversal, and she developed the additional post-reversal complications of a stenosed gastrogastric anastomosis and malnutrition secondary to intractable nausea and vomiting, which required her to be placed on TPN. These findings point to a clear pathophysiological mechanism for the patient's presentation. With the reversal being stenotic, the contents of the gastric pouch were unable to drain easily into the rest of the stomach, resulting in severe gastroesophageal reflux. The stasis of fluid in the gastric pouch also gave rise to the gastrojejunostomy (marginal) ulcer. It is interesting to note that few cases, if any, cite stenosis of the gastrogastric anastomosis as a post-operative complication requiring surgery. Although it is possible that the stenosis was caused by external pressure (e.g., lymph node enlargement) related to her history of colon cancer, this is unlikely, since she presented to the clinic 15-20 years after her initial diagnosis and there was no evidence of cancer recurrence on her numerous CT scans. Furthermore, this complication does not seem outside the realm of possibility, since there have been cases of severe gastrojejunal anastomotic strictures due to marginal ulcers after RYGB that, after multiple rounds of conservative treatment, eventually required an RYGB reversal [6]. Likewise, in this case, a conservative or less invasive procedure could have been attempted prior to converting to an open procedure: additional endoscopic interventions could have been performed to open the gastrogastric anastomosis, or possibly an endoscopic ultrasound-directed transgastric ERCP (EDGE) procedure. Then again, an open procedure may have been the safest option given the complexity of revising an RYGB reversal with such extensive anatomic changes, including the inaccessibility of the patient's remnant stomach. Furthermore, the multiple adhesions from prior surgical interventions would have made a less invasive procedure difficult.

Conclusions

It is remarkable that there is so little information on the complications of post-RYGB reversal, especially when there appears to be a variety of complications associated with post-RYGB anatomy that eventually result in the decision to reverse. This course entails multiple surgeries, each carrying the risk of several post-operative complications that may place a heavy burden on the patient. Many studies have shown that reversal to normal anatomy after RYGB is safe and effective. However, longer follow-up is needed to confirm such findings with a higher level of evidence and to address the possibility of other late post-RYGB reversal complications, so that physicians are better prepared to care for patients like the one in this case.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following:
Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work.
Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.