Improved $(g-2)_\mu$ Measurements and Supersymmetry: Implications for $e^+e^-$ colliders
The persistent 3-4$\sigma$ discrepancy between the experimental result from BNL for the anomalous magnetic moment of the muon and its Standard Model (SM) prediction was confirmed recently by the "MUON G-2" result from Fermilab. The combination of the two measurements yields a deviation of 4.2$\sigma$ from the SM value. Here, we review an analysis of the parameter space of the electroweak (EW) sector of the Minimal Supersymmetric Standard Model (MSSM), which can provide a suitable explanation of the anomaly while being in full agreement with other latest experimental data, such as the direct searches for EW particles at the LHC and the dark matter (DM) relic density and direct detection constraints. Taking the lightest supersymmetric particle (LSP) (the lightest neutralino in our case) to be the DM candidate, we discuss the case of a mixed bino/wino LSP, which can account for the full DM relic density of the universe, and that of wino and higgsino DM, where we take the relic density only as an upper bound. We observe that an upper limit of ~600 GeV can be obtained for the LSP and next-to-LSP (NLSP) masses, establishing clear search targets for the future HL-LHC EW searches, but in particular for future high-energy $e^+e^-$ colliders, such as the ILC or CLIC.
Introduction
While the LHC is yet to find any sign of new physics, indirect searches such as low-energy experiments and astrophysical measurements are providing crucial hints of beyond the SM (BSM) scenarios. Specifically, the sustained deviation of 3-4$\sigma$ in the anomalous magnetic moment of the muon, $(g-2)_\mu$, between the theoretical prediction of the SM [1] (see Ref. [2] for a full list of references) and the experimental observation by the Brookhaven National Laboratory (BNL) [3] has long stood in support of new physics scenarios. The BNL measurement, when compared with the SM prediction, leads to a deviation of $\Delta a_\mu^{\rm old} = (28.1 \pm 7.6) \times 10^{-10}$, corresponding to a $\sim 3.7\,\sigma$ discrepancy. The new result from the Fermilab "MUON G-2" collaboration [4] was announced recently [5] and agrees within $0.8\,\sigma$ with the older BNL result on $(g-2)_\mu$. The combination of the two results yields a new deviation from the SM prediction of $\Delta a_\mu^{\rm new} = (25.1 \pm 5.9) \times 10^{-10}$, corresponding to a discrepancy of $4.2\,\sigma$. This result particularly upholds one of the leading candidates for a BSM theory, the Minimal Supersymmetric Standard Model (MSSM) [6][7][8][9]. The deviation in Eq. (5) can easily be explained within the MSSM with electroweak (EW) supersymmetric (SUSY) particle masses around a few hundred GeV. However, in view of the stringent constraints on EW SUSY particles from the direct searches at the LHC, it is essential to perform a comprehensive analysis including both the $(g-2)_\mu$ result and the latest constraints on neutralinos, charginos and sleptons from the LHC. On the other hand, the R-parity conserving MSSM naturally predicts a suitable Dark Matter (DM) candidate in terms of the lightest neutralino as the lightest SUSY particle (LSP) [10,11]. Therefore, it seems only natural to also include the experimental constraint on the DM relic density and the limits from direct detection in the analysis to further constrain the parameter space of interest.
In these proceedings, following Refs. [2,12,13], we review the mass ranges of EW superpartners that can successfully explain the $(g-2)_\mu$ anomaly while being in agreement with all the relevant experimental data. In Refs. [12,13] we employed the constraint coming from Eq. (3) to constrain the EW MSSM parameter space. An updated analysis using the latest experimental world average can be found in Ref. [2], where we show that the new $(g-2)_\mu$ result (Eq. (5)) confirms the predictions made in Ref. [12] concerning the upper limits on the masses of the (next-to-) lightest SUSY particles. We include the latest LHC searches via recasting in CheckMATE [14][15][16]. For the DM, we consider two main scenarios depending on whether the LSP can account for the full relic density or is only a subdominant component of the total DM content of the universe. For the former scenario, we consider a mixed bino/wino LSP, while the latter opens up the possibility of wino and higgsino DM. We observe that the combined data help to narrow down the allowed parameter region, providing clear targets for possible future $e^+e^-$ colliders, such as the ILC [17,18] or CLIC [18,19].
The EW sector of MSSM
We give a very brief description of the EW sector of the MSSM, consisting of charginos, neutralinos and sleptons. The masses and mixings of the charginos and neutralinos are determined by the $U(1)_Y$ and $SU(2)_L$ gaugino masses $M_1$ and $M_2$, the Higgs mixing parameter $\mu$ and the ratio of the vacuum expectation values (vevs) of the two Higgs doublets of the MSSM, $\tan\beta = v_2/v_1$. This results in four neutralinos and two charginos with the mass orderings $m_{\tilde\chi^0_1} < m_{\tilde\chi^0_2} < m_{\tilde\chi^0_3} < m_{\tilde\chi^0_4}$ and $m_{\tilde\chi^\pm_1} < m_{\tilde\chi^\pm_2}$. Considering the size and sign of the anomaly, it is sufficient for our analysis to focus on positive values of $M_1$, $M_2$ and $\mu$ [12]. For the sleptons, we choose common soft SUSY-breaking parameters for all three generations, $m_{\tilde l_L}$ and $m_{\tilde l_R}$. We take the trilinear coupling $A_l$ ($l = e, \mu, \tau$) to be zero for all three generations of leptons. In general we follow the convention that $\tilde l_1$ ($\tilde l_2$) has the large "left-handed" ("right-handed") component. The mass symbols are equal for all three generations, $m_{\tilde l_1}$ and $m_{\tilde l_2}$, but we also refer to the scalar muons directly, $m_{\tilde\mu_1}$ and $m_{\tilde\mu_2}$.
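For orientation, the chargino and neutralino mass matrices that are diagonalized to obtain these mass eigenstates take the standard tree-level MSSM form, here written in the $(\tilde B, \tilde W^3, \tilde H_d^0, \tilde H_u^0)$ basis for the neutralinos (sign conventions vary between references):

$$
X = \begin{pmatrix} M_2 & \sqrt{2}\, M_W \sin\beta \\ \sqrt{2}\, M_W \cos\beta & \mu \end{pmatrix}, \qquad
M_{\tilde\chi^0} = \begin{pmatrix}
M_1 & 0 & -M_Z\, c_\beta s_W & M_Z\, s_\beta s_W \\
0 & M_2 & M_Z\, c_\beta c_W & -M_Z\, s_\beta c_W \\
-M_Z\, c_\beta s_W & M_Z\, c_\beta c_W & 0 & -\mu \\
M_Z\, s_\beta s_W & -M_Z\, s_\beta c_W & -\mu & 0
\end{pmatrix},
$$

with $s_\beta = \sin\beta$, $c_\beta = \cos\beta$, and $s_W$ ($c_W$) the sine (cosine) of the weak mixing angle. The bino/wino, higgsino and mixed-LSP scenarios discussed below correspond to different hierarchies among $M_1$, $M_2$ and $\mu$.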
Following the stronger experimental limits from the LHC [20,21], we assume that the colored sector of the MSSM is sufficiently heavier than the EW sector and does not play a role in this analysis. For the Higgs-boson sector we assume that the radiative corrections, originating largely from the top/stop sector, bring the light CP-even Higgs-boson mass into the experimentally observed region, $M_h \sim 125$ GeV. This naturally yields stop masses in the TeV range [22,23], in agreement with the above assumption. We have not considered CP violation in this study, i.e. all parameters are real. $M_A$ has also been set above the TeV scale. Consequently, we do not explicitly include the possibility of $A$-pole annihilation, with $M_A \sim 2 m_{\tilde\chi^0_1}$. Similarly, we do not consider $h$- or $Z$-pole annihilation (see, e.g., Ref. [24]), as such a light neutralino sector would likely overshoot the $(g-2)_\mu$ contribution [12].
Relevant constraints
The most important constraint that we consider comes from the $(g-2)_\mu$ result. We use Eq. (5) (and in some older results also Eq. (3)) as a cut at the $\pm 2\,\sigma$ level. It is worth mentioning, however, that we did not take into account the results of the new lattice calculation for the leading-order hadronic vacuum polarization (LO HVP) contribution [25], which have also not been used in the new theory world average, Eq. (2) [1], but which would certainly lead to a significant change in our conclusions if they turn out to be correct; see also the discussions in Refs. [26][27][28].
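Concretely, the $\pm 2\,\sigma$ cut on Eq. (5) amounts to requiring the MSSM contribution to lie within the window $(25.1 \pm 2 \times 5.9) \times 10^{-10}$. A minimal sketch of such a cut (the function below is purely illustrative and not part of GM2Calc or any other code used in the analysis):

```python
# Illustrative +-2 sigma cut on the (g-2)_mu deviation, Eq. (5).
DELTA_A_MU = 25.1e-10   # central value of the combined BNL+FNAL deviation
SIGMA = 5.9e-10         # 1 sigma uncertainty

def passes_gm2_cut(a_mu_susy: float, n_sigma: float = 2.0) -> bool:
    """True if the MSSM contribution lies within n_sigma of the deviation."""
    return abs(a_mu_susy - DELTA_A_MU) <= n_sigma * SIGMA

# A point contributing 20e-10 falls inside the window [13.3, 36.9]e-10;
# a point contributing 10e-10 does not.
assert passes_gm2_cut(20.0e-10)
assert not passes_gm2_cut(10.0e-10)
```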
We recall that the main contribution to $(g-2)_\mu$ in the MSSM at the one-loop level comes from diagrams involving $\tilde\chi^\pm_1$-$\tilde\nu$ and $\tilde\chi^0_1$-$\tilde\mu$ loops. In our analysis the MSSM contribution to $(g-2)_\mu$ at the two-loop order is calculated using GM2Calc [29], implementing two-loop corrections from [30][31][32] (see also [33,34]). The various other constraints that are taken into account comprise the following:
• Vacuum stability constraints: All points are checked to possess a stable and correct EW vacuum, e.g. avoiding charge and color breaking minima, using the public code Evade [35,36].
• Constraints from the LHC: All relevant EW SUSY searches are taken into account, mostly via CheckMATE [14][15][16], where many analyses had to be newly implemented [12]. We also take into account the latest constraints from the disappearing track searches at the LHC [37,38]. These become particularly important for the wino DM scenario, where the mass gap between $\tilde\chi^\pm_1$ and $\tilde\chi^0_1$ can be of the order of a few hundred MeV.
• Dark matter relic density and direct detection constraints: We use the latest result from Planck [39].
For the wino and higgsino DM cases, we take the relic density as an upper limit (evaluated from the central value plus 2 σ). The relic density in the MSSM is evaluated with MicrOMEGAs [40][41][42][43].
We employ the constraint on the spin-independent DM scattering cross-section $\sigma^{\rm SI}_p$ from XENON1T [44], evaluating the theoretical prediction using MicrOMEGAs. For parameter points with $\Omega_{\tilde\chi} h^2 \le 0.118$ (the $2\,\sigma$ lower limit from Planck [39]), we scale the cross-section with a factor of $(\Omega_{\tilde\chi} h^2/0.118)$ to account for the fact that $\tilde\chi^0_1$ provides only a fraction of the total DM relic density of the universe.
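A minimal sketch of this rescaled direct-detection check (the numbers in the example are hypothetical; only the 0.118 lower limit is taken from the text above):

```python
# Rescaled spin-independent cross-section check: for an underabundant
# neutralino, sigma_SI is weighted by the fraction of the relic density
# the LSP provides before comparing with the XENON1T limit.
OMEGA_H2_LOWER = 0.118  # 2 sigma lower limit from Planck

def passes_direct_detection(sigma_si, omega_h2, sigma_si_limit):
    """sigma_si and sigma_si_limit must be in the same units (e.g. pb)."""
    fraction = min(omega_h2 / OMEGA_H2_LOWER, 1.0)
    return fraction * sigma_si <= sigma_si_limit

# Hypothetical point providing half the relic density: the effective
# cross-section is halved before the comparison, so this point passes.
print(passes_direct_detection(sigma_si=2.0e-10, omega_h2=0.059,
                              sigma_si_limit=1.5e-10))  # True
```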
Parameter scan
We scan the relevant MSSM parameter space to obtain lower and upper limits on the lightest neutralino, chargino and slepton masses. The three scan regions that cover the complete parameter space under consideration correspond to the scenarios introduced above: (A) bino/wino DM with $\tilde\chi^\pm_1$-coannihilation, (B) higgsino DM, and (C) wino DM; the detailed scan ranges can be found in Refs. [2,12,13]. In all the scans we choose flat priors over the parameter space and generate $\mathcal{O}(10^7)$ points. We use SuSpect [45] as spectrum and SLHA file generator. The points are required to satisfy the $\tilde\chi^\pm_1$ mass limit from LEP [46]. The SLHA output files from SuSpect are then passed as input to GM2Calc and MicrOMEGAs for the calculation of $(g-2)_\mu$ and the DM observables, respectively. The parameter points that satisfy the $(g-2)_\mu$ and DM constraints, and additionally the vacuum stability constraints checked with Evade, are then passed to the final step to be checked against the latest LHC constraints implemented in CheckMATE. The branching ratios of the relevant SUSY particles are computed using SDECAY [47] and given as input to CheckMATE.
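Schematically, the analysis chain is a sequence of filters applied to flat-prior samples. The sketch below shows the logic only; the callables stand in for the actual SuSpect, GM2Calc, MicrOMEGAs, Evade and CheckMATE (plus SDECAY) interfaces, and the scan ranges shown are placeholders, not the ranges used in the analysis:

```python
import random

def sample_point():
    # Flat priors over illustrative (not the actual) scan ranges.
    return {"M1": random.uniform(100, 1000),
            "M2": random.uniform(100, 1200),
            "mu": random.uniform(100, 1200),
            "tanb": random.uniform(5, 60)}

def run_scan(n_points, spectrum, gm2_ok, dm_ok, vacuum_ok, lhc_ok):
    """Apply the constraint chain in the order described in the text."""
    surviving = []
    for _ in range(n_points):
        point = sample_point()
        slha = spectrum(point)      # SuSpect: spectrum + SLHA output
        if slha is None:            # e.g. LEP chargino-mass limit failed
            continue
        if not gm2_ok(slha):        # GM2Calc: (g-2)_mu at two loops
            continue
        if not dm_ok(slha):         # MicrOMEGAs: relic density + DD
            continue
        if not vacuum_ok(slha):     # Evade: stable, correct EW vacuum
            continue
        if not lhc_ok(slha):        # CheckMATE, with SDECAY branching ratios
            continue
        surviving.append(point)
    return surviving
```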
Results
In this section we review some of the results for the scenarios defined above [2,12,13]. We follow the analysis flow as described above and denote the points surviving certain constraints with different colors:
• grey (round): all scan points.
• green (round): all points that are in agreement with (g − 2) µ .
• blue (triangle): points that additionally give the correct relic density.
• cyan (diamond): points that additionally pass the DD constraints.
• red (star): points that additionally pass the LHC constraints.
In Fig. 1 we show our results for the bino/wino $\tilde\chi^\pm_1$-coannihilation scenario in the $m_{\tilde\chi^0_1}$-$m_{\tilde\chi^\pm_1}$ (left) and $m_{\tilde\chi^0_1}$-$m_{\tilde l_1}$ (right) planes [2]. Starting with the $(g-2)_\mu$ constraint (Eq. (5)) (green points) in the $m_{\tilde\chi^0_1}$-$m_{\tilde\chi^\pm_1}$ plane, one can observe a clear upper limit of about 700 GeV. Applying the CDM constraints reduces the upper limit further. The LHC constraints, corresponding to the "surviving" red points (stars), do not yield a further reduction from above, but cut (as anticipated) only points in the lower mass region. The LHC constraint that is effective in this parameter plane is the one designed for compressed spectra [48]. Another effective constraint in this case is the bound from slepton pair production leading to a dilepton plus missing transverse energy final state [49]. Thus, the experimental data set an upper as well as a lower bound, yielding a clear search target for the upcoming LHC runs, and in particular for future $e^+e^-$ colliders.
The distribution of the lighter slepton mass (where it should be kept in mind that we have chosen the same masses for all three generations) in the $m_{\tilde\chi^0_1}$-$m_{\tilde l_1}$ plane is shown in the right plot of Fig. 1. The $(g-2)_\mu$ constraint is satisfied in a triangular region with its tip around $(m_{\tilde\chi^0_1}, m_{\tilde l_1}) \sim (700\,{\rm GeV}, 800\,{\rm GeV})$. This region is slightly reduced when the DM constraints are taken into account. The LHC constraints cut out lower slepton masses, going up to $m_{\tilde l_1} \lesssim 400$ GeV, as well as part of the very low $m_{\tilde\chi^0_1}$ points, nearly independently of $m_{\tilde l_1}$. Details on these cuts can be found in Ref. [12].

Our results for the higgsino and wino DM scenarios are presented in Fig. 2 [13]. We show our results in the $m_{\tilde\chi^0_2}$-$\Delta m$ ($= m_{\tilde\chi^0_2} - m_{\tilde\chi^0_1}$) plane for the higgsino DM scenario (left) and in the $m_{\tilde\chi^\pm_1}$-$\Delta m$ ($= m_{\tilde\chi^\pm_1} - m_{\tilde\chi^0_1}$) plane for the wino DM scenario (right), where the $(g-2)_\mu$ limit here corresponds to Eq. (3). No green points are visible in these plots, as all the points that pass the $(g-2)_\mu$ constraint are also in agreement with the DM relic density constraint, resulting in only blue points. For the higgsino DM case, we also explicitly show the constraint from compressed spectra searches [48] as a black line. In the case of wino DM, the relevant LHC constraints are the disappearing track searches [37,38], owing to the relatively long lifetime of the NLSP, the light chargino. In both scenarios the combination of $(g-2)_\mu$, DM limits and LHC searches puts an upper limit on the (N)LSP masses. They are found at $\sim 500\,(600)$ GeV for higgsino (wino) DM. As for the case of bino/wino DM, clear search targets are set for future LHC runs, and in particular for the ILC and CLIC. For a more detailed description of these two scenarios see Ref. [13].
Future linear collider prospects
In this section we briefly discuss the prospects for the direct detection of the (relatively light) EW particles at possible future $e^+e^-$ colliders such as the ILC [17,18] or CLIC [18,19], which can reach energies up to 1 TeV and 3 TeV, respectively. We evaluate the cross-sections for various SUSY pair production modes for the energies currently foreseen in the run plans of the two colliders. The anticipated energies and integrated luminosities are listed in Tab. 1. The cross-section predictions are based on tree-level results, obtained as in [50,51], where it was shown that the full one-loop corrections can amount to up to 10-20%. We do not attempt any rigorous experimental analysis, but follow the idea that, to a good approximation, final states with a sum of masses smaller than the center-of-mass energy can be detected [53][54][55]. We also note that in the case of several EW SUSY particles within reach of an $e^+e^-$ collider, large parts of the overall SUSY spectrum can be measured and fitted [56].
In Fig. 3 we show the cross-section predictions for $e^+e^- \to \tilde\chi^\pm_1 \tilde\chi^\mp_1$ (left) and $e^+e^- \to \tilde\mu_1 \tilde\mu_1$ (right) in the bino/wino $\tilde\chi^\pm_1$-coannihilation case as a function of the sum of the final-state masses [12]. The points shown in different shades of green (violet) indicate the cross-sections at the various ILC (CLIC) energies. All shown points (open and filled) are in agreement with the old $(g-2)_\mu$ result, see Eq. (3) (filled circles indicate a hypothetical future measurement as discussed in Ref. [12]). Using the updated result, Eq. (5), a large part of the allowed parameter space is covered. The reach could become even stronger in the case of a future $(g-2)_\mu$ constraint: with the same central value but only slightly better precision, the upper limits on $m_{\tilde\chi^0_1}$ go down to $\sim 450$ GeV, implying effectively a full coverage at a 1000 GeV collider. In the case of smuon pair production, as shown in the right plot of Fig. 3, energies up to $\sim 1800$ GeV would be needed to fully cover the allowed parameter space.

Table 1: Anticipated center-of-mass energies, $\sqrt{s}$, and corresponding integrated luminosities, $\mathcal{L}_{\rm int}$, at the ILC [57,58] and CLIC [59] (as used in [60]).
All obtained cross-section predictions for the kinematically accessible parameter points are above $10^{-2}$ pb for chargino production and above $10^{-3}$ pb for smuon pair production. For each ab$^{-1}$ of integrated luminosity this corresponds to 10000 (1000) events for chargino (smuon) pair production, which should make these particles easily accessible (see Tab. 1) if they are within the kinematic reach of the collider.
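This event-count estimate follows directly from $N = \sigma \, \mathcal{L}_{\rm int}$ with $1\,{\rm ab}^{-1} = 10^{6}\,{\rm pb}^{-1}$:

$$
N_{\tilde\chi^\pm_1\tilde\chi^\mp_1} \gtrsim 10^{-2}\,{\rm pb} \times 10^{6}\,{\rm pb}^{-1} = 10^{4}, \qquad
N_{\tilde\mu_1\tilde\mu_1} \gtrsim 10^{-3}\,{\rm pb} \times 10^{6}\,{\rm pb}^{-1} = 10^{3}.
$$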
The example cross-sections shown above demonstrate that at least some particles are guaranteed to be discovered at the higher-energy stages of the ILC and/or CLIC. If the upcoming runs of the MUON G-2 experiment further confirm the deviation of $a_\mu^{\rm exp}$ from the SM prediction, the case for future $e^+e^-$ colliders is clearly strengthened.

Figure 4: $m_{\tilde\chi^\pm_1}$-$\Delta m$ plane with anticipated limits from compressed spectra searches at various future colliders, taken from Ref. [61]. Disappearing track searches are not included. Shown in blue, dark blue and turquoise are the points surviving all current constraints in the case of higgsino DM, wino DM and bino/wino DM with $\tilde\chi^\pm_1$-coannihilation, respectively, with the relic density taken only as an upper limit.
In Fig. 4 we review the prospects at various high-energy colliders for the compressed spectra searches, relevant for higgsino DM, wino DM and bino/wino DM with $\tilde\chi^\pm_1$-coannihilation [13]. We show our results in the $m_{\tilde\chi^\pm_1}$-$\Delta m$ plane (with $\Delta m := m_{\tilde\chi^\pm_1} - m_{\tilde\chi^0_1}$), which was presented (up to $\Delta m = 0.7$ GeV) in Ref. [61] for the higgsino DM case, but which is also directly applicable to the wino DM case [55]. In addition to the anticipated limits from the HL-LHC, HE-LHC and FCC-hh, we show the following projected limits from various high-energy linear colliders, sensitive up to the kinematic limit [61], looking at $\tilde\chi^\pm_1\tilde\chi^\mp_1$ or $\tilde\chi^0_2\tilde\chi^0_1$ production (see also Ref. [55] and references therein):
• ILC with 0.5 ab$^{-1}$ at $\sqrt{s} = 500$ GeV (ILC500): solid light green.
It can be observed that for the higgsino case, the HL-LHC can cover a part of the allowed parameter space, but an exhaustive coverage can be reached only at a high-energy $e^+e^-$ collider with $\sqrt{s}$ up to $\sim 1000$ GeV (i.e. ILC1000 or CLIC1500). For wino DM, the $\Delta m$ is so small that it largely escapes the HL-LHC searches (but may partially be detectable at the FCC-hh with monojet searches). As in the higgsino DM case, here too a high-energy $e^+e^-$ collider will be necessary to cover the full allowed parameter space. While the currently allowed points would be covered by CLIC1500, a parameter space reduced further by, e.g., improved HL-LHC disappearing track searches could be covered by the ILC1000. The bino/wino parameter points (turquoise) represent a more complicated case for the future collider analysis, since the limits assume a small mass difference between $\tilde\chi^0_1$ and $\tilde\chi^\pm_1$ as well as the $pp$ production cross-sections of the higgsino case. Bino/wino points typically have larger production cross-sections (as does the pure wino), so the application of these limits to the bino/wino points in Fig. 4 serves as a conservative estimate for the $pp$-based limits. Consequently, it is expected that the HE-LHC or the FCC-hh would cover this scenario entirely. On the other hand, the $e^+e^-$ limits should be directly applicable, and large parts of the parameter space will be effectively covered by the ILC1000, and the entire parameter space by CLIC1500.
Although we do not consider the possibility of $Z$- or $h$-pole annihilation, it should be noted that in this context an LSP with $M \sim m_{\tilde\chi^0_1} \sim M_Z/2$ or $\sim M_h/2$ (with $M = M_1$, $M_2$ or $\mu$) would yield a detectable cross-section for $e^+e^- \to \tilde\chi^0_1\tilde\chi^0_1\gamma$ at any future high-energy $e^+e^-$ collider. Furthermore, in the case of higgsino or wino DM, this scenario automatically yields other clearly detectable EW-SUSY signals at future $e^+e^-$ colliders. For bino/wino DM this would depend on the values of $M_2$ and/or $\mu$.
"year": 2021,
"sha1": "53b0c194a706d3103c26a5d31d9f05134a7be322",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "53b0c194a706d3103c26a5d31d9f05134a7be322",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Social supports among college students and measures of alcohol use, perceived stress, satisfaction with life, emotional intelligence and coping
In this study I examined three domains of social supports among college students (close friends, casual friends and safe adults to turn to) in relation to indices of wellbeing and coping. Measures of positive wellbeing were most strongly associated with the safe adults domain of social support followed by the close friends domain of social support. Casual friends were associated only with measures of problem alcohol consumption but not with indices of wellbeing. Students with five or more safe adults to turn to as compared to four or fewer reported significantly lower perceived stress, greater satisfaction with life, higher emotional intelligence, better academic performance and lower problem drinking scores. The domain of safe adults was associated with the largest array of wellbeing indices of all three social support domains. Future research should examine additional measures of wellbeing that may be associated with distinct domains of support.
Introduction
The transition to life in college can be a challenging time for many young adults, thus perceptions of social support and being cared for by those in one's life can be very important to levels of wellbeing. Among high school students, Chou (2000) found that family social support was associated with lower levels of depression while friend social support was associated with lower levels of anxiety. Among college students, Clara et al. (2003) found that both family and friend social supports were associated with lower levels of depression, while Davis, Morris and Kraus (1998) found social support from friends to be the most powerful support associated with college student wellbeing followed by that of parents and romantic partners.
Whether actual or perceived, social support reflects overall feelings that one is cared for and accepted, and that in difficult times one will have others to turn to who will provide assistance and help (Sarason, Sarason and Pierce 1990; Davis, Morris and Kraus 1998). Clara et al. suggest that it is the perception of global social support that appears to provide a buffering effect that protects individuals from 'succumbing to adversity' (2003, p. 268). Such perceptions 'appear to reflect a pervasive worldview, rooted perhaps in childhood experiences and attachment history' while domain-specific social support is 'more clearly the result of experiences with particular relationships and tend to influence only judgments that are most closely tied to these relationships' (Davis et al. 1998, p. 478).
As a multidimensional construct, social support is often measured by the size of the social network, the quality and frequency of contact with members of the social network, as well as instrumental and emotional forms of support received (Barrera 1986; George 1989; Tardy 1988). Typical domains of social support reported in the research literature have included family members, friends and significant others. Davis, Morris and Kraus (1998) assessed social support available to college students across four specific domains including family, friends, romantic partners and faculty advisors. They expected the number of potential figures within each specific domain to vary, but they were interested in perceptions of support from each domain and not the size of the support network. They found that respondents made 'fairly sharp distinctions' between different social domains and perceptions of global social support. Friends were identified as the strongest source of support followed by parents and romantic partner, while support from faculty advisors was only weakly associated with global support. Overall, the friends domain of social support accounted for the most powerful associations with wellbeing in their study. Their findings are consistent with Chou (2000), Clara et al. (2003), and Brock, Peirce and Sarason (1996), all of whom identified friends, parents and family as the most often reported social supports among high school and college students and also the domains of support most strongly associated with wellbeing.
It is important to note that global social support is not necessarily explained by the additive combination of domain-specific supports. One explanation for this is that the domain-specific support categories measured in previous research (i.e. parents, family, friends, significant other, faculty advisor) may have been too specific, and may not have included an adequate range of possible supporters to capture the full array of support perceived to be available (Davis et al. 1998). Thus while global and domain-specific social supports are associated with wellbeing outcomes, they are also distinct constructs (Davis et al. 1998). Consequently, future research should take a broader view of the measures of specific social support domains.
It is also important to address a wider array of psychological wellbeing measures that may be impacted by social support. Much of the research to date has examined depression and anxiety as indices of wellbeing but much less emphasis has been placed upon strengths-based indices of wellbeing such as satisfaction with life. Therefore, it would be valuable to understand the unique associations with positive measures of wellbeing that specific domains of social support, in particular perceived support from friends as well as from caring adults, may have in the lives of college students.
Present investigation
Of greatest interest to the present research was the broad domain of social support perceived to be available from caring and safe adults. To address the need for greater breadth in measurements of social support domains, I did not limit this domain by any specific category of adult. Instead, I simply asked respondents how many safe adults (not peers) they had to turn to in difficult times. I was primarily interested not only in the size of this domain-specific social support of primary interest, but also in the identification of who these safe adults were that college students felt they could turn to.
To address the need for a wider array of psychological wellbeing assessments as they relate to perceptions of social support, I included several strengths-based and positive wellbeing measures in this study. Both academic performance and satisfaction with life seemed to be key indicators of healthy and positive wellbeing outcomes among college students. Doing well academically and feeling satisfied with one's life would be for many a successful college experience. Additionally, positive wellbeing among college students is also thought to be indicated by higher levels of emotional intelligence (EI) as EI is assumed to reflect characteristics that enable a person to attend to and value feelings, while also being clear about the meaning of feelings and the expression of them (Gohm and Clore 2002). Therefore, higher levels of emotional intelligence might be argued to lead to higher levels of wellbeing and better everyday coping and problem solving (Gohm and Clore 2002), which may be associated with specific domains of perceived social support, in particular safe and caring adults.
Given that both stress (Misra and Castillo 2004) and problematic alcohol consumption patterns (Ham and Hope 2003;O'Malley and Johnston 2002) are prevalent and often perennial challenges faced by many college students as they transition to college life and face the academic pressures associated with it, it is valuable to examine both factors with regard to perceived social support domains. Perceived stress is highly correlated with depressive symptomatology and, although not a measure of depression, lower levels of perceived stress typically reflect higher levels of wellbeing and lower scores on depression measures.
Problematic alcohol consumption patterns among college students may be a means of coping with perceived stress and may also vary in relation to perceptions of social support available from specific support domains. Additionally, other coping strategies used by college students in dealing with their day-to-day challenges should be explored in relation to perceptions of social supports. It might be argued that perceptions of more caring and safe adults to turn to in difficult times may provide not only support but opportunities to develop better and healthier coping skills than might be developed when turning to one's peers for support.
In the present study, I intended the domain of safe adult social support to be broader and more inclusive than in previous studies, while I intended the social support domain of 'friends' to be more specific with regard to the type of 'friend' associated with measures of wellbeing, specifically close versus casual. Therefore, I examined three domains of perceived social support: (1) the number of close friends reported, (2) the number of casual friends reported, and (3) the number of safe adults respondents felt safe turning to in difficult times (see Figure 1). I examined the associations between these three domain-specific social supports and three subjective wellbeing (SWB) areas: (1) negative SWB, (2) positive SWB, and (3) coping strategies (see Figure 1). Negative SWB measures included perceived stress and patterns of problem alcohol consumption (the CAGE, AUDIT and substance use subscale from the Brief Cope measure). Positive SWB measures included academic performance (e.g. self-reported GPA), satisfaction with life, overall emotional intelligence (EI), and four subscales of EI (self-emotion appraisal, other-emotion appraisal, regulation of emotion and use of emotion). Finally, I measured coping outcomes using the Brief Cope scale, resulting in four subscales: emotion-focused coping, problem-focused coping, other coping mechanism: adaptive, and other coping mechanism: maladaptive (see Figure 1).

I expected that, among the three specific domains of social support examined, having more safe adults to turn to in difficult times would yield greater associations with measures of wellbeing among college students than having more close or casual friends would. Additionally, I expected that having more close friends would yield greater associations with measures of wellbeing than would having more casual friends. My expectations were based on previous findings indicating that perceived support from families (Chou 2000), and sometimes both families and friends (Clara et al. 2003), was associated with lower levels of depression, while at least one study (Chou 2000) found that support from friends was associated with lower levels of anxiety but not depression. Because relationships with 'close' friends and also 'safe' adults would suggest some level of trust and intimacy as part of the relationship, the same assumption would not necessarily be made for relationships with 'casual' friends. Therefore, I expected to find fewer associations with wellbeing for the casual friends domain of social support as compared to the other two domains of support.
I also expected that social support from safe adults would yield the greatest number of associations with coping measures. In part, this was due to the assumption that college students who reported more safe adults to turn to in difficult times may actually have had encounters with these adults in the past that promoted the development of better coping strategies in dealing with life's challenges. Although close friends may provide resources of social support, they may also be at the same developmental stage in life, and therefore such support may yield fewer associations with effective coping strategies. Finally, I wanted to examine more closely the patterns of associations with wellbeing of two levels of safe adults in the lives of college students. Specifically, I compared those with more versus fewer perceived safe adults to turn to in difficult times on all wellbeing measures. I expected that comparisons between those who reported four or fewer safe adults to turn to in difficult times as compared to those with five or more would result in weaker and fewer associations with wellbeing measures as well as differences in coping strategies.
Participants
The overall sample included 259 respondents from a small, Catholic, residential college in the northeast region of the United States, comprising 118 (46%) males and 141 (54%) females whose average age was 20 years. The majority of respondents were white (246; 95%) and heterosexual (254; 98%).
Procedure
The majority of participants (75%) were approached at various locations around the college campus and asked if they would be willing to volunteer to participate in a survey on health and wellbeing among college students. Other participants (25%) were volunteers from introductory and other psychology classes who received extra credit for their participation. The survey included measures of wellbeing related to this research study and required 30-40 minutes for completion. The three questions about domain-specific social supports were located in a lengthy demographic section at the end of the survey (i.e. 'How many close friends do you have?', 'How many casual friends do you have?' and 'How many safe adults (not your peers) do you have to turn to in difficult times?'). All participants read and signed an informed consent form and were also debriefed in writing following their participation. The debriefing form explained that several measures associated with wellbeing were included on the survey and that the research was interested in whether different domains of social support (close friends, casual friends or safe adults) might yield more beneficial outcomes to college students.
Measures
Negative SWB measures

I used the Perceived Stress Scale (PSS) (Cohen, Kamarck and Mermelstein 1983), a brief 14-item measure of the degree to which situations in one's life are appraised as stressful. It is highly correlated with depression symptomatology measures and assesses states that place people at risk for clinical disorders. Example items include '[I]n the last month, how often have you felt nervous and "stressed"?' and '[I]n the last month, how often have you felt that you were on top of things?'. Scores are summed and range from 14 to 56. The mean score for college students ranges between 23 and 25, with a mean of 23.18 for females and 23.67 for males.
I measured problem alcohol consumption patterns using three scales including the CAGE, AUDIT and the Substance Use scale of the Brief Cope measure. The CAGE is an assessment instrument used internationally for identifying those experiencing problems with alcohol consumption (Ewing 1984), which requires less than a minute to complete. CAGE is an acronym that stands for Cut-Annoyed-Guilty-Eye and corresponds to the four items on the questionnaire: (1) Have you felt the need to stop or cut down your drinking? (2) Have you been angry or annoyed at other people talking about or criticising your drinking? (3) Have you felt guilty as a result of something you did when you were drinking? (4) Have you ever taken a drink first thing in the morning? (eye-opener/early morning drinking). These very straightforward questions are responded to by 'yes' (score of 1) or 'no' (score of 0) and scores are derived by summing the responses from the four questions. A score of two indicates possible problematic drinking while a score of three or four indicates problematic drinking and potential dependence. The CAGE has been found to be better at identifying lifetime alcohol abuse and dependence while the Alcohol Use Disorders Identification Test (AUDIT) is used for the detection of hazardous and harmful drinking (McCusker et al. 2002).
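Scoring the CAGE as described above is a simple sum of the four yes/no items; a minimal sketch (the band labels simply restate the thresholds given in the text):

```python
def score_cage(cut, annoyed, guilty, eye_opener):
    """Sum four yes/no (True/False) CAGE items and interpret the total."""
    score = sum(map(bool, (cut, annoyed, guilty, eye_opener)))
    if score >= 3:
        band = "problematic drinking and potential dependence"
    elif score == 2:
        band = "possible problematic drinking"
    else:
        band = "below screening threshold"
    return score, band

print(score_cage(True, False, True, False))
# (2, 'possible problematic drinking')
```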
The AUDIT was developed by the World Health Organization as a swift screening method for excessive drinking and is a sensitive questionnaire that detects hazardous and harmful drinking patterns (Saunders et al. 1993). It consists of 10 questions with response scales ranging from 0 to 4, where 0 indicates 'never' and 4 may indicate 'daily or almost daily', resulting in an overall maximum score of 40. It assesses subscales of hazardous alcohol consumption (e.g. frequency of drinking, quantity and frequency of heavy drinking), alcohol dependence (e.g. impaired control over drinking, increased salience of drinking and morning drinking), and harmful alcohol problems (e.g. guilt after drinking, blackouts, alcohol-related injury and others concerned about drinking). A score between 0 and 7 would recommend alcohol education, between 8 and 15 simple advice, between 16 and 19 simple advice plus counselling and monitoring, and between 20 and 40 referral to a specialist for diagnostic evaluation and treatment.
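The AUDIT recommendation bands map onto score ranges in the same way; a minimal sketch:

```python
def audit_recommendation(total):
    """Map a total AUDIT score (0-40) to the bands described above."""
    if not 0 <= total <= 40:
        raise ValueError("AUDIT total must be between 0 and 40")
    if total <= 7:
        return "alcohol education"
    if total <= 15:
        return "simple advice"
    if total <= 19:
        return "simple advice plus counselling and monitoring"
    return "referral to a specialist for diagnostic evaluation and treatment"

print(audit_recommendation(12))  # 'simple advice'
```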
The third measure of problem alcohol consumption was the Substance Use Subscale of the Brief COPE (Carver 1997). This subscale assesses the degree to which respondents turn to the use of alcohol or other drugs as a way of disengaging from the stressor. It includes two statements: 'I've been using alcohol or drugs to make myself feel better' and 'I've been using alcohol or drugs to help me get through it'. Respondents offer responses on a scale from 1, 'I haven't been doing this at all' to 4, 'I've been doing this a lot'. A higher score indicates greater risk of alcohol abuse.
Positive SWB measures
Positive subjective wellbeing measures include academic performance, satisfaction with life and emotional intelligence. I assessed academic performance through self-reported cumulative GPA and also GPA within the respondent's major (ranging from 0 to 4.0). I assessed satisfaction with life using the Satisfaction with Life Scale (Diener et al. 1985), designed to assess one's satisfaction with life as a whole. The measure is composed of five questions including 'I consider my life close to ideal' and 'Overall I am satisfied with my life'. Each of the questions is scored on a 7-point Likert-style scale (1 = strongly disagree to 7 = strongly agree). The overall score is the total of the five items, with scores ranging from 5 to 35, where 5 indicates not at all satisfied with life and 35 indicates total satisfaction with life. This measure has been shown to have high internal consistency and reliability and correlates well with other measures of subjective wellbeing (Diener et al. 1985).
I assessed emotional intelligence using Wong and Law's (2002) Emotional Intelligence Scale (EI), a 16-item self-report measure. An overall emotional intelligence scale score and four subscale scores result. The subscales are (1) Self-Emotional Appraisal (SEA) (e.g. 'I have a good understanding of my own emotions'), (2) Others' Emotional Appraisal (OEA) (e.g. 'I am a good observer of others' emotions'), (3) Use of Emotion (UOE) (e.g. 'I always set goals for myself and then try my best to achieve them'), and (4) Regulation of Emotion (ROE) (e.g. 'I am able to control my temper and handle difficulties rationally'). The participants were asked to indicate on a 7-point Likert-style scale, where 7 indicated strongly agree and 1 indicated strongly disagree, how strongly they agreed or disagreed with each of the sixteen statements. A higher score reflected higher levels of emotional intelligence.
I used the Brief COPE scale (Carver 1997) to assess coping strategies. A 28-item self-report measure based upon concepts of coping from Lazarus and Folkman (1984), it assesses how respondents typically react to stressful events. It is abbreviated from the full COPE measure and yields 14 subscales of two items each, using a scale where 1 indicates 'I haven't been doing this at all' and 4 indicates 'I've been doing this a lot'.
The combination of these 14 subscales results in four primary subscales of coping including (1) emotion-focused coping, (2) problem-focused coping, (3) other coping mechanism: adaptive, and (4) other coping mechanism: maladaptive. Emotionfocused coping consists of subscales including use of emotional support (e.g. getting sympathy or emotional support from someone), positive reframing (e.g. making the best of the situation by growing from it, or viewing it in a more favorable light), and religion (e.g. increased engagement in religious activities). Problem-focused coping consists of three subscales including active coping (e.g. taking action, exerting effort to remove or circumvent the stressor), planning (e.g. thinking about how to confront the stressor, planning one's active coping efforts), and use of instrumental support (e.g. seeking assistance, information or advice about what to do).
'Other coping mechanism: adaptive' consists of acceptance (e.g. accepting the fact that the stressful event has occurred and is real) and humour (e.g. making jokes about the stressor), while 'other coping mechanism: maladaptive' consists of six subscales. They include venting (e.g. an increased awareness of one's emotional distress, and a concomitant tendency to ventilate or discharge those feelings), behavioural disengagement (e.g. giving up, or withdrawing effort from, the attempt to attain the goal with which the stressor is interfering), mental disengagement (i.e. self-distraction and psychological disengagement from the goal with which the stressor is interfering, e.g. daydreaming or sleep), self-blame (e.g. criticising or blaming oneself for the stressor that has occurred), substance use (e.g. turning to the use of alcohol or other drugs as a way of disengaging from the stressor), and denial (e.g. an attempt to reject the reality of the stressful event).
Results
I conducted correlational analyses between the number of close friends, the number of casual friends and the number of safe adults reported and measures of negative SWB, positive SWB and coping. I also conducted one-way analyses of variance to examine differences between those who perceived four or fewer safe adults to turn to in difficult times versus those with five or more safe adults to turn to in difficult times on the three wellbeing outcome areas (negative SWB, positive SWB and coping).
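Both analyses map onto standard routines; a minimal sketch with SciPy, using randomly generated placeholder data rather than the study's actual responses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 259                                    # sample size in this study
safe_adults = rng.integers(0, 12, n)       # placeholder predictor
life_satisfaction = rng.normal(25, 5, n)   # placeholder outcome (5-35 scale)

# Pearson correlation between a support domain and a wellbeing index
r, p = stats.pearsonr(safe_adults, life_satisfaction)

# One-way ANOVA: 4 or fewer versus 5 or more safe adults
few = life_satisfaction[safe_adults <= 4]
many = life_satisfaction[safe_adults >= 5]
F, p_anova = stats.f_oneway(few, many)
print(f"r={r:.2f} (p={p:.3f}); F={F:.2f} (p={p_anova:.3f})")
```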
Casual friends. Having more casual friends was not associated with any positive measures of wellbeing but was positively correlated with problematic alcohol consumption (AUDIT) (r=.34, p=.01) and possible alcohol dependence (AUDIT) (r=.19, p=.01) (see Figure 2). These results suggest that greater numbers of reported casual friends in this college sample were associated with heavy and/or high-risk alcohol consumption patterns but not with any positive indices of wellbeing.
Safe adults to turn to. As expected, the greatest number of associations with wellbeing was found for perceived safe adult social support rather than for either the close or casual friends domains. Overall, having more safe adults to turn to in difficult times was correlated with lower levels of perceived stress (r=-.16, p=.01), lower CAGE problem drinking scores (r=-.13, p=.05), higher levels of satisfaction with life (r=.24, p=.01), and lower self-distraction coping (r=-.25, p=.033) (see Figure 2). Finally, having more caring/safe adults to turn to was correlated with higher emotional intelligence scores (r=.15, p=.01), including two subscales of emotional intelligence, specifically self-emotion appraisal (r=.19, p=.01) and regulation of emotion (r=.13, p=.01) (see Figure 2). Contrary to expectations, no further associations were found with the remaining coping measures.
One-way analyses of variance. I conducted one-way analyses of variance between respondents with 0-4 versus 5 or more safe adults to turn to in difficult times on (1) negative wellbeing measures (perceived stress, the substance use coping subscale from the Brief Cope scale, the AUDIT alcohol consumption and alcohol dependence scores, and the CAGE problem drinking scale score), (2) positive wellbeing measures (e.g. satisfaction with life, emotional intelligence (EI), and EI subscales including self-emotion appraisal, other-emotion appraisal, regulation of emotion and use of emotion) and (3) coping measures. The two levels of safe adults were created by splitting the six response categories (i.e. 0, 1-2, 3-4, 5-7, 8-10, 11+) into two levels, specifically 4 or fewer safe adults (i.e. 0, 1-2, 3-4) and 5 or more safe adults (i.e. 5-7, 8-10, 11+). Frequency distributions indicated that 56.8 per cent (147) of respondents comprised the 4 or fewer safe adults group and 43.2 per cent (112) of respondents comprised the 5 or more safe adults group.
Negative wellbeing. Results indicated that perceived stress scores were significantly lower (and within the average range for college students) for respondents with 5 or more safe adults to turn to in difficult times (M=24.1, sd=7.6) than for respondents with 4 or fewer safe adults to turn to (M=26.6, sd=7.3; above the average range) (F(1,256)=7.49, p=.007, d=.34). However, no differences were found in the alcohol problem indices, including consumption (AUDIT), dependence (AUDIT), substance use as a coping method (BCOPE) or problematic drinking patterns (CAGE).
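The reported effect size is consistent with Cohen's d computed from the group means and standard deviations using a pooled standard deviation; checking the perceived stress comparison above:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d with a pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Perceived stress: 4 or fewer safe adults (M=26.6, sd=7.3, n=147)
# versus 5 or more (M=24.1, sd=7.6, n=112).
print(round(cohens_d(26.6, 7.3, 147, 24.1, 7.6, 112), 2))  # 0.34
```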
Positive wellbeing. Results from the examination of positive outcomes indicated that respondents with 5 or more caring/safe adults to turn to in difficult times reported significantly higher levels of satisfaction with life (M=26.

Coping measures. Coping strategies might be expected to differ between college students who perceived greater numbers of safe adults available to them and students who perceived fewer safe adults to turn to in difficult times. However, results yielded only one significant difference in coping strategies between these two groups. Specifically, respondents with five or more caring/safe adults to turn to reported lower scores on the self-blame coping measure (i.e. criticising and blaming oneself for the stressor) than respondents with four or fewer caring/safe adults (M=3.8, sd=1.5 versus M=4.6, sd=1.9) (F(1,69)=4.01, p=.049, d=-.50).
Categories of adults to turn to in difficult times. Finally, I also assessed who the adults were that college students perceived they could turn to in difficult times to understand this broader social support domain more fully. Respondents were asked to check boxes on the survey of several categories of adults that constituted their estimated number of safe adults to turn to in difficult times including parents, relatives, teachers, religious leaders, neighbours, employers and other (followed by fill-in-the-blank). Response frequencies were examined between respondents who indicated five or more adults to turn to as compared to respondents with four or fewer caring/safe adults to turn to in difficult times.
The four most cited safe adult categories for the 5+ and 4 or fewer safe adult levels were parents (94% and 80%, respectively), relatives (85% and 60%, respectively), teachers (55% and 27%, respectively) and neighbours (41% and 18%, respectively), followed by employers (33% and 7.5%, respectively), other (25% and 16.3%, respectively) and religious leaders (15% and 5%, respectively). Clearly, respondents with five or more caring/safe adults reported higher percentages of caring/safe adults in all of the categories. Nearly twice as many respondents with 5 or more caring/safe adults identified teachers and three times as many identified employers as caring/safe people to turn to than did respondents who perceived four or fewer caring/safe adults to turn to in difficult times.
Discussion
The transition to college can be a daunting experience and, even once there, navigating the many challenges of social life and academic expectations can continue to challenge even the best of students. For college students, the sense that they are loved, cared for and supported by friends, family members and others can make a significant difference in their overall wellbeing, especially in regard to depression and anxiety. But with growing understanding of the multidimensional nature of social support and its many correlates to wellbeing, published research has been criticised for limiting the way social support has been measured (Vaux, Riedel and Stewart 1987). Chou (2000) stated clearly that studies that fail to consider the source of social support may indeed lose very important information as to what and how these perceptions impact on wellbeing. Thus, in the present study, one domain of social support among college students designed to be broad was safe adults to turn to in difficult times. This domain-specific social support included any caring adults perceived to be safe to turn to in difficult times and did not draw distinctions between whether those adults were family, relatives, teachers, coaches, employers or others. In so doing, this domain was both broad and yet also distinct from peer friendships and romantic partner support domains.
Building upon the Davis et al. (1998) study, which assessed social support available to college students across four specific domains (family, friends, romantic partners and faculty advisors), the present research examined three specific domains of support: the number of close friends, casual friends and safe adults to turn to in difficult times. However, unlike the Davis et al. (1998) study, this study was indeed interested in the size of the domains of social support and not the perceptions of support provided by them. Therefore, instead of asking respondents which domains they perceived provided the most support and consequently contributed the most to wellbeing, this investigation measured several indices of wellbeing and coping which were then correlated individually with each of the three domain-specific social supports.
Two aspects of the present study differentiate it from other studies on social support, namely the safe adults to turn to domain of social support and the greater breadth of wellbeing measures included. First, the safe adults to turn to domain was broader than previous measures of adult social supports and was inclusive of any and all caring and safe adults that college students felt they had available for support during difficult times. Second, the measures of wellbeing in this study were also broad and included both deficit (e.g. perceived stress and problem alcohol consumption patterns) and strengths-based (satisfaction with life, emotional intelligence and academic performance) indices as well as measures of coping (four primary subscales of the Brief Cope scale).
I expected that the perception of more safe adults to turn to in difficult times among college students would be associated with more measures of wellbeing and would demonstrate more associations with wellbeing and coping than would either close or casual friends social support domains. I further expected that the casual friends domain of social support would yield the fewest associations to wellbeing of the three social support domains. Finally, I expected that comparisons between those who had four or fewer safe adults to turn to in difficult times as compared to those with five or more would result in weaker and fewer associations with wellbeing outcomes and differences in coping strategies as well.
The results were consistent with my expectations in that a greater number of associations with wellbeing measures was found for the safe adults domain of social support, followed by the close and casual friends domains of social support. In fact, no indices of positive wellbeing were associated with the casual friends domain of social support at all. Overall, having more safe adults to turn to in difficult times among college students was associated with six positive wellbeing outcomes, namely lower perceived stress, greater satisfaction with life, higher overall emotional intelligence (EI), self-emotion appraisal (EI subscale), regulation of emotion (EI subscale) and lower problem drinking (CAGE) scores. Similarly, having more close friends was associated with two positive wellbeing outcomes - lower perceived stress and greater satisfaction with life - but, unlike the safe adults domain, having more close friends was also associated with higher problem alcohol consumption (AUDIT) scores. Not only were close friends associated with alcohol consumption, but having more casual friends as a college student was associated with both higher problem alcohol consumption (AUDIT) and higher possible alcohol dependence (AUDIT) scores.
I also expected that coping strategies would differ between the domains of social support. I thought that better coping strategies would be associated with the domain of safe adult social support more than with close or casual friend supports, and indeed there were differences, but none that reflected any significant benefits associated with safe adults as I expected. Respondents who indicated having more safe adults to turn to reported only lower mental disengagement (e.g. self-distraction) coping strategies, while respondents who indicated having more close friends reported lower 'other: maladaptive' coping (e.g. venting, behavioural disengagement, mental disengagement, self-blame, substance use and denial), but also lower emotion-focused coping (e.g. emotional support and positive reframing) and lower 'other: adaptive' coping (e.g. acceptance and humour) strategies.
Last, I expected that examination of the safe adults domain of social support would yield more associations with wellbeing for respondents who indicated that they had five or more safe adults to turn to as compared to those with four or fewer safe adults to turn to. Consistent with my expectations, college students with five or more safe adults to turn to in difficult times reported significantly lower levels of perceived stress, greater satisfaction with life, higher overall emotional intelligence (EI) scores, self-emotion appraisal (EI subscale), regulation of emotion (EI subscale), higher academic achievement (both cumulative and major GPAs), and also lower self-blame coping scores than respondents with four or fewer safe adults to turn to. Clearly, having a smaller network of these adults perceived to be safe, caring and available in trying times yielded significantly fewer associations with wellbeing among college students.
Consistent with previous research, the present findings suggest that certain domains of social support including safe adults to turn to and close friends are associated with positive wellbeing among college students and may be considered contributing factors to a buffering effect from adversity. While Davis et al. (1998) found that 'friends' accounted for the most powerful associations with wellbeing in comparison to family, romantic partners and faculty advisors, the present study found this to be true for only a certain type of 'friend', specifically close friends but not casual friends. Having more 'close friends' was associated with lower perceived stress and higher satisfaction with life scores but having more 'casual friends' was associated only with measures related to problematic alcohol consumption patterns. Even though having more close friends was associated with certain aspects of positive wellbeing, it too was associated with problem alcohol consumption patterns among college students. In fact, the only domain of social support in this study associated with lower problem drinking scores was that of safe adults to turn to. College students who perceived and reported more safe adults to turn to in difficult times tended to report lower CAGE problem drinking scores, suggesting lower levels of lifetime alcohol abuse and dependence.
The perception of having more safe adults to turn to in difficult times, in addition to being the only domain associated with lower problem drinking, was also the only domain of social support associated with overall emotional intelligence (EI) and two EI subscales: self-emotion appraisal and regulation of emotion. Emotional intelligence is assumed to be associated with an awareness and adeptness in dealing with one's own feelings as well as in reading the emotional states of those around us. Those higher in emotional intelligence are assumed to have higher levels of mental health and better coping and problem-solving skills and capacities (Gohm and Clore 2002).
Finally, not only is it important that we feel supported and cared for by those in our lives, but how we cope with challenges also impacts on our wellbeing. Coping is generally understood as a process that involves appraisal of situations and the use of skills or strategies to manage or reduce the stressful circumstances (Lazarus 1966; Lazarus 1980, 1985). Lazarus (1993) emphasised two styles of coping - problem-focused coping and emotion-focused coping - and although the two styles are viewed separately they may also occur congruently in dealing with a wide array of stressful situations. While problem-focused coping involves changing something in the situation or acting directly to remove the cause of stress, emotion-focused coping involves the reduction or management of the emotional distress associated with the situation (Sica et al. 1997).
The present research found that having more close friends was associated with lower emotion-focused coping, lower other: maladaptive coping, but also lower other: adaptive coping. It would appear that close friends may indeed contribute to a buffering effect from adversity, as Davis et al. (1998) found, where having more close friends was related to less need to regulate emotion with regard to situational distressors and was associated with lower levels of maladaptive coping. It may very well be that it is within the realm of coping where wellbeing is most greatly impacted by perceived social support from 'friends', as the Davis et al. (1998) study reported.
Although it might be expected that perceiving more connections with caring/safe adults might have resulted in greater contact with them and consequently more effective coping strategies, the results of this study did not support this. Instead, it would appear that college students cope very similarly regardless of whether they have more or fewer caring/safe adults in their lives to turn to in difficult times with the exception of mental disengagement (e.g. self-distraction) and self-blame (e.g. criticising and blaming oneself for the stressor). Young adults may not necessarily be learning coping skills to help them deal with stressors as a function of their relationships with caring/safe adults, but instead may simply be benefiting from feeling supported and cared for by these adults as they navigate certain stressful challenges of life.
It would appear from the findings of this study that both close friends and safe adults are valuable resources when it comes to coping with life's challenges. However, the two domains may function quite differently with regard to the type of support, the quality of support, and the possible timing with which college students seek support. Close friends possibly because of their proximity and availability may provide more immediate coping resources that reduce the need to seek out emotional support (i.e. sympathy) and positive reframing (i.e. making the best of the situation) as well as needs for venting and distraction or denial. Alternatively, having safe adults to turn to in difficult times may reduce the need to mentally disengage and distract oneself from the problem, possibly as a function of the perceived availability of a wiser adult who may provide support, guidance and reassurance that a peer may not be able to provide. Possibly the safe adults that college students turn to in difficult times are sought out when close friends are not able to provide the type or quality of support or assurance that life experience affords, whereas trustworthy and safe adults are able to assure them that everything will be alright. The data from this research did not find problem-focused coping occurred in association with having more safe adults to turn to and, thus, it may be that safe adults are not assisting in the promotion of problem-focused coping skills but may simply provide a different caring response to the stressful situation than close friends are able to offer.
Perhaps the most important and interesting findings from this study were those related to college students who reported having four or fewer safe adults to turn to in difficult times as compared to those who reported five or more safe adults to turn to in difficult times. Those who perceived that they had five or more safe adults to turn to in difficult times reported lower perceived stress scores, higher satisfaction with life scores, higher overall emotional intelligence scores, higher self-emotion appraisal, higher regulation of emotion, higher self-reported academic performance (both cumulative GPA and term GPA), and lower levels of self-blame coping than college students with four or fewer safe adults to turn to in difficult times.
In conclusion, although I measured three domains of support networks in this study, namely the number of close friends, casual friends, and safe adults to turn to, in the present research I was most interested in the number of safe adult supports perceived to be available to college students in difficult times. Although much more research is needed, these results suggest that a valuable and simple screening assessment might assist college personnel to identify students at greater risk of 'succumbing to adversity' and who may be at risk of lower levels of wellbeing while at college. Could it be as simple as asking students to report the number of safe adults they perceive to be available to them in difficult times? For respondents in the present study who indicated they had four or fewer safe adults to turn to, it would seem to be the case.
Although much more research is needed, screening incoming students with a simple question about how many safe adults they have to turn to in difficult times might in fact provide a pathway for administrators and faculty to assist incoming first year students (as well as returning students) in developing connections with safe and caring adults at the start of their college career. Such a process may increase the likelihood of certain students having more successful academic and social experiences during one of the most important developmental periods of their life.
Future research should continue to examine the role of close friends and caring, safe adults as social supports in the lives of college students given the findings from the present study. It should also incorporate many more measures of positive wellbeing associated with these and other domains of support in order to understand better the breadth of positive impact that close and caring adults can have on adolescents and young adults today. Finally, research should also examine the qualitative differences in perceptions of social support available from different domains including who supporters are defined as and what characterises the type and value of support provided. | 2019-01-23T18:32:39.248Z | 2010-11-15T00:00:00.000 | {
"year": 2010,
"sha1": "067ac9730bbe0c368daba7c2287c7f02fc7c217b",
"oa_license": null,
"oa_url": "https://ojs.unisa.edu.au/index.php/JSW/article/download/588/524",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b95e8b4033c6cafb475981798422537a2b0b31ae",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
222153851 | pes2o/s2orc | v3-fos-license | Tamsulosin plus a new complementary and alternative medicine in patients with lower urinary tract symptoms suggestive of benign prostatic hyperplasia: Results from a retrospective comparative study
Background: We aimed to compare the efficacy of tamsulosin 0.4 mg once a day alone and the combination therapy involving tamsulosin 0.4 mg once a day plus the complementary and alternative medicine consisting of vitamins (C and D), herbal products (Cucurbita maxima, Capsicum annum, Polygonum capsicatum) and the amino acid L-Glutamine bid in patients with lower urinary tract symptoms related to benign prostatic hyperplasia (LUTS/BPH). Methods: We performed a retrospective matched paired comparison. The clinical records of LUTS/BPH patients who underwent medical therapy with tamsulosin 0.4 mg/day plus the complementary and alternative medicine consisting of vitamins (C and D), herbal products (Cucurbita maxima, Capsicum annum, Polygonum capsicatum) and the amino acid L-Glutamine bid between January 2019 and September 2019 were reviewed (Group 1). These patients were compared in a 1:1 fashion with LUTS/BPH patients who underwent therapy with tamsulosin 0.4 mg/day alone (Group 2). Total, storage, voiding and Quality of Life (QoL) international prostate symptom score (IPSS), as well as the overactive bladder (OAB)-v8 score and treatment-related adverse events recorded at the 40-day follow-up in both groups, were compared. Results: At the 40-day follow-up, mean total, storage, voiding and QoL IPSS sub-scores as well as the OAB-v8 score significantly improved in both groups. Intergroup comparison showed statistically significantly lower mean total IPSS score (11.6 vs 12.4, p = 0.04), mean storage IPSS sub-score (6.5 vs 7.5, p = 0.01), and mean OAB-v8 score (16.7 vs 18.8, p = 0.03) in patients in Group 1. Conclusions: The combination of tamsulosin 0.4 mg/day plus the complementary and alternative medicine consisting of vitamins (C and D), herbal products (Cucurbita maxima, Capsicum annum, Polygonum capsicatum) and the amino acid L-Glutamine provides statistically significant advantages in terms of storage LUTS improvement in patients with LUTS/BPH compared to tamsulosin 0.4 mg/day alone. These findings are preliminary, and further prospective studies on a greater number of patients are needed to confirm them.
KEY WORDS: Benign prostatic hyperplasia; Combination therapy; Lower urinary tract symptoms; Phytotherapy.
Submitted 20 February 2020; Accepted 13 March 2020

INTRODUCTION

Lower urinary tract symptoms related to benign prostatic hyperplasia (LUTS/BPH) represent a common complaint in everyday urological practice, and their prevalence increases with ageing (1)(2)(3). In the EPIC study, Riboli et al. reported an incidence of storage and voiding LUTS of about 51% and 26% of the men evaluated, respectively (3). Interestingly, approximately 18% of men reported the coexistence of storage and voiding symptoms (4,5). The European Association of Urology (EAU) guidelines strongly recommend α1-adrenoceptor antagonists as the first-line therapeutic option in patients with moderate to severe symptoms, as they significantly improve urinary symptoms and maximum urinary flow (Qmax) (6). In men with moderate-to-severe LUTS who mainly have bladder storage symptoms, the EAU Guidelines strongly recommend muscarinic receptor antagonists (strong recommendation) or beta-3 agonists (weak recommendation) (7). However, a number of concerns have been reported with the prescription of these drugs. Antimuscarinics might theoretically decrease bladder strength, thus increasing post-void residual volume (PVR) urine and causing urinary retention. Moreover, not all antimuscarinics have been evaluated in elderly men, and long-term studies on their efficacy in men of any age with LUTS are not yet available. Furthermore, antimuscarinics are contraindicated in patients with angle-closure glaucoma, gastrointestinal obstruction, paralytic ileus, myasthenia gravis, and severe heart disease (8,9). On the other hand, mirabegron has been evaluated mainly in female patients (8,9). In recent years, the prescription of phytotherapeutic compounds in patients with LUTS/BPH has gained growing interest (10). These agents represent a heterogeneous group and may contain differing concentrations of active ingredients. The complementary and alternative medicine Kubiker (Naturmed, Italy), consisting of vitamins (C and D), herbal products (Cucurbita maxima, Capsicum annum, Polygonum capsicatum) and the amino acid L-Glutamine, has been proposed in the treatment of overactive bladder syndrome (OAB) (11). We aimed to compare the efficacy of the combination therapy involving tamsulosin 0.4 mg once a day plus Kubiker bid and therapy with tamsulosin 0.4 mg alone in patients with LUTS/BPH.

MATERIALS AND METHODS

We performed a retrospective comparative study. The clinical records of LUTS/BPH patients who underwent medical therapy with tamsulosin 0.4 mg/day plus Kubiker bid between January 2019 and September 2019 were reviewed (Group 1). These patients were compared in a 1:1 fashion with LUTS/BPH patients who underwent therapy with tamsulosin 0.4 mg/day alone (Group 2). The following were considered exclusion criteria: post-void residual volume (PVR) > 150 ml, prostate specific antigen (PSA) > 10 ng/ml, concomitant therapy with 5-alpha reductase inhibitors and/or phosphodiesterase type 5 inhibitors and/or muscarinic receptor antagonists or beta-3 agonists, presence of neurological disorders, previous pelvic surgery, diabetes, urinary tract infections, and history of acute urinary retention. The matched-pair comparison was based on the following criteria: PSA, prostate volume (PV), Qmax, PVR, total international prostate symptom score (IPSS), and the 8-item overactive bladder questionnaire (OAB-v8) score. Total, storage, voiding and Quality of Life (QoL) IPSS scores, as well as the OAB-v8 score and treatment-related adverse events recorded at the 40-day follow-up in both groups, were compared. Descriptive data of continuous variables were expressed as mean ± standard deviation (SD) and compared using Student's t-tests. The analyses were considered significant for a p-value < 0.05. All statistical analyses were performed with SPSS version 16.0 software. The study was performed in accordance with the ethical standards laid down in the Declaration of Helsinki. Verbal informed consent was obtained from subjects.
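As an illustrative sketch of the statistical comparison just described (the authors used SPSS; scipy is assumed here as a stand-in, and the score arrays are simulated, not the study data), a paired Student's t-test on the matched groups could be run as follows:

```python
# Minimal sketch of the matched-pair comparison (simulated data, not the study's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical total IPSS scores at the 40-day follow-up for 36 matched pairs
group1 = rng.normal(11.6, 2.5, 36)  # tamsulosin 0.4 mg/day + Kubiker bid
group2 = rng.normal(12.4, 2.5, 36)  # tamsulosin 0.4 mg/day alone

# Groups are matched 1:1, so a paired test is shown; an unmatched design
# would use stats.ttest_ind instead. p < 0.05 is taken as significant.
t_stat, p_value = stats.ttest_rel(group1, group2)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```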
RESULTS
Overall, 36 eligible patients who underwent medical therapy with tamsulosin 0.4 mg/day plus Kubiker were identified and compared to 36 patients who underwent therapy with tamsulosin 0.4 mg/day alone. Baseline patients' characteristics in both groups are reported in Table 1. At the 40-day follow-up, mean total, storage, voiding and QoL IPSS sub-scores significantly improved in both groups (Table 2). Similarly, a statistically significant improvement in terms of OAB-v8 score and Qmax was observed in both groups (Table 2). Intergroup comparison showed statistically significantly lower mean total IPSS score, mean storage IPSS sub-score, and mean OAB-v8 score in patients in Group 1. No statistically significant differences in terms of voiding IPSS sub-score, Qmax and PVR emerged from the intergroup analysis. No clinically significant treatment-related adverse events were recorded in either group.
DISCUSSION
Benign prostatic obstruction has been reported to cause morpho-functional alterations involving the detrusor muscle. Clinically, these alterations can impair bladder contractility and cause detrusor overactivity, decreased bladder compliance, and the onset of storage LUTS characterized by altered bladder sensation, increased daytime frequency, nocturia, urgency and urgency incontinence (12). Experimental models have shown that bladder outlet obstruction causes detrusor smooth muscle cell hypertrophy and hyperplasia as well as extracellular matrix alterations that may lead, over time, to detrusor overactivity and, later, to reduced bladder contractility (13)(14)(15). As reported in the EpiLUTS study, 45.7% of the 14,139 men evaluated had storage LUTS (16). α1-blockers act by inhibiting the effect of endogenously released noradrenaline on smooth muscle cells in the prostate, thus reducing prostate tone and bladder outlet obstruction (17). These drugs can reduce both storage and voiding LUTS and are considered the first-line drug treatment for male LUTS due to their good efficacy and the low rate and severity of adverse events. LUTS/BPH patients with mainly bladder storage symptoms represent a difficult-to-treat subset of patients. Indeed, therapy with α1-blockers may be suboptimal. On the other hand, both muscarinic receptor antagonists and beta-3 agonists should be prescribed with caution, and adherence to treatments with these drugs is often inadequate. Herbal treatments are an increasingly popular alternative for treating storage LUTS (18). To the best of our knowledge, we compared, for the first time, the clinical efficacy of the combination of tamsulosin 0.4 mg/day plus Kubiker and tamsulosin 0.4 mg/day alone in patients with LUTS/BPH. We found that the combination therapy provided statistically significant advantages in terms of storage LUTS, as demonstrated by lower IPSS storage sub-scores as well as a lower OAB-v8 score. A considerable body of evidence exists about the potential beneficial effects provided by the compounds contained in the food supplement Kubiker. Cucurbita maxima, contained in pumpkin seeds, has been reported to provide benefits in both preclinical and clinical models of lower urinary tract dysfunction (19)(20)(21)(22)(23)(24)(25). Pre-clinical studies have shown that pumpkin seeds have antioxidant and anti-inflammatory properties and inhibit lipid peroxidation (20). Pumpkin seeds administered to rats affected by overactive bladder (OAB) syndrome were shown to increase the production of nitric oxide (NO) via the NO/arginine pathway (22). Independently of the acetylcholine/adrenaline system, this pathway generated the relaxation of the bladder detrusor musculature (23). Pumpkin seeds were also shown to modulate prostate growth. Abdel Rahman et al. found that rats fed with high amounts of pumpkin seeds in the diet had smaller prostate sizes as compared to untreated rats (24). Furthermore, Tsai and co-workers showed that rats receiving subcutaneous testosterone to induce an increase in prostate size and subsequently treated with pumpkin seeds for 14 days presented a smaller prostate gland compared to the control group treated only with prazosin (25). Nishimura et al. observed that the administration of pumpkin seed extract for 12 weeks significantly reduced the symptoms of OAB with no side effects (19). Polygonum capsicatum has a strong antioxidant activity, which has been observed in vitro (26).
Capsaicin is the first vanilloid investigated for therapeutic purposes, and evidence exists demonstrating its efficacy in the treatment of LUTS (11). Capsaicin has been used for the treatment of OAB syndrome due to its ability to desensitize the transient receptor potential vanilloid 1 receptor (27). Evidence exists demonstrating that vitamin C from food and beverages can modulate voiding symptoms (11). However, the exact mechanism of action deserves further investigation. Vitamin D is essential for the proper functioning of the pelvic floor. It has been widely reported that a vitamin D deficiency can predispose patients to a high risk of developing LUTS and incontinence (28,29). To date, the role of glutamine in patients with LUTS is widely under-investigated and deserves careful investigation. Overall, although preliminary, the results of the present study have relevant clinical implications and lay the basis for further investigations. The combination of α1-adrenoceptor antagonists and phytotherapeutic agents containing a mixture of compounds that can interfere with the pathophysiology of bladder dysfunction at multiple levels, like Kubiker, may represent a strategy to discuss in patients with prevalent storage LUTS/BPH for whom therapy with α1-adrenoceptor antagonists alone is suboptimal and medical treatments with muscarinic receptor antagonists or beta-3 agonists are not recommended or not tolerated. The main limitations of the present study are the retrospective design, the small sample size, and the short follow-up. Moreover, the specific role of the various components of Kubiker could not be assessed. Therefore, the results of the present study should be considered preliminary, and further studies are needed to confirm the efficacy and safety of the combination of tamsulosin and the food supplement Kubiker in LUTS/BPH patients and to identify the subset of patients that can benefit most from this approach.
The role of Kubiker in women with storage LUTS represents a further area of interest (11,30).
CONCLUSIONS
The combination of tamsulosin 0.4 mg/day plus Kubiker bid provides statistically significant advantages in terms of storage LUTS improvement in patients with LUTS/BPH compared to tamsulosin 0.4 mg/day alone. These findings are preliminary, and further prospective studies on a greater number of patients are needed to confirm them. | 2020-10-06T13:37:19.433Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "788ce6804f545eddc39801b05b1e19164a3ad4ab",
"oa_license": "CCBYNC",
"oa_url": "https://pagepressjournals.org/index.php/aiua/article/download/aiua.2020.3.173/9010",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5756e56c1790f1806097de0744fdb229810734c1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236504033 | pes2o/s2orc | v3-fos-license | Optimal Investment Decision of Distribution Network With Investment Ability and Project Correlation Constraints
Power grid enterprises are faced with a serious mismatch between limited investment capacity and numerous investment projects. How to accurately match the weak links with investment projects according to the power system diagnosis is the key to improving investment accuracy. On the basis of an investment-oriented label, a project portfolio optimization framework with coherent diagnosis–evaluation–optimization is proposed in this study. First, a two-layer index system for investment benefit evaluation is established, which considers unit investment efficiency and macroinvestment benefit. Second, by weighing the diagnosis results of the power grid and the biased investment environment, the benefit evaluation of each project is implemented as the basis of project portfolio optimization. To meet different investment demands, two combination optimization models, maximizing investment benefit and minimizing investment cost, are established, considering the coupling benefit relationships and time series relationships between projects. Finally, a case study is conducted for a regional distribution network. The proposed framework has been proven to be able to effectively cope with different investment needs and realize the dynamic adjustment of the scheme in the whole investment cycle.
INTRODUCTION
As a significant part of ensuring the safe and stable operation of power grids and improving power supply quality, distribution network planning has become the focus of medium- and long-term investment of power enterprises. Taking China as an example, the scale of power grid investment has increased continuously from 344.8 billion yuan in 2010 to 460 billion yuan in 2020. At the same time, with the access of a high proportion of clean energy and the deepening of the interaction between supply and demand, the investment decision of the distribution network involves new elements, such as clean energy installation (Telukunta et al., 2017; Erdiwansyah and Husin, 2021) and automation equipment, so that grid investment faces a large number of investment projects of various types. In addition to the basic power supply level and quality, the unbalanced economic development between regions and policy changes are also within the planning scope of the decision-making level. The traditional rough attribute association system is not conducive to quantifying the investment benefit of a project. Therefore, realizing fine fund allocation and accurate project investment by selecting targeted construction projects from planning projects for efficient investment is of great significance for decision-making departments.
A great deal of research has been conducted in establishing an index evaluation system to optimize distribution network projects. By sorting the investment scale and investment direction of the distribution network, accurate project investment was realized by Wang et al. (2019a). In a study by Liu et al. (2019a), based on the historical power supply situation in a specific area, a feeder urgency function was established through time series weight analysis of key distribution network indicators to optimize reconstruction projects. The entropy weight method is used to make up for the subjective defects in weighting, and the projects are evaluated on the basis of comprehensive quantitative indicators, as given in Luo and Li (2013). On the basis of the ability of different attributes to improve existing problems of the distribution network, the optimization index of the high- and medium-voltage distribution network is constructed in the study by Tang et al. (2018). The TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) method is introduced to sort the ideal solutions of the first- and second-level indicators of planning projects, and a second ranking is guided by project relevance and the investment limit, as given by Ye et al. (2019). However, the investment projects in the abovementioned models are mainly ranked by scores and lack specific optimization models. In this regard, predecessors have made progress in using optimization models to assist decision making. In the study by Li et al. (2018), a project optimization model combining investment quota and investment scale is constructed by associating the attributes of satisfying power supply demand, heavy load, and bottleneck problems with a single planned project. By focusing on the spatial layout according to the impact of projects under construction on the whole or local distribution network, the dynamic planning of distribution network projects is realized by Fu et al. (2019). The investment efficiency of a single attribute is calculated through the historical investment effect, and an allocation iteration model is established to realize investment management with different granularities in the work by Li et al. (2019). As given by Wang et al. (2019b), the quantitative and qualitative indexes of project evaluation are transformed into numerical values in the [−1, 1] interval, the subjective risk preference of decision makers is considered, and a maximum prospect optimization model is established. In the study by Huang et al. (2020), considering the random errors of distribution network indicators, a two-stage robust project optimization model under uncertain factors is constructed, and the adaptability of the model is verified by the C&CG algorithm.
Besides, some studies have conducted in-depth discussions based on data envelopment analysis (DEA) in project evaluation (Çelen and Yalçın, 2012; Gouveia et al., 2015; Oskuee et al., 2015; Arcos-Vargas et al., 2017; Mardani et al., 2017; Liu et al., 2019b). The above methods build models according to different project characteristics, but they lack consideration of the coupling benefit relationships between projects and fail to finely construct the necessary correlation constraints for the optimization of distribution network projects. In addition, the diagnosis–evaluation–optimization stages in the project optimization process are conducted independently, and a set of coherent project optimization methods is lacking.
In view of the above problems, the investment-oriented label is introduced as a tool in this study. A two-layer index system for investment benefit evaluation of distribution network projects is first constructed to diagnose the development of power grids, which quantifies the efficiency of project funds based on the specific label. On the basis of constraints such as the traditional total investment, power grid development demand, and portfolio return, combined with the project coupling benefit and timing correlation constraints in actual production, distribution network portfolio optimization models aiming at maximizing comprehensive benefit and minimizing investment cost are established, respectively. Finally, through multiyear deduction and analysis of the time series results, reference opinions can be provided for the dynamic adjustment of investment schemes, improving the effectiveness of project construction.
The rest of the article is organized in the following manner. In the Label-Based Investment Decision Framework section, the investment decision-making framework based on the project label is introduced. In the Multistage Project Selection Method Considering Coupling Benefit and Time Series Correlation section, the proposed project portfolio optimization method, considering the characteristics of coupling benefit and time series correlation, is described in detail. In the Case Study section, a case study is conducted for a regional distribution network. The Conclusion section draws the conclusions.
LABEL-BASED INVESTMENT DECISION FRAMEWORK
China's power grid investment plan usually takes five years as a cycle. The label-based investment decision process can be summarized as follows: ① Before the investment period, each region reports the project according to the current situation of the local power grid and the expected state of power grids. Because of the difference in specific construction contents, various projects usually have different functional directions, such as meeting the new load demand and strengthening the grid structure to solve heavy overload equipment. ② According to the distribution network construction list and the unit project cost in previous years, the required investment amount in the next planning period is calculated. Then, the project investment effect is estimated; a project label system including different kinds of information (e.g., project basic information, project investment information, project scope information, and project progress information) is also formed. ③ According to the given investment budget and the list of future construction in each region, on the premise of ensuring that the total distribution network investment in each region does not exceed the total investment budget, each investment project is evaluated and screened. The project optimization process is shown in Figure 1.
④ In the five-year cycle, to prevent the expected amount of project investment from changing due to the change of project plan and price level, reviewing and adjusting the investment plans and budgets of each city every year are necessary to ensure that the total project investment in the planning cycle meets expectations.
MULTISTAGE PROJECT SELECTION METHOD CONSIDERING COUPLING BENEFIT AND TIME SERIES CORRELATION
The whole project optimization process can be divided into three main stages: project diagnosis and comprehensive benefit evaluation, project portfolio optimization, and portfolio deduction analysis.
Step 1: Project diagnosis and comprehensive benefit evaluation
On the basis of the project label system of distribution network planning, this study investigates the weighting mode of the project weight coefficient; considers the development demand, investment expectation, and investment ability as a whole; and establishes an intelligent evaluation model aimed at the optimal comprehensive benefit of technology, economy, and society.
Step 2: Optimal selection of project portfolio
Under the given investment capacity or development demand, an intelligent optimal selection model aiming at the optimal comprehensive benefit of technology, economy, and society can be established. Here, the objective function considers the coupling benefit characteristics, and the constraints involve the time series correlation characteristics among the projects.
Step 3: Analysis of portfolio deduction
The result deduction analysis considers the investment preference in different time periods and flexibly selects the appropriate investment decision model according to different investment needs. Through time series analysis, the investment scheme can be dynamically adjusted in the whole life cycle.
The next section describes the details of each phase.
Project Diagnosis and Comprehensive Benefit Evaluation
The fundamental purpose of investment decision precision is to improve efficiency. A scientific and reasonable index evaluation system is the basis of analyzing the input and output of the distribution network. At present, relevant research (Mansor and Levi, 2017; Molina et al., 2017) has provided comprehensive index evaluation systems. To harmonize the results of portfolio investment benefit evaluation and power grid development diagnosis, the evaluation index system used in this study is divided into two layers, which are shown in Table 1. The upper index is used to diagnose the current situation of power grid development to accurately sense the investment demand and determine the boundary conditions of the optimal selection model. The lower index is mainly used for the comprehensive benefit evaluation of the project through scoring, quantifying the unit input–output benefits of different dimensions to provide a numerical basis for the optimal selection of the project portfolio. In this research, considering the evaluation objectives, objectivity, and difficulty of data acquisition, the evaluation index system is given at five levels: grid strength, power supply safety quality, operation economy and efficiency, power supply coordination, and social friendliness. On the basis of the target layer index, and according to the actual situation of the target distribution network, representative indicators, such as the voltage qualification rate and the network loss rate, can easily be selected to quantify each index. The index system includes the traditional dimensions of safety, economy, and reliability while considering the new characteristics of distributed energy and electric vehicles, with a focus on the investment efficiency of the project.
Determination of Index Weight
The evaluation index of different dimensions is usually reflected by the way of empowerment. However, the power grid investment decision is a complex decision-making process affected by multidimensional factors such as investment capacity and policy orientation. Subjective weighting (Shen et al., 2018;Alvarado et al., 2019) ignores the natural physical relationship among indicators, whereas objective weighting (Muñoz-Delgado et al., 2019;Verástegui et al., 2019) based on the data itself cannot consider the external environmental impact dominated by human factors. In this study, the index weight and investment weight are used to quantitatively evaluate the impact of the internal development of the power grid and the external complex environment on the key investment direction of the power grid.
On the one hand, with the results of the power system diagnosis, the index weight w1 of the different dimensions can be determined by the difference between the current development situation and the expected objectives, which directly reflects the weak links of power grid development and is used to screen the investment projects. Eq. 1 is the formula used for calculating the index weight:

w_{1,k} = (s_k' − s_k) / Σ_{m=1}^{M} (s_m' − s_m),    (1)

where s_k and s_k', respectively, represent the status score and target score of the kth target layer indicator and M represents the total number of target layer indicators.
On the other hand, under the constraint of the total investment, the distribution network investment should also reflect investment bias and be selective. In different periods, different regions may have different investment weights w2. For example, high requirements for power supply quality are demanded during the Olympic Games. w2 can be flexibly adjusted on the basis of w1 to meet different investment needs. In general, w2 is mainly given by experts' experience according to the actual situation in a certain period, considering factors such as the natural climate and environment and policy support. w2 can also be flexibly adjusted according to work priorities or experience.
Finally, the comprehensive weight can be calculated as follows:

w_{c,k} = w_{1,k} · w_{2,k}.    (2)
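As a concrete illustration of Eqs. 1 and 2, the following Python sketch computes w1 from the status/target gaps and combines it with an expert-given w2. All scores are invented, and the final renormalization of the comprehensive weight is our assumption, since the paper does not state how wc is scaled:

```python
import numpy as np

# Status and target scores of the M = 5 target-layer indicators (invented values)
s_status = np.array([85.0, 60.0, 70.0, 80.0, 75.0])
s_target = np.array([90.0, 85.0, 85.0, 85.0, 80.0])

# Eq. 1: index weight w1 as the normalized status-target gap
gap = s_target - s_status
w1 = gap / gap.sum()

# Investment weight w2: expert-given bias for the current period
w2 = np.array([1.0, 0.75, 0.67, 3.0, 1.0])

# Eq. 2: comprehensive weight; renormalized to sum to one (our assumption)
wc = w1 * w2
wc /= wc.sum()
print(wc.round(3))
```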
Comprehensive Benefit Evaluation Model
Considering the different dimensions and attributes of each index, initializing the data is necessary before the comprehensive scoring of the project is performed.
According to the principle of preferred investment, the larger the output value of unit investment is, the more ideal it will be. The maximum value of each index among all projects is set to 100 points, whereas the minimum value is set to 0 points. The scores of each index are calculated as follows:

s_{i,j} = 100 × (l_{i,j} − l_j^{min}) / (l_j^{max} − l_j^{min}),    (3)

where s_{i,j} and l_{i,j} are the jth index score and data value of the ith project and l_j^{max} and l_j^{min} represent the maximum and minimum values of the jth index among all projects, respectively.
The investment benefit score of a single project can be calculated using the following formula:

S_i = Σ_{j=1}^{n} w_{c,j} · s_{i,j},

where S_i is the comprehensive score of the ith investment project, s_{i,j} is the score of the jth index, w_{c,j} represents the comprehensive weight of the jth index, and n represents the total number of indicators.
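A companion sketch for the scoring step (again with invented data): each index is min–max scaled to 0–100 across all projects per Eq. 3 and then aggregated with the comprehensive weights into the project score S_i:

```python
import numpy as np

# l[i, j]: unit-investment output of project i on index j (invented values)
l = np.array([
    [0.8, 12.0, 3.1],
    [0.5, 20.0, 2.4],
    [1.1,  7.5, 4.0],
])
wc = np.array([0.5, 0.3, 0.2])  # comprehensive weights of the three indices

# Eq. 3: best project on an index gets 100 points, worst gets 0
s = 100 * (l - l.min(axis=0)) / (l.max(axis=0) - l.min(axis=0))

# Comprehensive benefit score of each project (weighted sum of index scores)
S = s @ wc
print(S.round(2))
```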
Project Optimization Model With the Goal of Maximizing Benefit
Benefit Coupling Characteristics
Some distribution network planning projects have the characteristics of matching and continuity (Shen et al., 2020). According to the benefit relationship between two projects or within a single one, the project coupling benefit characteristics can be defined as compatibility and support. Figure 2 illustrates the diagram of both coupling benefit characteristics.
Time Series Correlation Characteristics
In addition to the coupling benefit characteristics, time series correlation characteristics also exist among distribution network planning projects according to their construction requirements and timing. Specifically, some distribution network planning projects must be constructed in a certain sequence, cannot be put into operation at the same time, or must be put into operation at the same time in order to be implemented and play their role; these are respectively defined as the dependent, mutually exclusive, and complementary characteristics.
Objective Function
According to the difference in objective function, the investment decision problem in this study is divided into two categories: one is to improve the performance within a certain amount of investment, and the other is to minimize the total investment cost under certain security constraints. To improve the comprehensive investment benefit of distribution network and minimize the investment cost in the economic life cycle of equipment, the optimal selection model of the project is established.
Maximum Investment Benefit Model
The optimal performance improvement model mainly solves the problem of how to improve the performance indexes of the distribution network as much as possible under the condition that the total investment is fixed. The objective function can be written as

max F = Σ_{i=1}^{N} S_i^{(P)} y_i + Σ_{i=1}^{N} Σ_{j>i} r_{i,j} y_i y_j.    (4)

In Eq. 4, S_i^{(P)} is the present value of the investment benefit of project i converted to the first year, and the coupling coefficients r_{i,j} form the coupling characteristic matrix R = [r_{i,j}]_{N×N}, which considers the efficiency coupling characteristics of each project; r_{i,j} represents the coupling coefficient between projects i and j. When projects i and j are compatible, a promoting effect is observed between the two projects, which is manifested as r_{i,j} > 0; when i and j are supporting projects, an overlap is found between the two projects, which is manifested as r_{i,j} < 0. The total investment constraint is taken as an additional constraint:

Σ_{t=1}^{T} Σ_{i=1}^{N} c_i x_{i,t} ≤ C_total,

where c_i represents the investment cost of the ith project, t represents the investment year, and C_total is the total investment limit.
Minimum Investment Cost Model
The minimum investment cost model considers the problem of how to find the technical path with the minimum investment under certain performance index requirements. At this time, the model takes the minimum total investment amount as the objective function:

min F = Σ_{i=1}^{N} c(x_i) · y_i,

where c(x_i) represents the cost of the ith project and y_i represents the status variable indicating whether the ith project is selected. If y_i = 0, then project i is eliminated. The performance index constraints are taken as additional constraints:

Φ_min(Ω) ≤ Φ(Ω) ≤ Φ_max(Ω),

where Φ represents the performance index set and Φ_min(Ω) and Φ_max(Ω), respectively, represent the minimum and maximum values of the corresponding indexes.
Base Constraints
In addition to the total investment constraints and performance constraints, the two models should meet other basic constraints, such as mutually exclusive and complementary project constraints.
Mutually Exclusive Project Constraints
Suppose Ω_e is the information set of mutually exclusive relationships of projects; if {P_i, P_j} ∈ Ω_e, then P_i and P_j are mutually exclusive projects. That is, at most one of projects i and j can be put into operation, which can be expressed as follows:

y_i + y_j ≤ 1,    ∀{P_i, P_j} ∈ Ω_e.
Dependent Project Constraints
Suppose Ω_d is the information set of dependent relationships of projects; if {P_i, P_j} ∈ Ω_d, then P_i can only be put into operation depending on P_j. That is, the selected year of project i must be after the selected year of project j, which can be expressed by the following formula:

Σ_{τ=1}^{t} x_{i,τ} ≤ Σ_{τ=1}^{t−1} x_{j,τ},    ∀t ∈ {1, …, T}, ∀{P_i, P_j} ∈ Ω_d.
Complementary Project Constraints
Suppose Ω_b is the information set of complementary relationships of projects; if {P_i, P_j} ∈ Ω_b, then P_i and P_j are complementary projects and must be put into operation at the same time, which can be expressed as follows:

x_{i,t} = x_{j,t},    ∀t ∈ {1, …, T}, ∀{P_i, P_j} ∈ Ω_b.
Radio Constraints
Given that the project selection process considers the multiyear timing relationship, to avoid the same project being selected multiple times in different years, the radio constraint should be added for each project:

Σ_{t=1}^{T} x_{i,t} ≤ 1,    ∀i ∈ {1, …, N},

where x_{i,t} represents the selection status of the ith project in year t and T represents the investment cycle.
Logical Constraints
Some logical constraints exist between the single-year selected state and the final selected state. No matter in which year of the investment cycle project i is selected, the project will be reflected as finally selected. That is, a project selected in a single year must be selected in the final portfolio, expressed in mathematical terms as follows:

Σ_{t=1}^{T} x_{i,t} = y_i,    ∀i ∈ {1, …, N}.
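To show how the objective and the correlation constraints above fit together, the sketch below poses a small instance of the maximum-benefit model with gurobipy, matching the GUROBI solver reported in the case study. All project data, coupling coefficients, and relationship sets are placeholders, and the quadratic coupling term follows the reconstruction of Eq. 4 above rather than a formulation confirmed by the authors:

```python
import gurobipy as gp
from gurobipy import GRB

N, T = 6, 3                                      # projects and years (toy sizes)
S = [38.9, 30.2, 45.1, 27.6, 33.0, 41.5]         # benefit score of each project
c = [402.8, 150.0, 388.9, 120.5, 210.0, 305.0]   # cost of each project (10k yuan)
budget = 1200.0                                  # total investment limit
r = {(0, 2): 5.0, (1, 3): -4.0}                  # coupling: compatible > 0, supporting < 0
excl = [(0, 1)]                                  # mutually exclusive pairs (Omega_e)
dep = [(4, 2)]                                   # (i, j): project i depends on project j (Omega_d)
comp = [(3, 5)]                                  # complementary pairs (Omega_b)

m = gp.Model("portfolio")
x = m.addVars(N, T, vtype=GRB.BINARY, name="x")  # x[i,t] = 1 if project i is selected in year t
y = m.addVars(N, vtype=GRB.BINARY, name="y")     # y[i] = 1 if project i is finally selected

# Radio + logical constraints: at most one selection year, tied to final status
m.addConstrs(gp.quicksum(x[i, t] for t in range(T)) == y[i] for i in range(N))

# Total investment constraint
m.addConstr(gp.quicksum(c[i] * y[i] for i in range(N)) <= budget)

# Mutually exclusive projects: at most one of the pair
m.addConstrs(y[i] + y[j] <= 1 for i, j in excl)

# Dependent projects: i may be selected by year t only if j was selected earlier
for i, j in dep:
    for t in range(T):
        m.addConstr(gp.quicksum(x[i, tau] for tau in range(t + 1))
                    <= gp.quicksum(x[j, tau] for tau in range(t)))

# Complementary projects: put into operation in the same year
m.addConstrs(x[i, t] == x[j, t] for i, j in comp for t in range(T))

# Objective: individual benefits plus pairwise coupling benefits
obj = gp.quicksum(S[i] * y[i] for i in range(N)) \
    + gp.quicksum(rij * y[i] * y[j] for (i, j), rij in r.items())
m.setObjective(obj, GRB.MAXIMIZE)
m.optimize()
print("selected:", [i for i in range(N) if y[i].X > 0.5])
```

Swapping the objective for the total cost gp.quicksum(c[i] * y[i] for i in range(N)) with GRB.MINIMIZE and adding the performance bounds as constraints would yield the minimum-cost variant of the model.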
CASE STUDY
Taking a batch of investment plans of a county-level company as an example, the proposed model is verified. The annual power supply capacity of the company's 35 kV and below distribution network is 2.101 billion kWh, the average annual load of the whole society is 119.92 MW, and 2,743 distribution transformers (including on-column transformers) and 180 medium-voltage distribution lines (35 and 10 kV) are installed. The electricity consumption of the secondary industry accounts for 60.13%, mainly textile and manufacturing industries, and the electricity consumption of the tertiary industry and residents accounts for 33.54%. In recent years, the electricity consumption of the service and commercial industries has increased rapidly and the annual load growth rate is expected to reach 5.3%. The electrification degree of terminal energy consumption in the county is high, and the proportion of electricity consumption in the tertiary industry is increasing year by year. The investment capital is planned to be 30 million yuan. Twenty key alternative projects exist in the project library, with a total capital demand of 52.6731 million yuan, far exceeding the existing investment capacity. This section assumes that the investment period is five years. First, the diagnosis and analysis process of power grid development is displayed on the basis of the project label, and the weights of various indicators are obtained by analyzing the weak links according to the diagnosis results. Second, taking the calculation of investment performance of a project as an example, the scoring method is expounded. Considering the coupling benefit and time series relationships of the project, the optimal multiyear portfolio investment scheme of the distribution network project in this region is also calculated. Last, to adapt to the changing investment demand in the whole life cycle, by setting different development scenarios, the project optimization scheme is extended and deduced on the basis of the proposed project optimization model for realizing the dynamic adjustment of the scheme.
Power System Diagnosis and Analysis
The statistics of power grid development indicators in this area are shown in Table 2, where the grid-strength indicators reach up to 94.27%, indicating that the network frame of the area is relatively strong. However, a huge gap is observed between the current and expected values of power supply safety and quality. For example, the coverage rate of distribution automation and the line insulation rate are only 47.1 and 40.73%, respectively, suggesting a gap with expectations. In addition, further optimization is needed for the equipment loss. Through the above analysis, it can be concluded that the key points of regional investment should be concentrated on power supply safety and quality and on economic and efficient operation.
According to the method of determining the weight described in the Project Diagnosis and Comprehensive Benefit Evaluation section, the weights of the indicators shown in Table 3 are obtained.
Label-Based Project Evaluation
Figure 3 shows the content and structure of the project label, taking Project 1 as an example. Four kinds of indicators, namely, project type, project correlation, project function, and project effectiveness, are distinguished by different colors. Reference information is also revealed by the labels in the different stages of project optimization. Among them, the project type index describes the functional classification of the project; the project function index provides the estimated values of the various diagnostic indicators after the implementation of the project; the project effectiveness index shows the unit investment return brought by the construction of the project on the basis of the project function index, which mainly provides data information for the evaluation and scoring of this project; and the project correlation index includes the construction urgency, mutually exclusive projects, complementary projects, and dependent projects of Project 1, which provides constraint conditions for the combination optimization model. Based on the contents in the Project Diagnosis and Comprehensive Benefit Evaluation section, the scoring results of Project 1 are shown in Table 4.
The project properties of the label show that the project is mainly used to strengthen the grid structure, including the construction of new wells, underground cables, and new communication equipment. The qualified rate of line sectionalizing and the distribution automation index per unit of line investment are excellent, with scores of 88.9 and 83.33, respectively. The underground cable laying not only effectively improves the poor insulation rate in the area but also further improves the power supply quality. Moreover, owing to the implementation of the project, the comprehensive voltage qualification rate in this area is expected to increase by 0.15% and the power supply reliability by 0.45%.
The five major categories of projects in the project library, namely, load-satisfaction, weakness-elimination, substation-supplementary, grid-enhancement, and overload-relief, are evaluated in Table 5, where T1, T2, T3, T4, and T5 represent the five performance categories, namely, strong structure, safety and quality, economy and efficiency, power supply coordination, and social friendliness.
The evaluation results based on the scoring standard can be compared to indicate the benefit contributions of the various projects intuitively. The load-satisfaction projects that meet the added load demand mainly improve the indicators of power supply coordination, power supply safety, and social friendliness. The weakness-elimination projects that eliminate hidden equipment dangers and the overload-relief projects that solve line overload can significantly reduce equipment loss and heavy load, making great contributions to improving economy and efficiency, with scores of 45.06 and 45.79, while the grid-enhancement projects are mainly conducive to grid stability and power supply quality.
Considering the impact of uncertain load growth in the future, a preference is set in the investment weights w2, which are given as w2 = [1, 0.75, 0.67, 3, 1]. Based on the weights w1 shown in Table 3, the final comprehensive weights can be calculated, and the comprehensive scores of the projects are shown in Figure 4. As can be seen from the figure, load-satisfaction projects generally obtain high scores, with an average score of 38.99. Grid-enhancement projects also obtain good scores because of their high contribution to power supply quality. When only investment benefit is considered, both types of projects have great advantages in the optimization, which is in line with the results of the power grid diagnosis. Along with good benefits, however, come high construction costs: the average construction costs of load-satisfaction and grid-enhancement projects reach 4,027,700 yuan and 3,889,500 yuan, respectively, which are much higher than those of other projects.
Project Portfolio Optimization
During the actual project optimization process, apart from the investment cost constraint, the optimization should also consider the time series correlation constraints, such as mutually exclusive, dependent, and complementary projects. The optimal portfolio cannot be selected simply by score. On the basis of the project labels shown in the Power System Diagnosis and Analysis section, the coupling benefit relationships among the various projects are given. The basic information is presented in Table 6. In particular, Project 11←Project 3 indicates that Project 11 depends on Project 3 to be implemented.
According to the project optimization model proposed in this study, the optimization of the abovementioned project library is divided into the following three cases: Case 1: given that the total investment limit of the region in the investment cycle is 30 million yuan, the goal is to maximize the investment benefit for five years in the whole investment cycle. The optimization process considers the project coupling benefit relationship and time series correlation relationship. Case 2: Investment decisions are independent each year in the investment cycle, and investment optimization is performed on the basis of the previous year's investment portfolio with the maximization of investment benefit as the objective function. For the convenience of comparative analysis, the total investment limit in the investment cycle of this region is set to be the same as case 1, and the total investment in each year is six, seven, six, five, and six million. Case 3: Given the constraints of (1) meeting the annual maximum load growth of 5.3%, (2) the N-1 pass rate being increased to 90%, and (3) maintaining the proportion of heavy overload equipment below 3%, the investment ability is unknown and the goal is to minimize the total investment cost while meeting the performance requirements.
GUROBI is used to solve the problem, and the optimal portfolio scheme is obtained, as illustrated in Figure 5. Figure 5 displays the project portfolio schemes under the different cases, in which the multiyear view shows the total investment distribution and corresponding accumulated benefits as of each year, whereas the single-year view shows the new projects and the investment proportion of the various project types every year in the investment cycle. In addition, the single-year view in case 2 gives the investment ability of each year, and that in case 3 indicates the index improvement value of each year. LR, TR, N-1, and LG, respectively, represent the heavy load rate of lines, the heavy load rate of transformers, the "N-1" pass rate, and the satisfied load growth rate.
As illustrated in Figure 5, cases 1 and 2 use the maximum investment benefit model, whereas case 3 uses the minimum investment cost model for project optimization. From the perspective of investment benefit, the benefits brought by case 1 are obviously higher than those of the other two cases, achieving the highest benefit of all cases. However, case 1 also has higher investment costs. As observed in Figure 5, LR, TR, N-1, and LG of case 3 reached 0, 0, 95.7, and 45.45%, respectively, in the fifth year, which met the performance requirements while ensuring good economy. This finding is consistent with the original intention of the model setting. By setting different models, the proposed framework can flexibly meet different investment decision-making needs.
In addition, some details between cases 1 and 2 deserve further discussion. Although case 2 adopts the model of maximizing investment benefit, its ultimate benefit improvement is less than that of case 1. The reason is that the investment decision-making process is conducted every year and is limited by the annual total investment. Moreover, the flexibility of the project portfolio decreases to a certain extent; thus, the optimal project portfolio scheme cannot be obtained. From this perspective, we can see that case 1 can make more effective use of funds and achieve better investment benefits. However, due to the influence of geographical environments, natural disasters, and policy requirements, determining the load growth and various unexpected situations in advance for the actual investment decision-making process is often difficult. In addition, the optimal investment portfolio scheme in the whole multiyear investment cycle can only be determined on the basis of the investment background of the first year, which makes the investment scheme deviate greatly from reality. Considering that case 1 takes the total investment benefit in the investment cycle as the objective function and lacks specific constraints on single-year investment, no project may be selected in the nth year. In this scenario, no essential difference is observed when n takes 3 or 4, and the optimal portfolio scheme may not be unique. However, during the actual process, the uncertainty of the scheme may bring different risks and benefits, which are often difficult to quantify. By contrast, case 2 can effectively make up for this poor flexibility by making investment decisions separately each year. Through the combination of cases 1 and 2, various complex investment scenarios can be considered comprehensively, thus providing reference opinions for the dynamic adjustment of the optimized portfolio in the whole investment cycle.
CONCLUSION
In view of the problems of extensive investment and low investment efficiency in current power grid investment, this study integrates the stages of diagnosis–evaluation–optimization by introducing an investment-oriented label and puts forward a decision-making framework for accurate investment in the distribution network, considering the coupling relationship of project benefits and the time series correlation. The following conclusions are drawn through the case study: 1) The adoption of a two-layer index system can macroscopically analyze local investment demand and quantify the investment efficiency of a single project. By setting the two-layer weights of investment weight and index weight, the weak links of power grid development can be effectively matched with the various investment projects. While considering the influence of the external investment environment, the investment portfolio can also be targeted according to the actual situation, which improves investment accuracy. 2) By introducing the concept of the investment-oriented label, the problems of inconsistent information and dimensions of the various projects are solved; the coupling benefit relationships and time series relationships between the various projects are also quantitatively considered during the project optimization process. The proposed method can well meet the investment demand under the two modes of maximizing investment benefit and minimizing investment cost. 3) By setting different cases, the advantages and disadvantages of the two investment decision-making schemes based on single-year and multiyear investment are demonstrated.
The results reveal that the multiyear investment scheme can make more effective use of funds and obtain better investment benefits, but it cannot cope with changes in investment capacity and the investment environment during the investment cycle. Moreover, situations exist where the optimal scheme is not unique, which further brings different risks and benefits. The combination of the two schemes can consider various complex investment scenarios more comprehensively and realize the dynamic adjustment of the optimized portfolio in the whole investment cycle.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. | 2021-07-30T13:07:34.332Z | 2021-07-30T00:00:00.000 | {
"year": 2021,
"sha1": "032ecd8bac352561a677f4647e67753d74b5c392",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fenrg.2021.728834/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "032ecd8bac352561a677f4647e67753d74b5c392",
"s2fieldsofstudy": [
"Engineering",
"Economics"
],
"extfieldsofstudy": []
} |
235663694 | pes2o/s2orc | v3-fos-license | Health-Related Quality of Life in Patients With Different Diseases Measured With the EQ-5D-5L: A Systematic Review
Background: The EQ-5D-5L is a generic preference-based questionnaire developed by the EuroQol Group in 2005 to measure health-related quality of life (HRQoL). Since its development, it has been increasingly applied in populations with various diseases and has been found to have good reliability and sensitivity. This study aimed to summarize the health utility elicited from the EQ-5D-5L for patients with different diseases in cross-sectional studies worldwide. Methods: Web of Science, MEDLINE, EMBASE, and the Cochrane Library were searched from January 1, 2012, to October 31, 2019. Cross-sectional studies reporting utility values measured with the EQ-5D-5L in patients with any specific disease were eligible. The language was limited to English. Reference lists of the retrieved studies were manually searched to identify more studies that met the inclusion criteria. Methodological quality was assessed with the Agency for Health Research and Quality (AHRQ) checklist. In addition, meta-analyses were performed for utility values of any specific disease reported in three or more studies. Results: In total, 9,400 records were identified, and 98 studies met the inclusion criteria. In the included studies, 50 different diseases and 98,085 patients were analyzed. Thirty-five studies involving seven different diseases were included in meta-analyses. The health utility ranged from 0.31 to 0.99 for diabetes mellitus [meta-analysis random-effect model (REM): 0.83 (95% CI = 0.77–0.90); fixed-effect model (FEM): 0.93 (95% CI = 0.93–0.93)]; from 0.62 to 0.90 for neoplasms [REM: 0.75 (95% CI = 0.68–0.82); FEM: 0.80 (95% CI = 0.78–0.81)]; from 0.56 to 0.85 for cardiovascular disease [REM: 0.77 (95% CI = 0.75–0.79); FEM: 0.76 (95% CI = 0.75–0.76)]; from 0.31 to 0.78 for multiple sclerosis [REM: 0.56 (95% CI = 0.47–0.66); FEM: 0.67 (95% CI = 0.66–0.68)]; from 0.68 to 0.79 for chronic obstructive pulmonary disease [REM: 0.75 (95% CI = 0.71–0.80); FEM: 0.76 (95% CI = 0.75–0.77)]; from 0.65 to 0.90 for HIV infection [REM: 0.84 (95% CI = 0.80–0.88); FEM: 0.81 (95% CI = 0.80–0.82)]; and from 0.37 to 0.89 for chronic kidney disease [REM: 0.70 (95% CI = 0.48–0.92); FEM: 0.76 (95% CI = 0.74–0.78)]. Conclusions: The EQ-5D-5L is one of the most widely used preference-based measures of HRQoL in patients with different diseases worldwide. The variation of utility values for the same disease was influenced by the characteristics of the patients, the living environment, and the EQ-5D-5L value set. Systematic Review Registration: https://www.crd.york.ac.uk/PROSPERO/, identifier CRD42020158694.
Keywords: HRQoL, health utility, EQ-5D-5L, disease, EuroQol

BACKGROUND

As a quantitative indicator of health-related quality of life (HRQoL), the health utility reflects people's preference for a given health state. The health utility is measured on a scale from zero to one, where zero represents death and one represents full health (1). The worse the perception of the health status is, the lower the utility value; it can even be negative when a health state is perceived as being worse than death. There are several preference-based measurement tools for health utility, such as the EuroQol 5 dimensions (EQ-5D) family of instruments (2), the Short Form-6 Dimensions (SF-6D) (3), and the Health Utilities Index (HUI) (4). Health utility can be used as a quality-of-life weight to calculate QALYs in cost-utility analysis (CUA). Thus, health utility plays an important role not only in the measurement of HRQoL but also in health economic evaluations (5,6).
The EQ-5D, developed by the European Quality of Life Group (EuroQol Group), is currently one of the most widely used questionnaires in HRQoL research (7). The original version of the EQ-5D was introduced in 1990 and contains five dimensions: Mobility, Self-Care, Usual Activities, Pain/Discomfort, and Anxiety/Depression (2). For each dimension, there were three levels to describe the severity, namely, have no problems, have some problems, and have extreme problems, which could describe 243 different health states (2). However, there may be some issues when using the EQ-5D-3L to detect small changes in mild conditions, and the EQ-5D-3L had obvious ceiling effects (8). Therefore, in 2005, the EuroQol Group developed a new version of the EQ-5D based on the same five dimensions but with five rather than three severity levels (EQ-5D-5L); this instrument could detect 3,125 unique health states (8). Published studies have shown that compared with the EQ-5D-3L, the EQ-5D-5L was significantly more sensitive, with reduced ceiling effects (9,10).
To derive health utility from the responses on the EQ-5D instruments, country-specific value sets need to be estimated (11). Since 2016, more than 20 countries and regions have published standard EQ-5D-5L value sets (Europe: 9; Asia: 9; Americas: 3; Africa: 1) (12). In 2012, before any standard EQ-5D-5L value set was established, van Hout et al. (13) developed a crosswalk project to map the EQ-5D-5L to the EQ-5D-3L, enabling researchers to obtain a crosswalk value set for the EQ-5D-5L based on published EQ-5D-3L standard value sets. Besides that, the psychometric properties of the EQ-5D-5L have been validated in both general and disease populations (12).
In recent years, with the availability of the EQ-5D-5L value sets, an increasing number of studies have used the EQ-5D-5L to measure the HRQoL of patients with different diseases and perform economic evaluations to support health decision-making (14,15). At present, a comprehensive review of these studies is lacking. For HRQoL measured with EQ-5D-5L, cross-sectional studies mainly focus on the current health status of the patients while randomized controlled trials (RCTs) pay attention to the effects of different interventions on health outcomes. This study focuses on the use of the EQ-5D-5L to explore the variation in health utility in patients in different conditions, provide information to perform CUAs, and inform health policies.
Selection Criteria

According to the selection criteria, all studies were original cross-sectional studies reporting EQ-5D-5L utilities for any specific disease with or without comorbidities and using country-specific value sets or the crosswalk method (mapping from the EQ-5D-3L). Due to the lack of EQ-5D-5L standard value sets in many countries, the crosswalk method is the most important means of calculating utility measured by the EQ-5D-5L. In addition, the crosswalk method is recommended by the National Institute for Health and Care Excellence (NICE) for performing CUA when the EQ-5D-5L is used to measure health outcomes in England. Therefore, it is useful and necessary to include these articles in this review. Studies that reported multiple utility values using value sets from different countries in the same published article were also included. The language of publication was limited to English. This review excluded reviews, protocols, or abstracts; studies focused on the general population; longitudinal studies or studies evaluating the effects of different interventions; studies that reported only synthetic utilities of multiple diseases, non-EQ-5D-5L utilities, or no utilities; and studies unrelated to HRQoL.
Data Collection and Quality Assessment
After removing duplicates, title and abstract screening was conducted by two authors independently. Following the application of the selection criteria, the full texts of all eligible studies were read, and the relevant references were checked manually. Two researchers independently collected the data using a predesigned data extraction table, including author, publication year, country or region, sample size, disease type, mean age, health utility, EQ-5D VAS score, proportions with problems in the five dimensions, value set, and administration method (i.e., face-to-face, telephone survey). Any discrepancy between the two researchers was resolved by discussion.
Quality assessment was conducted with the 11-item cross-sectional research checklist developed by the Agency for Healthcare Research and Quality (AHRQ) (17). Based on the description in each study and the AHRQ checklist, the reviewer selected one of three options ("Yes," "No," and "Unclear") for each item. "Yes" was assigned one point, while "No" or "Unclear" was assigned zero points. The quality level of each study was determined by summing all the item scores. For each assessed study, 0–3 points indicated low quality, 4–7 points indicated moderate quality, and 8–11 points indicated high quality.
Statistical Analysis
This review analyzed the range of mean health utility values of the overall sample (or of subgroups when no overall utility value was reported) among different studies, together with the value sets used in each study for a specific disease with or without comorbidities. In addition, this study reports the ranges in mean EQ-VAS scores and responses on each dimension of the EQ-5D-5L.
Meta-analysis was performed to synthesize utility data when three or more studies reported utility values and standard errors/deviations for a specific disease. For any study that reported multiple utility values for the same sample using different EQ-5D-5L value sets, the average value or the utility calculated with the local country-specific value set was used in the meta-analysis. Heterogeneity was assessed with the I² statistic. Random-effects (DerSimonian–Laird estimator) and fixed-effect (inverse-variance) models were both used to calculate the pooled utility for a specific disease. Sensitivity analysis was conducted by removing EQ-5D-5L utility values derived from crosswalk value sets. All analyses were performed with R (version 4.0.5).
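For reference, a sketch of the estimators named above, in our notation (with u_i the mean utility of study i, se_i its standard error, and k the number of pooled studies):

```latex
% Fixed-effect (inverse-variance) pooled utility
w_i = \frac{1}{se_i^{2}}, \qquad
\hat{u}_{\mathrm{FE}} = \frac{\sum_{i=1}^{k} w_i\, u_i}{\sum_{i=1}^{k} w_i}

% Heterogeneity: Cochran's Q and the I^2 statistic
Q = \sum_{i=1}^{k} w_i \,(u_i - \hat{u}_{\mathrm{FE}})^{2}, \qquad
I^{2} = \max\!\left(0,\; \frac{Q-(k-1)}{Q}\right)

% Random-effects (DerSimonian--Laird): between-study variance and weights
\hat{\tau}^{2} = \max\!\left(0,\; \frac{Q-(k-1)}{\sum_i w_i - \sum_i w_i^{2}\big/\sum_i w_i}\right), \qquad
w_i^{*} = \frac{1}{se_i^{2} + \hat{\tau}^{2}}
```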
RESULTS
A total of 9,500 articles were identified from the four databases, and four additional studies were obtained from the manual search. After eliminating duplicates, 6,409 documents were screened for eligibility, of which 98 articles (15, 16, …) were finally included in qualitative analyses and 35 studies were included in meta-analyses (Figure 1). Apart from three studies (29,39,79) that only included male patients and one study (96) that only included female patients, the rest of the studies included patients of both sexes. Twenty studies did not report the mode of administration. Of the remaining 78 studies, 47.4% involved face-to-face administration of the survey, 47.4% involved self-administered surveys, and 5.2% involved telephone surveys. The AHRQ checklist scores ranged from four to nine points; the median was six points, and the mode was five points (details in Supplementary Table 2). There were no low-quality studies; 87 studies and 11 studies were of moderate and high quality, respectively. The data on the distributions of the EQ-5D-5L are summarized in Supplementary Table 3.
In this review, health utility values derived from the EQ-5D-5L were reported for 50 different diseases. Among these, diabetes mellitus, neoplasms, multiple sclerosis, cardiovascular disease, chronic obstructive pulmonary disease (COPD), human immunodeficiency virus (HIV) infection, chronic kidney disease, and fracture were reported in three or more studies, and meta-analyses were performed for these diseases (fracture was not included in meta-analysis because only two of the studies reported standard errors/deviations). The sensitivity analysis results (removing all utility values derived from crosswalk value sets) are presented in Supplementary Figure 1.
Diabetes Mellitus
For patients with diabetes mellitus (Table 2), 12 studies reported health utility values ranging from 0.31 to 0.99 (14,15,18–27). The Chinese standard EQ-5D-5L value set (18) and the crosswalk UK value set (24) were used to derive the utility values in the studies that reported the highest and lowest values, respectively. The former focused on diabetes patients without diabetic retinopathy with a mean disease duration of 10.3 years and a mean age of 67.9 years (18), while the latter involved patients with severe comorbidities on hemodialysis, with a mean age of 60.3 years (24). Additionally, Lamu et al. (19) used eight country value sets (England, the Netherlands, Spain, Canada, Uruguay, China, Japan, and Korea) to analyze 924 diabetic patients from six countries. The results showed that the utility value calculated with the Uruguay value set was the highest at 0.880, while the lowest, 0.735, was derived with the value set from the Netherlands. The EQ-5D VAS scores were reported to range from 50.9 to 72.6 in six studies (14,20,22–25). Among the five dimensions of the EQ-5D-5L, pain/discomfort was the dimension with the most reported problems. The prevalence of diabetes comorbidities ranged from 55 to 100%, which was one of the most important factors negatively affecting the HRQoL of patients.
Neoplasms
Seven studies reported health utility values for cancer patients ranging from 0.62 to 0.90 (26,28–33). The highest utility value was in early-stage prostate cancer patients using the crosswalk UK value set (29), while the lowest value was in colorectal cancer patients, 49.7% of whom had stage III–IV disease, applying the China value set (28). The EQ-5D VAS scores ranged from 56.2 to 77.5 in two studies (30,32). The decrease in health utility in cancer patients was mainly due to problems related to the pain/discomfort dimension of the EQ-5D-5L. As the cancer progressed, the health utility value decreased.
Multiple Sclerosis
The health utility ranged from 0.31 to 0.78 for multiple sclerosis patients in six studies (34–39). The upper and lower utility values were generated with the crosswalk France value set (35) and the crosswalk UK value set (39), respectively. The study with the highest value (35) reported a shorter disease duration (9 vs. 15 years) than the study with the lowest utility value (39). In addition, the former had a higher proportion of relapsing-remitting multiple sclerosis patients than the latter (71.5 vs. 52.8%). EQ-5D VAS scores ranged from 58.3 to 78.0 in five studies (35–39). Pain/discomfort and usual activities were the dimensions with the most reported problems among multiple sclerosis patients. The meta-analytic utility estimate for multiple sclerosis patients was 0.56 (95% CI = 0.47–0.66, heterogeneity I² = 99%, P < 0.01) using the random-effects model, and 0.67 (95% CI = 0.66–0.68) using the fixed-effect model (Figure 2C).
Cardiovascular Disease
For cardiovascular disease patients, the health utility values ranged from 0.56 to 0.85 in eight studies (26,40–46). The lowest value was derived from the Chinese value set (45), while the study with the highest value did not report the value set used (40). In the study with the highest utility value (40), all patients were evaluated 4 years after cardiac arrest, and the proportion of men was 80%. In the study with the lowest value, the patients had atrial fibrillation; 43% of them were men, and 23% had diabetes mellitus (45). Berg et al. (41) compared utility values among nine subgroups of patients with different cardiovascular diseases. Among these subgroups, heart transplant patients had the highest value, 0.82, while arrhythmia patients had the lowest value, 0.70. The EQ-5D VAS scores ranged from 61.4 to 77.8 in six studies (26,40–44). Anxiety/depression and pain/discomfort were the dimensions with the most reported problems among cardiovascular disease patients.
COPD
For patients with COPD, the health utility values ranged from 0.68 to 0.79 in four studies (47–50). The crosswalk US value set and the UK standard EQ-5D-5L value set were used in the studies that reported the highest utility value (49) and the lowest value (50), respectively. The mean age of COPD patients in the study reporting the lowest utility was 70.4 years, and the mean predicted forced expiratory volume in 1 s (FEV1) was 49.8% (50). Meanwhile, the patients in the study with the highest value had a younger mean age (68.5 years) and a better predicted FEV1 (49). The EQ-5D VAS scores ranged from 60.5 to 70.6 in four studies (47–50). Mobility was the dimension with the most problems affecting the HRQoL of COPD patients based on the EQ-5D-5L. In addition, as the predicted FEV1 decreased, the health utility value in COPD patients decreased.
HIV Infection
The health utility values of patients infected with HIV ranged from 0.65 to 0.90 in four studies (51–54), and both extreme values were derived with a crosswalk value set [Thailand (53) and Spain (54)]. The study with the highest utility value (54) involved patients in relatively good condition and without any comorbidities, while the study with the lowest value (53) focused on patients who had symptomatic HIV infections. The EQ-5D VAS scores ranged from 68.8 to 88.6 in four studies (51–54). The decrease in utility in HIV-infected patients was mainly due to problems related to the anxiety/depression dimension of the EQ-5D-5L. The pooled utility value of patients infected with HIV was 0.84 (95% CI = 0.80–0.88, heterogeneity I² = 83%, P < 0.01) using the random-effects model, and 0.81 (95% CI = 0.80–0.82) using the fixed-effect model (Figure 2F).
Chronic Kidney Disease
For chronic kidney disease patients, the health utility values ranged from 0.37 to 0.89 in three studies (55–57). The Japan value set and the crosswalk UK value set were used to calculate the highest utility value (56) and the lowest value (57), respectively. The mean age of chronic kidney disease patients in the study reporting the highest value was 49.8 years, and all of them had received kidney transplants (56), while those in the study reporting the lowest value had a mean age of 59.4 years, and 33.7% of them had been on dialysis for 4 years or longer (57). One study (57) reported an EQ-5D VAS score of 59.4. Among the five dimensions, self-care was the dimension with the most reported problems among chronic kidney disease patients.
Fracture
The health utility values of patients with fractures ranged from 0.56 to 0.88 in three studies (59–61). However, neither of the studies that reported the maximum and minimum values described the value sets used (59,61). The patients in the study reporting the highest value (59) had midshaft clavicular fractures and a much younger mean age (44.5 vs. 73.5 years) than the osteoporotic vertebral compression fracture patients in the study reporting the lowest value (61). Two studies reported EQ-5D VAS scores of 80.3 (60) and 77.2 (59). No information was available on the dimensions that contributed the most to the HRQoL of fracture patients.
Other Diseases
For Prader-Willi syndrome, hypertension, ulcerative colitis, ankylosing spondylitis, psoriasis, actinic keratosis, Parkinson's disease, overactive bladder, hereditary angioedema, spinal cord injury, schizophrenia, hemophilia, asthma, and hepatitis, only two studies reported the health utility values for patients with each disease. For the remaining 29 diseases (87-113), the HRQoL and utility values were only reported by one study each. Patients with adolescent idiopathic scoliosis had the highest utility value of 0.93 (104), while children with Morquio A syndrome, who must use wheelchairs, had the lowest value of −0.18 (88).
Furthermore, two studies compared utility values calculated with different country-specific value sets in the same sample (67,87). For patients with psoriasis living in central South China (67), value sets for Japan, China, and the UK were used separately to obtain the EQ-5D-5L utility values, and the results were 0.86, 0.90, and 0.90, respectively. van Dongen-Leunis et al. (87) used two EQ-5D-5L country-specific value sets to calculate the health utility of acute leukemia patients, and the value derived from the Dutch value set (0.81) was lower than that derived from the UK value set (0.85). The rest of the studies all used a single value set. Compared with other dimensions, pain/discomfort was the dimension with the most problems reported by patients in most of the studies.
DISCUSSION
In this study, we reviewed the health utility values in patients with different diseases according to the EQ-5D-5L in cross-sectional surveys. We found that the EQ-5D-5L has been widely applied in populations with specific diseases, including various chronic non-communicable diseases, such as diabetes mellitus, neoplasms, multiple sclerosis, and cardiovascular disease, as well as infectious diseases, such as HIV and Dengue fever. The health utility values for a specific disease measured by the EQ-5D-5L differed based on patient characteristics, survey location, the use of country-specific value sets, and other factors. Meta-analyses were performed to synthesize utility data for any specific disease reported in three or more studies.
Health utility measures the preference of people for a given health state and reflects their status with regard to quality of life (1). Sex is one of the factors that affect health utilities (47). There are differences in the perception of health status between males and females, and in most of the included studies that reported sex-specific utilities, men had better HRQoL as measured by the EQ-5D-5L than women. For instance, the utility value was 0.80 for men with COPD and 0.69 for women with COPD, and the proportion of men who reported having problems on all five dimensions was lower than the proportion of women (47). In addition, health utility values tended to decrease as the age of patients increased due to the deterioration of physical function and reduced disease tolerance, although this pattern was not universal: among patients with COPD, for example, the utility value for patients under 65 years of age (0.77) was lower than that for patients who were 65 years old and older (0.79) (48).
In general, the severity of disease is reflected by the magnitude of the health utility value. The variation in values measured by the EQ-5D-5L for the same disease under different conditions reflects its discriminative ability. As the disease progresses, the utility value decreases. Alvarado-Bolaños et al. (70) used Hoehn and Yahr staging to categorize Parkinson's disease patients into groups with mild, moderate, and severe disease, and the utility values were 0.77, 0.65, and 0.47, respectively. In addition, the number and types of comorbidities substantially affect the HRQoL of patients. Patients who have comorbidities usually report a lower utility value than those without comorbidities. Van Duin et al. (54) reported that the utility value was 0.90 in patients with HIV infections who did not have any comorbidities; however, it was reduced to 0.84 when patients had comorbid diseases. In Al-Jabi's study (58), for hypertension patients with one, two, and three or more comorbidities, the utility values were 0.81, 0.73, and 0.66, respectively. Various living environments result in different lifestyles, which may influence HRQoL and health utility. Zyoud et al. (57) reported that among patients with end-stage renal disease in Palestine, those living in villages had a higher mean utility value than those living in cities (0.44 vs. 0.29). In another study (44), among patients with cardiovascular disease, the utility value was slightly higher for those living in urban Vietnam than for those in rural areas (0.82 vs. 0.81).
To calculate health utility, the target patients' responses to the EQ-5D-5L and a country-specific value set are needed. The health preferences of patients living in different countries are affected by their social environment, living standards, and health system. Therefore, the EQ-5D-5L value sets estimated based on residents' preferences for health states vary across countries or regions. Different results can be observed in the same sample when various country value sets are used to calculate health utility values. In the same sample of patients with acute leukemia, van Dongen-Leunis et al. (87) reported that the value obtained with the Dutch value set was higher than that obtained with the UK value set. In countries where the EQ-5D-5L utility value set has been estimated, it is more appropriate to use the local value set. Before any standard country-specific EQ-5D-5L value set was published, the crosswalk method developed by van Hout et al. (13) in 2012 was an alternative means of calculating health utility measured by EQ-5D-5L. For cost-utility analyses performed in England, the NICE recommends the use of the crosswalk method to obtain EQ-5D-5L utility values and calculate quality-adjusted life-years (QALYs) because there are some concerns about the current standard value set published by Devlin et al. (114). In this review, a crosswalk value set was used in half of the studies to calculate utility values due to the lack of a local standard EQ-5D-5L value set when the survey was conducted. Therefore, the crosswalk value set is still important for researchers to calculate health utility.
The heterogeneity of health utility derived from different studies for any specific disease is significant. Although this may complicate direct comparison among these studies, the trends of variation and the factors influencing health utility can still be observed. In addition, to perform CUA, different sources of health utilities need to be identified and applied in the model (1). The summarization and review of health utility for different diseases are therefore helpful and useful.
There are some limitations of this study. Among the 50 different diseases analyzed in this review, nearly half of them were only discussed in one study each. The included studies were limited to those published in English. In addition, some of the studies did not describe the value set used. This review focused on health utility measured by the EQ-5D-5L in cross-sectional studies, and the comparison of different utility-based instruments (i.e., SF-6D, HUI) in populations with specific diseases needs further exploration.
A deeper understanding of the HRQoL and health utility of patients with different diseases facilitates the provision of a more appropriate range of services for disease management and treatment. In addition, health utility is used for HRQoL weighting when calculating QALYs. QALYs are used as the outcome measure in CUA and play an important role in health technology assessments (12). The summarization of health utility from various sources provides information for performing CUA, which can inform health decision-making and the reasonable allocation of health resources.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
TZ, HG, and AM designed this study protocol. HG, AM, and TZ conceived the literature strategies. LW and MR reviewed the title/abstract independently. TZ and LW performed the original study review. TZ, HG, and YZ extracted and analyzed the data from included studies. TZ and MR assessed the methodological quality with AHRQ checklists. TZ and YZ contributed to the writing of the manuscript. All the authors approved the final version of this systematic review.
"year": 2021,
"sha1": "2ce76a6688df7c7e6ec5a13d32fb91c2b7a841c1",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2021.675523/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2ce76a6688df7c7e6ec5a13d32fb91c2b7a841c1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Transcriptional Profiling of Amygdala Neurons Implicates PKCδ in Primate Anxious Temperament
Kovner R, Souaiaia T, Fox AS, French DA, Goss CE, Roseboom PH, Oler JA, Riedel MK, Fekete EM, Fudge JL, Knowles JA, Kalin NH. Transcriptional Profiling of Primate Central Nucleus of the Amygdala Neurons to Understand the Molecular Underpinnings of Early-Life Anxious Temperament. Biol Psychiatry. 2020 Oct 15;88(8):638-648. doi: 10.1016/j.biopsych.2020.05.009. Epub 2020 May 19. PMID: 32709417; PMCID: PMC7530008.
Even during infancy, parents recognize aspects of their child's temperament. Although children interact with and are affected by their environment, their temperament is, in part, genetically determined and relatively stable across time. Importantly, variations in early-life temperament can predict the emergence of later-life psychopathology. For example, children with an extreme anxious temperament (AT) are at an increased risk for developing later-life depression and anxiety. 1 Deciphering the neural circuits, cell types, and molecular mechanisms underlying AT is critical for understanding how neuropsychiatric disorders arise and will aid in the development of early-life treatment strategies.
Because of their recent evolutionary divergence from humans, non-human primates are ideally suited to study the conserved neural, cellular, and molecular components of threat processing. This is relevant for understanding human stress-related neuropsychiatric disorders, which are characterized by alterations in behavioral, emotional, and physiological responses to potential threats. Our laboratory developed and characterized a young rhesus monkey model of AT that shows remarkable similarities to childhood AT. 2 Monkeys with extreme AT freeze for long durations in response to a potential threat, 1 similar to the behavioral inhibition exhibited by shy children when meeting a stranger. Using multimodal brain imaging, we identified an AT circuit that includes the central nucleus of the amygdala (Ce) and bed nucleus of the stria terminalis (BST) 2 and we demonstrated that Ce neurotoxic lesions decrease AT. 3 The Ce can be divided into at least two subnuclei, the lateral Ce (CeL) and the medial Ce (CeM). The CeM projects out of the amygdala. The CeL coordinates threat responses by integrating threat-relevant information and modulating the CeM's output through a GABAergic microcircuit comprised of neuronal subpopulations, e.g., protein kinase C type delta (gene: PRKCD, protein: PKCδ) and somatostatin (gene: SST, protein: SST) neurons. 4 In our recently published study, 5 we describe our efforts to understand Ce gene expression in relation to individual differences in AT. Among our top hits was PRKCD, a marker of a CeL neuronal subpopulation that is critical for threat responses. 4 Mouse studies have focused on the function of PRKCD neurons in threat responses and CeL circuit function. 4 However, our finding demonstrates that PRKCD is not only a neuronal marker, but that its expression levels are positively associated with AT, a clinically relevant phenotype. Together, our finding and the known role of CeL PRKCD neurons make PRKCD and its downstream pathway a potential treatment target. Future studies knocking out PRKCD expression are required to understand the role of CeL PRKCD in primate AT, threat processing, and CeL function.
In mice, PKCδ neurons receive input from SST neurons, and this microcircuit modulates CeM and BST function and threat responses. 4 While this circuit is well characterized in mice, little work has been done in primates. Following up on our PRKCD finding, we performed a stereological analysis of PKCδ and SST neurons in mouse and monkey and identified several potentially important species differences. Specifically, we found that monkey PKCδ neurons are evenly distributed across the CeL's anterior-posterior extent, whereas in mice, they are concentrated in the posterior CeL. We also found that monkey CeL SST neurons are less abundant than in mice. Although descriptive, these findings suggest species differences in circuitry organization with implications for the complex responses of primates to threats in their environment. The characterization of species differences in cell types and microcircuitry organization will enable us to better understand the extent to which rodent findings translate to humans.
The BST, like the Ce, is involved in responding to aversive stimuli. The CeL sends robust projections to the laterodorsal BST (BSTLd), and this pathway is important in threat responses. In our study, 5 we found that about half of PKCδ neurons project to the BSTLd, whereas very few SST neurons project to the BSTLd. We also noted dense SST varicosities surrounding BSTLd-projecting PKCδ neurons, suggesting that SST may modulate this pathway. The synaptic connection between CeL SST neurons and BSTLd-projecting PKCδ neurons may provide an additional point of intervention for modulating threat responses through this circuit.
Our transcriptome-wide study in CeL neurons provides evidence supporting a primate CeL to BSTLd microcircuit relevant for understanding AT and points to specific molecules within this circuit that could serve as potential treatment targets for human anxiety disorders. Building on this work, we are actively pursuing gene expression studies in other components of the AT circuit. These studies will allow us to identify transcriptomic changes that are shared across, or unique to, AT-related brain regions. It will be interesting to investigate whether these genes are associated with specific cell populations, which may provide insights into the relevant cellular components within the AT circuit. These studies in primates linking molecules, cell types, and circuits have the potential to inform the pathophysiology underlying human stress-related disorders and ultimately guide the development of novel treatments.
Declaration of Conflicting Interests
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: NHK has received honoraria from CME Outfitters, Elsevier, and the Pritzker Consortium; has served on scientific advisory boards for Actify Neurotherapies and Neuronetics; currently serves as an advisor to the Pritzker Neuroscience Consortium and consultant to Corcept Therapeutics; has served as co-editor of Psychoneuroendocrinology and currently serves as Editor-in-Chief of The American Journal of Psychiatry; and has patents on promoter sequences for corticotropin-releasing factor CRF2a and a method of identifying agents that alter the activity of the promoter sequences (7,071,323; 7,531,356), promoter sequences for urocortin II and the use thereof (7,087,385), and promoter sequences for corticotropin-releasing factor binding protein and the use thereof (7,122,650). All other authors report no biomedical financial interests or potential conflicts of interest.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by funding awarded to NHK from the NIMH (R01MH081884 and R01MH046729) and RK from the NIMH (5T32MH018931). We thank Kartik Pattabiraman for comments and suggestions on the original draft.
"year": 2021,
"sha1": "ffd30461fc3b783bd9fea4f1e3b3a95dcca8760a",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2470547021989329",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffd30461fc3b783bd9fea4f1e3b3a95dcca8760a",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Needle-Based Electrical Impedance Imaging Technology for Needle Navigation
Needle insertion is a common procedure in modern healthcare practices, such as blood sampling, tissue biopsy, and cancer treatment. Various guidance systems have been developed to reduce the risk of incorrect needle positioning. While ultrasound imaging is considered the gold standard, it has limitations such as a lack of spatial resolution and subjective interpretation of 2D images. As an alternative to conventional imaging techniques, we have developed a needle-based electrical impedance imaging system. The system involves the classification of different tissue types using impedance measurements taken with a modified needle and the visualization in a MATLAB Graphical User Interface (GUI) based on the spatial sensitivity distribution of the needle. The needle was equipped with 12 stainless steel wire electrodes, and the sensitive volumes were determined using Finite Element Method (FEM) simulation. A k-Nearest Neighbors (k-NN) algorithm was used to classify different types of tissue phantoms with an average success rate of 70.56% for individual tissue phantoms. The results showed that the classification of the fat tissue phantom was the most successful (60 out of 60 attempts correct), while the success rate decreased for layered tissue structures. The measurement can be controlled in the GUI, and the identified tissues around the needle are displayed in 3D. The average latency between measurement and visualization was 112.1 ms. This work demonstrates the feasibility of using needle-based electrical impedance imaging as an alternative to conventional imaging techniques. Further improvements to the hardware and the algorithm as well as usability testing are required to evaluate the effectiveness of the needle navigation system.
Introduction
Current healthcare practices often require needle insertions through tissue layers such that the needle tip is positioned in a particular region of interest. These procedures involve blood draw, medication delivery, tissue biopsies, and cancer treatments such as brachytherapy, radiofrequency ablation, and cryoablation. For example, 70% of patients in acute care settings require a Peripheral Intravenous Catheter (PIVC) [1]. Worldwide, over one billion PIVCs are placed in hospitalized patients annually [2]. Despite the high frequency of interventions, complications still arise fairly often due to incorrect needle positioning [3,4]. Minor bruising and hematoma are the most frequent complications observed [5]. Other consequences can include pain, sclerosis, phlebitis, vein occlusion, nerve punctures, and infections. In many cases, the puncture has to be repeated, which jeopardizes the safety and well-being of the patient [6]. The errors arise from uncertainties that the intervention entails. At present, venipuncture is carried out by hand after the target blood vessel has been located and the appropriate angle and depth of insertion have been determined. The only indication of the correct depth of insertion is a change in the force required to push the needle forward or the appearance of a blood "flash" in a designated window of the needle. As a result, the success rate is heavily influenced by the expertise of the clinician and the patient's physiology [3,4].
In terms of needle guidance, ultrasound imaging can be seen as the gold standard. Ultrasound requires minimal preparation, does not involve ionizing radiation, and provides continuous information about the needle position. However, it is not ideal as the spatial resolution is deficient [7]. In general, ultrasound images tend to be noisy due to reflections, reverberations, shadows, air pockets, and biological speckle, resulting in a possible incoherence of the reconstructed image of the distal end from the echo and the actual tip of the needle. Lastly, the interpretation of the 2D images is highly subjective, and slight needle movements are often required to avoid misjudging of the images [8,9].
Another possibility for needle guidance that is being researched is the identification of tissue types by means of bioimpedance measurements. This approach is based on the variance of the dielectric properties of different tissue types. Research around needle impedance measurements includes the optimization of electrode geometries in order to achieve the best possible spatial resolution, which corresponds to the smallest possible sensitivity field. The smaller the sensitive volume, the more accurate the tissue information provided. As the spatial sensitivity distribution cannot be directly determined experimentally, Finite Element Method (FEM) simulation has proven to be a useful method [10–12].
It has been shown that monopolar measurements, which are characterized by a high current density close to the active electrode surface, can be obtained by using a sufficiently large neutral electrode in a two-electrode setup [13,14]. The volume sensitivity, which is a function of the squared current density in a given tissue volume, was found to be small enough to neglect the area outside the sensitive area. Thus, the impedance is dominated by the tissue in the vicinity of the active electrode. Monopolar setups were used to investigate the discrimination of muscle and fat for drug administration [15,16], the detection of venous blood [17], and intraneural injections [18]. Bipolar setups are characterized by the utilization of two similar electrodes for both carrying the current and picking up the voltage. Various simulative investigations were conducted showing different needle geometries that minimize the sensitivity field [11,12,19,20]. Bipolar setups exist which use at least one needle body as an active electrode or even two nested needles [12,20–23]. In other works, two active electrodes were installed at the needle tip [24–28], or a concentric needle electrode designed for Electromyography (EMG) was used [29].
Tetrapolar setups can improve the accuracy of a measurement by using four electrodes: two Current-Carrying (CC) and two (voltage) Pick-Up (PU) electrodes. In this way, the voltage at the PU electrodes is less influenced by the polarization effects at the CC electrodes. Current density distributions for tetrapolar or even pentapolar setups have been studied [30,31]. The integration of tetrapolar electrode setups was studied by means of wire electrodes made of stainless steel [30] and flexible thin-film sensors [32]. Efforts have also been made to increase the extent of information that impedance measurement setups can provide. Therefore, the integration of multiple electrode setups on a single needle has been proposed, which we refer to as multi-local impedance measurements [30,32,33]. The aim is to provide localized impedance information at different locations at the needle [33].
In this work, we suggest that localized impedance information can be exploited to realize an imaging technology that can be used for needle navigation. Our proposed concept of needle-based electrical impedance imaging involves acquiring tissue information at multiple locations along the needle and defining the boundaries of the identified tissue types. By combining individual tissue information, we can generate a 3D visualization. To demonstrate the feasibility of our concept, we developed an electronic system capable of measuring impedance and switching between electrode pairs. Additionally, we modified standard hypodermic needles by incorporating 12 electrodes. To interpret the impedance data, we conducted an extensive analysis of the sensitivity distribution of our needle through FEM simulation. For tissue identification, we employed a Machine-Learning (ML) approach based on recorded data for training. Finally, we integrated all the data and visualized it in a 3D Graphical User Interface (GUI), which serves as a tool for needle navigation.
Sensor Development
The development process of the sensing electrodes was derived from the main requirements we set for this research project. To preserve the function as a hollow needle, the electrodes had to be mounted on the outside of the needle. We adopted a previously reported fabrication protocol and altered it slightly [30]: a 14 Gauge (G) hypodermic needle (Sterican, B. Braun SE, Melsungen, Germany) with an outer diameter of 2.1 mm served as a basis. As a first step, a Polyvinylidene Difluoride (PVDF) heat shrink tube (DERAY-KY 175, MCE Mauritz Electronics, Nieder-Olm, Germany) with an inner diameter of 3.2 mm and a shrink ratio of 2:1 was applied to insulate the needle. We used a scalpel to cut the protruding piece of the heat shrink tube to match the shape of the needle tip. Afterwards, the surface of the heat shrink tube was evenly coated with a sprayable adhesive (3M Repositionable 75 Scotch-Weld, 3M Deutschland GmbH, Neuss, Germany). Next, 12 stainless steel wire electrodes (AISI 316L) with a circular cross-section with a diameter of 0.06 mm (Zivipf.de, Thomas Schmoll, Treuchtlingen, Germany) were placed circularly around the needle tip, each 30° apart and displaced by 2 mm in the axial direction (cf. Figure 1). The wire endings were galvanized with gold. The second layer of the heat shrink tube was applied afterwards, and the excess piece was cut so that 1 mm of the wire endings was exposed. Finally, the remaining adhesive was removed. The proximal part of a needle that has been equipped with electrodes in this way can be seen in Figure 1. The galvanized wire ends at the distal part were contacted electrically to an adapter Printed Circuit Board (PCB). To connect the needle to the PCB mechanically, we 3D printed a male Luer-Slip connector attached to the PCB. The PCB with the needle is placed in a small housing mounted to a motorized linear stage (LINAX Lcx 80F40, Jenny Science AG, Rain, Switzerland) that provides the movement (cf. Figure 2). A servocontroller (XENAX Xvi 75V8, Jenny Science AG) provides the motor control.
Experimental Design
The measurement system for the needle navigation consists of an Impedance Analyzer (IA, Sciospec ISX-3v2, Sciospec Scientific Instruments GmbH, Bennewitz, Germany) and a switching unit (cf. Figure 3). For an unobstructed workflow and a low latency of the needle navigation system, we use a single measurement frequency rather than an impedance spectrum. The IA constantly measures with an excitation voltage of 200 mV and a frequency of 100 kHz. The switching unit is responsible for switching between the different electrode pairs. It comprises a microcontroller (Arduino Nano V3.0, Arduino LLC, Somerville, MA, USA) and two analog 16:1 multiplexers (CD74HC4067, Texas Instruments Inc., Dallas, TX, USA). Both systems are connected to a control PC (i5-9500, 32 GB RAM, Intel UHD Graphics 630). We developed a GUI in MATLAB (MATLAB R2022a, The MathWorks Inc., Natick, MA, USA) that allows the control of the measurement.
Figure 3. Overview of the components used for the needle navigation system: control PC, switching unit, and impedance analyzer.
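To illustrate the interplay of the switching unit and the IA, a minimal MATLAB sketch of one scan over all electrode pairs is given below; setExcitationState and readImpedance are hypothetical wrapper functions around the switching unit and the IA interface, not vendor API calls.

```matlab
% One scan over the 12 excitation states (one bipolar electrode pair each).
nStates = 12;
Z = complex(zeros(nStates, 1));   % transfer impedances at 100 kHz
for s = 1:nStates
    setExcitationState(s);        % hypothetical: multiplexers route pair s
    pause(0.01);                  % let switching transients settle
    Z(s) = readImpedance();       % hypothetical: single reading (200 mV, 100 kHz)
end
```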
For our experiments, we used tissue phantoms with realistic electrical properties. In the case of venipuncture, we fabricated phantoms that mimicked blood, fat, and dermis (skin). The phantoms are based on a mixture of water, gelatine, and agar. In this way, the phantoms provide enough mechanical stability and the possibility to use molds for shaping. The electrical properties are adjusted by adding sodium chloride and propylene glycol. The ingredients for phantom fabrication can be seen in Table 1 [22,34].
For training the classification algorithm, we recorded impedance measurement data using the modified needles. For this purpose, we inserted a needle into the tissue phantom and captured the measured impedance values for each excitation state. As there are 12 different excitation states, 12 measurements were taken in each run. We repeated the procedure for three needles and collected a total of 420 impedance values for each tissue type. We conducted two different tests to demonstrate the sensing capability of the needle and to evaluate the classification algorithm. The first test included the manual insertion of the needle into individual tissue types. The impedance data for each excitation state were recorded and classified with the classification algorithm. This procedure was repeated five times for each tissue phantom type. In the second test, we used the linear stage to perform a controlled needle movement into a layered phantom with different tissue types to verify the classification and the ability to determine the puncture depth. Each tissue phantom had a thickness of 1.5 cm in the sample holder. The distance from the foremost electrode to the rearmost electrode in the longitudinal direction was approximately 1.2 cm. The linear stage inserted the needle at 1 mm/s with position increments of 3 mm. There were therefore two positions in which the needle part covered with electrodes was located completely in one type of tissue phantom. To assess the classification success rate, we only evaluated the impedance data at positions with complete coverage by one type of tissue phantom. Three insertions with three different needles were performed. Each insertion involved two positions in three different tissue types. In total, there were 54 positions that were evaluated. Each position yielded 12 different excitation states, resulting in 648 classifications altogether. The puncture depth could be derived from the combination of the absolute position given by the linear stage and the measured impedances at the known electrode positions along the needle. To verify the position of the needle tip, we used pictures of the needle taken from the top view.
Simulation
The simulative analysis by means of FEM is pivotal for the imaging method. It determines the spatial sensitivity distribution. The simulations were performed with COMSOL Multiphysics version 5.6 (Comsol Multiphysics GmbH, Göttingen, Germany). We used the Alternating Current/Direct Current (AC/DC) and the Computer-Aided Design (CAD) Import module for the simulations. All simulations were conducted on a simulation PC (i9 12900K, 96 GB RAM, RTX3080 Ti).
Geometry
We defined a cube with a side length of 30 mm, which represents a piece of homogeneous and isotropic tissue. The tissue type was changed in between the simulations. After that, we imported the CAD model of the modified needle. Both parts were connected using the form union node to constitute combined geometric domains.
Material Properties
Electrical conductivity and permittivity are the most important properties for our purposes. The parameters that we used are derived from the literature and can be found in Table 2 [34-38].
Boundary Conditions
We set the boundary conditions using the Electric Current (EC) interface. We simulated a bipolar measurement with a terminal source of 1 A (unity current) assigned to the excitation electrode and a ground terminal assigned to the counter electrode. Electrical isolation boundaries were inserted between all interfaces except for the interfaces between the tissue and the measuring electrodes. This ensured that the current flowed exclusively between the measuring electrodes without penetrating into other materials. These simulation settings were implemented in eleven other EC interfaces to simulate different excitation states with different electrode pairs. As a next step, a suitable mesh had to be found to divide the geometry into finite elements. Starting from a very coarse mesh, the mesh was modified until a compromise between mesh resolution and computational effort was found. The final mesh consisted of four separate meshes. The first mesh was created by assigning a two-dimensional Free Triangular mesh to the surface boundaries of the outward wire ends. The surface boundaries of the outward end of the needle as well as the surface boundaries of the two insulation layers were also equipped with a Free Triangular mesh. To extend the two-dimensional mesh into the third dimension, a Swept Mesh was set in such a way that for each surface element, a long prismatic element is created, which extends from the outward end of the needle to the tissue surface. The second mesh comprised the wire tips, which were constructed as a Free Tetrahedral mesh. In addition, the third mesh for the tissue and the fourth mesh for the needle tip including the insulation were meshed as Free Tetrahedral with slightly different resolutions.
Post-Processing
The transfer impedance between a CC electrode pair and a PU electrode pair is given by

$$Z = \int_{v} \rho \left( \vec{J}_{\mathrm{CC}} \cdot \vec{J}_{\mathrm{reci}} \right) \mathrm{d}v, \tag{1}$$

where $\rho$ is the resistivity of the medium, $\vec{J}_{\mathrm{CC}}$ is the normalized current density vector field from a unit current applied to the CC electrodes, and $\vec{J}_{\mathrm{reci}}$ is the normalized current density from a unit reciprocal current applied to the PU electrodes [39]. The sensitivity of a tissue voxel is defined as

$$S = \vec{J}_{\mathrm{CC}} \cdot \vec{J}_{\mathrm{reci}}. \tag{2}$$

In a two-electrode system (bipolar or monopolar), the forward and reciprocal current densities are identical ($\vec{J}_{\mathrm{CC}} = \vec{J}_{\mathrm{reci}} = \vec{J}$). Therefore, $S = |\vec{J}|^{2}$. The bulk of the sensitivity, which we refer to as sensitive volume, defined as 97% of its accessible value range, is in the direct vicinity of the electrodes [10].
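As a numerical illustration of Equation (2) for the bipolar case, the voxel sensitivities can be computed directly from the current density exported from the EC interface; the export layout and variable names below are assumptions.

```matlab
% J: N-by-3 matrix [Jx Jy Jz] of current density per tissue voxel for a
% unit (1 A) terminal current; for two electrodes, S = |J|^2 per voxel.
S = sum(J.^2, 2);                 % squared current density magnitude
Stotal = sum(S .* voxelVol);      % voxelVol: per-voxel volumes from the mesh
```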
Due to simulation singularities, we used the line method to handle unphysical behavior [40,41]. The method involved constructing a geometric line starting from the singularity and pointing into the free tissue space. Sensitivities were evaluated along this line for a finer and a coarser mesh. By comparing the relative differences in sensitivity with respect to the original mesh, we could define an acceptable error margin and limit the sensitivity to the maximum value found at a certain distance away from the singular point.
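A minimal sketch of this procedure, assuming the sensitivities along the line have been exported for both meshes and interpolated onto a common distance axis (all variable names are ours):

```matlab
d      = linspace(0.01, 2, 200);               % distance from the singularity (mm)
relErr = abs(Sfine - Scoarse) ./ abs(Sfine);   % mesh-to-mesh relative difference
iCut   = find(relErr < 0.05, 1);               % first point within a 5% error margin
Smax   = Sfine(iCut);                          % cap applied to voxels nearer than d(iCut)
```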
Software Architecture
The control software comprised control modules for the switching circuitry, the IA, and the linear stage. In addition, the tissue classification algorithm was implemented as well as the 3D visualization. The main requirements for the software were a low latency between the actual measurement and the representation in the visualization as well as the possibility of displaying the tissue around the needle in three dimensions. We aimed for a scanning frequency of the electrodes in the range of 7-30 Hz.
The switching circuitry had two functions: switching of the multiplexers and providing the information about the current switching state. Both functions were implemented in an Arduino Integrated Development Environment (IDE) script. The data stream of the Arduino was read by the MATLAB GUI. The IA was controlled via the Communication Port (COM) interface. The linear stage was connected through a Transmission Control Protocol/Internet Protocol (TCP/IP) connection via an ethernet cable. The motor controller used an American Standard Code for Information Interchange (ASCII) protocol to control the motor movement. We developed MATLAB libraries that translate all commands into MATLAB functions.
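A sketch of these connections from the MATLAB side is shown below; the port name, IP address/port, and the ASCII command string are placeholders rather than the actual XENAX protocol.

```matlab
% Switching unit: the Arduino announces the current excitation state via serial.
ard = serialport("COM4", 115200);
configureTerminator(ard, "LF");
state = str2double(readline(ard));        % current excitation state (1..12)

% Linear stage: XENAX servocontroller reached via TCP/IP over an Ethernet cable.
stage = tcpclient("192.168.0.10", 10001);
configureTerminator(stage, "CR");
writeline(stage, "<motion command>");     % placeholder ASCII command
pos = str2double(readline(stage));        % queried absolute position
```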
For tissue classification, we implemented an ML-based algorithm. We defined a class for data preparation that conditions the simulated impedance data in an interpretable way. In addition, needle models as well as sensitivity fields were loaded. We chose a k-Nearest Neighbors (k-NN) algorithm with k = 34 neighbors and a Euclidean norm as the classifier; i.e., the 34 data points closest to the query point, the 34 nearest neighbors, were determined. Since every neighbor was labeled, the query point was assigned the class that was represented most frequently among the 34 neighbors.
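A minimal sketch of this classifier using fitcknn from the Statistics and Machine Learning Toolbox; the feature layout (real and imaginary parts of the measured impedance) and the variable names are our assumptions.

```matlab
% Training: 420 impedance samples per tissue type (blood, dermis, fat).
X = [real(Ztrain(:)), imag(Ztrain(:))];   % n-by-2 feature matrix
y = tissueLabels;                          % n-by-1 categorical labels
mdl = fitcknn(X, y, "NumNeighbors", 34, "Distance", "euclidean");

% Classification of one new measurement per excitation state:
tissue = predict(mdl, [real(Znew), imag(Znew)]);
```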
The visualization was the central interface for the user. As we considered both manual insertions and automated insertions, a local and a global visualization mode were implemented. The visualization concept uses the needle CAD model as a basis. Depending on the tissue type classified for a particular exciting electrode pair, the sensitive volume around that pair is displayed in a color representative of the identified tissue type.
For local visualization, i.e., manual insertion, we had no feedback about the orientation and position of the needle. Only the local circumstances (classified tissue types) surrounding the needle were displayed. For the global visualization, i.e., automated insertion, we could use the position feedback from the linear stage to display the needle inside a coordinate system. Assuming the tissue was relatively stiff with little deformation, we could assign particular coordinates to the classified tissue types and display them. The sensitive volumes were represented by isosurfaces enclosing all tissue voxels that accounted for up to 97% of the total sensitivity.
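The rendering step can be sketched as follows: the isovalue is chosen so that the enclosed voxels account for 97% of the total sensitivity, and the resulting isosurface is drawn in the color of the classified tissue (the grid variables are assumed to be exported from COMSOL).

```matlab
Ssort  = sort(Sgrid(:), "descend");
cumS   = cumsum(Ssort) ./ sum(Ssort);
isoval = Ssort(find(cumS >= 0.97, 1));       % threshold capturing 97% of S

fv = isosurface(Xg, Yg, Zg, Sgrid, isoval);  % Xg, Yg, Zg: voxel grid coordinates
p  = patch(fv);
set(p, "FaceColor", [1 0 0], "EdgeColor", "none", "FaceAlpha", 0.4); % e.g., blood
camlight; lighting gouraud; axis equal
```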
The latency was determined using MATLAB's tic/toc commands. A measurement was finished as soon as a callback was triggered by the Arduino's excitation state change. The tic command registered the starting time for data processing, classification and visualization, and the toc command recorded the end time.
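A sketch of how this timing can be embedded in the serial callback (onStateChange, classifyTissue, and updateVisualization are hypothetical helpers; cf. the sketches above):

```matlab
configureCallback(ard, "terminator", @(src, ~) onStateChange(src));

function onStateChange(src)
    s = str2double(readline(src));     % new excitation state from the Arduino
    t0 = tic;                          % measurement for the previous state done
    Z = readImpedance();               % hypothetical IA wrapper (see above)
    tissue = classifyTissue(Z);        % k-NN prediction (see above)
    updateVisualization(s, tissue);    % recolor the sensitive volume of state s
    latency_ms = toc(t0) * 1e3;        % average observed here: 112.1 ms
end
```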
Needle Navigation Software
The IA and linear stage manufacturers' software packages were used to create a GUI that provides an intuitive interface for measurement, control, and visualization. Figure 4 shows the visualization tab of the MATLAB GUI. Further tabs allow the control and the adjustment of measurement settings. The measured data recorded for the k-NN algorithm can be seen in Figure 5. Figure 6 demonstrates the 3D visualization of different tissue types around the needle. Venipuncture serves as an exemplary application here. Red volumes represent blood, and yellow volumes depict fat tissue. Figure 7 shows the sensitive volumes around the electrodes, which were obtained by simulation.
Results
The average latency between measurement and visualization was 112.1 ms (standard deviation 66.4 ms) and the median was 86.9 ms (interquartile range 40.1 ms).
Local Visualization
The classification results are summarized in Figure 8. For the blood phantom, the overall success rate of the classification is 48.33%; i.e., 29 out of 60 classification attempts were correct. For dermis, the success rate is 63.33% with 38 correct classifications. For fat, 60 out of 60 classification attempts were successful, which equals a success rate of 100.00%. Altogether, 70.56% of the individual classifications were correct. The classified tissue types can now be visualized within the sensitive volumes enclosing the tissue volumes in which the identified tissue types are predominant. Exemplary locally visualized tissue types are shown in Figure 6. For the insertions into the layered phantom, Figure 8 also displays the classification success rate in a confusion matrix. As can be seen, the success rate decreased in contrast to the local visualization. Less than one-third of the blood and skin phantom punctures were correctly classified. Approximately half of the classification attempts in fat were correct. It was also observed that the classification rate decreased with the number of repetitions.
Global Visualization
We can now also consider the needle positions in which the electrodes are not completely located in one type of tissue phantom, i.e., the positions in which the needle is transitioning between two types of tissue phantom. Figure 8 also displays the classification success rates for these layered insertions in a confusion matrix. As can be seen, the success rate decreased in contrast to the local visualization: less than one-third of the blood and skin phantom punctures were correctly classified, and approximately half of the classification attempts in fat were correct. It was also observed that the classification rate decreased with the number of repetitions.
Based on the different colors in the visualization, we can determine when transitions between tissue types happen; this can be seen in Figure 9. Whereas the rear electrodes still detect the already-penetrated tissue type, the foremost electrodes already indicate the new tissue type. The spatial resolution of this needle is determined by the distances between the pairs of electrodes, which in this model is 2 mm in the longitudinal direction of the needle. For tissues with homogeneous properties, the spatial resolution is therefore determined solely by the geometric arrangement of the electrodes. The puncture depth in the punctured tissue is also obtained from the queried position of the linear stage and is displayed in the GUI.
Discussion
Impedance measurements by means of a needle equipped with stainless steel wire electrodes, as well as tissue classification, are possible with the proposed setup. Furthermore, the use of sensitive volumes that enclose the identified tissue types for visualization has demonstrated its functionality. In Figure 7, it can be seen that the sensitive volumes are distributed almost axisymmetrically around the needle. This is an indicator of the validity of the method, since the spatial sensitivity distribution itself is a quantity that depends solely on geometry, and we are using an axisymmetric needle geometry. The fluctuations that nevertheless remain can be attributed to numerical inaccuracies in the simulation and to the fact that the mesh used for the evaluation is not axisymmetric.
The hardware developed entails further uncertainties. The quality of the gold plating of the electrodes cannot be guaranteed; polarization effects can be more pronounced where the gold layer has detached due to mechanical stress during the insertions. The fabrication process is performed completely manually, resulting in needles of varying quality and properties. In Figure 9, it can be seen that during transitions the visualization still displays the previous tissue layer for electrodes that should already be in the next tissue layer. This can also be attributed to the fabrication process, as the electrodes are not positioned exactly as in the ideal CAD needle model. As can be seen in Figure 1, the distal electrodes have a larger distance to the needle tip than designed: compared to the CAD model, they are approximately 3-4 mm further away from the tip than the software expects, which is also noticeable in the visualization.
We also chose a relatively large needle diameter (14 G) to accommodate more electrodes on the needle; it is questionable whether this size would be used for venipuncture. In the current setup, the needle can no longer be used as a cannula because it is plugged into the Luer-Slip connector of the adapter PCB; this has to be restored in future iterations. As our switching frequency between the excitation states is only 7 Hz, a slower relay board could be used instead of the multiplexers, eliminating any parasitic effects the switching electronics might impose.
Currently, the classification algorithm uses the unprocessed measurement data as its basis, and the quality of the data can still be improved. The available data set had coefficients of variation for the real part between 0.735 (blood) and 0.941 (dermis) and for the imaginary part between 1.643 (fat) and 4.578 (dermis); reducing this variability would increase the classification success rate. It has not yet been investigated whether preprocessing or a different representation of the data improves the success rate of the k-NN algorithm used. A coordinate transformation or feature scaling of the data could be investigated, as sketched below. In addition, a comparison of different classification algorithms, such as support vector machines, or modifications of the k-NN algorithm, e.g., fixed-radius nearest neighbors, could be helpful.
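As one candidate for such preprocessing, z-score feature scaling would give the real and imaginary parts comparable weight in the Euclidean distance used by the k-NN classifier. This is a sketch of an unvalidated option, reusing the assumed variables from the classification sketch above:

[Xs, mu, sigma] = zscore(X);         % zero mean, unit variance per feature column
knnScaled = fitcknn(Xs, y, 'NumNeighbors', 34, 'Distance', 'euclidean');
XqScaled = (Xq - mu) ./ sigma;       % queries must be scaled with the training statistics
predictedTissue = predict(knnScaled, XqScaled);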
The classification clearly provided better results for single tissue phantoms, i.e., local visualization, than for the layered tissue structure (global visualization). This could be due to the fact that we used distilled water to rinse the needle prior to an insertion, and it cannot be guaranteed that the distilled water had fully dried or evaporated before the measurement. Supporting this suspicion is the observation of an improved classification success rate for needles that had been exposed to air for a longer time. Tissue parts from previous tissue types could also have remained on the exposed wire electrodes and then been dragged into the next tissue type, affecting the measurement result. In addition, we observed a slight leakage of blood phantom into the puncture holes once the layer above had been punctured, which also influences the measurement; the observation that the classifications became worse with increasing repetition number supports this assumption. It would be advisable to perform the insertions with fresh tissue phantoms without any previous puncture holes. The classification rate of the fat phantom was highest in both test modes. This is probably because the recorded impedance data for dermis and blood are quite similar (cf. Figure 5), while the data for fat differ more, which makes fat easier to distinguish.
Conclusions
For the first time, the concept of needle-based electrical impedance imaging has been presented. It exploits the sensitivity distribution of the electrode configuration in use; a prior determination of the sensitivity distribution is therefore pivotal. An average classification success rate of 70.56% for individual tissue phantoms was achieved. Future work will especially address hardware improvements as well as improvements to the classification process and the underlying dataset. Furthermore, a thin-film-based approach to electrode structuring could be worth investigating, as minimal structure sizes, and thus the spatial resolution, can be scaled down to the micrometer range. An implementation in C++ or C# is conceivable to avoid potential latency issues. In addition, more tissue types will be included to cover more applications, such as epidural anesthesia. Eventually, usability tests will be performed to evaluate the efficacy of the needle navigation system as an alternative to conventional imaging techniques.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
BMP6 inhibits gastric cancer growth and predicts good prognosis
Background Gastric cancer (GC) is a common tumor of the digestive tract, and effective treatment methods are still lacking. Bone morphogenetic protein 6 (BMP6) is closely related to the occurrence and development of various tumors, but its relevance to GC is still unclear. The aim of this study was to explore the relationship between BMP6 and the occurrence and development of GC. Methods In this study, we investigated the relationship between BMP6 and the prognosis of GC patients using bioinformatics technology and clinical tissue samples. We also explored the connection between BMP6 and the biological behavior of GC cells through molecular biology experiments and relevant in vivo animal experiments. Finally, we examined the mechanisms by which BMP6 inhibits the onset and progression of GC. Results Through analysis of The Cancer Genome Atlas (TCGA) database, we observed that BMP6 is expressed at low levels in GC, and its low expression is associated with a poor prognosis in GC patients. Cell experiments demonstrated that BMP6 expression can influence the proliferation of GC cells both in vitro and in vivo. Furthermore, we discovered that BMP6 is linked to the nuclear factor-κB (NF-κB) pathway, and subsequent experiments confirmed that BMP6 can inhibit the biological activity of GC cells by activating the NF-κB pathway. Conclusions Our findings suggest that BMP6 is a potential prognostic biomarker in GC and can regulate the biological activity of GC cells through the NF-κB pathway. BMP6 may serve as a promising therapeutic target for GC, and our study introduces novel ideas for the prevention and treatment of this disease.
The literature cited is appropriate, and the length is commensurate with the message. For the mentioned reasons, the manuscript may be accepted for publication with major revisions.
Major points: 1) The overexpression and silencing experiments could be described in more detail.
Reply 1: Thank you very much for your hard work. Changes in the text: We have added relevant content to the methodology section, providing a more detailed description of the overexpression and silencing experiments, and marked it in red font.
Reviewer C
The manuscript attempts to address the lack of understanding of BMP6 expression in gastric cancer and its relevance to malignancy. Whereas the topic is of relevance, I have quite a few remarks: 1. Overall, the language should be proofread by a native speaker. The grammar and flow of the text need improvement. The different results are partly described in an unclear manner.
Reply 1: Thank you very much for your hard work. Changes in the text: We have invited professional native English speakers to edit our manuscript, and the edited content has been marked in red font in the text.
2. The Materials and Methods section lacks information about the data extraction method used for the mentioned database.
Reply 2: Thank you very much for your hard work. Changes in the text: We have made additions to the methodology section of the manuscript, providing an explanation of the data extraction methods from the database, and the modified content has been marked in red font.
3. The "Results" chapter is not clearly divided into sub-chapters, lack of clarity.
Reply 2: Thank you very much for your hard work. Changes in the text: We have rewritten the results section, dividing it into distinct sections for each part, and marked the changes in red font. Reply 3: Thank you very much for your hard work. Perhaps due to our oversight, some parts of the description were not clear enough.
Changes in the text: We have made modifications to certain parts of the text to enhance clarity. Additionally, we have included relevant immunohistochemistry images in Figure 1c. The original associated image files will be provided in the supplementary materials. Reply 4: Thank you very much for your question. The results represented in Fig. 1a and 1b are both intended to demonstrate the high expression of BMP6 in normal tissues and low expression in tumor tissues. However, the data used for Fig. 1a compare cancer and adjacent tissues that may not be from the same patient, whereas Fig. 1b requires a paired comparison of cancer and adjacent tissues, typically from the same patient. Fig. 1c: What statistical test was used to calculate significance? It does not look significant to me just by glancing over. Why are the error bars so small if the sample distribution is clearly quite wide?
Reply 5: Thank you very much for your question. Regarding the statistical methods, we used independent sample t-tests. The small error bars in the figures might have been due to the way the images were created. We have reworked the images to make them appear clearer and more understandable.
Fig. 2c: What does "P23" stand for? Why do we not see any BMP6 expression in the WB? Reply 6: Thank you very much for your question. "P23" in the text represents the control group, and this may have been unclear due to our oversight. We have added supplementary labeling in red font in the caption of Figure 2 to clarify this. In the experimental methods section, we have also provided additional information. To validate BMP6 overexpression, we detected the flag-tagged protein: when the flag-tagged protein is detected, it indicates that the plasmid overexpressing BMP6 has been successfully transduced into the cells. Fig. 2d, f, g: The graphs lack proper labelling of the cell lines analysed. How many repeats of this experiment were done? Error bars are very small. Reply 7: Thank you very much for your question. We have provided detailed labeling and explanations for the cell lines used in Fig. 2d, f, and g. Additionally, we have included supplementary information in the methods section of the manuscript. All experiments were conducted three times, and statistical calculations were performed using independent sample t-tests.
Fig. 2h+i: The colour intensities of the plate photographs and the quantitative graph do not match.
Reply 8: Thank you very much for your question. This might have been due to issues with the image output quality. We have re-exported our images at increased resolution to make them clearer and more understandable. Fig. 3e: Why suddenly a different cell line (AGS-7901), which has not even been described in the text or materials and methods? Reply 9: Thank you very much for your feedback. We have carefully reviewed the manuscript and found that, due to our mistake, we incorrectly wrote "SGC-7901" as "AGS-7901." This was an error on our part, and we have made the necessary corrections in the images. Once again, we appreciate your diligence and attention to detail. I would have liked to see the same WB on normal non-cancerous gastric cells and cells overexpressing BMP6. Fig. 3 in general: A chemical inhibition of NF-κB signalling would be useful to prove the stated hypothesis of BMP6 influencing carcinogenesis via NF-κB.
Reply 9: Thank you very much for your suggestion. In our future research, we will conduct experiments to verify BMP6 overexpression in normal cells. Additionally, we plan to explore the relationship between BMP6 and gastric cancer development by inhibiting the NF-κB pathway using inhibitors in our upcoming experiments. We have already included these considerations as limitations in the conclusion section of the article.
Fig. 1: I would like to see at least one image of the actual tissue microarray. What data are generated using the TCGA database and what data using the microarray? The descriptions in the text and in the figure do not match.
Fig. 1a/b: Are the data used for Fig. 1a and 1b the same? If yes, why are there different significances?
"year": 2024,
"sha1": "b6880e1f1d6eecf0cf8cf5bae763ffc820b4e670",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.21037/jgo-23-512",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "94bd4005a3a764db6e516641996a04e0e513a122",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Enrichment and mutation detection of circulating tumor cells from blood samples
The potential of circulating tumor cells (CTCs) in the diagnosis and prognosis of cancer patients has become increasingly attractive. However, molecular analysis of CTCs is hindered by low sensitivity and a high level of background leukocytes in CTC enrichment technologies. We have developed a novel protocol using a microfluidic device, which enriches and retrieves CTCs from blood samples. The principle of CTC capturing is that tumor cells are larger and less deformable than normal blood cells. To evaluate the potential of utilizing Celsee PREP100 in CTC molecular analysis, we prepared prostate cancer cell lines PC3 and LNCaP, retrieved the captured cells and analyzed them using PCR amplicon sequencing. We were able to recover an average of 79% of 110-1,100 PC3 and 60–1,500 LNCaP cells, and detect the p.K139fs*3 deletion of the p53 gene in PC3 cells and p.T877A mutation of the androgen receptor gene in LNCaP cells. Next, we spiked these two types of cells into normal donor blood samples, captured the cells and analyzed them using PCR amplicon sequencing. The PC3 and LNCaP cells were captured and retrieved with the ratio of captured CTCs to the background leukocytes reaching 1:1.5 for PC3 and 1:2.9 for LNCaP cells. We further revealed that the p.K139fs*3 deletion and p.T877A mutation can be detected in the captured PC3 and LNCaP cells, respectively. We successfully validated this approach using clinical blood samples from patients with metastatic prostate cancer. Our results demonstrated a novel approach for CTC enrichment and illustrated the potential of CTC molecular characterization for diagnosis, prognosis and treatment selection of patients with metastatic malignancy.
Introduction
Circulating tumor cells (CTCs) have been identified in the blood and bone marrow of patients with breast, prostate and colon cancers (1)(2)(3) at frequencies as low as 1 in 100 million to 1 billion blood cells. Molecular characterization of CTCs may provide a greater understanding of disease metastasis, identify aggressive tumors and enable therapeutic selection and monitoring of the disease for patients undergoing treatment (4,5). A variety of technologies have been developed to improve the detection and capture of CTCs from peripheral blood, including immunomagnetic bead separation using monoclonal antibodies targeting cell-surface antigens for positive or negative selection, cell sorting using flow cytometry, filtration-based size separation, density gradient centrifugation, microfluidic devices and fast-scan imaging (6)(7)(8)(9)(10). For example, CellSearch™ was the first CTC technology that demonstrated its clinical validity in predicting progression-free and overall survival of metastatic cancer patients based on CTC enumeration (3)(4)(5)(6).
It is of great interest to go beyond cell enumeration and further characterize the CTCs by assessing clinically relevant molecular markers on or within CTCs to gain insight into the mechanisms of metastasis and best treatment modalities for patients (1)(2)(3)11,12). For example, significant progress has been made in breast cancer, including effective hormonal therapy, chemotherapy and targeted therapies against estrogen receptor (ER) and HER-2. In prostate cancer, androgen receptor (AR) variant 7 has been implicated in predicting response to targeted therapies on AR. Established clinical, pathological features and biomarker status are routinely used to guide treatment options. It has become critically important to determine which patients are most likely to benefit from specific therapies. Detecting such molecular markers using a minimally-invasive blood test for CTCs has great potential in clinical practice to guide therapy choice for patients. However, despite advances in CTC technologies, the low frequency of CTCs in cancer patients and the extensive background leukocytes have limited the synergism of biomarkers and CTC technologies (11,12).
We have developed a novel microfluidic device, Celsee PREP100, that uses a size- and deformability-based capturing mechanism for CTCs (13). The microfluidic chip has a parallel network of fluidic channels which contain about 56,000 capture chambers (13,14). The chip fabrication begins with a silicon master device containing micro-features that make up a fluidic network (75-µm deep), leading to individual cell trapping chambers (20x25x30 µm) with a pore size of 10x8 µm. Each chamber ensures that smaller blood cells, such as red blood cells and most of the leukocytes, escape while larger cancer cells get trapped and isolated in the chamber. The manufacturing process uses standard photolithography and deep reactive ion etching for micro-fabrication. From the master device, a soft elastomeric negative mold is created by pouring and curing against the silicon master. The final micro-substrate is created by hot embossing a plastic plate made of cyclic olefin polymer (COP) against the elastomeric negative mold. A thin plastic laminate containing pressure-sensitive adhesive is then laminated against the COP micro-substrate to create the final microfluidic chip. The chip is placed on the Celsee PREP100 device for CTC capturing.
Since the device captures cells using a label-free mechanism, it provides improved sensitivity in capturing CTCs and an open platform for investigators to use a variety of antibodies to identify and characterize CTCs upon capture (13,14). In a previous study, we compared CTC enumeration between the Celsee system and the FDA-cleared CellSearch system using blood samples from patients with metastatic prostate cancer; CTC counts were significantly higher using the Celsee system (14). The captured CTCs could also be retrieved reproducibly from the microfluidic chip using a back-flow procedure for further nucleic acid extraction and molecular analysis. In the present study, we report the development of a novel protocol for capturing cells from blood samples, retrieving them, and analyzing the cells using PCR amplicon sequencing. Using this method, we evaluated the potential of applying the Celsee PREP100 in CTC molecular analysis by analyzing p.K139fs*3 of TP53 and p.T877A of AR in captured prostate cancer cell lines PC3 and LNCaP and in captured spiked-in cells in normal donor blood. The method was also tested successfully in clinical blood samples from 11 patients with metastatic prostate cancer. Our results demonstrated the potential of utilizing CTCs for diagnosis, prognosis and treatment selection of patients with metastatic malignancy.
Materials and methods
Materials. Prostate cancer cell lines PC3 and LNCaP were purchased from the American Type Culture Collection (ATCC; Manassas, VA, USA) and cultured in Dulbecco's modified Eagle's medium (DMEM) containing 5% fetal bovine serum (FBS) in a humidified incubator supplemented with 5% CO 2 . Upon confluency, cells were digested with 0.5% trypsin and passaged at a 1:4 ratio. Some cells were resuspended in culture medium, counted and used for experiments. Blood samples were obtained from healthy donors. Clinical blood samples from patients with metastatic prostate cancer were obtained at Henry Ford Health System (Detroit, MI, USA). The present study was approved by the Institutional Review Board (IRB) of the Henry Ford Health System and written consent was obtained from all patients. Following informed consent, blood samples from patients were acquired in 10 ml BCT Cell-Free DNA tubes (Streck, La Vista, NE, USA) or EDTA-coated Vacutainer ® tubes (BD Biosciences, Franklin Lakes, NJ, USA).
Removal of CD45-positive cells from blood samples and CTC enrichment and staining.
To test the CTC enrichment efficiency of the Celsee PREP100 instrument, 250 PC3 and 50 LNCaP cells were spiked into 4 ml of blood from healthy donors. CD45-positive cells were removed using the RosetteSep Human CD45 Depletion Cocktail (Stemcell Technologies, Inc., Cambridge, MA, USA) following the manufacturer's protocol. The upper layer of plasma cells was collected and added into the inlet funnel of the Celsee PREP100. Cells were enriched in the microfluidic chip and stained for cytokeratins, CD45 and nuclei using an anti-pan-cytokeratin antibody (1:100 dilution; cat. no. 914204), an anti-CD45 antibody (1:100 dilution; cat. no. 368515; both antibodies from BioLegend, Inc., San Diego, CA, USA) and DAPI, using the Celsee PREP100 CTC Immunochemistry kit (Celsee Diagnostics, Plymouth, MI, USA). Cells enriched on the slides were counted using the Celsee Analyzer (14). Cytokeratin- and DAPI-positive, CD45-negative cells were counted as CTCs. The enrichment efficiency was calculated as the percentage of enriched cells out of the total spiked-in cells.
Cell retrieval using the Celsee PREP100 instrument. Different amounts of PC3 and LNCaP cells were spiked into the priming buffer and retrieved in 2 ml phosphate-buffered saline (PBS) using the Celsee PREP100 instrument (Celsee Diagnostics) following the protocol provided by the manufacturer. The cells were then concentrated in 10-50 µl by centrifugation at 500 x g for 10 min and counted using a hemocytometer. The retrieval efficiency was calculated as the percentage of retrieved cells out of the total spiked-in cells.
Retrieval of spiked-in PC3 and LNCaP cells in normal donor blood samples.
Different amounts of PC3 and LNCaP cells were spiked into 4 ml of blood from healthy donors. After removal of the CD45-positive cells and enrichment of the spiked-in cells as described above, cells enriched in the microfluidic chip were retrieved in 2 ml PBS using the Celsee PREP100 instrument (Celsee Diagnostics) following the manufacturer's protocol. These cells were then collected by centrifugation at 500 x g for 10 min and stored at -20˚C for later analysis by PCR amplicon sequencing.
Capturing and retrieval of CTCs in clinical blood samples.
An aliquot of 4 ml of blood was used to capture or retrieve CTCs using the Celsee PREP100 instrument (Celsee Diagnostics) following the protocol provided by the manufacturer. CTCs were monitored using an inverted fluorescence microscope, and CTC enumeration following antibody labeling was performed manually. PanCK+/CD45- nucleated cells were identified as CTCs. Positive and negative controls for antibody performance and staining were included in each experiment. After removal of CD45-positive cells and enrichment of the CTCs as described above, cells enriched in the microfluidic chip were retrieved in 2 ml PBS using the Celsee PREP100 instrument (Celsee Diagnostics) following the manufacturer's protocol. These cells were then collected by centrifugation at 500 x g for 10 min and stored at -20˚C for later analysis by PCR amplicon sequencing.
PCR amplicon sequencing. The mutations p.K139fs*3 of TP53 in PC3 cells and p.T877A of AR in LNCaP cells have been previously reported (15,16). To detect these mutations, nested PCR was employed. Tables I and II list the outer and inner primer sets designed to amplify each of the mutations. The retrieved cells were resuspended in 5 µl ddH2O, incubated at 4˚C for 10 min, and used as the template for PCR amplification. The PCR assay was set up in a 20-µl reaction containing 10 µl KAPA HiFi HotStart Ready Mix (Kapa Biosystems, Inc., Wilmington, MA, USA), 300 nM of each outer primer and 1 µl dimethyl sulfoxide (DMSO) under the following cycling conditions: 2 min at 95˚C; followed in turn by 3 cycles of 20 sec at 98˚C, 20 sec at 64˚C and 30 sec at 72˚C; 3 cycles of 20 sec at 98˚C, 20 sec at 61˚C and 30 sec at 72˚C; 3 cycles of 20 sec at 98˚C, 20 sec at 58˚C and 30 sec at 72˚C; 35 cycles of 20 sec at 98˚C, 20 sec at 57˚C and 30 sec at 72˚C; and 10 min at 72˚C. One microliter of the PCR product amplified with the outer primer set was used as the template for the PCR reaction with the inner primer set under the same conditions. The final PCR products were examined on a 1% agarose gel and, after purification, subjected to Sanger sequencing using the inner primers.
Results
Mutation detection in PC3 and LNCaP cell lines. PC3 and LNCaP are established prostate cancer cell lines and have been widely used in studies on prostate cancer. It has been reported that PC3 cells possess a deletion in the TP53 gene, p.K139fs*3, and that LNCaP cells possess a missense A to G mutation in the AR gene, p.T877A (15,16). To detect these mutations, we designed primers (Table I) targeting the mutations, performed nested PCR and subsequently sequenced the amplicons by Sanger sequencing. To detect the p.T877A mutation of the AR gene in the blood samples of patients, a second set of inner primers was used to improve the sensitivity of the assay (Table II).

Cell retrieval efficiency using the Celsee PREP100. We next tested the cell retrieval efficiency of the Celsee PREP100 by inputting a total of 110, 220, 330, 440, 550, 880 and 1,100 PC3 cells, and 60, 300, 600, 900, 1,200 and 1,500 LNCaP cells, respectively. The results are shown in Table III. Overall, the efficiency of cell retrieval was ~70%, with a range from 40 to 121% for PC3 and LNCaP cells. The observation that some of the cell recovery rates were >100% was due to the variation of cell counts by hemocytometer at low cell numbers.
CTC enrichment and retrieval efficiencies using spiked-in cells in normal donor blood samples.
To test the enrichment and retrieval efficiencies for CTCs in blood samples, we spiked PC3 and LNCaP cells into 4 ml of whole blood from healthy donors and stained the enriched cells for cytokeratins, CD45 and nuclei. Fig. 3 (Table IV) reveals that ~40% of 250 spiked-in PC3 cells and 74% of 50 spiked-in LNCaP cells were retrieved after enrichment. Furthermore, the enriched PC3 and LNCaP cells accounted for ~40 and 25% of the total retrieved cells, so the ratio of captured CTCs to background cells reached 1:1.5 for PC3 (40 CTCs per 60 leukocytes) and 1:2.9 for LNCaP cells (roughly 1 CTC per 3 leukocytes), suggesting that the removal of blood cells by the Celsee PREP100 was nearly complete and the level of remaining leukocytes in the enriched sample was very low.
CTC enrichment and retrieval using clinical blood samples.
To test the enrichment and retrieval efficiencies for CTCs in the blood samples of patients, we stained the enriched cells for cytokeratins, CD45 and nuclei. Fig. 3 reveals typical images of the enriched cells in the microfluidic chip, where cytokeratin- and DAPI-positive (green and blue), CD45-negative cells were considered CTCs. For the patient samples that had CTCs, we processed another 4 ml of blood and retrieved the CTCs for subsequent mutation analysis.
Mutation analysis. In order to test whether the mutations p.K139fs*3 of TP53 in PC3 cells and p.T877A of AR in LNCaP cells could be detected in the enriched cells from blood samples, we spiked 50, 100, 300 and 1,000 PC3 cells, and 25, 50, 100 and 250 LNCaP cells into 4 ml of blood, depleted CD45-positive cells, and enriched and retrieved the cancer cells using the Celsee PREP100. The cells were then spun down, resuspended in water and used as templates for PCR amplification and subsequent Sanger sequencing. Fig. 4 reveals the PCR products, and Fig. 5 shows the Sanger sequencing results of the PCR amplicons, indicating that both mutations could be successfully detected in the enriched cells. For the CTCs retrieved from 14 clinical blood samples, we successfully performed the mutation analysis of p.T877A. The heterozygous AR mutation was identified in 1 sample (CTC 37) (Fig. 6); all other samples were negative for this mutation. Sixty-three CTCs were identified in sample CTC 37.
Discussion
Molecular characterization of CTCs has been hindered by the low sensitivity and high level of background leukocytes of currently available CTC enrichment technologies. We demonstrated that CTCs can be readily captured and further characterized with molecular markers using a simple device, the Celsee PREP100. The protocol we report in the present study enriches and retrieves CTCs from blood samples based on the fact that tumor cells are larger and less deformable than normal blood cells. To evaluate the performance of cell enrichment and retrieval, we prepared the prostate cancer cell lines PC3 and LNCaP and analyzed the captured cells by PCR amplicon sequencing. We were able to recover an average of 79% (ranging from 40 to 100%) of 110-1,100 PC3 cells and 60-1,500 LNCaP cells and to detect the p.K139fs*3 deletion of TP53 in PC3 cells and the p.T877A mutation of AR in LNCaP cells. We also tested these two types of cells spiked into normal donor blood samples, captured and retrieved the cells, and analyzed the retrieved cells by PCR amplicon sequencing. We were able to capture ~40% of PC3 cells and 74% of LNCaP cells, with the ratio of captured CTCs to background leukocytes reaching 1:1.5 for PC3 and 1:2.9 for LNCaP cells. The p.K139fs*3 deletion and the p.T877A mutation were detected in the captured spiked-in PC3 and LNCaP cells, respectively. The method was also tested successfully in clinical blood samples from patients with metastatic prostate cancer. Our results demonstrated the potential of CTC molecular characterization for the diagnosis, prognosis and treatment selection of patients with metastatic malignancy.

The unique design of the microfluidic chip and the Celsee PREP100 allows separation of CTCs from the background leukocytes and retrieval of the captured CTCs in a simple fashion. The variability observed in the recovery rate of cell retrieval is mainly due to the manual nature of the protocol; for example, the pressure and speed of manual pausing and pumping to retrieve the captured cells could vary from experiment to experiment and from operator to operator. The variable number of background leukocytes could come from the different healthy donors of the blood samples. To improve the purity of captured cells and the consistency of the protocol's performance, we are developing an automated pump that can be connected to the Celsee PREP100 to retrieve captured cells from the microfluidic chip for PCR and sequencing analyses.
Enrichment of circulating cells can enable a number of downstream molecular applications. In addition to the Sanger sequencing analysis used in our study, RT-PCR and DNA array assays for gene expression profiling, as well as NGS analysis for several DNA- and RNA-based genomic applications, have been explored with enriched CTCs. Fluorescence in situ hybridization (FISH) assays on CTCs have also been demonstrated for determining gene amplification and aberrant copy number changes in cancer cells. Molecular profiling of CTCs could produce insightful information towards understanding the heterogeneity and complexity of cancer and shed further light on the mechanisms of tumor metastasis, thus delineating tumor cells that are relevant to prognosis and therapy choice. With improvements in automation and standardization, enrichment and characterization of CTCs could overcome the technical limitations of low sensitivity and high background leukocytes and become a routine diagnostic tool in clinical use.
"year": 2018,
"sha1": "3490cc4a188b0248da46864afe8579420773b9eb",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/or.2018.6342/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3490cc4a188b0248da46864afe8579420773b9eb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Precedent Phenomena in the Process of Creating a Comic Effect in Slovak Internet Memes
In this article, we examine the humorous strategies and mechanisms in memetic texts of Zomri, the most popular satirical Internet community in Slovakia, which has more than 700 thousand followers on Facebook and Instagram. When examining the degree of sophistication of humour and its openness in relation to its recipients, we were interested in how often the articulations refer to universal, international, or pop culture phenomena, and how often to phenomena whose interpretation requires a specific Slovak linguistic and cultural competence. With the intention of comparing quantities, we created a narrower corpus of texts published on the Zomri community page in 2019 and observed the approximate proportions of universal precedent (global) and national precedent (Slovak) phenomena and the ways of creating the comic effect associated with their use. We have come to the conclusion that authors of Slovak memetic texts turn to universal precedent phenomena more often than to national ones, and that the main ways of creating the comic effect in the analysed memes were the so-called effect of deceived expectation and the unpredictability of the transformation and subsequent re-semantization of the verbal or visual component of the meme.
INTRODUCTION
In the last decade, Internet memes have become a popular phenomenon that has literally taken over the Internet space, and social media in particular. This highly intertextual and rapidly spreading phenomenon, which is mainly a combination of a funny image and text (visual and verbal components), can reflect not only current social issues, but also people's feelings, thoughts and attitudes; it also gives them the opportunity to express themselves in the best possible way. It shapes the modern linguistic picture of the world and often sets the vector for the direction and character of public discourse.
Despite the variety of existing approaches to defining and describing memes, the term 'memetics' has been established as the name of the study of memes in international practice, which was greatly facilitated by the proposal of American science journalist Douglas Hofstadter, who in January 1983 in his column in Scientific American suggested calling the discipline that studies memes 'memetics' [1].
At the same time, we are witnessing an ongoing terminological debate in which memetic texts are now also referred to as multimodal, polycodal, and creolized. Internet memes, which nowadays make up a significant part of the content of the so-called new media, include mainly photoshopped photos, parodies, demotivators, messages, and media memes [2]. The topic of memetic texts and the comic effect associated with them is thoroughly researched by Russian linguists such as Yu. Shchurina, S. Kanashina, V. Shcherbin, V. Anisimov, Ye. Yuryeva, T. Popova, L. Duskayeva and others. In Slovakia, this issue has been dealt with in detail by, for example, S. Šoltésová [3], J. Gallo [4,5], M. Stankova [6], A. Samelova [7] and other scholars, whose memetic research is mainly conducted as interdisciplinary research, integrating perspectives from semiotics, media linguistics, cognitive linguistics, discourse analysis and text linguistics.
Our research was conducted in a similar vein; it deals with examples of memetic texts published in 2019 by Zomri, the most popular satirical community in Slovakia, which publishes humorous memetic texts mainly on Facebook (368,000 followers) and Instagram (320,000 followers), but also on its own website (https://www.zomri.online/). In most cases, the authors use sharp social satire, and our research goal is to observe the share of universal (global) and national (Slovak) intertextual references in the analyzed memes, as well as to try to further understand how they create a comic (or satirical) effect.
There are several types of Internet memes (text, video, pictures); the object of our study is solely polymodal Internet memes. In the Russian digital space, they are 'regarded as a subspecies of polymodal texts because they consist of two parts: verbal (linguistic) and nonverbal' [8]. Speaking about the importance of our (and other, similar) research, it is also necessary to note that although most memes are intertextual in nature and contribute to the transmission of cultural heritage from generation to generation, many researchers also note that so-called anti-memes are based on 'a set of certain tactics aimed at modeling mental mechanisms that change the moral beliefs and moral attitudes of the recipient, especially the younger generation, and thereby, in many ways, they shape a new picture of the world by using the cultural codes which are already embedded in a given area' [9].
Precedent Phenomena as a Manifestation of Intertextuality: Universal Precedent and National Precedent Phenomena
The main characteristics of Internet memes, which Kanashina singles out in her publication, include virality, replicability, emotionality, seriality, mimicry, minimalism of form, polymodality, relevance, wit (comicality), publicness, and fantasy [10]. Based on our observations, we would like to point out their intertextual nature as a key characteristic; for example, our analysis last year of a large corpus of polymodal texts from the covers of a popular Slovak magazine, .týždeň, demonstrated that they were mainly of an intertextual nature and referred in both their verbal and visual components to popular films, paintings, posters, and famous photos, as well as to precedent situations, names, texts and expressions of both a historical and literary nature; out of 271 analyzed covers (within a period of 5 years), 90 covers were of an intertextual nature [11]. Similar conclusions are also made by Slovak media linguists, who state that 'memes can be characterized by several basic properties: recurrence, intertextuality, contextuality, narrativity, and directness' [6] and that memes as units of cultural evolution are characterized by a high degree of variability and heritability [12].
In research on political, media, advertising and other discourses, in which intertextuality is increasingly used to influence the recipient, a fundamental feature and manifestation of intertextuality for Russian scholars today is the precedent phenomenon, which is characterized by the fact that it is 'well-known to all representatives of the national linguistic and cultural community; it is topical in the cognitive and emotional space and the reference to it is constantly renewed in the speech of the representatives of the national linguistic and cultural community' [13]. According to several representatives of Russian linguistics and cultural studies, it is hyperonymically a concept which covers precedent situation, precedent text, precedent name and precedent expression.
In the theory of precedence, it is also important to understand the sphere of action of particular precedent phenomena. According to the classification proposed by Krasnykh, 'there are social-precedent phenomena, well-known to a member of a particular social group, e.g., professional, confessional, generational; national precedent phenomena, known to every representative of the given linguistic-cultural community, and universal precedent phenomena, which are (hypothetically) known to every Homo sapiens and are part of the universal cognitive space of mankind' [14]. In the following study of concrete memetic texts, we will be interested in the share of national precedent and universal precedent phenomena in them. This share to a large extent characterizes the modern Slovak linguistic picture of the world and the way it reflects current social events by means of international and national linguistic-cultural codes.
Creating a Comic Effect in Memetic Texts
An integral part of Internet communication is the comic effect, which is connected with the most important function of polymodal Internet memes: entertainment. In the modern media space, we observe a desire for entertainment, which is presented as emotional pleasure, aesthetic enjoyment and a distraction from everyday worries, and which, in turn, can be considered a compensatory need of a person, as well as an important factor in the regulation of emotional life [15]. The comic effect is a complex social, biological and psychological phenomenon that arose simultaneously with the essential elements of the human psyche: language and thinking. At the heart of the comic effect is a sharp discrepancy between the real essence of a phenomenon and what it pretends to be. In the perception of a comic text, the emotion of amazement prevails. According to Sigmund Freud, the sense of the comic appears because jokes and witticisms have the ability to bypass the so-called internal 'censors', the barriers established by one's culture [16]. Therefore, most researchers state that the main ways of creating a comic effect in polymodal texts are: 1) the effect of deceived expectation; 2) contradiction or inconsistency between the visual (iconic) and verbal components of the meme; 3) the unpredictability of the transformation and subsequent re-semantization of the verbal or visual component of the meme. In essence, according to Russian media linguists, in each of these cases we are talking about the emergence of the comic effect and humor as a 'contradiction between essence and appearance of the phenomenon' [17].
In the following subchapters, we will take a closer look both at the peculiarities of the creation of the comic effect in the studied memes and at the kinds of precedent phenomena their creators most often turn to.
Universal Precedent Phenomena and National Precedent Phenomena in Memes Published on the Zomri Community Page in 2019 1
We find the most complicated humorous communicative acts to interpret are those which realize verbal humor and update idioms and titles of films, literary works, quotes from them, and signs known to a common speaker of a language and culture and which evoke in their consciousness a sum of connotations and associations. In the process of examining the degree of sophistication of humor and its openness in relation to recipients, we were interested in how often articulations refer to universal, internationally known phenomena and how often to the phenomena whose interpretation requires a specific linguistic competence. For the purpose of this quantitative comparison, we created a narrower corpus from the texts published on the Zomri community page in 2019.
A range of 91-132 texts was published per month on the monitored page in 2019, with humorous multimodal texts labeled by their creators as 'memes' prevailing. Of the aforementioned 91-132 texts published monthly on the page, at most about 20 referred to internationally known texts, mostly pop culture films and series. Based on our observations, we assume that the selection of internationally known texts may be stimulated by the phonetic form of their titles and by the aspect of the target subjected to comic interpretation (Terminator ⟶ Termenator, Ranger ⟶ Genger, Miss Peregrine ⟶ Miss Pellegrini), which is created by an external resemblance between fictional characters or between the plot and the protoevent. In the case of scripts evoked in these ways, we can consider the macro-oppositions 'national-non-national' and 'fictional-real', which, however, at a lower level, also include other oppositions, or contrasts, e.g., 'strong-weak', 'original-copy', 'good-bad'.
Updating internationally known texts does not provoke rejection from recipients and can also stimulate the continuation of a strategically coherent language game.
In the corpus, we single out individual sporadic statements referring to Czech films (in January 2019, for example, up to four), which are intensively received due to their cultural and linguistic proximity in the Slovak space, and what is paradoxical is that, in contrast to Slovak films, their characters and statements are also frequently updated (especially films such as Pelíšky [Cosy Dens] and Tankový prapor [The Tank Battalion]), as well as texts referring to the period of socialism (developing topics through the concept of scarce goods), but also texts referring to popular films of the period of socialist Czechoslovakia (...). We present them in such detail so that it is clear that their proper decoding requires, first and foremost, a familiarity not only with the cultural texts of status that we encounter when learning the Slovak language, but also experience with the texts that we are confronted with in school and preschool facilities, when watching television shows and reading the press, and in everyday life in a given linguistic community and in a given language.

1. Other aspects of multimodal texts published on the Zomri community page were further analyzed by the authors in the paper Uštipačnosť and Correctness in Slovak Online Humor (in print).
Familiarity is also necessary in terms of the associations that certain images evoke. We illustrate this with another example.
National Precedent Phenomena as a Limited-access Joke. Case: Skalické Diplomy 2
The multimodal text Skalické diplomy (Diplomas from Skalica) is a reaction to a plagiarism scandal involving two Slovak politicians at the University of Central Europe in Skalica. It is constructed with the help of the modes of image (a photoshopped photo) and writing (a verbal commentary as a means of anchoring the image), and it evokes scripts [cf. 18] through verbal and nonverbal signs: 1) a marketplace (a photoshopped photo depicting a food truck with the sign Skalické diplomy and the announcement Akcia: balíček diplom + dizertácia zľava 30% [Special offer package: diploma + dissertation thesis - 30% discount]), where the saleswoman is selling master's theses instead of the expected Skalický trdelník, a traditional Slovak pastry made of yeast dough in the shape of a hollow (dutý) cylinder (the Slovak word dutý is also used in student slang in the sense of a student unprepared for class or an exam; someone who knows nothing), and 2) the university (which is represented and evoked by the theses stacked up on the food truck counter). The laughing reaction is provoked by the opposition between the high and the low (the university and the marketplace: the product of intellectual activity, whose purpose is to satisfy cognitive needs, and the product which serves to satisfy physiological needs). We also observe the laughing reaction in the dialogue on the Facebook page for which the meme was the impetus, as well as the fact that, as in the aforementioned example, the communicants accept the play and further elaborate on the elements evoked by the scripts. Thus, the collective unfolding of a comic interpretation of a protoevent occurs. However, we consider the meme-message itself to be a limited-access joke. Indeed, in order to decode it, besides a familiarity with the protoevent (the plagiarism scandal), we also need to identify the phrase with which the articulated phrase Skalické diplomy enters into paradigmatic relations, and to use both linguistic and cultural competence, since the food truck from which the pastry is usually sold serves as a visual aid (an element from the evoked mental space). At the same time, it can also be viewed as an indexical feature evoking the environment where the food truck is often found: the marketplace. These decodings also allow the comic interpretation observed in the dialogue to unfold. The comic interpretation relies on a) the phonological resemblance with the word trdlo (a oprávňujú hrdo nosiť titul Skalické trdlo [which allows the plagiarists to proudly hold the title Skalické trdlo]; trdlo is a mild insult used to describe someone who is slow-witted) and with the word trtkať (a vulgar word which refers to sexual intercourse; the blend trtelník is used in relation to Boris Kollár, one of the politicians involved in the plagiarism scandal who, according to the tabloid press, is known to have 11 children with 10 women, an aspect which often represents him metonymically in humorous texts of the community) and b) cultural experience with the marketplace, leaflet advertising, and life in small towns and villages, where advertising and news are announced by a PA system (Hlásenie miestneho rozhlasu [The announcement of the local PA system] [...]).

2. This example was used and analysed by N. Cingerová also in the article Memetic Text as a Fragment of Public Discourse (2020) to point out how memes function as a way of social criticism [19].
As we can see from the aforementioned analysis, knowledge of Slovak culture and social contexts is necessary for an adequate perception of the complex semantics of the multimodal text Skalické diplomy. The comic effect is created as a consequence of the opposition between the high and the low, and as an effect of deceived expectation (selling diplomas instead of the traditional pastry), but all shades of humour are readable exclusively by representatives of the Slovak linguistic-cultural community.
CONCLUSION
The analysis of the corpus of the studied memetic texts has shown that Slovak authors of memes on satirical social topics turn to universal precedent phenomena more often than to their own Slovak national precedent ones. Of the nearly 130 meme excerpts, approximately 20 referred to internationally known texts, mostly American or British pop culture films and series, musicians, and famous computer games, but also buildings or literary and legendary figures. Only 10 memetic texts were linked to Slovak linguo-culturemes, and 4 memes referred to Czech films, which are not only intensively received but also perceived as 'proper' in the Slovak environment due to their cultural and linguistic proximity.
In this case, the conclusions of our previous studies were confirmed, in which we noted that 'in the process of the analysis of media texts, researchers encounter two cases of forming an intertextual joke: one is based on the humorous essence of the used precedent phenomena (Gargantua, Tartuffe, Charlie Chaplin) and another one is based on the emergence of the comic effect due to the inconsistency of the connotative meaning of the precedent phenomenon and the situation (context) in which it is used, based on the discovery of a contradiction between essence and appearance of the phenomenon. In particular, this involves the cases of creating a humorous effect in describing a situation that is, by its very nature, not comic. It is a frequent contrast between the seriousness of the subject and the look at it by the comic optics' [11]. The social satire that prevails in memes today articulates mostly serious social topics, but this does not prevent the emergence of a comic effect that is not linked to the humorous context of the thematized event, but is created as an effect of deceived expectation and the unpredictability of the transformation and subsequent re-semantization of the verbal or visual component of the meme.
AUTHORS' CONTRIBUTIONS
Nina Cingerova - formulation of scientific hypotheses, excerption, analysis and interpretation of data from the corpus of memetic texts from the zomri.sk website. Irina Dulebova - formulation of scientific hypotheses, analysis of the current state of the research problem, synthesis of the findings.
"year": 2021,
"sha1": "e50efb231f3fb88b38a65921615b21ff570ed749",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125962029.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9e40583877aa7135c9c42ee7fd565cdace7a3a83",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
Understanding community perceptions, social norms and current practice related to respiratory infection in Bangladesh during 2009: a qualitative formative study
Background Respiratory infections are the leading cause of childhood deaths in Bangladesh. Promoting respiratory hygiene may reduce infection transmission. This formative research explored community perceptions about respiratory infections. Methods We conducted 34 in-depth interviews and 16 focus group discussions with community members and school children to explore respiratory hygiene related perceptions, practices, and social norms in an urban and a rural setting. We conducted unstructured observations on respiratory hygiene practices in public markets. Results Informants were not familiar with the term "respiratory infection"; most named diseases that had no relation to respiratory dysfunction. Informants reported that their community identified a number of 'good behaviors' related to respiratory hygiene, but they also noted, and we observed, that very few people practiced these. All informants cited hot/cold weather changes or using cold water as causes for catching cold. They associated transmission of respiratory infections with close contact with a sick person's breath, cough droplets, or spit; sharing a sick person's utensils and food. Informants suggested that avoiding such contact was the most effective method to prevent respiratory infection. Although informants perceived that handwashing after coughing or sneezing might prevent illness, they felt this was not typically feasible or practical. Conclusion Community perceptions of respiratory infections include both concerns with imbalances between hot and cold, and with person-to-person transmission. Many people were aware of measures that could prevent respiratory infection, but did not practice them. Interventions that leverage community understanding of person-to-person transmission and that encourage the practice of their identified 'good behaviors' related to respiratory hygiene may reduce respiratory disease transmission.
Background
Many human respiratory pathogens including influenza, respiratory syncytial virus, and the coronavirus that causes severe acute respiratory syndrome (SARS) are spread by coughing or sneezing [1][2][3]. These respiratory viruses contribute importantly to the global respiratory disease burden [4]. When people are infectious and cough or sneeze, these viruses can be expelled through aerosols and large droplets and spread virus from person to person [2][3][4][5][6].
Transmission prevention is accorded a high priority in Bangladesh for two reasons: the high respiratory disease burden and the high population density. Acute respiratory infection is a major cause of child mortality in Bangladesh, accounting for 21% of all deaths in children aged less than 5 years from 2000 to 2004 [7]. When pandemic influenza A (H1N1) occurred in Bangladesh in 2009, there was concern about its potential to spread rapidly due to the high population density and lack of respiratory hygiene. To contain or reduce the transmission of viruses such as pandemic influenza, the World Health Organization (WHO) recommended non-pharmaceutical interventions and public health messages on the maintenance of respiratory hygiene/cough etiquette [8]. The Centers for Disease Control and Prevention (CDC) identified certain respiratory hygiene measures, including: covering the nose and mouth when coughing or sneezing; using tissues to contain respiratory secretions; disposing of used tissues in the nearest waste receptacle; and washing hands with soap after having contact with respiratory secretions and contaminated objects. CDC recommended that if persons do not have a tissue, they should cough or sneeze into their upper sleeve, not into their hands, to stop the spread of germs [1,6]. Nevertheless, a recently conducted quantitative study in a rural and an urban site in Bangladesh found that very few people in these areas currently practice respiratory hygiene. Of the 1,122 observed coughing and sneezing events, in 907 (81%) people coughed or sneezed into the air, in 119 (11%) into their hands, and in 83 (7%) into their clothing [9].
To design an effective intervention to improve respiratory hygiene in a community, it is important to understand the community's perceptions of respiratory infections and the perceived barriers to practicing respiratory hygiene. Finding ways to communicate health messages that could prevent respiratory infections is difficult for several reasons: respiratory diseases tend to have multiple symptoms rather than a single symptom, e.g. coughing, fever, and difficulty breathing; symptoms of respiratory infections exist on a continuum from mild to severe; people perceive respiratory illnesses as "mild" events, even those with severe signs and symptoms; some symptoms are related to non-infectious illnesses such as asthma or allergies; and some respiratory viruses are spread through the air and are thus not easily traced [10][11][12][13]. The inability to recognize the severity or understand the transmission patterns of respiratory illness is a concern because, when viral respiratory infections occur within a community, people may not respond or may take ineffective actions to reduce person-to-person spread of the viruses. For example, they might try to avoid being exposed to cold weather or water rather than avoiding persons with respiratory symptoms. Improved respiratory hygiene, such as coughing or sneezing into the upper sleeve, could reduce the risk of respiratory illness, especially in densely populated countries like Bangladesh, because both crowding and poor ventilation allow respiratory infections to spread quickly from person to person [3]. In low-income countries like Bangladesh, disposable tissues are considered a luxury and are not commonly used.
In this study, we explored whether the determinants of respiratory hygiene in our study population matched our hypothesis. We hypothesized that persons in the community share traditional beliefs, culture, social norms and perceptions of illness risk regarding the transmission of respiratory infections that may not be compatible with germ theory. These determinants are important to address in a successful respiratory hygiene intervention. Our formative qualitative research study examined the above hypothesis, and this paper reports our findings about community perceptions of respiratory infections, why they occur, how they spread, and the preventive measures that people take to protect themselves and their families. We collected these data to facilitate the development of appropriate behavior change interventions to interrupt respiratory disease transmission.
Study sites
We conducted the study from December 2008 to September 2009 in an urban area of Mirpur, in the capital city Dhaka, and in rural Fulbaria, a sub-district of Mymensingh District, in north central Bangladesh. We selected these sites as they are typical of urban and rural areas in Bangladesh, and both had other ongoing International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) studies that allowed us to easily build rapport with the communities and collect data. The study site at Mirpur included three different socio-economic neighborhoods to increase the variation of the data. The study site at Fulbaria included two villages.
Sampling
We selected our participants purposively in both sites to include persons from different settings, ages, socio-economic statuses, and occupational, educational and religious backgrounds, according to the criteria described below for each qualitative method. Data saturation occurred after 16 in-depth interviews in the urban Mirpur area and 18 in-depth interviews in rural Fulbaria with adult males and females. We identified three different socio-economic neighborhoods in urban Mirpur and recruited participants from every tenth household. In each of the two selected villages in Fulbaria, we selected informants from every 10th household in purposively identified areas near four schools. We also selected two informants from Hindu families in each village.
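As an illustration of the systematic selection described above, the sketch below picks every 10th household from an enumerated list; the household identifiers and the list itself are hypothetical, not the study's actual enumeration data.

```python
# Minimal sketch of systematic sampling: starting from a fixed point in a
# neighborhood listing, every 10th household is selected. Illustrative only.

def every_nth_household(households, interval=10, start=0):
    """Return every `interval`-th household beginning at index `start`."""
    return households[start::interval]

# Example: 120 enumerated households in one hypothetical neighborhood.
households = [f"HH-{i:03d}" for i in range(1, 121)]
selected = every_nth_household(households, interval=10)
print(len(selected), selected[:3])  # 12 households: HH-001, HH-011, HH-021, ...
```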
We conducted eight focus group discussions in each of the study sites of Mirpur and Fulbaria. Each of these focus group discussions took place with a pre-selected group, including some of the most influential members of Bangladeshi society, to maximize the variety of data collected relating to message dissemination for the uptake of hygiene practices. The groups included adult males, adult females, school boys, school girls, male teachers, female teachers, paramedics and religious leaders. Eight to 11 people participated in each focus group discussion. When selecting focus group discussion participants, to ensure that there was no overlap with the in-depth interview informants within the same geographic area, we started the selection process where the in-depth interview sampling had ended. We then enrolled adult male and female informants from the subsequent 10th households. We prepared a list of all schools situated within or contiguous to both study sites and randomly selected two schools from each site; we selected male and female students of levels three to five from one school and male and female teachers from the other. We selected paramedics from among those from whom the informants commonly sought treatment, as noted during in-depth interviews. To recruit religious leaders, we identified all of the mosques within the two study areas where Friday prayers take place.
For our observations, we identified the three closest public markets in each of our study sites. We chose 6 h during the busiest trading times, between 7 AM and 10 AM in the morning and between 5 PM and 8 PM in the evening, to observe and record the behavior of shopkeepers and customers.
Qualitative tools and data collection
We employed in-depth interviews and focus group discussions to explore the community's perceptions, reported practices, and social norms related to respiratory infection and respiratory hygiene. Although the purposes of these two qualitative tools overlapped, the focus group discussions covered a larger number of participants and highlighted the vital role that each group could play in disseminating the messages. We used observations to explore and understand actual practice in natural settings. All three helped in triangulating our findings. A multidisciplinary team including public health specialists, social scientists, epidemiologists, and a communication specialist developed the qualitative guidelines for the in-depth interviews, focus group discussions and unstructured observations. Each tool was pretested in the field and revised (Appendix 1-3).
The data were collected by the first author, who is a sociologist, along with three anthropologists, all of whom had extensive experience in collecting qualitative data. Before data collection commenced, we thoroughly reviewed the research objectives, research tools and specific data collection techniques in order to collect quality data effectively and efficiently. During in-depth interviews, lasting from 35 to 60 min, and focus group discussions, ranging from 55 to 90 min, we used an interview guide that started by asking people to list all of the illnesses they could think of that they associated with respiratory functions. We also asked about their perceptions of the causes of respiratory illness, how respiratory infections are passed from one person to another, how disease transmission can be prevented, existing social norms, and current practices related to respiratory hygiene, and for suggestions of messages and channels to promote respiratory hygiene behavior. We also conducted unstructured observations in markets to see respiratory hygiene practices in public places. During observation, we took notes about customers' and shopkeepers' behaviors linked to respiratory hygiene (Appendix 3). Nevertheless, we did not count the events, as we conducted unstructured observations in extremely crowded marketplaces. Rather than quantify, we were more interested in observing people's actual respiratory hygiene practices, including: whether people covered their mouths during coughing or sneezing; if so, how they covered their mouths (with hands/upper arm/cloth); whether they turned away from others while coughing or sneezing; whether they spat on the ground; and whether they washed their hands after nasal cleaning or after coughing or sneezing into their hands.
Data analysis
We recorded the in-depth interviews and focus group discussions using audio recorders. We transcribed these audio recordings in Bengali and, to describe signs, symptoms and respiratory illness, we retained the local terminology. We manually coded the data according to our research objectives. After coding, we translated these data into English. We performed thematic content analysis in order to provide a descriptive account of our results. Though we analyzed each in-depth interview and each focus group discussion separately, in the results we have drawn inferences collectively from both types of data. Moreover, in-depth interview and focus group discussion findings were compared for consistency and were triangulated. No attempt was made to quantify study findings, as the number of informants interviewed was small. We followed a similar process for the unstructured observation notes.
Ethical consideration and informed consent
Before taking part in the in-depth interviews and focus group discussions, we asked adult participants to provide written informed consent. We asked school students to provide assent with parental consent. This study protocol was reviewed and approved by ICDDR, B's Ethical Review Committee.
Results
The majority (22/34) of our informants for in-depth interviews were aged 18-35 years; approximately one quarter were illiterate or could only sign their name. Over two-thirds (13/17) of female informants described themselves as homemakers (Table 1). In total, 144 participants took part in our focus group discussions. There were 70 adult males and 32 adult females, and 20 school boys and 22 school girls. Adult male focus group discussion participants were mostly street vendors or laborers at the urban site, and at the rural site most of them were farmers. At both study sites, adult female focus group discussion participants were homemakers.
In the analysis of the 34 in-depth interviews, 16 focus group discussions, and six unstructured observations, there were no notable differences in the findings between the heterogeneous groups. We found similar responses and behaviors related to respiratory infections between urban and rural residents, between different socio-economic groups, between Muslims and Hindus, and among school students, trained paramedics and lay persons. Therefore, we have provided a summary of the findings aggregated across all study participants.
Local terms for 'respiratory infection'
Our informants were not familiar with the Bengali translation of the term "respiratory infection" and did not have a specific Bengali word for this term. When asked to explain respiratory infection, informants used the phrase 'shas-proshassh jonito sangkromon', which literally translates as 'breathing and exhalation related infection'. The research team had to give detailed explanations and use hand gestures to indicate the focal areas of the body related to respiratory infection. When we asked informants to name some diseases transmitted during either breathing or exhalation, informants gave a wide variety of answers. Some answers, such as stomach ache or skin diseases, had no direct relation to respiratory functions (Table 2).
Perceived causes
Most of our informants associated transmission of respiratory infections with coming into close contact with a sick person's breath, cough droplets and spit; sharing a sick person's food or utensils; or allowing flies or mosquitoes that have had contact with human secretions (mucus/spit) to land on food. Focus group discussion data also supported these findings. A female teacher in a focus group discussion from the rural site stated: "I need to avoid close contact with a sick person. My sister-in-law had difficulty breathing and her daughter-in-law took care of her and the daughter-in-law became infected with difficulty breathing." Nevertheless, a few informants mentioned that respiratory infection could be contracted through blood transfusion, by genetic pre-disposition (heredity), by fate/luck and from contact with dead animals. A male teacher in a focus group discussion from the rural site attributed patterns of spread of infection to the direction the wind is blowing: "If someone doesn't cover his mouth and nose during coughing or sneezing, the germs come out from his body. Anybody can be infected with these germs if they are carried in the same direction as the wind. How far away he is does not matter. But if the wind is blowing in the opposite direction other people will not be affected." During in-depth interviews, when we asked informants how they had caught a cold during the previous 12 months, all of them linked catching a cold to imbalances of hot and cold, e.g. ambient temperatures or water temperatures. Specific responses included: exposure to excessive cold; change of weather from hot to cold, or vice versa; drinking cold water; and bathing in cold water. A 36-year-old male informant from the urban site said: "Winter is going and summer is coming. It is hot during the day and cold at night and the mixing of hot and cold weather brings on a cold."
Perceptions on preventive measures
Most informants reported that respiratory infections could be prevented by keeping a distance from a sick person and by avoiding sharing a sick person's utensils and food. Paramedics also told us that when patients came to them with a respiratory infection, they advised the patients to keep warm and stay away from cold water. After probing, four informants also mentioned that washing their hands could protect them from respiratory illness. Nevertheless, these same informants continued to link the idea of internal imbalances of the body with catching a cold. One 43-year-old female informant from an urban site said: "A person should not be in contact with cold water for a long time to avoid getting cold/cough."
Social norms and current practice
Informants from in-depth interviews and focus group discussions reported that there were no specific social norms related to respiratory hygiene. When we asked them what they and their community considered appropriate practices when someone has a cold, in order not to spread it to other people, they mentioned a variety of actions that they felt were 'good behaviors'. These included covering the nose and mouth with hands or a handkerchief; turning the face away during sneezing or coughing; avoiding spitting, coughing, or sneezing into the environment; avoiding close contact with sick people; and keeping the body neat and clean. Religious leaders told us that hygiene was related to cleanliness before prayer time five times per day and that people should sneeze only into their left hand. In general they endorsed saying "Praise be to Allah" (Alhamdulillah) after sneezing. We found from our unstructured market observations that, almost universally, people did not practice these 'good behaviors' related to respiratory hygiene that they had identified during interviews and focus group discussions.
Reported barriers to prevention
When exploring the perceived feasibility and effectiveness of handwashing to prevent respiratory infection, informants commonly stated that, if they did not wash their hands after coughing or sneezing into their hands, they could become ill. One adult informant from a rural site said: "Our hands are always dirty because of dust, which is everywhere. So without washing hands, anyone can be affected by any kind of respiratory disease." Informants thought that it was not feasible or practical to wash hands after every event of coughing or sneezing, especially when someone has a runny nose. A 30-year-old female informant from the urban site commented: "People don't wash their hands after sneezing and coughing. Is it possible to wash hands frequently? If you sneeze 100 times, will you wash your hands 100 times? But we should wash our hands before taking a meal."
Table 2. Signs, symptoms, and illnesses that informants associated with respiratory disease.
Messages and channels
After probing about how messages should be delivered and which channels of communication should be used, informants mentioned that awareness of respiratory hygiene behavior can be promoted by delivering messages related to their identified 'good behaviors', which could be disseminated through interpersonal communication, dramas, videos, television programs, paramedic-patient interactions, and the school curriculum. The teachers, imams and paramedics told us that they could contribute to improving respiratory hygiene by disseminating related health messages. One religious leader in a focus group discussion from an urban site said: "We can deliver information among people during Friday prayers, the special prayer day for Muslims." A female teacher in a focus group discussion from a rural site said: "We can deliver respiratory hygiene related disease messages among our students, and at the time of stipend when all guardians come to the school, and during mothers' assembly we can discuss with parents about respiratory hygiene."
Discussion and Conclusions
Our findings suggest that local understandings about who gets sick, why, when, how illness spreads and how it can be prevented are varied, and therefore could contribute to the transmission of respiratory infections. Responses from our participants highlighted a contrast in people's minds between contracting a respiratory illness and its transmission and prevention. We have linked local interpretations of disease transmission to both the cultural model of hot-cold imbalance and the bio-medical understanding of transmission of respiratory infections. Understanding communities' perceptions of how an individual's behaviors can be linked to infectious disease transmission can help us engage in meaningful communication regarding behavior change. Well-designed and targeted communication interventions are more likely to be effective if they are theory-based or if they can be linked to a theoretical underpinning of established determinants of behavior and behavior change [14]. We could use these findings to develop culturally compelling behavior change interventions to interrupt respiratory disease transmission [15].
Informants related their individual experience of catching a cold to the hot-cold imbalance concept, in which health is maintained through equilibrium, that is, by avoiding an internal imbalance in the body caused by an excess of either hot or cold. For example, keeping the body warm or avoiding cold elements can maintain internal equilibrium [16]. Our findings related to catching cold are similar to other anthropological and qualitative studies conducted on childhood acute respiratory infection during the 1990s. These studies suggested that people related the occurrence of acute respiratory infection to imbalances of hot and cold in the body, rather than to an infectious agent that can be transmitted from person to person [10,11,[17][18][19][20][21]. For example, Bangladeshi mothers avoid feeding cold foods to prevent a child from catching pneumonia [10], and Filipino mothers withhold breast milk from their infants after they have been exposed to cold weather, to prevent the child from catching a cold [21]. Identifying and understanding examples of these cultural perceptions could help in the development of communication messages to develop positive social norms related to respiratory hygiene behavior.
For transmission and prevention, most of the informants' responses aligned more closely with the bio-medical understanding of transmission that originated in germ theory [22]. For example, although informants may not know that Streptococcus pneumoniae and the influenza virus are among the organisms responsible for pneumonia and influenza, and that this is where the common names of the illnesses were derived from, our informants were aware that respiratory infections were contagious and mentioned avoiding the cough, breath, spit and blood of an infected person. Communication messages could build on our informants' understanding of effective prevention measures.
Regarding the prevention of respiratory infections, community members had some perceptions that matched the cultural model of hot-cold imbalance. Nevertheless, community members also had perceptions that aligned with the bio-medical concept. Though they perceived some 'good behaviors' related to respiratory hygiene that could prevent respiratory infections, we observed that they did not translate these into practice. Informants told us that covering the nose and mouth or turning the face away was a way to prevent the spread of infection. Nevertheless, almost all shopkeepers and customers we observed in the markets coughed into the air. In another study, in which school children's respiratory hygiene behavior was observed, in 956 (85%) of 1,126 events children coughed or sneezed into the air [9]. We speculate that the people we observed were not aware of, or did not value, how their behavior could affect their health and that of others. Also, many adults and young people do not have the habit of practicing respiratory hygiene, perhaps due to long-term exposure to an unhygienic physical and social environment. Communication messages that remind people to practice these 'good behaviors' on a daily basis could make positive changes in the overall environment.
A limitation of this study is that we collected data from only one urban site and one rural site. Nevertheless, the sites were typical of Bangladeshi communities, and we enrolled a heterogeneous mixture of informants from a variety of social groups. As our groups of informants had similar perceptions related to prevention and transmission, our findings from this formative study provide a foundation to design communication materials promoting respiratory hygiene practices that might be applicable in both rural and urban settings. Although we could not recruit informants from all geographic areas in the study sites for in-depth interviews and focus group discussions with adult males and females, we systematically selected participants from every 10th household to diminish selection bias. Focus group discussions were limited in their scope to fully explore complex beliefs. Nevertheless, we also conducted a number of in-depth interviews, which were more appropriate for exploring community beliefs. In focus group discussions, some participants were not vocal, and while the group facilitator tried to encourage them, this may have led to bias. Nonetheless, we triangulated our findings from in-depth interviews, focus group discussions and observations.
Implications
Our understanding regarding the perceptions about why a person catches a cold and how respiratory infections are transmitted can be used to frame communication messages. Health professionals could use local terms to explain transmission of respiratory infections from the bio-medical perspective to highlight how changes in behavior could prevent transmission. These messages could make people more conscious about respiratory hygiene and then motivate them to follow their identified 'good behaviors' on a regular basis. Since the informants indicated that it is not feasible or practical to wash hands after every event of coughing and sneezing, our communication message could be to ask people to cough and sneeze into their upper arm or sleeve. We suggest piloting these messages with a variety of communication approaches and channels with all community members to reduce respiratory infections.
Appendix
4. Please tell us something about the present norms, cultures and practices related to respiratory hygiene/diseases of your society.
5. In your community, what are the current practices when people sneeze or cough?
6. Do you think there is any relationship between handwashing and respiratory diseases? Why or why not?
7. What type of messages should be delivered to motivate people in your community to positively change their respiratory hygiene practices, and what type of channel should be used to disseminate these messages?
8. How can you contribute to improving respiratory hygiene practices in your home and community?
Only for paramedics
9. Do you give any advice to your patients to prevent respiratory infection? If yes, what do you advise?
10. If you don't already give advice to patients, would you be interested to share any message that could prevent transmission of respiratory infection?
11. What type of message would you feel comfortable sharing with your patients?
12. Do you use any posters or pamphlets in your clinic for communicating health messages?
Only for religious leaders
13. Is there any guidance or instruction given in Islam about hygiene in general or about the transmission of respiratory illness or its prevention?
14. Do you give any other health messages in the community?
15. Would you be interested to share messages related to respiratory hygiene in the community? | 2014-10-01T00:00:00.000Z | 2011-12-04T00:00:00.000 | {
"year": 2011,
"sha1": "025b42d3cb3ed60ab200ee536a95e730eae19b77",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-11-901",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ed7e985c1bef92952d4626090ac1c31c9566a23",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225151696 | pes2o/s2orc | v3-fos-license | Global transcriptomic profiling of microcystin-LR or -RR treated hepatocytes (HepaRG)
The canonical mode of action (MOA) of microcystins (MC) is the inhibition of protein phosphatases, but a complete characterization of their toxicity pathways is lacking. The existence of over 200 MC congeners complicates risk estimates worldwide. This work employed RNA-seq to provide an unbiased and comprehensive characterization of the cellular targets and impacted cellular processes of hepatocytes exposed to either the MC-LR or MC-RR congener. The human hepatocyte cell line, HepaRG, was treated with three concentrations of MC-LR or -RR for 2 h. A significant reduction in cell survival was observed in the LR1000 and LR100 treatments, whereas no acute toxicity was observed in any MC-RR treatment. RNA-seq was performed on all treatments of MC-LR and -RR. Differentially expressed genes and pathways associated with oxidative and endoplasmic reticulum (ER) stress and the unfolded protein response (UPR) were highly enriched by both congeners, as were inflammatory pathways. Genes associated with both apoptotic and inflammatory pathways were enriched in LR1000. We present a model of MC toxicity in which MC immediately causes oxidative stress, leading to ER stress and the activation of the UPR. Differential activation of the three arms of the UPR and the kinetics of JNK activation ultimately determine whether cell survival or apoptosis is favored. Extracellular exosome-related genes were enriched by both congeners, suggesting a previously unidentified mechanism for MC-dependent extracellular signaling. The complement system was enriched only in MC-RR treatments, suggesting congener-specific differences in cellular effects. This study provides an unbiased snapshot of the early systemic hepatocyte response to the MC-LR and MC-RR congeners and may explain differences in toxicity among MC congeners.
Introduction
Microcystins (MCs) are hepatotoxic heptapeptides produced by numerous cyanobacteria species. Though MC toxicity is generally associated with hepatotoxicity (Yoshida et al., 1998), effects have also been observed in other tissues (Alverca et al., 2009; Lin et al., 2016). Human exposure is primarily through oral ingestion (Massey et al., 2018). MCs are then absorbed through the small intestine, travel via the hepatic portal system and accumulate in the liver. There are at least 279 congeners of microcystins that differ in their structure and toxicity (Bouaicha et al., 2019), and these often occur as mixtures in blooms (Graham et al., 2010). Congeners mostly differ by substitutions of the L-amino acids at positions two and four (Harke et al., 2016). Due to their size and hydrophilic nature, most MC congeners require active transport through the organic anion transport proteins (OATP) (Fischer et al., 2005; Runnegar et al., 1995). However, more hydrophobic variants can potentially enter cells via direct diffusion across the cellular membrane (Vesterkvist and Meriluoto, 2003). Microcystin-LR (MC-LR) is the most common, best studied, and among the most toxic MC congeners. Less is known about microcystin-RR (MC-RR), which is also commonly found in cyanobacteria blooms (Diez-Quijada et al., 2019; Dyble et al., 2008) and co-occurs with MC-LR (Graham et al., 2010). MC-LR has a leucine and an arginine in the variable positions, while MC-RR has two arginines, resulting in hydrophobicity and toxicokinetic differences (Vesterkvist and Meriluoto, 2003).
Though MCs have been shown to have multiple intracellular effects, their canonical intracellular mode of action (MOA) is the inhibition of protein phosphatases (PP) 1 and 2A (Yoshizawa et al., 1990). PP inhibition causes hyperphosphorylation of cellular proteins, including cytoskeletal proteins, resulting in disruption of the cytoskeletal architecture, a loss of cellular integrity (Batista et al., 2003), intrahepatic hemorrhaging and eventual death (Yoshida et al., 1998). MC-LR and -RR have been shown to have similar inhibitory effects on PP1 and PP2A (Fischer et al., 2010; Hoeger et al., 2007), yet they differ by an order of magnitude in acute toxicity (Gupta et al., 2003). This is at least partially explained by toxicokinetic differences (Fischer et al., 2010); however, recent experimental evidence suggests that MC-LR may have molecular targets outside of PP1 and PP2A (Chen et al., 2006; Pereira et al., 2013), suggesting the possibility that cellular targets may differ among MC congeners.
Many cellular responses to MC exposure are dependent on the de novo production of proteins resulting from differential expression of genes (Takumi et al., 2010). In order to better understand the molecular mechanisms underlying cellular responses, several studies have measured MC-dependent changes in gene expression. For the most part, expression-based studies have been conducted using MC-LR and have targeted pathway-specific genes or proteins as indicators of perturbation (Christen et al., 2013). Collectively, these studies have identified several affected pathways; however, as these studies were conducted in different model systems with different exposure parameters, a holistic picture of the transcriptional response to MC-LR is still lacking. Fewer studies have attempted to characterize the transcriptional response and intracellular effects induced by MC-RR. MC-LR and -RR often co-occur and their cellular targets, outside of PP inhibition, may differ, leading to uncertainty in the risk posed by mixture exposures. Thus, there is a need for a more comprehensive characterization of their individual cellular targets and effects. The objective of the current work is to characterize the transcriptional response of a human hepatocyte cell line (HepaRG) to MC-LR and -RR. HepaRG cells were selected because they retain intact liver functions, express a number of cytochrome P450 and nuclear receptors, as well as microcystin-associated transporters (OATP1B1 and OATP1B3), and respond similarly to human primary hepatocytes upon toxicant challenge (Higuchi et al., 2014;Josse et al., 2008;Szabo et al., 2013). RNA-seq was used to provide an unsupervised evaluation of global gene expression in cells exposed to three concentrations of either MC-LR or -RR. This approach provides a means to substantiate existing targeted experimental evidence, find potential linkages among affected cellular processes, and identify new potential MOA and intracellular targets.
Cell culture and exposures
Four independent exposure experiments were conducted. In order to maximize replicate number within each treatment group, all individual replicates from a given treatment group were used for gene expression analysis. The number of replicates used in gene expression by experiment is defined in Table 1. Using replicates across experiments incorporated technical and batch variability resulting from different cell and MC lot numbers, RNA isolation, library development, and sequencing runs.
Differentiated HepaRG cells (Life Technologies, Carlsbad, CA, USA) were thawed in Working Medium (WM; Williams' medium E supplemented with glutamine and HepaRG thaw, plate and general-purpose medium supplement) according to the manufacturer's procedure (Life Technologies). Cells were resuspended in WM and cell viability was determined by Trypan Blue dye exclusion; cell suspensions were >85% viable.
Cells were seeded at 5 × 10⁵ cells/well onto a pre-wetted sterile flat-bottom collagen-coated 24-well plate (Life Technologies) and incubated overnight. WM was replaced with pre-warmed HepaRG Toxicity Working Medium (TM; Williams' medium E supplemented with glutamine and HepaRG toxicity medium supplement) and the cells were re-incubated until MC exposure on day 7. MCs (MC-LR or -RR; Calbiochem, La Jolla, CA, USA) were diluted in TM and used on the same day. Cells were exposed to 10, 100 and 1000 ng mL⁻¹ of MC-LR or MC-RR, or to solvent (1% methanol), in replicates for 2 h in a humidified 5% CO₂, 37 °C incubator. Cells were harvested and washed with 1 mL Hank's Balanced Salt Solution (HBSS; Thermo Fisher Scientific, Waltham, MA, USA), lysed in QIAzol™ Lysis Reagent (Qiagen, Germantown, MD, USA), and stored at −80 °C until RNA isolation.
Cell cytotoxicity
In vitro cytotoxicity was assessed microscopically and biochemically. Cells exhibit cytopathic effects (CPE) prior to apoptosis and at subcytotoxic concentrations, suggesting that microscopic examination is more sensitive; however, it yields only observational results. Cells were seeded at 4 × 10⁴ cells/well onto a pre-wetted sterile flat-bottom collagen-coated 96-well plate, cultured in a humidified environment at 37 °C overnight, and treated as described above. Cells were exposed to 10, 100 and 1000 ng mL⁻¹ MC-LR or MC-RR (same lot used within 24-well plates). At 6 h post exposure to MCs, cells were examined for CPEs (cell rounding, swelling/enlarging, clumping/grouping, blebbing, detaching, having refractile and amorphous shape, increased granularity, and enlarged ghost cells).
Following 48 h of further incubation, cells (control, n = 15; MC samples, n = 5) were washed five times with pre-warmed HBSS. After the addition of activated XTT substrate (XTT Cell Proliferation Assay, ATCC, Manassas, VA, USA), the plate was incubated for 4 h and mixed. Optical density was determined using a spectrophotometer (Molecular Devices SpectraMax M2, Sunnyvale, CA, USA). A linear mixed effects model (lme4 v1.1-19 (Bates et al., 2015)) with experiment as a random effect was fitted in R (R Core Team, 2017) to determine statistical significance (p < 0.05); p-values were obtained using the lmerTest package (Kuznetsova et al., 2017).
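The original analysis was run in R with lme4/lmerTest; for readers who want to reproduce this kind of random-intercept model, the sketch below fits an analogous model in Python with statsmodels. The data are simulated and the column names ("od", "treatment", "experiment") are hypothetical placeholders, not the study's actual data.

```python
# A minimal sketch of a linear mixed effects model with experiment as a
# random intercept, analogous to the lme4 analysis described above.
# Note: statsmodels reports Wald z p-values, whereas lmerTest uses
# Satterthwaite approximations, so p-values will differ slightly.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated XTT optical densities: 4 experiments x 2 groups x 5 wells.
rng = np.random.default_rng(1)
rows = []
for exp in range(4):                      # batch shift shared within an experiment
    batch = rng.normal(0, 0.05)
    for trt, shift in [("control", 0.0), ("LR1000", -0.3)]:
        for _ in range(5):
            rows.append({"experiment": exp, "treatment": trt,
                         "od": 1.0 + batch + shift + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

# Fixed effect: MC treatment; random intercept: exposure experiment.
model = smf.mixedlm("od ~ C(treatment)", data=df, groups=df["experiment"])
result = model.fit()
print(result.summary())  # treatment coefficient with its p-value (alpha = 0.05)
```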
RNA isolation
Cells were thawed in QIAzol lysis buffer (Qiagen, Hilden, DE) and further homogenized (Bullet Blender Storm 24 mixer mill, Next Advance, Averill Park, NY, USA) using the manufacturer's recommended settings. Approximately 250 μL of chloroform:isoamyl alcohol (24:1; Sigma-Aldrich Chemical Co., St. Louis, MO, USA) was added to each tube; the contents were mixed by inversion, vortexed, incubated at room temperature for 3 min, transferred to a 1.5 mL Heavy Phase Lock Gel microfuge tube (5Prime, Inc., Gaithersburg, MD, USA), incubated on ice (10 min) and centrifuged at 14,200 g (5 min, room temperature (RT)). The supernatant was transferred to a microcentrifuge tube (Eppendorf North America, Hauppauge, NY, USA), mixed with an equal volume of 70% ethanol, loaded onto an RNeasy® MinElute Clean Up kit 2.0 mL column (Qiagen) and processed according to the manufacturer's protocol, with DNase treatment (RNase-free DNase kit, Qiagen) prior to final cleanup and elution. RNA was quantified and its quality confirmed using an Agilent RNA 6000 Nano kit with a 2100 Bioanalyzer (Agilent Technologies, Inc., Wilmington, DE). Typical RIN scores were above 9. RNA eluates were stored at −80 °C.
Library preparation and sequencing
Sequencing libraries were prepared using the TruSeq Stranded mRNA Library Prep for NeoPrep kit (Illumina, San Diego, CA, USA), following the manufacturer's protocol. An input of 100 ng total RNA was used for each sample. Additional resuspension buffer was added to each library to obtain a final total volume of about 24 μL. The concentration of each library was determined using either the KAPA Library Quantification Kit for Illumina Platforms (Kapa Biosystems) or the Qubit dsDNA HS Assay kit (Thermo Fisher Scientific). Libraries were normalized to 10 nM and combined into pools of 8-16 samples. Dilutions of libraries for quantitation and pooling were done using 10 mM Tris-HCl, pH 8.5.
Prior to sequencing at the Michigan State University Research Technology Support Facility (MSU), the quality and quantity of the library pools were determined by MSU using a combination of the Qubit dsDNA HS assay, either the Caliper LabChipGX HS DNA or the Agilent Bioanalyzer High Sensitivity DNA assay, and the Kapa Illumina Library Quantification qPCR assay. Each pool was loaded onto one lane of an Illumina HiSeq 4000 flow cell and sequencing was performed in a 1 × 50 bp single-read format using HiSeq 4000 SBS reagents. Base calling was done by Illumina Real Time Analysis (RTA) v2.7.7, and the output of RTA was demultiplexed and converted to FastQ format with Illumina Bcl2fastq v2.19.1.
RNA-seq data analysis
Raw sequencing data were quality checked using FastQC (Brown et al., 2017) for read quality, GC content, presence of adaptors, over-represented k-mers and duplicated reads derived from sequencing issues, PCR bias, or contamination. Reads were then mapped, using BWA (Li and Durbin, 2009), to the human genome transcript references (GRCh38) from GENCODE (Harrow et al., 2012). The GENCODE comprehensive gene set was used for this analysis as it has more exons, greater genomic coverage and more transcript variants than the NCBI RefSeq in both genome and exome datasets (Frankish et al., 2015). Our in-house RNA-seq analysis pipeline, based on an improved version of EpiCenter (Huang et al., 2011), was then used to quantify abundance levels of individual transcripts and to identify DEGs. Read count data were normalized so that the average number of total mapped reads was the same for all replicates across all groups. The SVA package (Leek et al., 2018) was used to remove batch effects before differential gene expression analysis. To remove batch effects from the four different experiments, we first filtered out lowly expressed transcripts with a normalized read count <50 in all individual groups. We then converted the discrete read count data into continuous data by log2 transformation. To deal with the potential zero-count issue, we added 1 to the normalized counts before taking the log2 transformation. We then used the ComBat function from the SVA package to remove the batch effect. The batch-corrected data from ComBat were then converted back to discrete read count data by the inverse of the log2 function, i.e., the base-2 exponential function. All sequencing and meta data have been deposited in the NCBI GEO database (GSE147999). A 5% false discovery rate (FDR) was used as the cutoff for statistically significant transcripts. Functional annotation and enrichment analysis of DEG lists was conducted using both the Database for Annotation, Visualization and Integrated Discovery (DAVID) v6.8 (Huang da et al., 2009) and Ingenuity Pathway Analysis (IPA; QIAGEN Inc., https://www.qiagenbioinformatics.com/products/ingenuitypathway-analysis). An FDR cutoff of 0.05 was used to determine significance. Due to the unusually high replicate number, the fact that biological relevance does not track with the magnitude of expression (Evans, 2015; St Laurent et al., 2013), and to better fulfill the stated objectives of the study to identify genes, pathways, and processes affected by MC-LR and MC-RR, no fold change cutoff for DEGs was used. That being said, the discussion of our results is largely limited to functional enrichment analysis, which should mitigate the effects of including false positives.
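The count-preprocessing round-trip described above (library-size normalization, low-count filtering, the log2(x + 1) transform applied before batch correction, and the inverse base-2 exponential back-transformation) can be sketched in a few lines of Python. The ComBat step itself is left as a placeholder, since the study used the SVA package in R, and the counts below are simulated for illustration.

```python
# A minimal numpy sketch of the preprocessing round-trip, assuming simulated
# counts; the ComBat batch correction (SVA package in R) is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(lam=200, size=(1000, 12)).astype(float)  # transcripts x replicates

# 1) Normalize so every replicate has the same total mapped reads (the mean total).
totals = counts.sum(axis=0)
norm = counts * totals.mean() / totals

# 2) Filter transcripts with normalized counts < 50 in all groups (simplified
#    here to: keep a transcript if any replicate reaches 50).
keep = (norm >= 50).any(axis=1)
norm = norm[keep]

# 3) log2(x + 1) to obtain continuous data and avoid log of zero.
logged = np.log2(norm + 1.0)

# ... ComBat batch-effect removal would operate on `logged` here ...

# 4) Inverse transform (base-2 exponential) back to the read-count scale.
restored = np.exp2(logged) - 1.0
assert np.allclose(restored, norm)  # round-trip is exact without batch correction
```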
Cell viability
The LR1000 treatment group, and to a lesser degree the LR100 and RR1000 treatments, displayed morphological changes consistent with cytotoxicity. The LR1000 and LR100 treatments demonstrated a dose-dependent reduction in survival in the XTT assay (p < 0.05).
Expression among MC treated cells
In all treatments, the majority of differentially expressed genes (DEGs) were up-regulated (Table 1). A much greater transcriptional response was observed with the LR1000 treatment relative to either of the other LR treatments. In contrast, a comparable number of DEGs was identified in the RR1000 and RR100 treatments. Few genes were identified in the RR10 treatment, suggesting that this concentration had little effect. Due to the low number of DEGs in the RR10 group, it was excluded from further analysis; however, complete lists of DEGs for all treatment groups can be found in Supplementary Table 1.
The magnitude of expression increased with treatment level for both the LR and RR treatments (Supplementary Table 1). This was most notable in the LR1000 group, which had 84 genes with a >2-fold change, compared to 3 and 0 for LR100 and LR10, respectively. The greatest fold change in the MC-RR groups was 1.7-fold, in the RR1000 group.
Only eight DEGs overlapped among all three LR treatments (Fig. 1). The transcriptional response between the LR1000 and LR100 treatment groups was highly consistent, with 59% of LR100 DEGs in common with LR1000. Five of the 10 most highly expressed genes in the LR1000 and LR100 gene lists were the same, with the top four in the same relative order, further underscoring the consistency of the response. Despite the observed cytotoxicity in the LR100 treatment group, relatively few DEGs were identified, suggesting greater variability among replicate exposure experiments. The gene expression response of the LR10 treatment group was markedly different from the other LR treatments, displaying minimal overlap with either of them: 10% and 8% of LR10 DEGs overlapped with LR1000 and LR100, respectively.
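The overlap percentages quoted above are plain set arithmetic; a minimal Python illustration follows, using placeholder gene IDs rather than the study's DEG lists.

```python
# Set arithmetic for DEG-list overlaps; gene IDs below are illustrative only.
deg_lr1000 = {"ATF3", "JUN", "FOS", "GCLC", "GCLM", "PMAIP1", "BBC3"}
deg_lr100 = {"ATF3", "JUN", "FOS", "HSPA5"}

shared = deg_lr1000 & deg_lr100
pct_of_lr100 = 100 * len(shared) / len(deg_lr100)
print(f"{len(shared)} shared DEGs; {pct_of_lr100:.0f}% of LR100 DEGs overlap LR1000")
```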
In order to provide a more cohesive picture of the cellular response to MC-LR and MC-RR, enrichment analysis was conducted using both DAVID for gene ontology (GO) terms (Table 2) and IPA for functional categories (Supplementary Table 2) and canonical pathways (Table 3). Enriched categories largely overlapped between the MC-LR and MC-RR treatments and suggested that the production of reactive oxygen species (ROS) is a key mediator of both MC-LR and MC-RR toxicity. With the exception of the enrichment of complement-associated categories by MC-RR, no strong evidence of congener-specific responses was observed.
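As a rough guide to the statistics behind such term-enrichment analyses (not the exact DAVID or IPA implementation, which uses its own variants), the sketch below applies a one-sided Fisher's exact test per functional term, followed by Benjamini-Hochberg adjustment at FDR 0.05; the gene counts are illustrative.

```python
# Hedged sketch of term-enrichment testing: Fisher's exact test per term,
# then Benjamini-Hochberg FDR adjustment. Counts below are made up.
import numpy as np
from scipy.stats import fisher_exact

def enrichment_p(deg_in_term, deg_total, term_size, background_size):
    """One-sided Fisher's exact test for over-representation of a term."""
    table = [
        [deg_in_term, deg_total - deg_in_term],
        [term_size - deg_in_term,
         background_size - term_size - (deg_total - deg_in_term)],
    ]
    return fisher_exact(table, alternative="greater")[1]

def benjamini_hochberg(pvals):
    """BH-adjusted p-values via the standard monotone step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    q = p[order] * n / np.arange(1, n + 1)
    adj = np.empty(n)
    adj[order] = np.minimum.accumulate(q[::-1])[::-1]
    return np.clip(adj, 0, 1)

# Example: 500 DEGs against a 20,000-gene background, two candidate terms.
pvals = [enrichment_p(30, 500, 120, 20000), enrichment_p(8, 500, 300, 20000)]
print(benjamini_hochberg(pvals))  # terms significant if adjusted p < 0.05
```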
Discussion
There has been a significant amount of research aimed at characterizing the drivers of MC-induced toxicity in hepatocytes. Much has focused on specific pathways or processes, resulting in a somewhat myopic view of the hepatocyte response to MCs and leading to conflicting interpretations. We employed whole transcriptome analysis with the goal of providing a comprehensive and non-targeted snapshot of early hepatocyte responses to MC exposures. Because harmful algal blooms (HABs) are generally composed of mixtures of MC congeners, identifying similarities and differences in cellular targets and effects may have implications for estimating the full risk of MC mixtures. In order to identify potential differences among MC congeners, the transcriptomic responses of hepatocytes exposed to either the MC-LR or -RR congener were evaluated.
To some degree, the comparisons of congeners were confounded by differences in the cytotoxicity of the LR and RR concentrations. The LR10 group may serve as a means of discriminating toxicity-related from congener-specific responses, as no cytotoxicity was observed with this treatment. Comparisons of enriched functional and canonical pathways, as well as overlaps in identified DEGs and principal component analysis, suggest that the LR10 treatment is similar to both the MC-LR and -RR treatments.
Complement system
Complement-related canonical pathways were identified as enriched in MC-RR treatments by both DAVID and IPA analyses, but not in MC-LR treatments (Tables 2 and 3). Some dose-dependence was suggested based on the number of complement-related genes (27 genes in the RR1000 group vs 14 in RR100; Supplementary Table 1). Twelve genes were identified in both treatments, suggesting a consistent response. Up-regulation of the complement system may be related to the acute phase response (APR), which was the most enriched canonical pathway in the MC-RR groups (Table 3); however, the APR was also enriched in MC-LR groups in the absence of complement-related genes, suggesting this may not be the case. Though generally associated with immunity and defense (Sarma and Ward, 2011), complement also plays important roles in the response to toxin-induced liver damage. Members of the complement system have been shown to protect hepatocytes from apoptosis during post-surgical liver regeneration (Markiewski et al., 2009). The protective functions of complement may partly explain the large difference in the cytotoxic potential of MC-LR and MC-RR.
Extracellular exosomes
Extracellular exosome was among the most significantly enriched categories identified in DAVID analysis in the MC-RR treatment groups and was also observed, though less prominently, in the LR10 and LR1000 treatments. The consistency of the exosome-related transcriptional response across treatment groups suggests it may be a common and important hepatocyte response to MC exposure. To our knowledge, the MC-dependent release of exosomes has not been previously reported, though exosome release has been linked to many of the key processes associated with MC toxicity, such as liver injury and oxidative stress (Cho et al., 2018), lipotoxicity (Cazanave et al., 2011), ER stress and the UPR (Cazanave et al., 2011), and inflammation (Kakazu et al., 2016). The functional relevance of this is unclear; however, exosomes are known to act as pro-inflammatory extracellular signals, suggesting a potential mechanism for the immunostimulatory activity of MC. Alternatively, increases in exosome release following ER stress depend on several mediators of the unfolded protein response (UPR) (Kanemoto et al., 2016), suggesting exosomes may act to attenuate ER stress (discussed below), potentially by offloading misfolded proteins.
Table 2
Functional enrichment of MC-LR or MC-RR treated HepaRG cells. Functional enrichment of differentially expressed genes was conducted using DAVID. Results were trimmed to remove redundant entries (shaded rows). Values are Benjamini-adjusted p-values.
Regulation of protein serine/threonine phosphatase related genes
The inhibition of PP activity is among the most well-characterized effects of MCs, and congener-specific differences in PP targeting and inhibition have been previously reported (Honkanen et al., 1994; Pereira et al., 2013; Prickett and Brautigan, 2006). This is reflected in the transcriptional response, as the term "phosphoprotein" was consistently among the most enriched terms across all treatments in the DAVID analysis (Table 2). Little overlap in differentially expressed PPs was observed between the LR and RR groups (Table 4). In the MC-LR groups, a fairly consistent dose-dependent up-regulation of genes associated with PP1 and PP2, well-known targets of MC-LR, was observed. In contrast, genes associated with five different PP family members were differentially expressed in the MC-RR groups. The toxicological relevance of these observed differences is unclear given that PPs regulate numerous cellular functions; however, it is possible that the differential targeting of PPs between MC congeners plays a role in differences in the toxicological response among congeners (Olsen et al., 2006). It is unclear whether the differential expression of PP-related genes is a direct response to PP inhibition by MC or an indirect downstream effect. However, the fact that these are very early responses (2 h) suggests they may be direct effects of MC/PP interactions.
Oxidative stress
For both congeners, the transcriptional responses consistently pointed to the production of ROS and subsequent oxidative stress. This is consistent with previous studies that observed increases in ROS in hepatocytes following exposure to several MC congeners (Cazenave et al., 2006; Kujbida et al., 2008; Weng et al., 2007). Pre-treatment with antioxidants effectively eliminates MC-LR-induced hepatocyte apoptosis, intrahepatic bleeding and serum markers of liver damage (Weng et al., 2007), suggesting ROS and oxidative stress are important factors in MC effects. In the LR1000, LR100 and RR1000 treatments, the Nrf-2 canonical pathway was highly enriched (Table 3). Nrf-2 is a bZIP transcription factor that is activated following the production of ROS from diverse stimuli (Ma, 2013). Once activated, Nrf-2 binds to antioxidant response elements (ARE) and initiates a transcriptional response that includes up-regulation of genes involved in xenobiotic detoxification and antioxidant defense. Nrf-2 is considered a xenobiotic activated receptor (XAR), and its target genes overlap to a large degree with those of other members of this group that were also shown to be enriched across treatments, such as the AhR, FXR, and peroxisome proliferator-activated receptor (PPAR) (Ma, 2008). Nrf-2 target genes involved in maintaining intracellular GSH levels were up-regulated by both MC congeners (GCLC and GCLM, Supplementary Table 2); these proteins are known to be involved in detoxifying MCs (Pflugmacher et al., 1998). Other Nrf-2 target genes involved in the oxidative stress response, such as the glutathione peroxidases (GPX) and several aldehyde dehydrogenases, were also up-regulated across treatments, consistent with previous studies.
Though enrichment of the Nrf-2 oxidative response was observed in the RR1000 group, it was not among the top 30 enriched pathways, nor was it enriched in the RR100 or LR10 groups, suggesting some relationship to cytotoxicity. Interestingly, in both the RR100 and the LR10 groups, many cellular pathways associated with cytoskeletal structure were found to be enriched (Table 3). Disruption of cellular architecture is one of the most well-characterized effects of MC exposure (Gehringer, 2004). The enrichment of cytoskeletal architecture-related pathways in the absence of evidence of a strong oxidative stress response suggests that it either precedes oxidative stress or occurs at very low levels of it. Further, the lack of cytotoxicity in these treatments suggests that changes in structural integrity are not directly responsible for cell death or occur very early in the cytotoxic pathway.
ER stress
The development of ER stress has previously been observed following MC exposure and has been suggested to be an alternative MC MOA (Menezes et al., 2013). In the current study, ER- and ER stress-related terms and transcripts were enriched by both congeners (Tables 2 and 3). ER stress results when the accumulation of newly translated proteins outpaces the folding capacity of the ER. This condition is toxic to cells and is mitigated via the UPR, which is controlled by three transmembrane receptors, IRE-1, PERK, and activating transcription factor 6 (ATF6), that monitor the status of protein folding. MC-LR has been shown to activate all three UPR response arms (Christen et al., 2013; Zhang et al., 2020). Though the direct cause of MC-induced ER stress is unclear, ER stress is known to be induced by oxidative stress (Liu et al., 2018; Malhotra and Kaufman, 2007).
Gene expression changes detailed here, as well as those reported elsewhere, consistently point to the development of ER stress and activation of the UPR as significant determinants of cellular fate following MC exposure. There is conflicting evidence as to the consequence of MC-induced ER stress, with some studies suggesting a pro-inflammatory response (Christen et al., 2013), while others suggest a pro-apoptotic response (Menezes et al., 2013; Qin et al., 2010). This likely results from a significant degree of crosstalk among downstream pathways, overlap in pathway components, and the concurrent activation of multiple pathways downstream of different arms of the UPR. Consistent with this, evidence for both inflammatory and apoptotic responses was observed among treatments in the current study. Many inflammation-related canonical pathways were enriched across treatments (Table 3). Activation of the APR may provide a link between the immune response and ER stress, as it has been shown to occur via the ATF6 arm of the UPR (Zhang and Kaufman, 2008). Concurrent with inflammation, strong evidence of an apoptotic response was observed in the LR1000 and LR100 treatments. The PERK/CHOP arm of the UPR was highly enriched in the LR1000 treatment. CHOP initiates apoptosis through both the mitochondrial and death receptor pathways, as well as through the up-regulation of protein phosphatase 1 regulatory subunit 15A (PP1R15A/GADD34), which acts to intensify pro-apoptotic stimuli by re-initiating general transcription (Enyedi et al., 2010; Han et al., 2013; Marciniak et al., 2004; Yang et al., 2017). Up-regulation of CHOP target genes associated with both the mitochondrial (phorbol-12-myristate-13-acetate-induced protein 1 (NOXA/PMAIP1), p53 up-regulated modulator of apoptosis (PUMA/BBC3)) and the death receptor (DR5) pathways, as well as GADD34, was observed in the LR1000 treatment. Further, ATF3, JUN and FOS, all components of AP-1, a pro-apoptotic transcription factor activated downstream of ER stress, ROS production and MC-LR exposure, were among the most highly and consistently up-regulated genes in the cytotoxic LR treatments. The co-occurrence of indicators of inflammatory and apoptotic responses across treatments suggests that cells may simultaneously pursue multiple cellular programs, which may help explain the conflicting evidence.
Ultimately, cellular fate is dependent on the severity and duration of MC exposure and the differential activation of the three arms of the UPR. Several lines of evidence suggest that both the pro-inflammatory and pro-apoptotic programs converge on the stress-activated c-Jun N-terminal kinase (JNK), suggesting it is the key determinant of cellular fate. MC-LR induces JNK activation in a time- and dose-dependent manner (Sun et al., 2011), and JNK inhibition results in decreased responses associated with apoptosis, such as caspase activation, AP-1 binding, and DNA fragmentation (Wei et al., 2008). Under cytotoxic conditions, JNK is activated as part of the ER-dependent apoptotic response via the IRE-1 pathway of the UPR (Urano et al., 2000). Additionally, JNK activation of CHOP, up-regulated via the PERK arm, results in up-regulation of DR5 and the extrinsic apoptotic pathway (Guo et al., 2017). The transition between pro- and anti-apoptotic signaling is linked to the kinetics of JNK activation, with apoptosis associated with sustained JNK activation and cell survival with transient activation. This is regulated via the pro-inflammatory factor NF-κB, which inhibits sustained JNK activation, resulting in increased survival (Tang et al., 2002). Activation of NF-κB has been demonstrated downstream of MC-LR-induced ER stress, concomitant with up-regulation of pro-inflammatory factors such as IFN-α, as well as TNF-α, which is also associated with apoptosis (Christen et al., 2013; Zhang et al., 2013). Together, this suggests a mechanism whereby MC exposures promote ROS production and oxidative stress, leading to ER stress and activation of JNK and NF-κB downstream of the UPR (Fig. 2). Under non-cytotoxic conditions, an inflammatory and pro-survival response is favored via NF-κB inhibition of sustained JNK activation and up-regulation of pro-inflammatory genes. However, under conditions where ER stress cannot be mitigated, apoptosis is induced via IRE-1 activation of JNK through MAPK and p38 signaling and through PERK/IRE-1/ATF-6 up-regulation of CHOP and subsequent activation of apoptotic pathways. The suggestion of NF-κB/JNK as a pro/anti-apoptotic switch is supported by evidence demonstrating that NF-κB activation and DNA binding are dependent on MC concentration (Zhang et al., 2013).
Conclusions
Using a non-targeted transcriptional approach, we have provided a holistic picture of the early hepatocyte response to MC-LR and -RR exposure. Many of the enriched processes and pathways, as well as many of the specific DEGs, have been previously reported, underscoring the validity of our approach and interpretation. We demonstrate differences in the transcriptional profiles of PP-related genes between MC congeners, which may underlie congener-specific effects. Through functional enrichment analysis we have identified several previously unreported effects and congener differences. Among the most striking is the enrichment of complement-related genes in MC-RR treatments, but not MC-LR treatments, which may partially explain differences in toxicity between congeners. Genes related to extracellular exosomes were highly enriched by both congeners, suggesting a previously unreported pathway of MC-dependent extracellular signaling. For both congeners, the majority of evidence points, either directly or indirectly, to the induction of oxidative stress. We propose a model of MC toxicity that begins with oxidative stress and leads to ER stress and the initiation of the UPR. All three arms of the UPR converge on the activation of JNK which, depending on the severity of the MC toxicity, ultimately determines cellular fate through its interactions with NF-κB. Though this model has not yet been tested, it is supported by previous studies and may explain conflicting interpretations of the hepatocyte response to MC exposure.
Ethics statement
No human or animal subjects were used for this study.
Funding
This research received no external funding.
Disclaimer
The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of the U.S. Environmental Protection Agency. Any mention of trade names, products, or services does not imply an endorsement by the U.S. Government or the U.S. Environmental Protection Agency (EPA). The EPA does not endorse any commercial products, services, or enterprises.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 2.
A model of the hepatocyte response to MC exposure. MCs initiate the production of ROS, resulting in the accumulation of mis- and non-folded proteins in the lumen of the ER. Accumulation of these proteins results in ER stress and initiation of all three arms of the UPR. Depending on the severity of the MC exposure, cellular fate is ultimately decided by the kinetics of JNK activation. Sustained JNK activation leads to PERK/CHOP-mediated apoptosis via both the mitochondrial and DR pathways. Lower levels of MC exposure result in the up-regulation of NF-κB, which initiates a pro-survival/inflammatory response that inhibits sustained JNK activation.
"year": 2020,
"sha1": "185fe9f3e81c9ee18e6ef98a24daeff04ee68fe2",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.toxcx.2020.100060",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0cdce59fcd11c85cd3e02bdd9bb78f815326462",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
BATS (CHIROPTERA) IN THE COLLECTION OF THE ZOOLOGICAL MUSEUM OF LVIV UNIVERSITY, UKRAINE
Bats (Chiroptera) in the collection of the Zoological Museum of Lviv University, Ukraine. - I. Shydlovskyy, A. Zatushevsky, O. Kusnezh. - The theriological collection of the Zoological Museum of Ivan Franko National University of Lviv was amassed over 142 years. It holds over 3,800 exhibits, among which 343 (9 %) are bats (Chiroptera) belonging to 32 species. The bat collection is represented by study skins (254 exhibits), which are preserved in 22 special boxes. Stuffed and almost all fluid-preserved specimens of bats are presented only in the exposition. In addition, there are 6 skeletons and 11 skulls in the collection, which belong to at least 9 species of bats. The main part of the collection was gathered in Western Ukraine, and only a few samples originate from the South and East of Ukraine. In particular, the type series of the Khazarian serotine Eptesicus lobatus Zagorodniuk, 2009 was brought from Eastern Ukraine. Tropical species are also represented in the collection: Hipposideros caffer, Epomophorus labiatus, Pteropus vampyrus, and species of the genera Neoromicia and Rousettus. The bat collection was amassed during 1900–2015. It widely represents the bat fauna of the Carpathian region and the Transcarpathian lowland.
Introduction
The Zoological Museum of Ivan Franko National University of Lviv (ZMD) with its collections is a unique asset of natural science, which makes it possible to accumulate materials that cannot be collected in a short period of time. It also ensures the preservation of collection samples of different ages, as well as of animals collected in different parts of their range. At the same time, it allows processing materials related to rare and endangered species whose populations are too small to be investigated even by temporary extraction of individuals from the natural environment. Due to this feature, museums (collections) provide the opportunity to fully study and compare samples that cannot be obtained or collected during a course study or research project, which are often limited by a small timeframe or poor funding, or because the target objects are in different and often remote parts of the range.
One of these animal groups is the chiropterans, which are cryptic and active at dusk or during the night. Moreover, the places of their reproduction are so concealed that special equipment is often needed to find them. In addition, all representatives of this group of the mammalian fauna of Ukraine are listed in the Red Data Book of Ukraine (2009).
The purpose of this paper is to show the quantitative and qualitative changes in the collection of bats that occurred during the 15-year period since the publication of its first synopsis (Bashta, Shydlovskyy, 2001) and the characterization of the collection published during the study of rare representatives of this order (Kusnezh, 2014). The electronic database of mammalian collections is continuously updated for analysis (since 2010, in Excel format). It is a convenient tool for processing various data on individual museum objects and their series, as well as for searching and summarizing specific information.
The history of the collection's content and volume
Regarding the history of the mammalian collection, it should be mentioned that the theriological collection of ZMD has been enriched over 142 years and includes over 3,800 museum specimens, 343 or 9 % of which are bats (Chiroptera).
Analyzing the collection of bats, we can note its gradual enrichment in both quality and quantity. In particular, compared to 2001, when the revision of this collection was carried out, its abundance increased from 125 study skins and mounted specimens of 14 species to 194 museum samples of 25 species in 2010 (Zatushevskyy et al., 2010), and to 343 samples of 32 species by the end of 2016. This count excludes two unidentified specimens of Chiroptera, three unidentified specimens of the genus Neoromicia, two unidentified juveniles of Pipistrellus sp., and one specimen of Rousettus sp.
The collection is represented mainly by study skins that are stored in special cardboard boxes sorted by species (Fig. 1 a). In particular, museum samples of this type are represented by 258 study skins (104 of them with skulls) in 22 boxes.
In addition, a small number of mummies and fluid-preserved specimens is also stored in the stock collection (Table 1).
Collection from Ukraine
The majority of specimens in the bat exhibition were collected in Ukraine. In particular, bats from 7 of the 24 oblasts and the Autonomous Republic of Crimea are represented. However, the quantitative distribution of the samples' origin is not quite homogeneous. The largest collections came from Lviv and Zakarpattia oblasts (42.3 % and 31.1 %, respectively), whereas such administrative units as the Autonomous Republic of Crimea and Ivano-Frankivsk oblast are represented only by a single specimen each. There are no samples from Chernivtsi and Khmelnytskyi oblasts. As for the collection's record localities, the bulk of the collection is represented by specimens from the western regions, with only a few specimens from the southern (e.g., AR Crimea) and eastern regions of Ukraine (e.g., 8 items from Luhansk oblast). However, the specimens from Luhansk oblast are scientifically the most valuable part of the collection, including the type series of Eptesicus lobatus (Zagorodniuk, 2009, 2010) with a holotype and five paratypes (Table 2).
Chronology and geography of the collection
Chronologically, the museum's collection of bats was amassed during 1902-2015. The very first specimen of the collection, a greater noctule bat, is dated to December 1902 (Fig. 1 b). This is the only specimen dated to that time. This very specimen has repeatedly drawn the attention of local scientists since Prof. K. Tatarinov (1956) wrote about it as stored in the collection of Lviv University with the label "Opillya". However, on the stand of the only specimen available in our collection, there is a black-ink inscription on the left, which states the name of the species "Vesperugo vespexus noctula", its origin "18/12/1902 Dobrudza", and the note "czaszka osobno" - the skull separately (in Polish). To the right of this inscription is a stock-keeping unit, the Latin name "N. siculus," and the signature of K. Tatarinov made in pencil, apparently later than the original inscription. Between the Latin name and the professor's signature there is an illegible Polish inscription in a small and narrow hand, made in purple ink ("Polonensi?"). Therefore, the fact of the discovery of the greater noctule bat in the West of Ukraine remains obscure.
In the monograph "The Mammals of Western Ukraine" (Tatarinov, 1956), at the end of the essay the author writes the following on the greater noctule bat: "Apparently, this kind of noctule bat in the western regions of the Ukrainian SSR is very rare, but in Dobrudja, according to verbal evidence that we have, it is [a] common [species]". In our opinion, this is another confirmation that our specimen of the greater noctule bat comes from the territory of Romania, namely from Dobrudja, and therefore it was not discovered in western Ukraine. During 1909-1910, mummies of the Sundevall's leaf-nosed bat were received, brought by Jerzy Wodzicki from the Great Rift Valley, Kenya. The next enrichment of the collection took place only in 1935-1936, when Professor Jan Hirschler brought four specimens of Neoromicia sp. from Liberia, three of which are still unidentified, and one identified as Neoromicia nanus stampflii. Later on, the museum received a specimen of Nathusius' pipistrelle (Pipistrellus nathusii) collected during World War II by an unknown person in September 1942 in Greece.
The first collection of bats in the museum began to form in 1947, after World War II. However, until 1960 specimens were received only occasionally. T. D. Maznova was the first to collect chiropterans systematically in the caves of Ternopil oblast, in particular near Korolivka, Bilche Zolote, and Uhryn villages, in the late 1960s. After 1961, other collectors joined the collection process, and the bat collection increased to 88 specimens of 10 species within ten years. However, after 1970 there was a significant pause. The museum did not receive new samples of bats until the late 20th century (namely until 1999).
In terms of geographical distribution, as already noted, the bat collection is represented mainly by Ukrainian samples. Only 35 specimens were collected in other countries (continents): 29 in Africa, 1 in Southeastern Asia, 3 in Western Europe, 1 in Central Europe, and 1 in Madagascar; the remaining 292 specimens come from Eastern Europe (Table 3).
The new "era" in the bat collection's enrichment dates back to the beginning of the 21st century, when students began to conduct their coursework and graduation theses on bats under the guidance of Associate Professors Eugenia Srebrodolska and Igor Dykyy. The activity in this area increased and, as a result, dead bats found by students were eventually transferred to the museum's collection. In 1999-2000, only two specimens arrived in the collection - a Brandt's bat (Myotis brandtii) from the southern suburbs of Lviv city and a common pipistrelle (Pipistrellus pipistrellus) from Haivka village of Shatsk raion, Volyn oblast.
During 2001-2015, after the first revision of the collection (Bashta, Shydlovskyy, 2001), its volume increased significantly with 135 new specimens of 20 species.
Review on new samples
New samples obtained during 2001-2007 are characterized by a small number - up to four-five specimens per year. However, after 2008, the number of bat specimens that enriched the museum's collection increased to an average of 13 per year, with the highest number in 2014. The annual enrichment of the museum's bat collection over the past 15 years is presented in Fig. 2.
The largest number of study skins (47 items) belongs to specimens of the common noctule. One of them was passed to the museum in 2002; it was collected by V. Mysyuk at the Biological and Geographical Research Station of Ivan Franko National University of Lviv, which is located within Shatsk National Nature Park. Three other specimens were transferred from Uzhhorod in 2011 by Y. Zizda, while the rest of the specimens (43) were collected by I. Ivashkiv, O. Kusnezh, and M. Skyrpan in Lviv during 2011-2014. These researchers discovered a place of mass death of bats that occurred at the beginning of their spring migration. It was a hole in the roof of an old house, an entrance through which the bats would depart. The hole was located above a pipe into which a funnel usually drained rain and melt-water after winter. Young bats used to fall into that funnel while the bottom of the pipe was still frozen. Chiropterologists had repeatedly talked to the residents of this house about changing the shape of the drainage funnel, which was done only three years later.
However, during this time more than 100 specimens of the common noctule died. Part of this material is still being stored in the freezer chamber of the museum.
The second largest enrichment was by specimens of the serotine bat (Eptesicus serotinus). During the period of study, 28 specimens were obtained, almost half of which (13) were collected in 2010 by E. Stetskiv and I. Ivashkiv. One or two other specimens of this species, collected in Rivne and Lviv oblasts, were added to the collection as well.
In 2009, the museum's chiropterological collection was enriched with barbastelle specimens (five mummies) prepared by A.-T. Bashta and I. Ivashkiv and collected in the outskirts of Tarakaniv village, Dubensky district, Rivne oblast.
In 2012, one of the authors of this article (O. Kusnezh) enriched the collection of bats with four specimens of Bechstein's bat (Myotis bechsteini), three of which were collected in Ternopil oblast (Kalaharivka village, Gusyatyn district) and one in the Roztochya Biosphere Reserve (Ivano-Frankove, Yavorivsky district, Lviv oblast).
Valuable samples
Thus, the bat collection of ZMD widely represents the bat fauna of the Ukrainian Carpathians and the Transcarpathian lowland. Among the valuable samples of the collection, it is worth listing the materials from the exposition and stocks that evidence the presence of certain species of bats in Western Ukraine in the middle of the 20th century. Among them are the whiskered bat (Myotis mystacinus), the lesser mouse-eared bat (Myotis blythii), and the lesser horseshoe bat (Rhinolophus hipposideros) collected in 1947-1960, as well as the common bent-wing bat (Miniopterus schreibersii) (47 specimens) and a number of other species collected in 1961-1962 in Zakarpattia oblast.
Scientifically valuable is the type series of Eptesicus lobatus, which may serve for future examinations of the genus Eptesicus, which is widespread in Eastern Europe, as well as for morphological studies of the post-calcarial lobe of bats and for other research on the bat fauna of the Azov and Donetsk Uplands.
Equally interesting are collection specimens that represent the tropical fauna and are real evidence of certain species' presence within their range. These findings also pose new questions for researchers, in particular regarding the representatives of the genus Neoromicia, one of which, in our opinion, belongs to the species N. nanus stampflii. However, on the label of our specimen the species is listed as N. stampflii, i.e. the species identified in the "Mammal Species of the World" (Wilson, Reeder, 2005) as extinct. Nevertheless, our specimens may belong to three other species or subspecies.
"year": 2018,
"sha1": "ee61df24a71fc6df4d726bda6dc053d08511c60b",
"oa_license": "CCBYSA",
"oa_url": "http://terioshkola.org.ua/library/pts16-bats/pts16-15-shydlovsky-bats-zmd.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f032ce3ed77d63c19d49e9d38d6eefc2509e901f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
DEVELOPING BUT DISCRIMINATIVE THEORIES ON POLITICAL PARTIES
This article reviews and summarises the trends in the development of theories on political parties. In particular, the inherent and overwhelming Western focus of these theories is addressed. Through a discussion of contemporary trends in theories of political parties, the article describes the one-sidedness of the theories and their neglect of parties from other regions. However, because these theories are well established, the trend will continue, as the theories will be used as the benchmark for the development of parties in other parts of the world.
The main argument here is that theories on political parties are largely based and focused on Western countries. This trend is understandable, as the early and arguably most quoted works on theories of political parties, such as the ones by Duverger (1954), Neumann (1956) and Michels (1959), were based on European countries. Although attention has been given to parties from other parts of the world, the study of political parties is still very much Western-dominated (as explained in the subsequent sections).
The first section of this essay focuses on the main and contemporary themes of theories on political parties - it is explained there that contemporary discussions on political parties are heavily based on European countries. The second section elaborates on the literature on political parties, to demonstrate that the existing literature is heavily Western-based and only a very limited portion focuses on parties from other parts of the world. The third section focuses on party organization, and the fourth section discusses various typologies of parties. Finally, the fifth section concludes the argument.
The literature for this essay is derived from books and journals. Aside from books focusing on political parties, I also consult books on democratisation theories, given the positive correlation between democratic transition and the functioning of parties. I rely heavily on the works of Gunther, Montero, Linz and Richard Hofferbert - prominent political scientists whose latest works on political parties seem to capture the latest trends of the study. I also refer to the "older" works on political parties to look at the early and "traditional" theories. The journal that I refer to is Party Politics.
CONTEMPORARY ISSUES ON POLITICAL PARTIES
As Gunther, Montero and Linz argue, political parties are crucial for democracy as they are the "principal mediators between the voters and their interest" (Gunther, Montero, and Linz, 2002: 58). The parties are also the vehicles for voters to gather with others who have similar interests, and thus strengthen (although sometimes only in numbers) their position to press their demands. This is why the study of political parties is "an essential contribution to the study of democracy", and theories on political parties can enrich theories on democracy (Gunther, Montero, and Linz, 2002: 58). As democracy became increasingly popular with the fall of non-democratic regimes around the globe - particularly after the fall of Communism in Eastern Europe in the 1980s - the study of political parties became more popular as well.
Democratisation supports and enhances the demand for political participation. The rise of new political parties as a result of democratisation is an interesting additional aspect of the study of democracy. Klingemann and Fuchs state that one of the possible consequences of this demand is the diminishing significance of voting as the most institutionalised form of participation, which is strengthened when citizens feel that the contending parties are not sensitive to new demands (Klingemann and Fuchs, 1995: 18). However, the same feeling is advantageous for emerging parties, as they can claim to be more accommodating to the new demands and needs. Pridham notes that these parties can be the determining factor for consolidated democracy (Pridham, 1995: xii).
At the same time, another trend has emerged among the more established democracies, notably in Western states. This trend, dubbed the "decline of parties", is characterised mainly by the diminishing linkage between parties and the voters. Scholars have argued that there is waning partisanship and membership in parties, mainly in Europe (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 95). There is possibly a downturn in the strength of party identification as a result of, among others, mass education, the expansion of mass media, and the emergence of other forms of societal participation - or, to sum up in one word: modernisation (Schmitt and Holmberg, in Klingemann and Fuchs, 1995: 96). Thus, there has emerged a new generation or class of "new citizens" who need less political organization and therefore have fewer ties with political parties (Schmitt and Holmberg, in Klingemann and Fuchs, 1995: 96). This trend is seen as a nightmare for the more traditional political scientists who glorify partisanship and party membership as a strong sign of democracy.
The argument that parties are facing a crisis has been supported by other scholars. Hans Daalder distinguishes the crisis of party as: the denial of party (the body of thought that denies the need for parties and believes that parties are a threat to the good society), the selective rejection of party (among those who think certain types of parties are "good" and others "bad"), the selective rejection of party systems (the argument that particular party systems are "good" and others "bad"), and the redundancy of party (the notion that parties are a matter of the past) (Daalder in Gunther et al. (eds.), 2002: 39). Daalder argues that the different kinds of sentiment against parties have developed since the time of the formation of parties in Europe, and that critics of parties should exercise care when launching their criticisms (Daalder in Gunther et al., 2002: 39-41, 54-56). Jean Blondel points to the 1990s as the time during which "serious questions were raised about political parties, in many, if not all, Western European countries" - although he believes that the trend started in the 1970s (Blondel in Gunther et al., 2002: 233).
Despite the decreasing support for political parties, there is a significant amount of conviction that parties truly are important, and other scholars believe that they will remain inevitable. As Bryce notes, "no free country has been without them, and no one has shown how representative government could work without them" (Gunther et al., 2002: 3). Schattschneider argues that "modern democracy is unthinkable save in terms of political parties", and Stokes claims that "parties are endemic and unavoidable to democracy" (Gunther et al., 2002: 3). Gunther et al. also cited the outburst of comparative study on parties and the birth of the journal Party Politics as a sign that the importance of parties is in fact getting stronger, and that studies of parties should remain important in political science (Gunther et al., 2002: 3).
Contemporary studies of political parties have noted other "modern" trends. One of the strongest trends supports the notion that parties are somewhat decreasing in power, because people are increasingly becoming floating voters. Linz argues that the new democracies will have "fewer voters with a strong party identification, people are actually freer to choose but only the degree of loyalty is questionable" (Linz in Hadenius, 1997). Thus, it is arguably harder for parties to attract members, because there is a decreasing closeness in the relationship between being a party member and voting - voters feel they do not have to be a member to vote for a party. Furthermore, by not being a member of a particular party, voters feel more freedom in voting for another party in the next election, if they choose to do so.
The rapid growth of media has also formed a different kind of interaction between a party and its voters. Arguably, promotion and campaigning have been able to reach a wider audience thanks to radio, television, and the internet. Katz and Mair investigate the changes in the party press caused by the evolution of media - they note that television has required parties to choose their best personalities to be given broadcasting time, and these "newsworthy" party personnel can communicate directly with the public without the intervention or need for party organization (Katz and Mair in Gunther et al., 2003: 131). Thus, this new pattern of communication requires new varieties and levels of professional skills in political parties (Katz and Mair in Gunther et al., 2003: 131).
LITERATURE ON POLITICAL PARTIES
As mentioned at the beginning of the first section, one important theme in the literature on political parties is the fact that parties and party institutionalisation are central in the study of democratisation, and thus parties occupy a significant part of the literature on democratisation. Huntington points to elections as the backbone of democracy (Huntington, 1991: 109, 174), which means that there must be fully functioning parties in a democracy. The same argument was made by Linz and Stepan, who believe in the need for free and inclusive electoral contestation in a consolidated democracy (Stepan and Linz, 1996: 14). Mainwaring and Scully are also convinced that "building democracies is about building democratic institution", that an institutionalised party system is a must for democratic consolidation, and that parties competing in the election must be properly organised (Mainwaring and Scully, 1999: 1, 4-5, 21, 23, 27).
Gunther et al. started their book by discussing whether the literature on parties is sufficient for students of this field. They quoted a 1988 statement that since 1945 there had been 11,500 books on parties and party systems in Western Europe alone (Gunther, 2002: 2). They did admit that there was a waning of scholarly interest in political parties during the early 1990s, but claim that there was an "outburst" of comparative studies on parties, and the appearance of the Party Politics journal, which more than reawakened the somewhat flagging attention to the field (Gunther, 2002: 3). They believe that the study of political parties, although accused of being old-fashioned and out-of-date, is instead a challenging branch that has attracted a lot of renewed interest.
The list of literature presented by Gunther et al. shows that this study is very much Western-based. All the names they mentioned in the list were Europeans, demonstrating the lesser attention received by other parts of the world when it comes to the study of political parties. Indeed, Western countries are seen as "more established" and as having more experience with democracy - thus making them the first to encounter most contemporary "party issues" - while the countries that are still new to democracy are also still learning about it. The waves of democratisation (borrowing Huntington's term), although also hitting parts of Asia, have attracted less attention compared to, for example, Eastern and Southern Europe (Stepan and Linz, 1996).
The works based on the effects of democratisation in Asia have been country-based and do not explore political parties in much depth (Uhlin, 1997). Scholarly works on parties in Asia tend to focus on either individual countries or parties, and there is very little effort, if any, to distinctly theorise or categorise the parties in Asia as scholars did for Europe. Gunther et al. have acknowledged that parties in East Asia have played an important role in democratising their countries (Gunther et al., 2002: 58), but so far there are only a limited number of books on parties in, for example, Korea and Japan (see for example the works of Mikiso Hane and Junichiro Wada). Furthermore, compared to other regions, there is limited work on parts outside the Western countries in terms of efforts to generalise a group of political parties in a region (see for example Linz and Stepan's work in Stepan and Linz, 1996).
ORGANIZATION OF PARTIES
Benjamin Constant described a party as "a group of men professing the same political doctrine" in 1816 (Duverger, 1954: xiv). Initially these men will be drawn to a particular party by its programme, but later the party organization would also play an important part in uniting these supporters (Duverger, 1954: xiv-xv). Duverger, whose book was published in 1954, presented the basic elements of a party as: the caucus (also called a committee or a clique) - the small, exclusive, powerful group of notabilities who are only active around the election period; the branch, which continually extends its membership and remains active outside the election period; the cell, which has an occupational basis but is smaller than the branch; and the militia, or the army-trained support group (Duverger, 1954: 17-40).
A more modern and complicated organisation of political parties is described by Kenneth Janda and Tyler Colman, who believe that an ideal party "engage[s] in activities that have functions for society" - activities are what the parties do, while functions are what scholars see as the social consequences of those activities (Janda and Colman in Hofferbert, 1998: 195). They measured the organisation of a party by its electoral success, the breadth of its activities, and its cohesion (Janda and Colman in Hofferbert, 1998: 194). Electoral success is measured from votes won, seats won, and governments formed, while the "breadth of activities" is measured from the aspects of propagandizing ideas and programs, and providing for members' welfare (Janda and Colman in Hofferbert, 1998: 194-195). The propaganda aspect involves: passing resolutions and platforms, publishing position papers, operating party schools, and operating mass communication media (Janda and Colman in Hofferbert, 1998: 195). The welfare aspect contains: providing food, clothing and shelter to members from party resources; running employment services; interceding with government on members' behalf; providing basic education in addition to political education; and providing recreational facilities or services (Janda and Colman in Hofferbert, 1998: 195).
Janda and Colman also give detailed indicators of how to measure a party organisation - by looking at four components: complexity, centralization, involvement, and coherence (Janda and Colman in Hofferbert, 1998: 196). They break down the components further. Complexity is measured by looking at: structural articulation, intensiveness of organisation, extensiveness of organisation, frequency of local meetings, maintaining records, and pervasiveness of organization (Janda and Colman in Hofferbert, 1998: 196). Centralisation is measured by: nationalisation of structure, selecting the national leader, selecting parliamentary candidates, allocating funds, formulating policy, controlling communications, administering discipline, and leadership concentration (Janda and Colman in Hofferbert, 1998: 196-197). Involvement involves: membership requirements, membership participation, material incentives, purposive incentives, and doctrinism (Janda and Colman in Hofferbert, 1998: 197). Coherence or factionalism involves: ideology, issues, leadership, and strategies or tactics (Janda and Colman in Hofferbert, 1998: 197).
One of the aspects of party organisation that has attracted a significant portion of scholars' attention is partisanship. The debate on partisanship has again concentrated on Western parties. Duverger differentiates the degrees of participation as: electors (easily measurable by the number of votes), supporters ("something more than an elector and something less than a member"), and militants (active members, the executives of the party) (Duverger, 1954: 91-116). A large portion of the study in this area has focused on the comparison between American and Western European parties, as described by Schmitt and Holmberg, and on how there are differences between the origins of partisanship in the two regions, which required that there should be different systems to measure partisanship in the different areas (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 98). The concern over the decline of partisanship (as briefly mentioned in the first section) has been intensified by the argument that such a trend could result in political apathy, greater sentiment against the prevailing political order, and political cynicism and distrust - but at the same time it provides opportunities for new parties (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 100).
The degree of partisanship can be roughly measured by looking at the size, age, ideological position, and political family of the party (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 118). Scholars initially found that identification with larger parties grows stronger as people age, whereas bonds with smaller parties get weaker and tend to diminish with age (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 118). This is because it is easier to switch from one small party to another than from one big party to another (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 118). However, Schmitt and Holmberg found only weak support for the age argument, and that young parties do not necessarily have less time or opportunity to build support (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 119). In terms of the age of parties, because it takes time to build stable ties with a party, older parties tend to have a more stable group of identifiers (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 118). Ideological position also determines the degree of attachment, as the more extreme the ideology of a party, the more likely it is to attract "true believers" (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 118). In terms of political family, there are indications suggesting that parties with a traditional working-class or middle-class electorate (socialist or Christian-democratic parties, for example) receive greater support, as people identify emotionally with them (Schmitt and Holmberg in Klingemann and Fuchs, 1995: 118).
Lawson identifies different party linkages as: participatory, electoral, clientelistic, and directive (Widfeldt in Klingemann and Fuchs, 1995: 135). Directive linkage is relevant only to dictatorship or authoritarian regimes, and electoral and clientelistic linkages can be summarised as representative linkages (Widfeldt in Klingemann and Fuchs, 1995: 135). When a party forms linkages by getting ordinary people interested or actively involved in politics, the linkage is called participatory linkage (Widfeldt in Klingemann and Fuchs, 1995: 135). When the linkage is performed by representing citizens, it is called representative linkage (Widfeldt in Klingemann and Fuchs, 1995: 135).
One of the most important indicators of the linkage status of a party is its membership strength - no party could claim to have a healthy linkage status if it does not attract members, and parties with only a few members are limited in functioning as a participatory linkage (Widfeldt in Klingemann and Fuchs, 1995: 136). Defining someone as a party member is not easy, as parties have different definitions, but generally a member has a certain degree of active interest and is formally enrolled as a member (Widfeldt in Klingemann and Fuchs, 1995: 136). To continue the notion that there is a decline of parties (notably in Europe), Richard Katz and Peter Mair note that the elements of the parties outside public office are disappearing - meaning the "faces" of the party in central office and the party on the ground are withering away (Katz and Mair in Gunther et al., 2002: 126). Yet another fact supporting the notion that parties are declining is that, among thirteen long-established democracies in Western Europe, party membership fell from an average of almost 10 percent in 1980 to less than 6 percent at the end of the 1990s (Katz and Mair in Gunther et al., 2002: 126). However, at the same time, there seem to be indications that party memberships, although declining in percentage, are in fact empowered by parties opening up decision-making procedures, including candidate and leadership selection processes, to general party members (Katz and Mair in Gunther et al., 2002: 126-127). The trend of internal democratisation has been one of the subjects of interest for other scholars such as Rachel Gibson and Robert Harmel (Gibson and Harmel in Hofferbert, 1998: 211-228). Other research has also indicated that there is a "widening participation" in the process of candidate and leadership selections in parties, and that general members have a greater say in these processes (Hasan in Le Duc, Niemi and Norris, 2002: 117, 123).
If it is true that the organisation of a party is crucial for its members to maintain support for it, then it is essential for parties to develop an organisational system that is preferred by members and supporters. After all, if party loyalty is declining in Europe, the parties there need to be "reminded" that their members are the determinant of their survival. As Hofferbert claims, "much of democratic control rests not only on voters' ability to make meaningful electoral choices predictive of policy performance, but also on the ability of voters to inflict retrospective electoral punishment for party failure" (Hofferbert, 1998: 7). That applies to both the party in office and the party on the ground, meaning that not only do the party leaders have to be successful in representing the members' interest, the party on the ground has to be well organised as well to appeal to voters.
PARTY TYPOLOGIES
Wolinetz argues that classifying parties is "surprisingly" difficult - "students of political parties have typically worked around with classificatory schemes, employing them where they are useful and ignoring or omitting them where they are not" (Wolinetz in Gunther et al., 2002: 138). He argues that party classification has been so limited because the comparative study of political parties is a mainly West European venture, and efforts to include parties from other parts of the world have not been comprehensive (Wolinetz in Gunther et al., 2002: 138). Furthermore, there is more attention to party systems than to parties, their organisation, or the ways in which they should be classified (Wolinetz in Gunther et al., 2002: 138).
Richard Gunther and Larry Diamond stated two reasons why the existing typologies are still not comprehensive enough. Firstly, along the same line as Wolinetz, Gunther and Diamond believe that the problem is caused by the fact that the typologies were based on studies of West European parties over the past century and a half (Gunther and Diamond, 2003: 168). The parties in other parts of the world were faced with different "social and environmental situations" and thus have different features - even parties in the US do not fit the typologies (Gunther and Diamond, 2003: 168). The second problem with the typologies is that they are based on criteria that are too wide, and there has been no effort "to make them more consistent and compatible with one another" (Gunther and Diamond, 2003: 169). Duverger (1954) presented one of the first typologies, which differentiated parties as mass parties and cadre parties. In terms of members, mass parties concentrate on their number, while for cadre parties what matters is the quality of the members (Duverger, 1954: 63-64). In mass parties there is a formal machinery of enrolment for members, which entails tasks and subscriptions to pay, while in cadre parties admission is accompanied by no official formalities and the periodic subscription is replaced by occasional donations (Duverger, 1954: 71). For cadre parties, "the adherent's activity within the party can determine the degree of participation" (Duverger, 1954: 71). Mass parties depend on the number of members to accumulate funds for their operations, while cadre parties rely on the quality of their candidates to attract financiers (Duverger, 1954: 64). Neumann (1956) presented a differentiation between parties of individual representation and parties of democratic (mass) integration. Neumann thinks that while initially parties were mainly the vehicle of individuals' interests, modern individuals who are part of the "rising, self-conscious middle class that fought for liberation from the shackles of a feudal society" were soon striving to re-integrate into the new society, and the parties are the means of that process (Neumann, 1956: 403-404). Membership activity in parties of individual representation is limited to voting and nothing else between election periods, and candidates selected for public office have no accountability or responsibility to the party members (Neumann, 1956: 403-404). On the other hand, parties of integration have an increasing influence over all aspects of an individual's life (Neumann, 1956: 403-404). Panebianco (1988) pointed to Kirchheimer's catch-all party classification, which claims that Duverger's mass parties have evolved into an organisation that has opened up to a much wider range of social groups as members (Panebianco, 1988: 263). Panebianco believes that Kirchheimer's analysis actually brought to the surface the increasing professionalism of party organizations (Panebianco, 1988: 264). He then builds upon this fact to construct his typology of mass bureaucratic parties (membership party, strong vertical organizational ties, financing through membership) and electoral-professional parties (central role of professional tasks, electoral party, weak vertical ties, financing through interest groups and public funds) (Panebianco, 1988: 264).
Koole updated and modernised the definition of cadre parties with his modern cadre parties. Koole (1992, 1994) drew his definition of the characteristics of the modern cadre parties from the Dutch political parties, which enlist only a small percentage of their supporters as members. The characteristics of these parties are, among others: a low member/voter ratio, maintaining the structure of a mass party, and reliance for financial resources on a combination of public subsidies and the fees and donations of members (Wolinetz in Gunther et al., 2002: 141-142).
However, what has been the focus of a lot of attention is Katz and Mair's typology of the cartel party. Wolinetz argues that Katz and Mair's typology is an addition to Neumann's typology (Wolinetz in Gunther et al., 2002: 148). Cartel parties' characteristics are defined by the relationship between the party and the state - after parties find themselves "vulnerable to the vagaries of the electorates who have detached themselves from previous political moorings" (Wolinetz in Gunther et al., 2002: 148). As members' loyalty decreases, parties open themselves to state subsidies for financial support. This in turn influences party leaders to limit competition and concentrate on the new resources (Gunther and Diamond, 2003: 169). The subsidies to parties and their special access to state-regulated channels of communication are a major help for the parties in maintaining their position, rather than sheer size and membership commitments (Klima in Hofferbert, 1998: 84). Amidst the significant extent of discussion on the cartel party, Michal Klima however argues that no pure cartel party exists (Klima in Hofferbert, 1998: 84).
Steven Wolinetz himself presented a typology based on the orientation of parties: whether they are policy-, vote-, or office-seeking. He emphasised that these orientations or goals are not mutually exclusive, and that it is more about the priority of the particular party (Wolinetz in Gunther et al., 2002: 150). The characteristics of the different types are shown in Table 1.
David Olson presented a typology specific to new democracies in Europe. He used Duverger's typology of cadre parties, but argues that for the leaders of the cadre parties in new democracies, democracy is a "preferred condition", and that the cadre party appeals more than the mass party as they need to concentrate on votes rather than on building a mass organization (Olson in Hofferbert, 1998: 23). As a strategy for elections, parties in new democracies tend to differentiate themselves into historical and post-transition parties (Olson in Hofferbert, 1998: 24). The historic parties tend to have an existing group of supporters and a structured relationship between voter and leader, while the new parties tend to appeal to the floating electorate and have no pre-existing organization (Olson in Hofferbert, 1998: 24). Gunther and Diamond assessed and criticised the existing typologies, presenting their own typology, which is based on three criteria: formal organization, programmatic commitments, and strategy and behavioural norms (tolerant and pluralistic, or proto-hegemonic in its objectives) (Gunther and Diamond, 2003: 171). Their thirteen party types are grouped into: elite-based parties, mass-based parties, ethnicity-based parties, electoralist parties, and movement parties (Gunther and Diamond, 2003: 172). The elite-based parties refer to "those whose principal organizational structures are minimal and based upon established elites and related interpersonal networks within a specific geographic area", and the types of parties that belong to this group are the traditional local notable party (the first type of party to emerge, simple in organisation, and based on traditional personal relationships) and the clientelistic party (parties that emerged in industrialised and urbanised societies; a confederation of notables with geographically, functionally, or personalistically based support; and a weak organisation) (Gunther and Diamond, 2003: 175-176). The mass-based parties emerged as "a manifestation of the political mobilisation of the working class in many European polities", and this group consists of: denominational (have a large base of due-paying members, hierarchically structured, base their programmes on religious beliefs), fundamentalist (seek to reorganise the party around strict religious doctrines), pluralist-nationalist (have mass membership, extensive organisation, supporters belonging to a distinct national group), ultranationalist (prioritise the nation above individuals, admire the use of force, selective recruitment), Leninist (aim to overthrow the existing system and have a closed structure based on semi-secret cells), and class-mass (the centre of power is in the executive committee or secretariat, with an open and tolerant stand) parties (Gunther and Diamond, 2003: 178-183). The ethnicity-based group consists of: the purely ethnic party (seeking to gather votes based on a particular ethnic group) and the congress party (an alliance or coalition of ethnic parties that appeals to national unity and integration) (Gunther and Diamond, 2003: 183-184). The electoralist group consists of: the catch-all party (shallow organisation, vague ideology, and overwhelmingly electoral orientation), the programmatic party (modern, pluralist, thinly organised, its main function being the conduct of election campaigns, with a coherent ideological agenda), and the personalistic party (only a vehicle for personalities to win elections) (Gunther and Diamond, 2003: 185-188). The group of movement parties consists of: left-libertarian (reject the high status of economic issues, open membership, strong commitment to direct participation) and post-industrial extreme right (emphasise self-affirmation, informality, and libertarianism) parties (Gunther and Diamond, 2003: 188-189).
TOWARDS A MORE INCLUSIVE DISCUSSION
The study of political parties is definitely a Western-dominated area. Since the early development of theorisation on political parties, Western scholars have focused their attention on West European parties. Countries in Western Europe have more experience with democracies and parties and thus became the examples for less-developed countries. For scholars who are interested in studying parties from other parts of the world - although there are inevitable differences in the circumstances in which the parties have developed - there are aspects of Western studies of political parties that can be utilised, as I mention below.
Political parties are inevitable in democracies. A fully functioning free electoral system is crucial for the functioning of democracy. Although partisanship and membership have arguably declined as a result of modernisation, there is also a growing internal democratisation within the parties, whereby members are given more power to influence decisions in the parties. This trend is likely to continue as citizens increasingly become floating voters, and they may make the decision of which party to support based on which party gives them the most influence.
It would also be useful to draw from the experience of Western countries in terms of party organisation. Because they are more advanced, the Western parties are arguably more sophisticated compared to their counterparts from other regions of the world. The range of activities and indicators of party organisation presented by Janda and Colman can be used for any party to determine how developed it is. The degree of partisanship, and the indicators to measure it, presented by Schmitt and Holmberg are also useful for any party or party enthusiast to utilise. Categorising parties has been difficult. There are various aspects to consider when developing a typology; thus most of the typologies are not comprehensive and cannot be used for parties from other regions. The elaborate typology presented by Gunther and Diamond looks promising for including parties from other regions.
Although the discussion on political parties is still Western-dominated, more attention needs to be shifted to include other regions. The recent development of political parties in Indonesia, for example, deserves more attention from political scientists. Western scholars are not in denial about the fact that they have been too Western-oriented, and the typology by Gunther and Diamond is promising in terms of the effort to develop a more comprehensive approach to political parties. It remains to be seen, however, how much more attention will be given to parties outside the Western countries in the study of political parties.
"year": 2010,
"sha1": "f791b0efecbd6636143dcf0005492cfaa61604f4",
"oa_license": "CCBYSA",
"oa_url": "http://jurnal.unpad.ac.id/sosiohumaniora/article/download/5441/2803",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f791b0efecbd6636143dcf0005492cfaa61604f4",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Enhanced Performance of Isolated Wind-Diesel (IW-D) Hybrid System feeding Heavy Load under various Operating Conditions
Objectives: To check/validate the stiffness of the IW-D Hybrid System by applying a heavy load of 150 hp at various loading conditions; to implement remedies to ill/operating conditions like frequency runaway and varying wind speed and to study the effect of these conditions on the overall system as well as on the heavy load; and, to improve system performance, to incorporate a suitable controller with the pitch angle control system. Method/Statistical Analysis: The electromechanical dynamics of the various large electrical machines are represented by their full-order models. The models of the synchronous machine (7th order) of the diesel genset, the SEIG (5th order) and the heavy load (5th order) are simulated to obtain power and voltage dynamics. The system dynamics consist of higher-order differential equations, which are solved by converting them into simpler algebraic equations relating current and voltage; these are solved in a short time using phasor simulation in the MATLAB/Simulink environment. Findings: Hybrid power systems are becoming popular because of their greater efficiency and balance of energy supply. Due to many advantages like ruggedness, low cost and low maintenance requirements in contrast to other electrical machines, a heavy load like a 3-φ squirrel-cage induction motor constitutes a major part of the total electrical load on any power system. From an exhaustive study of the literature, it is found that very few authors have worked on such a system, and the performance of the heavy load is either not considered or, if taken into account, not considered in detail. In this paper, an IW-D Hybrid Power System feeding a heavy load is considered to analyze/check the stiffness of the IW-D Hybrid System. A few ill conditions, like frequency runaway and varying wind speed, affect the system functioning and have a considerable impact on the heavy load; this is reported in a few papers, but the remedies are either not implemented or, if given, the ill effects of these problems on the heavy load are not considered, which needs great attention. Therefore, the solutions to the above problems are presented in this work. Application/Improvement: The dynamic performance of the modified IW-D Hybrid System has been validated and checked in the context of ruggedness by applying a heavy load of 150 hp at normal and overload conditions. The dynamic behavior of the IW-D System as well as the heavy load has been improved for frequency runaway. Further, a PI controller including a pitch servo is implemented to control the output of the SEIG driven by the wind turbine in case of varying wind speed, which improves the dynamic performance of the system.
Introduction
The depletion of conventional resources has led to the adoption of non-conventional energy sources like wind, solar, etc., required for the generation of clean energy, thereby reducing the level of pollution. Most developing countries rely on these resources because of their availability in abundance, including in grid-isolated areas 1,2 .
The energy harnessed from wind has been used since ancient times for various purposes, such as water pumping and flour milling. Wind is a useful resource for power generation because it is cheap, easily available in remote areas, and does not cause environmental problems, which is its main advantage. Grid-isolated areas benefit greatly from connecting diesel generating units in parallel with wind generation, thus ensuring continuity of supply [3][4][5] .
The generation of electricity using wind turbines is suitable in remote areas, which are generally grid-isolated. A grid-isolated power system is more advantageous than a grid-connected system because, in grid-connected systems, wind power generators must stay connected to the grid during fault conditions according to the grid codes, which creates disturbances in the wind turbines. The wind turbine may also get damaged due to the stress on the rotor shaft, because the electromagnetic torque oscillates at double the grid frequency 6,7 .
A lot of work is going on in the area of FACTS devices 8 for reactive power compensation, but these devices have many drawbacks: they introduce harmonics into the power system, which affect sensitive equipment, and they deteriorate the performance of heavy loads connected in the power system, whereas a diesel genset provides reactive power without the problems caused by FACTS devices 9,10 .
The best-suited generator for wind energy generation in remote grid-isolated areas is the Self-Excited Induction Generator (SEIG), as it is less expensive and more robust, has better transient performance and requires less maintenance. A wind turbine drives the three-phase induction machine, which is operated in self-excitation mode by connecting a capacitor bank in parallel to its stator terminals [11][12][13] . The diesel generator, consisting of a synchronous generator and a diesel engine, keeps the balance of the power supply when there is a change in load or in wind speed, which ultimately changes the wind turbine generation 14,15 .
Loads are of different types: industrial or heavy loads like water pumps and compressors, which are responsible for causing voltage and frequency fluctuations in the power system; domestic loads such as lighting and heating, which are not much affected by disturbances in the power system; and electronic loads such as radar, television and many more, which are very sensitive to disturbances in the power system 16 . Many authors have worked on grid-connected systems, but little work has been done on an isolated Hybrid System feeding a heavy load.
Many shortcomings like frequency runaway and variable wind speed have been raised, but no technical solution has been implemented to overcome the mentioned problems. The impact of frequency runaway and variable wind speed on the diesel generator has not been taken into consideration. Further, the performance of the heavy load has either not been considered or, if taken into account, not considered in detail during these ill-conditions 16,17 .
In this paper, the performance of an IW-D Hybrid System feeding a heavy load (150 hp), as used for water pumping or in industries, is analyzed. Local loads such as residential or light loads are also considered in the form of a static load. The system performance and, especially, the performance of the heavy load under various loading conditions are assessed. The problem of frequency runaway is generated and a remedy to overcome it is discussed in this paper. Increased wind speed increases the output of the SEIG, which needs to be controlled by a suitable controller; this is presented in detail [16][17][18] .
Modeling of Wind Turbine
The mechanical output power of the wind turbine is given by 19

P_m = (1/2) ρ A C_p(λ, β) v_w³,

where ρ is the air density, A is the area swept by the rotor blades, v_w is the wind speed and C_p is the performance coefficient, which depends on the tip speed ratio λ and the blade pitch angle β.
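For illustration, the relation above can be evaluated directly; in the minimal sketch below, the air density, rotor radius and C_p value are illustrative assumptions (the radius is chosen so that P_m lands near the ~200 kW nominal SEIG output at the 10 m/s base wind speed used later in the paper), not data taken from the paper.

```python
import math

def wind_power_kw(v_w, rho=1.225, radius=16.0, c_p=0.40):
    """Mechanical power P_m = 0.5*rho*A*C_p*v_w^3 of the wind turbine, in kW.

    rho    : air density [kg/m^3]
    radius : rotor blade radius [m]   (illustrative assumption)
    c_p    : performance coefficient  (illustrative assumption)
    """
    area = math.pi * radius ** 2          # swept rotor area A [m^2]
    return 0.5 * rho * area * c_p * v_w ** 3 / 1e3

# The cubic dependence on wind speed drives the over-production at 11 m/s:
for v in (10.0, 11.0):
    print(f"v_w = {v:4.1f} m/s  ->  P_m = {wind_power_kw(v):6.1f} kW")
```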
Modeling of 3-ϕ SEIG
The wind turbine drives the SEIG and the reactive power required for starting is provided by the shunt excitation capacitor bank. Several papers have already been published on the modeling of SEIG [21][22][23][24] . Therefore the mathematical modeling of SEIG is not considered in detail in this paper.
Modeling of Diesel Genset
The diesel genset consists of a prime mover, i.e. a diesel engine, and a synchronous generator. The diesel engine is equipped with a governor and the synchronous generator with an exciter, as shown in Figure 1. To keep the frequency constant, the governor maintains the rotor speed constant by supplying the rated power; the governor of the diesel genset thus maintains the balance of real power in the system. The synchronous generator with exciter provides output voltage control, which is affected by the time constant of the field winding, the DC power supplied to the field winding and its response. The excitation control system maintains the balance of reactive power, thereby controlling the output voltage 25 .
As the wind speed and load never remain constant, the frequency and voltage will not remain constant under transient conditions; thus the diesel genset must follow the changes in wind speed and load 14,15,26 .
Modeling of Heavy Load
The heavy load is modeled in the d-q axis stationary reference frame as in 18 . The phase currents (i_pa, i_pb, i_pc) of the motor are obtained by a 2-φ to 3-φ transformation and, subsequently, the line currents are given as:

i_am = i_pa − i_pc, i_bm = i_pb − i_pa, i_cm = i_pc − i_pb (5)

The electromagnetic torque T_em developed by the heavy load, the load torque T_ld and the speed derivative of the heavy load follow the standard induction machine relations of 18 , where the heavy load has: number of poles P_m; motor speed w_m in rpm; and moment of inertia J_m in kg·m².
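The speed dynamics referred to above can be illustrated with the standard rotor equation of motion for an induction machine, dw/dt = (P_m/2J_m)(T_em − T_ld) for the electrical rotor speed; the sketch below is a minimal forward-Euler integration of this relation, with all numerical values being illustrative assumptions rather than machine data from the paper.

```python
def rotor_speed_step(w, t_em, t_ld, poles=4, j_m=2.5, dt=1e-3):
    """One Euler step of the standard rotor equation of motion,
    dw/dt = (poles / (2 * j_m)) * (t_em - t_ld),
    for the electrical rotor speed w [rad/s]; torques are in N*m."""
    return w + (poles / (2.0 * j_m)) * (t_em - t_ld) * dt

# Accelerate from standstill under a constant positive net torque for 1 s:
w = 0.0
for _ in range(1000):
    w = rotor_speed_step(w, t_em=500.0, t_ld=450.0)
print(f"electrical rotor speed after 1 s: {w:.1f} rad/s")  # 40.0 rad/s
```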
Modeling of Dummy Load
The dummy load comprises eight sets of 3-φ resistors connected in series with GTO thyristor switches. The rated power of each set of resistors follows a binary progression, which makes it possible to vary the absorbed load easily in steps 9 . The power consumption of the dummy load can be given as:

P_DL = D_STEP · Σ_{j=1}^{8} k_j · 2^(j−1)
where D_STEP is the power corresponding to the least significant bit. The load can be varied from 0 to 204 kW in steps of 0.8 kW (T_ref = 0 to 204 kW and D_STEP = 0.8 kW, since 0.8 kW × (2^8 − 1) = 204 kW). The value of k_j is '0' when the associated GTO is turned off and '1' when it is turned on.
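A minimal sketch of this binary-progression dummy load is given below; the helper switch_pattern is a hypothetical illustration of how a reference power could be mapped to GTO states, not the control scheme of the paper.

```python
D_STEP = 0.8  # kW, power of the least significant resistor set

def dummy_load_power(gto_states):
    """Power absorbed by the dummy load for a given GTO switch pattern.

    gto_states: sequence of 8 values k_j in {0, 1}, one per resistor set;
                set j contributes D_STEP * 2**(j-1) when its GTO is on.
    """
    assert len(gto_states) == 8 and all(k in (0, 1) for k in gto_states)
    return D_STEP * sum(k * 2 ** j for j, k in enumerate(gto_states))

print(dummy_load_power([1] * 8))                    # 204.0 kW, all sets on
print(dummy_load_power([1, 0, 1, 0, 0, 0, 0, 0]))   # 4.0 kW (0.8 + 3.2)

def switch_pattern(p_ref_kw):
    """GTO pattern absorbing the largest multiple of D_STEP <= p_ref_kw."""
    steps = min(int(p_ref_kw / D_STEP), 255)
    return [(steps >> j) & 1 for j in range(8)]

print(switch_pattern(100.0))   # pattern absorbing exactly 100.0 kW (125 steps)
```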
System Description and Validation
The modified schematic diagram of the IW-D Hybrid System feeding a static load and a heavy load 16 is shown in Figure 1. It consists of a 480 V, 275 kVA SEIG driven by a prime mover, i.e. a wind turbine. The excitation to the SEIG is provided by a 75 kVAR capacitor bank connected in parallel with the SEIG stator terminals. The system also comprises a 400 kW diesel genset 16 , which is used for maintaining the balance of power supply. The hybrid power system feeds a heavy load of 150 hp along with a static load of 200 kW.
The IW-D hybrid power system feeding a heavy load under various loading conditions is analyzed through modeling and simulation in the MATLAB/Simulink environment 27 .
Initially, the diesel genset is running. The SEIG driven by the wind turbine is switched on at t = 0 s. The two generating sources feed the static load (200 kW), while the heavy load runs at no-load at t = 0 s, at 125% of rated load at t = 3 s, at rated load at t = 6 s and at no-load again at t = 8 s. Figure 2 shows the phase A voltage (v_A) and current (i_A) at the main load terminal (comprising both the static load and the heavy load). At t = 0 s, the Hybrid System feeds the static load (200 kW), which remains the same throughout the simulation, and the heavy load runs at no load. At this moment, the SEIG driven by the wind turbine generates 206 kW but, due to losses, delivers 200 kW, and the diesel genset does not contribute any power, as shown in Figure 3; the voltage is thus maintained at unity after 0.88 s of starting, and the value of the current is less than 1 p.u. The total real power is absorbed by the main load, as shown in Figure 3.
At t = 3 s, the heavy load is increased to 125% of the rated load. Now the SEIG produces its nominal power and the diesel genset contributes by generating power to meet the increased load demand, as shown in Figure 3, thus stabilizing the system after 1 s as shown in Figure 2. The total real power generated by the W-D system is shown across the main load in Figure 3. The phase A current at the main load terminal is increased to 1.7 p.u.
At t = 6 s, the heavy load is decreased to its rated value; the diesel genset again adjusts its generated power and maintains system stability after 0.30 s. The phase A current at the main load terminal is reduced to 1.4 p.u., as shown in Figure 2. At t = 8 s, the heavy load is reduced to no-load; the SEIG produces nominal power, which is sufficient to meet the load demand, and the diesel genset produces no real power, as shown in Figure 3. The system becomes stable after 0.55 s and the phase A current across the main load is reduced to less than 1 p.u., as shown in Figure 2.
As the power generated by the SEIG driven by the wind turbine and by the diesel genset equals the real power absorbed by the load, the validity of the model is proved; the system is further validated by comparing the characteristics with the base paper 16 .
Results and Discussions
The performance of the IW-D system feeding a heavy load is analyzed in this section. The effect of normal load and overload on the heavy load is critically analyzed. The effect of operating conditions like frequency runaway and varying wind speed on the entire system and on the heavy load is studied, and remedies are implemented to overcome such problems.
Impact on Heavy Load
The impact of various loading conditions on the heavy load is taken into account. The static load (200 kW) along with the heavy load is fed from the W-D generation. The heavy load runs at no load, rated load and overload, the latter being 125% of rated load 18 . Figure 4 shows 1. the stator current of phase a (i_spa), 2. the rotor current of phase a (i_rpa), 3. the speed (w_m) and 4. the electromagnetic torque (T_em) of the heavy load. At t = 0 s, the stator and rotor currents of phase a of the heavy load during no load are shown in Figure 4(a) and (b). The stator current during this period is 0.5 p.u. and the rotor current is zero. The speed is 1 p.u. and the electromagnetic torque is zero, as shown in Figure 4(c) and (d).
At t = 3 s, the heavy load runs at 125% of rated load; thus the stator and rotor currents of phase a of the heavy load are increased to 1.5 p.u., as shown in Figure 4. The speed of the heavy load is reduced and the electromagnetic torque is increased, as shown in Figure 4. At t = 6 s, the heavy load is made to run at rated load: the stator and rotor currents are reduced, the speed of the heavy load is increased to 1 p.u. and the electromagnetic torque is reduced to 1 p.u., as shown in Figure 4. Further, at t = 8 s the load is decreased to no-load and the impact on the characteristics is shown in Figure 4.
The speed and torque characteristics of the heavy load under the above conditions are shown in Figure 5, with steady-state values marked at no-load (t = 0-3 s), 125% of rated load (t = 3-6 s), rated load (t = 6-8 s) and no-load (t = 8-10 s).
Frequency Runaway
Frequency runaway is a problem which occurs when the load fed by the wind-diesel system is decreased. This results in oversizing of the wind turbine with respect to the load; the diesel governor loses its speed control and the synchronous generator behaves as a motor by absorbing real power 16 . This problem is corrected by connecting a dummy load of 0-204 kW across the load terminal, as shown in Figure 6. The modeling of the dummy load is given in the mathematical modeling section. Figures 7-12 illustrate the behavior of the IW-D system without and with the dummy load. The operation of the system remains the same as explained in the system description and validation section, except that at t = 8 s the heavy load runs at half load rather than no-load. The static load of 200 kW is removed at t = 10 s while the heavy load is running at half load. At this instant, the rotor speed of the SEIG shoots to a very high value, reflecting the oversizing of the wind turbine; but when the dummy load is connected, the speed of the SEIG is maintained at unity, as shown in Figure 7. The frequency of the synchronous generator of the diesel genset increases without the dummy load but remains at the nominal value of 60 Hz with it, as shown in Figure 8. The instantaneous voltage (v_A) and current (i_A) of phase A at the main load terminal are shown in Figure 9. The voltage remains constant whereas the current decreases without the dummy load; the phase A current attains a value near unity with the switching of the dummy load, as shown in Figure 9(c) and (d). Figure 10 shows the real power at the wind turbine, diesel genset and main load terminals. At the instant of load removal, the SEIG driven by the wind turbine tries to generate nominal power but the available load is less; thus it provides only 100 kW and becomes oversized in comparison to the load, as shown in Figure 10, and the diesel genset absorbs real power, as shown in Figure 10(b), thus turning into a motor. With the dummy load switched on at the same instant, the excess power is absorbed by the dummy load and the diesel genset behaves as a generator, as shown by the red line in Figure 10.
The stator current (i_spa), rotor current (i_rpa) and electromagnetic torque (T_em) of the heavy load do not vary much due to frequency runaway and remain the same as in normal operation, but the speed (w_m) of the heavy load shoots to a high value when this condition occurs. With the application of the dummy load, the speed attains the value of 1 p.u., as shown in Figure 11. The speed and torque characteristics of the heavy load are shown in Figure 12.
Pitch Angle Control including Pitch Servo
The speed of the wind never remains constant; it keeps changing depending upon the atmospheric pressure. The pitch angle of the blades is zero when there is no change in the wind speed (10 m/s). When the wind speed increases, the wind turbine generates more power than the nominal power. Thus, in order to limit the power generated by the SEIG driven by the wind turbine to the nominal value, the pitch angle is controlled with the help of a PI controller including a pitch servo. The controller takes as inputs the power from the SEIG driven by the wind turbine and the nominal power, processes the error with the PI controller and increases the pitch angle so as to limit the output power (P_WT) to the rated value, as shown in Figure 13. The PI controller gains are taken as k_p = 5 and k_i = 25. The pitch servo is a non-linear device which rotates all the blades of the wind turbine along their longitudinal axes. As shown in Figure 13, the pitch servo consists of an integrator with a time constant in a closed loop.
The pitch servo is thus modeled as a first-order closed loop: the pitch angle β ranges from 0 to 45° and varies at a rate of ±2°/s. The rate of change of the blade pitch angle affects the power regulation performance of the SEIG. The response of the pitch servo is represented by a rate limiter; the faster the allowed pitch rate, the better the dynamic performance of the PI controller.
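A minimal sketch of this control scheme is given below: a discrete PI controller (k_p = 5, k_i = 25 as stated above) generates a pitch-angle command from the power error, and the servo is approximated by a first-order lag with a ±2°/s rate limiter and 0-45° saturation. The per-unit normalisation of the error, the servo time constant and the sampling step are illustrative assumptions, not parameters taken from the paper.

```python
P_NOM = 200.0                    # kW, nominal SEIG output at 10 m/s base speed
KP, KI = 5.0, 25.0               # PI gains stated in the text
BETA_MAX, RATE_MAX = 45.0, 2.0   # deg and deg/s limits stated in the text
T_SERVO, DT = 0.25, 0.01         # servo time constant and step (assumed)

integral, beta = 0.0, 0.0        # controller state

def pitch_step(p_seig_kw):
    """One control step: PI acts on the per-unit power error (normalisation
    assumed); the servo is a rate-limited first-order lag."""
    global integral, beta
    error = (p_seig_kw - P_NOM) / P_NOM          # > 0 when SEIG over-produces
    integral += error * DT
    beta_cmd = KP * error + KI * integral        # commanded pitch angle [deg]
    beta_cmd = min(max(beta_cmd, 0.0), BETA_MAX) # 0..45 deg saturation
    rate = (beta_cmd - beta) / T_SERVO           # first-order servo response
    rate = min(max(rate, -RATE_MAX), RATE_MAX)   # +-2 deg/s rate limiter
    beta += rate * DT
    return beta
```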
The behavior of the system without and with pitch angle control is described next. During the simulation, the heavy load runs at no load. With the incorporation of the PI controller, the pitch angle increases from 0 to 2.20 degrees when the wind speed increases from the nominal value of 10 m/s to 11 m/s at t = 5 s, as shown in Figure 14.
The real power generated by the SEIG increases to around 240 kW without the PI controller but, with the PI controller, it attains the nominal value of 200 kW within 1.3 s, as shown in Figure 15(a). At the same instant, without the PI controller, the diesel genset absorbs real power and behaves as a motor; with the PI controller it absorbs zero real power and behaves normally after 1.4 s, as illustrated in Figure 15. The total real power is fed to the main load, as shown in Figure 15. The voltage (v_A) and current (i_A) of phase A at the main load terminal remain almost the same due to the smooth pitch angle control, as illustrated in Figure 16.
The stator current (i_spa) and rotor current (i_rpa) of the heavy load do not vary much with or without the PI controller. The speed (w_m) increases at t = 5 s without the PI controller but remains constant with it, and the electromagnetic torque (T_em) remains almost constant, as shown in Figure 17. Figure 18 shows the speed-torque characteristics of the heavy load: the blue line indicates that, with the increase in wind speed, the speed of the heavy load also increases and reaches a high value whereas the torque does not vary much, which can result in electrical and mechanical damage to the heavy load. With the help of the PI controller and pitch servo, however, the output of the SEIG is restored to its nominal value much faster, ensuring proper functioning of the heavy load without any damage or fatigue.
Conclusions
The IW-D Hybrid System feeding a heavy load under various loading conditions has been validated. The impact on the heavy load being fed from two sources has been analyzed. The main focus of the work is to overcome shortcomings like frequency runaway and to control the pitch angle of the wind turbine in case of increasing wind speed, conditions under which the diesel genset would otherwise absorb real power and behave as a motor. Technical solutions have been provided: a dummy load is connected across the main load, which absorbs the excess power when the load is reduced, to overcome the frequency runaway problem.
The other solution, for pitch angle control, is provided with the help of a PI controller including a pitch servo to limit the power produced by the SEIG driven by the wind turbine to the nominal value when the wind speed exceeds the base value (10 m/s).
Thus, in both solutions, the diesel genset is prevented from turning into a motor and a major share of the energy is harnessed from wind generation, ultimately saving fuel and promoting greater use of pollution-free energy.
Finally, the effect of frequency runaway and varying wind speed on the heavy load has been analyzed to check the robustness of the system. | 2019-02-16T14:31:35.874Z | 2016-10-27T00:00:00.000 | {
"year": 2016,
"sha1": "6e84e6d3cd1db379094f5cea5c82454d9261b80c",
"oa_license": null,
"oa_url": "https://doi.org/10.17485/ijst/2016/v9i40/100599",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0f62675080d175b82fe5b960c02ab09dbd5c86a2",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
2649196 | pes2o/s2orc | v3-fos-license | The Implication of the Polymorphisms of COX-1, UGT1A6, and CYP2C9 among Cardiovascular Disease Patients Treated with Aspirin
Purpose: Enzymes potentially responsible for the pharmacokinetic variations of aspirin include cyclooxygenase-1 (COX-1), UDP-glucuronosyltransferase (UGT1A6) and cytochrome P450 (CYP) 2C9 (CYP2C9). We therefore aimed to determine the types and frequencies of variants of COX-1 (A-842G), UGT1A6 (UGT1A6*2; A541G and UGT1A6*3; A522C) and CYP2C9 (CYP2C9*3; A1075C) in the three major ethnic groups in Malaysia. In addition, the role of these polymorphisms in aspirin-induced gastritis among the patients was investigated. Methods: A total of 165 patients with cardiovascular disease who were treated with a 75-150 mg daily dose of aspirin and 300 healthy volunteers were recruited. DNA was extracted from the blood samples and genotyped for COX-1 (A-842G), UGT1A6 (UGT1A6*2 and UGT1A6*3) and CYP2C9 (CYP2C9*3; A1075C) using allele-specific polymerase chain reaction (AS-PCR). Results: Variants UGT1A6*2, UGT1A6*3 and CYP2C9*3 were detected at the relatively high percentages of 22.83%, 30.0% and 6.50%, respectively, while COX-1 (A-842G) was absent. The genotype frequencies for UGT1A6*2 and *3 were significantly different between Indians and Malays or Chinese. The level of bilirubin among patients with different genotypes of UGT1A6 was significantly different (p-value < 0.05). In addition, CYP2C9*3 was found to be associated with gastritis with an odds ratio of 6.8. Conclusion: Screening of patients for defective genetic variants of UGT1A6 and CYP2C9*3 helps in identifying patients at risk of aspirin-induced gastritis. However, a randomised clinical study with a bigger sample size would be needed before this is translated to clinical use. What This Study Adds: Genotype screening for UGT1A6*2, UGT1A6*3 and CYP2C9*3 helps to identify patients at risk of aspirin-induced gastritis.
INTRODUCTION
Aspirin at a dose of 75-150 mg daily has been found to reduce the risk of vascular events by approximately 32% in patients with cardiovascular disease (CVD) [1]. However, a substantial number of patients do not respond optimally to aspirin treatment [2]. The inter-individual variability of patients' responses is due to variation either in the pharmacokinetic (PK) or pharmacodynamic (PD) properties of aspirin [3]. The polymorphic enzymes potentially affecting the PK-PD of aspirin include cyclooxygenase-1 (COX-1), UDP-glucuronosyltransferase 1A6 (UGT1A6) and CYP2C9.
Polymorphisms of these enzymes are reported to be associated with altered enzyme function, which affects aspirin metabolism and efficacy. The amino acid changes for two variants of UGT1A6 (UGT1A6*2; A541G and UGT1A6*3; A522C) result in a 30%-50% reduction in enzyme activity compared to the enzyme encoded by the wild-type allele [4]. Both variants UGT1A6*2 and *3 are in complete linkage disequilibrium [5]. The variant allele CYP2C9*3 likewise produces an enzyme bearing only some 5%-30% of the activity of the wild-type enzyme [6]. All these polymorphic enzymes might modulate the therapeutic effect of aspirin in the treatment or prevention of CVD. Carriers of variant alleles are more prone to develop acute gastrointestinal problems when they receive aspirin, as compared to non-carriers [7,8].
Several biochemical tests are routinely screened among patients in order to assess the risk of early-stage CVD. Altered lipid profiles, which include HDL, LDL, triglycerides and total cholesterol, are common risk factors for CVD: the higher the cholesterol level, the higher the risk of CVD. However, HDL is a "good cholesterol" which protects against the disease by removing cholesterol and excess fat from blood vessel walls and transporting them to the liver to be removed from the body. Besides, there is an apparent protective effect of bilirubin of a similar magnitude to that of HDL: a study showed that a 50% decrease in total bilirubin was associated with a 47% increased risk of having more severe coronary artery disease (p = 0.02) [9]. Since UDP-glucuronosyltransferase (UGT1A6) is highly expressed in the liver and plays a major role in the metabolism of bilirubin, its genetic polymorphisms and bilirubin levels are potentially important risk factors for CVD.
The aims of this study were to investigate the relationship between clinical parameters and genetic polymorphisms of COX-1, UGT1A6 and CYP2C9 in cardiovascular patients. This allows the assessment of the impact of genetic polymorphisms of COX-1, UGT1A6 and CYP2C9 on gastritis or other gastrointestinal symptoms in cardiovascular patients treated with aspirin.
Subjects
The protocol of the study was approved by the local Research and Ethics Committee. The study comprised 165 patients and 300 healthy volunteers aged 18 to 65 years. All patients were diagnosed with cardiovascular disease and treated with aspirin (acetylsalicylic acid, 75-150 mg daily dose), while the healthy volunteers were unrelated individuals not receiving any treatment. All participants were mentally and physically able to understand the study protocol and were willing to sign the consent form.
Data Collection
Medical records for all patients were reviewed. The clinical data recorded included medical history, biochemical test results (i.e. liver function test, renal function test, full blood count, lipid profile and blood glucose), dosage regimen, INR measurement, concurrent drug intake and adverse effects experienced by the patients. Signs and symptoms of gastritis such as gastric pain, vomiting and loss of appetite were reviewed and diagnosed by the medical doctors in charge.
Genotyping
Five milliliters of blood were drawn into tubes containing tri-sodium citrate (4%). The blood samples were used for DNA extraction and genotyping for variants of UGT1A6, COX-1 and CYP2C9. The DNA was extracted using a lysis method that had been optimized previously [10]. The extracted DNA was dissolved in 1x TE buffer and kept in a -20 °C freezer until use. An allele-specific PCR (AS-PCR) method was developed for the detection of each variant. This technique is based on single-nucleotide variations which introduce a destabilizing mismatch at the 3' end of the allele-specific primers [11].
Data Analysis
The genotype and allelic frequencies of the COX-1, UGT1A6 and CYP2C9 variants were determined for each ethnic group and compared using chi-square tests; clinical variables were compared using Kruskal-Wallis and Mann-Whitney tests.
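As an illustration of this genotype-frequency analysis, the sketch below checks Hardy-Weinberg equilibrium for a biallelic marker from observed genotype counts; the counts in the example are the CYP2C9*3 counts reported later in the Results (245 wild-type, 55 heterozygous, 0 homozygous variant), while the simple chi-square comparison is a generic textbook test and not necessarily the exact procedure used by the authors.

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square statistic comparing observed genotype counts with
    Hardy-Weinberg expectations for a biallelic locus."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)       # frequency of the major allele
    q = 1.0 - p                           # frequency of the variant allele
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_aa, n_ab, n_bb)
    return q, sum((o - e) ** 2 / e for o, e in zip(observed, expected))

q, chi2 = hwe_chi_square(245, 55, 0)      # CYP2C9*3 counts from the Results
print(f"variant allele frequency = {q:.3f}, chi-square = {chi2:.2f}")
# The statistic is compared against 3.84 (df = 1, alpha = 0.05);
# a value below that is consistent with HWE, as the paper reports.
```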
RESULTS
Three hundred healthy volunteers, 100 each of Malays, Chinese and Indians, were recruited. A total of 165 patients with cardiovascular disease (CVD) who met the inclusion and exclusion criteria of the study were also recruited. Most of the CVD patients were male (84.52%) and the remaining were female (15.48%). About 86.67% of the patients were overweight and most of them were smokers (59.39%); 75.15% of the patients had dyslipidaemia (Table 1). Biochemical results for triglycerides, TC, HDL, LDL, PT and INR of the patients were significantly different between the three ethnic groups (Kruskal-Wallis test; p-value < 0.05). The variables which were significantly different were further analyzed using Mann-Whitney tests.
All 300 healthy volunteers who participated in this study were successfully screened for the COX-1 (A-842G), UGT1A6 (A541G and A522C) and CYP2C9*3 (A1075C) polymorphisms. For COX-1 (A-842G), no variants were detected in the healthy volunteers. Out of the 300 healthy volunteers, 245 and 55 were carriers of the wild-type and heterozygous genotypes for CYP2C9*3, respectively; no homozygous CYP2C9*3 genotype was detected. The genotypes were in Hardy-Weinberg Equilibrium (HWE). The genotype frequencies were significantly different between Indians and Malays as well as between Indians and Chinese; the Indians carry the CYP2C9*3 allele at a frequency of 14%, as compared to 2.5% and 3% in the Malays and Chinese, respectively (Table 2). The genotype frequencies for UGT1A6*2 and *3 among the healthy volunteers were determined in the same manner. The patients were then classified according to the different UGT1A6*2 and *3 genotypes, and the patients' bilirubin levels showed significant differences when compared across genotypes, with a p-value less than 0.05 (Table 3). The association between genotype and gastritis events among the patients who took aspirin was assessed using the chi-square test, and the odds ratio (OR) was used to examine the risk of each genotype for the development of gastritis. Referring to the odds ratios in Table 4, individuals with the heterozygous CYP2C9*3 genotype were 6.8 times more likely to have gastritis than individuals with the homozygous wild-type genotype (chi-square test; statistical significance at p-value < 0.05).
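The odds ratio quoted above can be reproduced from the proportions reported in the Discussion (25% gastritis among CYP2C9*3 heterozygotes versus 4.67% among wild-type carriers); the sketch below computes an odds ratio from event proportions and is an illustration only, not the authors' statistical code.

```python
def odds_ratio(p_exposed, p_unexposed):
    """Odds ratio from event proportions in exposed and unexposed groups."""
    odds_exposed = p_exposed / (1.0 - p_exposed)
    odds_unexposed = p_unexposed / (1.0 - p_unexposed)
    return odds_exposed / odds_unexposed

# Gastritis proportions reported in the paper:
#   heterozygous CYP2C9*3 carriers: 25.00 %, homozygous wild-type: 4.67 %
print(f"OR = {odds_ratio(0.25, 0.0467):.1f}")   # ~6.8, matching Table 4
```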
Haplotypes were analysed using Haploview, downloaded from http://www.broadinstitute.org/haploview. This software was used to examine the linkage between the two UGT1A6 variants (*2 and *3). The analysis was done for the three major ethnic groups in the Malaysian population, namely the Malays, Chinese and Indians, with a hundred samples from each ethnic group used to study the degree of linkage between the groups. As shown in Table 5, the Indians have a strong degree of linkage disequilibrium, with a D' value of 1. The two variants were found at different percentages of linkage disequilibrium in the different ethnic groups: 95%, 81% and 100% in the Malay, Chinese and Indian ethnic groups, respectively (Table 5).
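For reference, the D' statistic that Haploview reports can be computed from haplotype and allele frequencies as sketched below; the frequencies in the example are invented purely for illustration and do not come from the paper.

```python
def d_prime(p_ab, p_a, p_b):
    """Normalised linkage disequilibrium D' between two biallelic loci.

    p_ab: frequency of the haplotype carrying both variant alleles
    p_a, p_b: frequencies of the variant allele at each locus
    """
    d = p_ab - p_a * p_b                      # raw disequilibrium D
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max else 0.0

# Illustrative only: complete LD (every *2 allele rides with a *3 allele)
print(d_prime(p_ab=0.20, p_a=0.20, p_b=0.30))   # D' = 1.0
```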
DISCUSSION
Most of the smokers were males, which is in line with the high number of male smokers (2.61 million) in Malaysia compared to 120,000 female smokers [13,14]. The latest projections indicate that approximately 3.1 million smokers in Malaysia are between 25 and 64 years of age. The Malays constitute the largest group of smokers in Malaysia, at 60.9% [15]. This might provide one of the reasons why the Malays tend to have higher incidences of CVD at an earlier age compared to the Chinese.
Among the 165 patients with CVD in this study, 86.67% were overweight and 75.15% had dyslipidemia. Singh et al. [16] have suggested that obesity and dyslipidemia are related and play major roles in the development of CVD. In addition, this study also showed a significant difference in the lipid profile between the Malays and the two other ethnic groups, in agreement with the results of Khoo et al. [17]. Further, there was a significant association between body weight and ethnicity (p = 0.007), the BMI of the Malays being higher. This finding is supported by Malaysia's National Health and Morbidity Survey 2 (1996-1997), which found that obesity was significantly associated with ethnicity. This observation may be explained by differences in dietary habits or lifestyle, or by genetic factors, between the three ethnic groups.
The PT and INR values of the Malay patients were found to be significantly different from the values observed in the Indians and Chinese (p-values of 0.016 and 0.014, respectively). The differences in PT and INR may be affected by daily diet. According to Rombout et al. [18], an increase in the amount of food rich in vitamin K can lower the PT and INR of an individual. Green and leafy vegetables such as broccoli, lettuce and spinach are part of the routine diet of the Chinese ethnic group in Malaysia, and therefore it is not surprising for the Chinese to have lower PT and INR values [19].
Halushka et al. [20] found that heterozygosity of the COX-1 (A-842G) gene is associated with significantly greater inhibition of prostaglandin H2 formation by acetylsalicylic acid compared with the common homozygous wild-type allele. Unfortunately, this variant was not detected in this study, even though it has been detected at a high percentage among Caucasians (19% heterozygous and 1% homozygous) [20,5]. Nagar et al. [21] reported that the frequencies of UGT1A6*2 and UGT1A6*3 were 5.7% and 5.2% in Caucasians and 10% and 5% in African-Americans. The percentage frequencies of UGT1A6*2 and UGT1A6*3 in the Chinese population in China were 22% and 24.7%, respectively [22]. Similarly, in the Japanese population, UGT1A6*2 and UGT1A6*3 were present at 21.8% and 2.26%, respectively [23]. Limenta et al. [24] reported that UGT1A6 variants were found in 19% of the Thai population. The percentages of UGT1A6 variants reported in this study are broadly consistent with the findings in the Chinese (China) and Japanese populations. Both variants of UGT1A6 have been reported to be in near-complete linkage disequilibrium of >98% [5], while in this study the linkage was found to be 94%.
The significant association between the patients' bilirubin concentration and UGT1A6*2 and *3 is in accordance with a previous finding by Peters et al. [25]. From the existing evidence, the concentration of bilirubin in patients with CVD is associated with the enzyme activity of UGT1A6. As shown in our results, higher concentrations of bilirubin were found in patients carrying variant alleles of UGT1A6 with slow metabolizing capacity. In other studies, by Djousse et al. [26] and Lin et al. [27], high levels of bilirubin were associated with decreased risks of coronary heart disease (CHD) and cardiovascular disease (CVD). Similarly, in the Framingham Offspring Study, higher serum bilirubin concentrations were associated with decreased risk of CVD, CHD and myocardial infarction (MI) [28]. The association between the genetic polymorphisms of UGT1A6 and the concentration of bilirubin is an interesting finding, and a case-control study with a longer follow-up time might be useful to confirm the relationship.
The homozygous CYP2C9*3 genotype has been reported to be present at a low frequency of only 1% in the Malay and Indian populations [29], but no homozygous CYP2C9*3 was found in this study. There were also no subjects with homozygous CYP2C9*3 among 115 Han Chinese, 218 Japanese and 98 Taiwanese [30][31][32]. The genotype frequency of CYP2C9*3 in this study is in agreement with the rare frequencies reported in most populations worldwide.
In agreement with the findings of Van Oijen et al. [5], no significant association between the occurrence of gastric events and polymorphisms of the UGT1A6 gene was detected in the aspirin-treated patients. However, there was a significant association between the slow-metabolizer (SM) heterozygous CYP2C9*3 genotype and gastritis events among the CVD patients treated with aspirin: gastritis occurred in 25% of the patients with heterozygous CYP2C9*3, but in only 4.67% of the patients with the homozygous wild type. Martin et al. [33] revealed that 30% of NSAID-treated subjects with non-wild-type CYP2C9 genotypes experienced ulceration events. The calculated odds ratio (OR) in this study indicates that individuals taking aspirin with the heterozygous CYP2C9*3 genotype have a 6.8 times higher risk of gastritis than individuals with the homozygous wild type (p-value 0.033).
CONCLUSION
In conclusion, we have successfully developed allele-specific PCR (AS-PCR) methods for the detection of the COX-1 (A-842G), UGT1A6*2 and *3 and CYP2C9 (A1075C) alleles. In addition, we have successfully genotyped all 165 patients and 300 healthy volunteers.
There was an association between the bilirubin concentration of the CVD patients and the UGT1A6*2 and *3 variants. In addition, this study found an association of the CYP2C9*3 variant allele (a slow metabolizer of aspirin) with the occurrence of gastritis events among CVD patients treated with aspirin.
This study provides support for recommending genotyping of CVD patients for UGT1A6*2 and *3 and CYP2C9*3. However, further studies are needed to confirm the association between the genetic polymorphisms of UGT1A6 and the concentration of bilirubin; a case-control study with a longer follow-up period might be useful to clarify this relationship. In addition, for bilirubin to serve as a therapeutic target or as a tool in monitoring the disease of CVD patients, a clear cut-off concentration or range of normal bilirubin concentrations needs to be established. Case-control studies using bigger sample sizes are required.
Table 1. Demographic and Clinical Data of the CVD Patients (categorical variables). a: chi-square test; b: risk factor; n: sample size; %: percentage; BMI: body mass index.
Table 5. Analysis of linkage disequilibrium of UGT1A6*2 and UGT1A6*3 in the Malaysian population. n: sample size. | 2017-04-03T14:15:44.494Z | 2015-09-30T00:00:00.000 | {
"year": 2015,
"sha1": "7ae49a314ac372967c405f2c04851d0798dfc26b",
"oa_license": "CCBY",
"oa_url": "https://journals.library.ualberta.ca/jpps/index.php/JPPS/article/download/24226/18908",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "7ae49a314ac372967c405f2c04851d0798dfc26b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56074665 | pes2o/s2orc | v3-fos-license | On the Implementation of the Canonical Quantum Simplicity Constraint
In this paper, we are going to discuss several approaches to solve the quadratic and linear simplicity constraints in the context of the canonical formulations of higher dimensional General Relativity and Supergravity developed in our companion papers. Since the canonical quadratic simplicity constraint operators have been shown to be anomalous in any dimension D>2, non-standard methods have to be employed to avoid inconsistencies in the quantum theory. We show that one can choose a subset of quadratic simplicity constraint operators which are non-anomalous among themselves and allow for a natural unitary map of the spin networks in the kernel of these simplicity constraint operators to the SU(2)-based Ashtekar-Lewandowski Hilbert space in D=3. The linear constraint operators on the other hand are non-anomalous by themselves, however their solution space will be shown to differ in D=3 from the expected Ashtekar-Lewandowski Hilbert space. We comment on possible strategies to make a connection to the quadratic theory. Also, we comment on the relation of our proposals to existing work in the spin foam literature and how these works could be used in the canonical theory. We emphasise that many ideas developed in this paper are certainly incomplete and should be considered as suggestions for possible starting points for more satisfactory treatments in the future.
Introduction
In [1,2], gravity in any dimension D + 1 ≥ 3 has been formulated as a gauge theory of SO(1, D) or of the compact group SO(D + 1), irrespective of the spacetime signature. The resulting theory has been obtained along two different routes, a Hamiltonian analysis of the Palatini action making use of the procedure of gauge unfixing 1 , and on the canonical side by an extension of the ADM phase space. The additional constraints appearing in this formulation, the simplicity constraints, are well known. They constrain bivectors to be simple, i.e. the antisymmetrised product of two vectors. Originally introduced in Plebanski's [10] formulation of General Relativity as a constrained BF theory in 3+1 dimensions, they have been generalised to arbitrary dimension in [11] and were considered in the context of Hamiltonian lattice gravity [12,13]. Moreover, discrete versions of the simplicity constraints are a standard ingredient of the Spin Foam approaches to quantum gravity [14,15,16], see [17,18] for reviews, and recently were also used in Group Field theory [19,20,21] as well as on a simplicial phase space [22,23], where also their algebra was calculated. Two different versions of simplicity constraints are considered in the literature, which are either quadratic or linear in the bivector fields. The quantum operators corresponding to the quadratic simplicity constraints have been found to be anomalous both in the covariant [24] as well as in the canonical picture [25,3]. On the covariant side, this led to one of the major points of critique about the Barrett-Crane model [14]: the anomalous constraints are imposed strongly (strongly here means that the constraint operator annihilates physical states, Ĉ|ψ⟩ = 0 ∀ |ψ⟩ ∈ H_phys), which may imply erroneous elimination of physical degrees of freedom [26]. This triggered the development of the new Spin Foam models [27,28,15,24,16,29], in which the quadratic simplicity constraints are replaced by linear simplicity constraints. The linear version of the constraints is slightly stronger than the quadratic constraints, since in 3 + 1 dimensions the topological solution is absent. The corresponding quantum operators are still anomalous (unless the Immirzi parameter takes the values γ = ± √ζ, where ζ denotes the internal signature, or γ = ∞). Therefore, in the new models (parts of) the simplicity constraints are implemented weakly to account for the anomaly. Also, the newly developed U(N) tools [30,31,32] have recently been applied to solve the simplicity constraints [33,34,35].
In this paper, we are first going to take an unbiased look at the simplicity constraints from the canonical perspective in the hope of finding new clues for how to implement the constraints correctly. Afterwards, we will compare our results to existing approaches from the Spin Foam literature and outline similarities and differences. We stress that we will not arrive at the conclusion that a certain kind of imposition is the correct one; thus further research, centered around consistency considerations and the classical limit, has to be performed to find a satisfactory treatment for the simplicity constraints. Of course, in the end an experiment will have to decide which implementation, if any, is the correct one. Since such experiments are missing up to now, the general guidelines are of course mathematical consistency of the approach, as well as comparison with the classical implementation of the simplicity constraints in D = 3, where the usual SU(2) Ashtekar variables exist. If a satisfactory implementation in D = 3 can be constructed, the hope would then be that this procedure has a natural generalisation to higher dimensions. Since parts of the very promising results developed in the Spin Foam literature are restricted to four dimensions, we will restrict ourselves to dimension independent treatments in the main part of this paper.
The paper will be divided into three parts. We will begin by investigating the quadratic simplicity constraint operators, which have been shown to be anomalous in [3]. It will be illustrated that choosing a recoupling scheme for the intertwiner naturally leads to a maximal closing subset of simplicity constraint operators. Next, the solution to this subset will be shown to allow for a natural unitary map to the SU(2) based Ashtekar-Lewandowski Hilbert space in D = 3, and we will finish the first part with several remarks on this quantisation procedure. In section 3, we will analyse the strong implementation of the linear simplicity constraint operators, since they are non-anomalous from the start. The resulting intertwiner space will be shown to be one-dimensional, which is problematic because this forbids the construction of a natural map to the SU(2) based Ashtekar-Lewandowski Hilbert space. In contrast to the quadratic case, the linear simplicity constraint operators will be shown to be problematic when acting on edges. We will discuss several possibilities for resolving these problems and finally introduce a mixed quantisation, in which the linear simplicity constraints will be substituted by the quadratic constraints plus a constraint ensuring the equality of the normals N^I and n^I(π). In section 4, we will compare our results to existing approaches from the Spin Foam literature. Finally, we will give a critical evaluation of our results and conclude in section 5.
A Maximal Closing Subset of Vertex Constraints
In our companion papers [1,2,3,4,5,6], a canonical connection formulation of (D + 1)-dimensional Lorentzian General Relativity was developed, using an SO(D + 1) connection A_{aIJ} and its conjugate momentum π^{aIJ} as canonical variables. Here, a, b, ... = 1, ..., D are spatial tensorial indices and I, J, ... = 0, ..., D are Lie algebra indices in the fundamental representation. A key input of the construction are the (quadratic) simplicity constraints

π^{a[IJ} π^{|b|KL]} ≈ 0,   (2.1)

which enforce, up to a topological sector present in D = 3, that π^{aIJ} ≈ 2n^{[I} E^{a|J]}, where E^{aJ} is an SO(D + 1) valued vector density, a so-called hybrid vielbein, and n^I is the unique (up to sign) normal defined by n_I E^{aI} = 0. Fixing the time gauge n^I = (1, 0, ..., 0), one arrives at the ADM (extended phase space) formulation of General Relativity with SO(D) gauge invariance, see [2] for details. The second class constraints which normally arise as stability conditions on the simplicity constraints are absent in our connection formulation, since they can be explicitly removed by the process of gauge unfixing after performing the Dirac analysis, see [2]. Essentially, they are gauge fixing conditions for the gauge transformations generated by the simplicity constraint, which change a certain part of the torsion of A_{aIJ}. The square of this part of the torsion is included in a respective decomposition of the Palatini action and thus results in the second class partner for the simplicity constraint [2]. A quantisation of the simplicity constraint using loop quantum gravity methods results in a complicated operator, since π^{aIJ} becomes a flux operator which acts as the sum of all right invariant vector fields associated to the different edges at a vertex. In order to facilitate the treatment of this quantum constraint, it has been shown in [3] that the necessary and sufficient building blocks of the quadratic simplicity constraint operator acting on a vertex v are given by

R^{[IJ}_{e} R^{KL]}_{e'} f_γ,   e, e' ∈ E(γ), b(e) = b(e') = v,   (2.2)

where R^{IJ}_e is the right invariant vector field associated to the edge e, f_γ is a cylindrical function defined on an adapted graph γ, e.g. a spin network, v is a vertex of γ, E(γ) is the set of edges of γ and b(e) denotes the beginning of the edge e. The orientations of all edges are chosen such that they are outgoing of v. We note that these are exactly the off-diagonal simplicity constraints familiar from spin foam models, see e.g. [11,24].
Since not all of these building blocks commute with each other (those sharing exactly one edge do not), we will have to resort to a non-standard procedure in order to avoid an anomaly in the quantum theory. The strong imposition of the above constraints, leading to the Barrett-Crane intertwiner [14], was discussed in [11]. A master constraint formulation of the vertex simplicity constraint operator was proposed in [3]; however, apart from providing a precise definition of the problem, this approach has not led to a concrete solution up to now.
In this paper, we are going to explore a different strategy for implementing the quadratic vertex simplicity constraint operators which is guided by two natural requirements: 1. The imposition of the constraints should be non-anomalous.
2. The imposition of the simplicity constraint operator in D = 3 should, at least on the kinematical level, lead to the same Hilbert space as the quantisation of the classical theory without a simplicity constraint. More precisely, there should exist a natural unitary map from the solution space of the quadratic simplicity constraint operators H simple to the Ashtekar-Lewandowski Hilbert space H AL in D = 3.
The concept of gauge unfixing [7,8,9], which was successfully used in order to derive the classical connection formulation of General Relativity [1,2] employed in this paper, was originally developed in the context of anomalous gauge theories, where it was observed that first class constraints can turn into second class constraints after quantisation [36,37,38,39,40]. This is, however, precisely what is happening in our case: the classically Abelian simplicity constraints become a set of non-commuting operators due to the regularisation procedure used for the fluxes. The natural question arising is thus: what does a set of maximally commuting vertex simplicity constraint operators look like?
Theorem: The set consisting of the ("diagonal") edge simplicity constraints

R^{[IJ}_{e} R^{KL]}_{e} f_γ,   e ∈ E(γ), b(e) = v,   (2.3)

together with the vertex constraints

ε_{IJKLM_1...M_{D−3}} R^{IJ}_{1...i} R^{KL}_{1...i} f_γ,   i = 2, ..., N − 1,   (2.4)

where R_{1...i} := R_{e_1} + ... + R_{e_i}, generates a closed algebra of vertex simplicity constraint operators. Under the assumption that no linear combinations with different multi-indices M̄ = M_1 M_2 ... M_{D−3} are allowed, the set is maximal in the sense that adding new vertex constraint operators spoils closure.
Proof. Closure can be checked by explicit calculation. In order to understand why the calculation works, recall that the right invariant vector fields generate the Lie algebra so(D + 1) as [3]

[R^e_{IJ}, R^{e'}_{KL}] = δ^{ee'} (δ_{JK} R^e_{IL} − δ_{IK} R^e_{JL} − δ_{JL} R^e_{IK} + δ_{IL} R^e_{JK}),

up to normalisation conventions, and thus act as infinitesimal rotations (vector fields on distinct edges commute). The commutativity of (2.3) has been discussed in [3]. Further, every element of (2.4) operates on (2.3) as an infinitesimal rotation. The same is also true for the elements in (2.4): taking the ordering from above, every constraint operates as an infinitesimal rotation on all constraints prior in the list. Since the commutator is antisymmetric in the exchange of its arguments, closure, i.e. commutativity up to constraints, of (2.4) follows.
To prove maximality of the set, we will show that, having chosen a subset of simplicity constraints as given in (2.3) and (2.4), adding any other linear combination of the building blocks (2.2) spoils the closure of the algebra. To this end, we make the most general Ansatz

Σ_{1≤i<j<N} α_{ij} ε_{IJKLĒ} R^{IJ}_i R^{KL}_j   (2.6)

for an N-valent vertex. Note that the diagonal terms (i = j) are proportional to (2.3) and therefore do not have to be taken into account in the above sum, and that R_N = −Σ_{i=1}^{N−1} R_i can be dropped due to gauge invariance. Moreover, α_{ij} can be chosen such that for fixed j not all α_{ij} (i < j) are equal. Otherwise, with α_{ij} := α_j, we find the term α_j ε_{IJKLM̄} R^{IJ}_{1...(j−1)} R^{KL}_j in the sum, which can be expressed as a linear combination of (2.3) and (2.4) and therefore can be dropped. Consider the commutator (2.7) of the maximal set with (2.6), where we dropped terms proportional to (2.3) in the first and in the second step. For a closing algebra, the right hand side of (2.7) necessarily has to be proportional to (a linear combination of) simplicity building blocks (2.2). Terms containing R_j (j ≥ 3) have to vanish separately (in general, one could make use of gauge invariance to "mix" the contributions of different R_j; however, in the case at hand this will produce terms containing R_N, which do not vanish if the contributions of the different R_j did not already vanish separately).
We start with the case D = 3, for which the summands on the right hand side of (2.7) can be evaluated explicitly. Whatever multi-index Ē we might have chosen in the Ansatz (2.6), we can always restrict attention to those simplicity constraints in the maximal set which have the same multi-index M̄ = Ē. Then, the same calculation as in the case of D = 3 shows that the antisymmetrisations of the indices [ABIJ], [ABKC] and [IJKC] vanish. Therefore, the only possibilities are (a) the trivial solution α_{1j} = α_{2j} = 0, or (b) α_{1j} = α_{2j} (≠ 0), which implies that the terms on the right hand side of (2.7) are a rotated version of a constraint already contained in the set. The second option (b) is, for j = 3, excluded by our choice of α_{ij}, and we must have α_{13} = α_{23} = 0. Next, consider j = 4 and suppose we have α_{14} = α_{24} := α ≠ 0. Then, we can define α'_{34} := α_{34} − α and regroup the corresponding terms in (2.6). The first term again is already in the chosen set, which implies we can set α_{14} = α_{24} = 0 w.l.o.g. by changing α_{34} → α'_{34} (we will drop the prime in the following). This immediately generalises to j > 4, and we have w.l.o.g. α_{1j} = α_{2j} = 0 (3 ≤ j < N).
Suppose we have calculated the commutators of ε_{IJKLM̄} R^{IJ}_{1...i} R^{KL}_{1...i} (i = 2, ..., n) with (2.6) and found that, for closure, we need α_{ij} = 0 for 1 ≤ i ≤ n and i < j < N. Then, the commutator with the next element, ε_{IJKLM̄} R^{IJ}_{1...n+1} R^{KL}_{1...n+1}, produces terms which, by the reasoning above, again are not a linear combination of any simplicity building blocks for any choice of α_{(n+1)j}; therefore only the trivial solution α_{(n+1)j} = 0 (n + 1 < j < N) leads to closure of the algebra.
The Solution Space of the Maximal Closing Subset
In order to interpret this set of constraints, recall from [3] that the constraints in (2.3) are the same as the diagonal simplicity constraints acting on edges of γ and can be solved by demanding the edge representations to be simple. The remaining constraints (2.4) can be interpreted as specifying a recoupling scheme for the intertwiner ι at v: couple the representations on e_1 and e_2, then couple this representation to e_3, and so forth, see fig. 1. Applying the constraints in this recoupling basis shows that the representation on the virtual edge e_12 has to be simple, i.e.

Λ_12 = (λ_12, 0, ..., 0),   λ_12 = 0, 1, 2, ...   (2.12)
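To make the recoupling picture concrete, the sketch below lists the admissible simple labels λ_12 in the coupling of two simple representations. It assumes the standard decomposition of a tensor product of two symmetric traceless SO(D+1) representations, in which the simple components occur for λ_12 = |λ_1 − λ_2|, |λ_1 − λ_2| + 2, ..., λ_1 + λ_2; this coupling rule is our assumption, not a statement taken from the paper. Under the identification j = λ/2 used below, the step of 2 in λ becomes the familiar step of 1 in the SU(2) recoupling range.

```python
def simple_labels(l1, l2):
    """Simple labels lambda_12 in the coupling of two simple SO(D+1)
    representations, assuming the symmetric-traceless coupling rule."""
    return list(range(abs(l1 - l2), l1 + l2 + 1, 2))

def su2_range(j1, j2):
    """Usual SU(2) recoupling range j_12 = |j1 - j2|, ..., j1 + j2."""
    lo = abs(j1 - j2)
    return [lo + k for k in range(int(j1 + j2 - lo) + 1)]

l1, l2 = 3, 2
print(simple_labels(l1, l2))                              # [1, 3, 5]
print([round(2 * j) for j in su2_range(l1 / 2, l2 / 2)])  # [1, 3, 5] -> same
```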
Using the same procedure, all intermediate representations are required to be simple and the intertwiner is labeled by N − 3 "spins" λ_i ∈ ℕ_0. We call an intertwiner where all internal lines are labeled with simple representations simple. The natural map sending a spin network with simple edge representations Λ_e = (λ_e, 0, ..., 0) and simple intertwiners to the SU(2) spin network with spins j_e = λ_e/2 (and correspondingly relabeled intertwiners) is unitary (with respect to the scalar products induced by the respective Ashtekar-Lewandowski measures, see [3]). The motivation for the factor 1/2 comes from the fact that Λ = (1, 0) in D = 3 corresponds to the familiar j_+ = j_− = 1/2 and the area spacings of the SO(4) and the SU(2) based theories agree using this identification, cf. [3].
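This agreement of area spacings can be checked numerically: assuming the standard quadratic Casimir eigenvalue λ(λ + D − 1) for a simple SO(D+1) representation (λ, 0, ..., 0) (a normalisation that is our assumption here), half the SO(4) square-root spectrum coincides with the SU(2) spectrum √(j(j+1)) under j = λ/2. The sketch below illustrates this identity.

```python
import math

def so_casimir(lam, D=3):
    """Quadratic Casimir eigenvalue of the simple SO(D+1) representation
    (lam, 0, ..., 0), assuming the standard normalisation lam*(lam+D-1)."""
    return lam * (lam + D - 1)

def su2_area(j):
    """SU(2) area eigenvalue contribution sqrt(j*(j+1))."""
    return math.sqrt(j * (j + 1))

for lam in range(1, 6):
    j = lam / 2                    # identification j = lambda/2 (D = 3)
    lhs = math.sqrt(so_casimir(lam)) / 2
    print(f"lambda={lam}: sqrt(C2)/2 = {lhs:.4f}, "
          f"sqrt(j(j+1)) = {su2_area(j):.4f}")   # the two columns coincide
```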
Remarks
1. Since the choice of the maximal closing subset of the simplicity constraint operators is arbitrary, no recoupling basis is preferred a priori. On the SU(2) level, a change in the recoupling scheme amounts to a change of basis in the intertwiner space and therefore poses no problems. On the level of simple Spin(D + 1) representations however, a choice in the recoupling scheme affects the property "simple", since the non-commutativity of constraint operators belonging to different recoupling schemes means that kinematical states cannot have the property simple in both schemes.
2. There exist recoupling schemes which are not included in the above procedure, e.g., take N = 6 and the constraints R 12 R 12 = R 34 R 34 = R 56 R 56 = 0 and couple the three resulting simple representations. The theorem should however generalise to those additional recoupling schemes.
3. It is doubtful whether the action of the Hamiltonian constraint leaves the space of simple intertwiners in a given recoupling scheme invariant. To avoid this problem, one could use a projector onto the space of simple intertwiners in a given recoupling scheme to restrict the Hamiltonian constraint to this subspace, and average later over the different recoupling schemes if they turn out to yield different results. The possible drawbacks of such a procedure are, however, presently unclear to the authors and we leave this to further research. The construction of such a projector can be seen as a quantum analogue of the gauge unfixing process familiar from our companion paper [2]. A possible strategy to find a Hamiltonian constraint operator which leaves the solution space of a first class subset invariant is to construct a gauge unfixing projector which adds vertex simplicity constraints not contained in the first class subset to the Hamiltonian constraint, such that it commutes with the first class subset.
4. It would be interesting to check whether the dropped constraints are automatically solved in the weak operator topology (matrix elements with respect to solutions to the maximal subset).
5. The imposition of the constraints can be stated as the search for the joint kernel of a maximal set of commuting generalised area operators. Notice, however, that for D > 3 these generalised area operators, just as the simplicity constraints, are not gauge invariant, while in D = 3 they are.

6. In D = 3 we have the following special situation: we have two classically equivalent extensions of the ADM phase space at our disposal whose respective symplectic reductions reproduce the ADM phase space. One of them is the Ashtekar-Barbero-Immirzi connection formulation in terms of the gauge group SU(2) with an additional SU(2) Gauß constraint next to the spatial diffeomorphism and Hamiltonian constraints, and the other is our connection formulation in terms of SO(4) with additional SO(4) Gauß constraint and simplicity constraint. Both formulations are classically completely equivalent, and thus one should expect that also the quantum theories are equivalent in the sense that they have the same semiclassical limit. Let us ask for a stronger condition, namely that the joint kernel of the SO(4) Gauß and simplicity constraints of the SO(4) theory is unitarily equivalent to the kernel of the SU(2) Gauß constraint of the SU(2) theory. To investigate this first from the classical perspective, we split the SO(4) connection and its conjugate momentum (A_IJ, π^IJ) into self-dual and anti-self-dual parts (A^j_±, π^±_j), which then turn out to be conjugate pairs again. It is easy to see that the SO(4) Gauß constraint G_IJ splits into two SU(2) Gauß constraints G^±_j, one involving only self-dual variables and the other only anti-self-dual ones, which therefore mutually commute, as one would expect. The SO(4) Gauß constraint now asks for separate SU(2) gauge invariance for these two sectors. Thus a quantisation in the Ashtekar-Isham-Lewandowski representation would yield a kinematical Hilbert space with an orthonormal basis T^+_{s^+} ⊗ T^−_{s^−}, where s^± are usual SU(2) invariant spin networks. The simplicity constraint, which in D = 3 is Gauß invariant and can be imposed after solving the Gauß constraint, from the classical perspective asks that the double density inverse metrics q^{ab}_± = π^a_{j±} π^b_{k±} δ^{jk} are identical. This is classically equivalent to the statement that the corresponding area functions Ar_±(S) are identical for every surface S. The corresponding statement in the quantum theory is, however, again anomalous, because it is well known that area operators do not commute with each other. On the other hand, neglecting this complication for a moment, it is clear that the quantum constraint can only be satisfied on vectors of the form T^+_{s^+} ⊗ T^−_{s^−} for all S if s^+, s^− share the same graph and SU(2) representations on the edges, because if S cuts a single edge transversally then the area operator is diagonal with an eigenvalue ∝ √(j(j + 1)) and we can always arrange such an intersection situation by choosing suitable S. By a similar argument one can show that the intertwiners at the vertices have to be the same. But this is only a sufficient condition because, in a sense, there are too many quantum simplicity constraints due to the anomaly. However, the discussion suggests that the joint kernel of both the SO(4) Gauß and simplicity constraints is the closed linear span of vectors of the form T^+_s ⊗ T^−_s for the same spin network s = s^+ = s^−. The desired unitary map between the Hilbert spaces would therefore simply be T_s → T^+_s ⊗ T^−_s.
This can be justified abstractly as follows: from all possible area operators pick a maximal commuting subset Ar^±_α using the axiom of choice (i.e. pick a corresponding maximal set of surfaces S_α). We may construct an adapted orthonormal basis T^±_λ diagonalising all of them, such that Ar^±_α T^±_λ = λ_α T^±_λ. Now the constraint that the self-dual and anti-self-dual area operators agree selects vectors of the form T^+_λ ⊗ T^−_λ. Thus the question boils down to asking whether a maximal closing subset can be chosen such that the eigenvalues λ are just the spin networks s. We leave this to future research.

7. In D = 3 the aforementioned split into self-dual and anti-self-dual sectors is meaningless and we must stick with the dimension independent scheme outlined above. An astonishing feature of this scheme is that, after the proposed implementation of the simplicity constraints, the size of the kinematical Hilbert space is the same for all dimensions D ≥ 3! By "size", we mean that the spin networks are labelled by the same sets of quantum numbers on the graphs. Of course, before imposing the spatial diffeomorphism constraint these graphs are embedded into spatial slices of different dimension and thus provide different amounts of degrees of freedom. However, after implementation of the diffeomorphism constraint, most of the embedding information will be lost and the graphs can be treated almost as abstract combinatorial objects. Let us neglect here, for the sake of the argument, the possibility of certain remaining moduli, depending on the amount of diffeomorphism invariance that one imposes, which could a priori be different in different dimensions. In the case that the proposed quantisation would turn out to be correct, that is, allow for the correct semiclassical limit, this would mean that the dimensionality of space would be an emergent concept dictated by the choice of semiclassical states which provide the necessary embedding information. A possible caveat to this argument is the remaining Hamiltonian constraint and the algebra of Dirac observables, which critically depend on the dimension (for instance through the volume operator or dimension dependent coefficients, see [1,2]) and which could require deleting different amounts of degrees of freedom depending on the dimension.
This idea of dimension emergence is not new in the field of quantum gravity; however, it is interesting to possibly see here a concrete technical realisation which appears to be forced on us by demanding anomaly freedom of the simplicity constraint operators. Of course, these speculations should be taken with great care: The number of degrees of freedom of the classical theory certainly does strongly depend on the dimension, and therefore the speculation of dimension emergence could fail exactly when we try to construct the semiclassical sector with the solutions to the simplicity constraints advertised above. This would mean that our scheme is wrong. On the other hand, there are indications [41] that the semiclassical sector of the LQG Hilbert space already in D = 3 is entirely described in terms of 6-valent vertices. Therefore, the higher valent graphs, which in D = 3 could correspond to pure quantum degrees of freedom, could account for the semiclassical degrees of freedom of higher dimensional General Relativity. Since there is no upper limit to the valence of a graph, this would mean that already the D = 3 theory contains all higher dimensional theories! Obviously, this puzzle asks for thorough investigation in future research.
8. The discussion reveals that we should compare the amount of degrees of freedom that the classical and the quantum simplicity constraints remove. This is a difficult subject, because there is no well defined scheme that attributes quantum to classical degrees of freedom unless the Hilbert space takes the form of a tensor product where each factor corresponds to precisely one of the classical configuration degrees of freedom. The following "counting" therefore is highly heuristic and speculative: In the case D = 3, the classical simplicity constraints remove 6 degrees of freedom from the constraint surface per point on the spatial slice. In order to count the quantum degrees of freedom that are removed by the quantum simplicity constraint when acting on a spin network function, we make the following, admittedly naive analogy: We attribute to a point on the spatial slice an N-valent vertex v of the underlying graph γ which is attributed to the spatial slice. This point is equipped with degrees of freedom labelled by edge representations and the intertwiner. Every edge incident at v is shared by exactly one other vertex (or returns to v, which however does not change the result). Therefore, only half of the degrees of freedom of an edge can be attributed to one vertex. We take as edge degrees of freedom the $\lfloor (D+1)/2 \rfloor$ Casimir eigenvalues of SO(D+1) labelling the irreducible representation. The edge simplicity constraint removes all but one of these Casimir eigenvalues, thus per edge $\lfloor (D-1)/2 \rfloor$ edge degrees of freedom are removed. Further, a gauge invariant intertwiner is labelled by a recoupling scheme involving N − 3 irreducible representations not fixed by the irreducible representations carried by the edges adjacent to the vertex in question, which are fully attributed to the vertex (there are N − 2 virtual edges coming from coupling 1, 2, then 3 etc. until N, but the last one is fixed due to gauge invariance). We take as vertex degrees of freedom these N − 3 irreducible representations, each of which is labelled again by $\lfloor (D+1)/2 \rfloor$ Casimir eigenvalues. The vertex simplicity constraint again deletes all but one of these eigenvalues, thus it removes $(N-3)\,\lfloor (D-1)/2 \rfloor$ quantum degrees of freedom. We conclude that the quantum simplicity constraint removes $(N - 3 + N/2)\,\lfloor (D-1)/2 \rfloor$ quantum degrees of freedom per point (N-valent vertex), where N − 3 accounts for the vertex and N/2 for the N edges counted with half weight as argued above. This is to be compared with the classical simplicity constraint, which removes $D^2(D-1)/2 - D$ degrees of freedom per point. Requiring equality we see that vertices of a definite valence $N_D$ are preferred in D spatial dimensions, which for large D grows quadratically with D. Specifically for D = 3 we find $N_3 = 6$. Thus, our naive counting astonishingly yields the same preference for 6-valent graphs in D = 3 as has been obtained in [41] by completely different methods.
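As a quick arithmetic check of this counting (a worked example using only the two expressions just derived):

$\Big(N - 3 + \tfrac{N}{2}\Big)\Big\lfloor\tfrac{D-1}{2}\Big\rfloor \;=\; \tfrac{D^2(D-1)}{2} - D \quad\stackrel{D=3}{\Longrightarrow}\quad \tfrac{3N}{2} - 3 = 6 \;\Longrightarrow\; N_3 = 6,$

while for large D the right hand side grows like $D^3/2$, so that $N_D \sim \tfrac{2}{3}D^2$, i.e. quadratically in D, as stated.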
From the analysis of [41], it transpires that $N_3 = 6$ has an entirely geometric origin, and one would thus rather expect $N_D = 2D$ (hypercubulations); this may indicate that our counting is incorrect.
Regularisation and Anomaly Freedom
In [5], the connection formulation sketched at the beginning of the previous chapter was altered in that it contains linear simplicity constraints $\epsilon_{IJKLM}\, N^I \pi^{aJK} \approx 0$ and an independent normal $N^I$ as phase space variables. The normal Poisson-commutes with both the connection $A_{aIJ}$ and its momentum $\pi^{aIJ}$ and has its own canonical momentum $P_I$. The necessity for this independent normal did not stem from the anomaly encountered when looking at the quadratic quantum simplicity constraints, but from the observation that it was needed to extend the connection formulation to higher dimensional supergravities.
Since the linear simplicity constraint is a vector of density weight one, it is most naturally smeared over (D − 1)-dimensional surfaces. The regularisation of the objects $S_b(S)$, where $S_b$ denotes the linear simplicity constraint smeared over a (D − 1)-surface $S$ with an arbitrary semianalytic smearing function $b^{LM}$ of compact support, therefore is completely analogous to the case of flux vector fields. The corresponding quantum operator (3.2) has to annihilate physical states for all surfaces $S \subset \sigma$ and all semianalytic functions $b^{LM}$ of compact support, where $p_\gamma$ denotes the cylindrical projection and $\gamma_S$ is a graph adapted to the surface S. Since we can always choose surfaces which intersect a given graph only in one point, this implies that the constraint has to vanish when acting on single points of a given graph. In [3], it has been shown that the right invariant vector fields actually are in the linear span of the flux vector fields. Therefore, it is necessary and sufficient to demand the corresponding condition for all points of γ (which can be seen as beginning points of edges by suitably subdividing and inverting edges). Since $\hat N^I$ acts by multiplication and commutes with the right invariant vector fields, see [5] for details, the condition is equivalent to $\bar R^{IJ}_e \cdot f_\gamma = 0$, (3.4) i.e. the generators of rotations stabilising $N^I$ have to annihilate physical states. Before imposing these conditions on the quantum states, we have to consider the possibility of an anomaly. Classically, and before using the singular smearing of holonomies and fluxes, both the linear and the quadratic simplicity constraints are Poisson self-commuting. The quadratic constraint is known to be anomalous both in the Spin Foam [24] as well as in the canonical picture [25,3] and thus should not be imposed strongly. Also the linear simplicity constraint is anomalous when using a non-zero Immirzi parameter (at least if γ ≠ ±1 in the Euclidean theory; but γ = ±1 is ill-defined for SO(4), see e.g. [42]). Surprisingly, in the case at hand and without an Immirzi parameter in four dimensions, we do not find an anomaly. However, that is just because the generators of rotations stabilising $N^I$ form a closed subalgebra! A direct calculation, choosing (without loss of generality) $\gamma_{SS'}$ to be a graph adapted to both surfaces S, S′, yields a commutator whose last line is an operator in the linear span of the vector fields $\hat S_b(S)$. The classical constraint algebra is not reproduced exactly (the commutator does not vanish identically), but the algebra of quantum simplicity constraints closes: they are of first class. Therefore, strong imposition of the quantum constraints does make mathematical sense. Note that up to now we did not solve the Gauß constraint. The quantum constraint algebra of the simplicity and the Gauß constraints can easily be calculated and reproduces the classical algebra: It follows that the simplicity constraint operator does not preserve the Gauß invariant subspace (in other words, as in the classical theory, the Gauß constraint does not generate an ideal in the constraint algebra). This implies that the joint kernel of both Gauß and simplicity constraint must be a proper subspace of the Gauß invariant subspace. It is therefore most convenient to look for the joint kernel in the kinematical (non Gauß invariant) Hilbert space.
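The closure invoked above can be spelled out; the following is a minimal sketch with illustrative conventions (sign conventions and index placement are our choice, not quoted from the paper), treating $N^I$ as fixed since it acts by multiplication and commutes with the right invariant vector fields. With the projector $\bar\eta_{IJ} := \eta_{IJ} - N_I N_J$ (Euclidean signature, $N^I N_I = 1$) and $\bar R^{IJ} := \bar\eta^I{}_K \bar\eta^J{}_L R^{KL}$, the so(D+1) relations

$[R^{IJ}, R^{KL}] = \eta^{JK} R^{IL} - \eta^{IK} R^{JL} - \eta^{JL} R^{IK} + \eta^{IL} R^{JK}$

project to

$[\bar R^{IJ}, \bar R^{KL}] = \bar\eta^{JK} \bar R^{IL} - \bar\eta^{IK} \bar R^{JL} - \bar\eta^{JL} \bar R^{IK} + \bar\eta^{IL} \bar R^{JK},$

since every η-contraction between barred indices is automatically orthogonal to $N^I$. The barred generators therefore close among themselves and span the so(D) subalgebra stabilising $N^I$, which is exactly the closure used in the anomaly argument.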
Solution on the Vertices
Consider states of the form $T_{\gamma,\vec l,\vec i}(A,N)$, where $M_v$ contracts the indices corresponding to the endpoints of the edges $e_i$ and represents the rest of the graph γ. These states span the combined Hilbert space for the normal field and the connection $\mathcal H_T = \mathcal H_{\rm grav}\otimes\mathcal H_N$ (cf. [5]) and they will prove convenient for solving the simplicity constraints. Choose the surface S such that it intersects a given graph γ only in the vertex v ∈ γ. The action of $\hat S_b(S)$ on the vertex v of a spin network $T_{\gamma,\vec l,\vec i}(A,N)$ then implies with (3.4) that $\bar\tau^{IJ}_{\pi_{l_e}}\cdot i_v = 0$ for every edge e incident at v, where $\tau^{IJ}_{\pi_{l_e}}$ denote the generators of SO(D+1) in the representation $\pi_{l_e}$ of the edge e and the bar again denotes the restriction to rotational components (w.r.t. $N^I$). The above equation implies that the intertwiner $i_v$, seen as a vector transforming in the representation dual to $\pi_{l_e}$ of the edge e, has to be invariant under the SO(D)$_N$ subgroup which stabilises $N^I$. By definition [43], the only representations of SO(D+1) which contain nonzero vectors invariant under an SO(D) subgroup are the representations of class one (cf. also appendix A), and they exactly coincide with the simple representations used in Spin Foams [11]. It is easy to see that the duals (in the sense of group theory) of simple representations are simple representations. Therefore, all edges must be labelled by simple representations of SO(D+1). Moreover, SO(D) is a massive subgroup of SO(D+1) [43], so that the (unit) invariant vector $\xi_{l_e}(N)$ in the dual representation is unique, which implies that the allowed intertwiners $i_v(N)$ are given by the tensor product of the invariant vectors of all n edges and potentially an additional square integrable function $F_v(N)$. Going over to normalised gauge invariant spin network functions implies that $F_v(N) = 1$, and the resulting intertwiner space solving the simplicity and Gauß constraints becomes one-dimensional, spanned by $I_v(N) := \xi_{l_{e_1}}(N)\otimes\dots\otimes\xi_{l_{e_n}}(N)$. We will call these intertwiners, and vertices coloured by them, linear-simple. For an instructive example of the linear-simple intertwiners, consider the defining representation (which is simple since the highest weight vector is Λ = (1, 0, . . . , 0), cf. appendix A). The unit vector invariant under rotations (w.r.t. $N^I$) is given by $N^I$ itself, and for edges in the defining representation incoming at v we simply contract $h^{IJ}_e N_J$. If the constraint is acting on an interior point of an analytic edge, this point can be considered as a trivial two-valent vertex and the above result applies. Since this has to be true for all surfaces, a spin network function solving the constraint would need to have linear-simple intertwiners at every point of its graph γ, i.e. at infinitely many points, which is in conflict with the definition of cylindrical functions (cf. [44]). In the next section, we comment on a possibility of how to implement this idea.
Edge Constraints
As noted above, the imposition of the linear simplicity constraint operators acting on edges is problematic, because it does not, as one might have expected, single out simple representations, but demands that at every point where it acts there should be a linear-simple intertwiner. The problem with this type of solution is that all intertwiners, even trivial intertwiners at all interior points of edges, have to be linear-simple, which is however in conflict with the definition of a cylindrical function; in other words, there would be no holonomies left in a spin network because every point would be an N-dependent vertex.
It could be possible to resolve this issue using a rigging map construction [45,46,47] of the type $\eta(T_{\gamma,\vec l,\vec l_N,\vec i})\,[T_{\gamma',\vec l',\vec l'_N,\vec i'}] := \lim_{P_\gamma \ni p_\gamma \to \infty} C(p_\gamma, T_\gamma, T_{\gamma'})\ \langle T^{p_\gamma}_{\gamma,\vec l,\vec l_N,\vec i},\ T_{\gamma',\vec l',\vec l'_N,\vec i'} \rangle_{\rm kin}$, (3.9) where $P_\gamma$ is the set of finite point sets p of a graph γ, $p = \{\{x_i\}_{i=1}^N \,|\, x_i \in \gamma\ \forall\, i,\ N < \infty\}$. $P_\gamma$ is partially ordered by inclusion, $q \succeq p$ if p is a subset of q, so that the limit is meant in the sense of net convergence with respect to $P_\gamma$. By the prescription $T^{p_\gamma}_{\gamma,\vec l,\vec l_N,\vec i}$ we mean the projection of $T_{\gamma,\vec l,\vec l_N,\vec i}$ onto linear-simple intertwiners at every point in p, and $C(p_\gamma, T_\gamma, T_{\gamma'})$ is a numerical factor. Assuming this to work, consider any surface S intersecting γ′. We (heuristically) find that the rigging map solves the constraint, $\eta(T)[\hat S_b(S)\,T'] = 0$, since the intersection points of S with γ′ will eventually be in $p_\gamma$ and $\hat S_b(S)$ is self-adjoint. We were however not able to find such a rigging map with satisfactory properties. It is especially difficult to handle observables with respect to the linear simplicity constraint and to implement the requirement that the rigging map has to commute with observables. It therefore seems plausible to look for non-standard quantisation schemes for the linear simplicity constraint operators, at least when acting on edges. Comparison with the quadratic simplicity constraint suggests that also the linear constraint should enforce simple representations on the edges; see the following remarks as well as section 3.5 for ideas on how to reach this goal.
Remarks
The intertwiner space at each vertex is one-dimensional, and thus the strong solution of the unaltered linear simplicity constraint operator contrasts with the quantisation of the classically imposed simplicity constraint at first sight. A few remarks are appropriate: 1. One could argue that the intertwiner space at a vertex v is infinite-dimensional by taking into account holonomies along edges e′ originating at v and ending in a 1-valent vertex v′. Since e′ and v′ are assigned in a unique fashion to v if the valence of v is at least 2, we can consider the set {v, e′, v′} as a new "non-local" intertwiner. Since we can label e′ with an arbitrary simple representation, we get an infinite set of intertwiners which are orthogonal in the above scalar product. This interpretation however does not mimic the classical imposition of the simplicity constraints or the above imposition of the quadratic simplicity constraint operators.
2. The main difference between the formulations of the theory with quadratic and linear simplicity constraints respectively is the appearance of the additional normal field sector in the linear case. Thus one could expect to recover the quadratic simplicity constraint formulation by ad hoc averaging the solutions of the linear constraint over the normal field dependence with the probability measure $\mu_N$ defined in [5]. Indeed, if one does so, then one recovers the solutions to the quadratic simplicity constraints in terms of the Barrett-Crane intertwiners in D = 3, and higher dimensional analogs thereof, as has been shown long ago by Freidel, Krasnov, and Puzio [11]. A similar observation has been made in [48]. Such an average also deletes the solutions with "open ends" of the previous remark by an appeal to Schur's lemma. Since after such an average the N dependence of all solutions disappears, we can drop the $\mu_N$ integral in the kinematical inner product since $\mu_N$ is a probability measure. The resulting effective physical scalar product would then be the Ashtekar-Lewandowski scalar product of the theory between the solutions to the quadratic simplicity constraints. Such an averaging would also help with the solution of the edge constraints, since averaging a 2-valent linear-simple intertwiner over N yields a projector on simple representations.
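For the defining representation this averaging can be made completely explicit; the following short computation is our illustration, assuming $\mu_N$ is the normalised rotation invariant measure on the sphere $S^D$ of unit normals. By SO(D+1) invariance the average of the 2-valent linear-simple intertwiner $N^I N^J$ must be proportional to $\delta^{IJ}$, and tracing with $N^I N_I = 1$ fixes the constant:

$\int_{S^D} d\mu_N\, N^I N^J \;=\; \frac{\delta^{IJ}}{D+1}\,,$

which is, up to normalisation, the trivial intertwiner $\delta^{IJ}$ gluing the two defining (and in particular simple) representations, consistent with the projector statement above.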
3. It can be easily checked that the volume operator as defined in [3], and therefore also more general operators like the Hamiltonian constraint, do not leave the solution space of the linear (vertex) simplicity constraints invariant. A possible cure would be to introduce a projector $P_S$ on the solution space and redefine the volume operator as $P_S\,\hat V\,P_S$. Such procedures are however questionable on the general grounds that anomalies can always be removed by projectors.
4. If one accepts the usage of the projector $P_S$, calculations involving the volume operator simplify tremendously since the intertwiner space is one-dimensional. We will give a few examples which can be calculated by hand in a few lines, restricting ourselves to the defining representation of SO(D+1), where the SO(D)$_N$ invariant unit vector is given by $N^I$.
Having direct access to $N^I$, one can base the quantisation of the volume operator on the classical expression (3.12). In the case D + 1 odd, this choice is much easier than the expression quantised in [3]. In the case D + 1 even, the above choice is of the same complexity as the one in [3], but leads to a formula applicable in any dimension and therefore, for us, is favoured. Proceeding as in [3], we obtain the corresponding volume operator. Note that the operator $\hat q_{e_1,\dots,e_D}$ from which it is built consists of D right invariant vector fields. Since these are antisymmetric, $\hat q^T_{e_1,\dots,e_D} = (-1)^D\, \hat q_{e_1,\dots,e_D}$. In the case at hand, we have to use the projectors $P_S$ to project on the allowed one-dimensional intertwiner space; the operator $P_S\,\hat q\,P_S$ therefore has to vanish for the case D + 1 even (an antisymmetric matrix on a one-dimensional space is equal to 0). However, the volume operator depends on $\hat q^2$, and $P_S\,\hat q^2\,P_S$ actually is a non-zero operator in any dimension, though trivially diagonal. Therefore, also $\hat V$ is diagonal.
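Spelled out, the vanishing argument is a single line. In our notation, $I_v$ spans the one-dimensional intertwiner space, so $P_S\,\hat q\,P_S = c\,P_S$ with $c = \langle I_v, \hat q\, I_v\rangle$; assuming, as the transpose notation suggests, a basis in which $\hat q$ acts as a real matrix,

$c = \langle I_v, \hat q\, I_v\rangle = \langle \hat q^T I_v, I_v\rangle = (-1)^D\, c \quad\Longrightarrow\quad c = 0 \ \text{ for } D \text{ odd, i.e. } D+1 \text{ even},$

whereas no such argument applies to $P_S\,\hat q^2\,P_S$, since $(\hat q^2)^T = ((-1)^D \hat q)^2 = \hat q^2$ for either parity.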
The simplest non-trivial calculation involves a D-valent non-degenerate vertex v (i.e. no three tangents to edges at v lie in the same plane) where all edges are labelled by the defining representation of SO(D+1), and thus the unique intertwiner which we will denote by $N^{A_1}\cdots N^{A_D}$. We find (3.17), i.e. for those special vertices, the volume operator preserves the simple vertices. For vertices of higher valence and/or other representations, we need to use the projectors. Of special interest are the vertices of valence D + 1 (triangulation) and 2D, where every edge has exactly one partner which is its analytic continuation through v (cubulation). In these cases we again find explicit diagonal results. The dimensionality of the spatial slice now appears as a quantum number like the spins labelling the representations on the edges, and it could be interesting to consider a large dimension limit in the spirit of the large N limit in QCD.
5. When introducing an Immirzi parameter in D = 3 [2], i.e. using the linear constraint built from the γ-twisted momentum ${}^{(\gamma)}\pi^{aKL}$, the linear simplicity constraint operators become anomalous unless $\gamma = \pm\sqrt{\zeta}$, the (anti)self-dual case, which however results in non-invertibility of the prescription $\pi \mapsto {}^{(\gamma)}\pi$. Repeating the steps in section 3.1 for these anomalous constraints leads to conditions which no longer involve only the generators of rotations stabilising $N^I$. In order to figure out the "correct" quantisation, one can try, in analogy to the strategy for the quadratic simplicity constraints, to weaken the imposition of the constraints at the quantum level. The basic difference between the linear and the quadratic simplicity constraints is that the time normal $N^I$ is left arbitrary in the quadratic case and fixed in the linear case. In order to lose this dependence in the linear case, one could average over all $N^I$ at each point in σ, which however leads to the Barrett-Crane intertwiners as described above. In analogy to the quadratic constraints, we could instead choose a commuting subset of the constraints for each N-valent vertex, plus the edge constraints. As above, the choice of the subset specifies a recoupling scheme, and the imposition of the constraints leads to the contraction of the virtual edges and virtual intertwiners of the recoupling scheme with the SO(D)$_N$-invariant vectors $\xi_{l_{e_i}}(N)$ and their complex conjugates $\bar\xi_{l_{e_i}}(N)$, see fig. 2. Gauge invariance can still be used at each (virtual) vertex in this calculation in the form $\sum_i R_{e_i} = 0$, which is sufficient since only $\bar R_{e_i}$ appears in the linear simplicity constraints. If we now integrate over each pair of $\xi_{l_{e_i}}(N)$ "generated" by the elements of the proposed subset of the simplicity constraint operators separately, we obtain projectors on simple representations for each of the virtual edges in the recoupling scheme. The integration over $N^I$ for the edge constraints yields projectors on simple representations in the same manner. Finally, we obtain the simple intertwiners of the quadratic operators in addition to solutions where incoming edges are contracted with SO(D)$_N$-invariant vectors $\xi_{l_{e_i}}(N)$. A few remarks are appropriate:
6. Although this procedure yields a promising result, it contains several non-standard and ad hoc steps which have to be justified. One could argue that the "correct" quantisations of the linear and quadratic simplicity constraints should give the same quantum theory; however, as is well known, classically equivalent theories result in general in non-equivalent quantum theories, which nevertheless can have the same classical limit.
7. It is unclear how to proceed with "integrating out" $N^I$ in the general case. For the vacuum theory, integration over every point in σ gives the Barrett-Crane intertwiner for the edges contracted with SO(D)$_N$-invariant vectors. This type of integration would also get rid of the 1-valent vertices and thus allow for a natural unitary map to the quadratic solutions, as already mentioned above.
8. When introducing fermions, there is the possibility of non-trivial gauge-invariant functions of $N^I$ at the vertices, which immediately raises the question of how to integrate out this $N^I$-dependence. Besides including those $N^I$ in the above integration or integrating out the remaining $N^I$ separately, one could transfer this integration back into the scalar product. Since the authors are presently not aware of an obvious way to decide about these issues, we will leave them for further research.
Mixed Quantisation
Since the implementation of the quadratic simplicity constraints described above yields a more promising result than the implementation of the linear constraints, we can try to perform a mixed quantisation by noting that we can classically express the linear constraints for even D in the form $\tfrac14\,\epsilon_{IJKLM}\,\pi^{aIJ}\pi^{bKL} \approx 0$, $N^I - n^I(\pi) \approx 0$. (3.20) The phase space extension derived in [5] remains valid when interchanging the linear simplicity constraint for the above constraints. The reason for restricting D to be even is that we have an explicit expression for $n^I(\pi)$, see [1,2]. Since a quantisation of $n^I(\pi)$ will most likely not commute with the Hamiltonian constraint operator, we resort to a master constraint. Note that the expression $M_N$, the densitised square of $N^I - n^I(\pi)$, can be quantised as $\hat M_N$ when using a suitable factor ordering, where a quantisation of $\sqrt{q}^{\,3-2D}$ is described in [3]. The solution space is not empty since the intertwiner $s(e_1, \dots, e_D)$ is annihilated by $\hat M_N$, which can easily be checked when using the results for the volume operator acting on the solution space of the full set of linear simplicity constraint operators. In order to turn the expression into a well defined master constraint operator, we have to square it again and to adjust the density weight, leading to an operator which is by construction self-adjoint with non-negative spectrum. We remark that it was necessary to use the fourth power of the classical constraint for quantisation, because the second power, having the desired property that its solution space is not empty, does not qualify as a well defined master constraint operator in the ordering we have chosen. There exists however no a priori reason why one should not take into account master constraint operators constructed from higher powers of classical constraints [49]. Curiously, the quadratic simplicity constraint operators as given above do not annihilate the solution displayed. Clearly, the calculations will become much harder as soon as vertices with a valence higher than D are used, since the building blocks of the volume operator will not be diagonal on the intertwiner space. This type of quantisation is further discussed in section 4.3, where a possible application using EPRL intertwiners is outlined. In contrast to the earlier assumption of D being even in order to have an explicit expression for $n^I(\pi)$, we can also perform the mixed quantisation using $n^I n_J \approx \tfrac{1}{D-1}\,\pi^{aKI}\pi_{aKJ} - \zeta\,\eta^I{}_J$ for Euclidean internal signature ζ = 1 and the constraint $N_J\,(n^I n^J(\pi) - N^I N^J) \approx 0$. For the application proposed, we will only need that the corresponding master constraint can be regularised such that it vanishes when not acting on non-trivial vertices, which can be achieved as before.
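For orientation, the squaring and density-weight adjustment mentioned above follow the general pattern of the Master Constraint Programme; the display below is a schematic sketch of that general pattern (density weights indicated only symbolically), not the specific operator of this section:

$\mathbf{M} := \int_\sigma d^Dx\; \frac{C(x)^2}{\sqrt{\det q(x)}}\,,$

a single, spatially diffeomorphism invariant, non-negative quantity which vanishes if and only if $C(x) = 0$ for all $x \in \sigma$, so that a self-adjoint quantisation $\hat{\mathbf M}$ with 0 in its spectrum encodes the full constraint surface.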
Comparison to Existing Approaches
In this section, we are going to comment on the relation of existing results from the spin foam literature to the proposals in this paper. In short, the main conclusion will be that in the case of four spacetime dimensions, many results from the spin foam literature can be used also in the canonical framework. However, they fail to work in higher dimensions due to special properties of the four-dimensional rotation group which are heavily used in spin foams. We will not comment on results based on coherent state techniques [16,33,34,35] since we do not see a resemblance to our results, which do not make use of coherent states in any way. Nevertheless, similarities could be present, as the relation between the EPRL [24] and FK [16] models shows.
Continuum vs. Discrete Starting Point
The starting point for introducing the simplicity constraints in the spin foam models is the reformulation of general relativity as a BF theory subject to the simplicity constraints, and thus similar to the point of view taken in this series of papers. The crucial difference however is that while spin foam models start classically from discretised general relativity, the canonical approach discussed here starts from its continuum formulation. When looking at the simplicity constraints, this difference manifests itself in the choice of (D − 1)-surfaces over which the generalised vielbeins (i.e. the bivectors in spin foam models) have to be smeared. Starting from a discretisation of spacetime, the set of (D − 1)-surfaces is fixed by foliating the discretised spacetime. Restricting to a simplicial decomposition of a four-dimensional spacetime as an example, these would e.g. be the faces f of a tetrahedron t in the boundary of the discretisation. It follows that one can take the bivectors $B^{IJ}$ integrated over the individual faces of a tetrahedron, $B^{IJ}_f(t) := \int_f B^{IJ}$, as the basic variables, and the quadratic (off-)diagonal simplicity constraints read [24] $C_{ff'} := \epsilon_{IJKL}\, B^{IJ}_f(t)\, B^{KL}_{f'}(t) \approx 0$. (4.2) In the continuum formulation however, we have to consider all possible (D − 1)-surfaces, and thus also hypersurfaces containing the vertex v dual to the tetrahedron t. The resulting flux operators a priori contain a sum of right invariant vector fields $R^{IJ}_e$ acting on all the edges e connected to v. While this poses no problem for the diagonal simplicity constraints, which act on edges of the spin networks as shown in [3], the off-diagonal simplicity constraints arising when both surfaces contain v are not given by (4.2), but by sums over different $C_{ff'}$, see [3] for details. It can however be shown by suitable superpositions of simplicity constraints associated to different surfaces that (4.2) is actually implied also by the quadratic simplicity constraints arising from a proper regularisation in the canonical framework. This statement is non-trivial and had to be proved in [3]. Thus, we can also in the canonical theory consider the individual building blocks (2.2) as done in section 2 of this paper. Furthermore, the same is also true when using linear simplicity constraints, i.e. the properly regularised linear simplicity constraints in the canonical theory imply that all building blocks (3.3) vanish.
We also note that there is no analogue of the normalisation simplicity constraints [11] in the canonical treatment since the generalised vielbeins do not have timelike tensorial indices after being pulled back to the spatial hypersurfaces.
Projected Spin Networks
Projected spin networks were originally introduced in [50,51] to describe Lorentz covariant formulations of Loop Quantum Gravity, meaning that the internal gauge group is SO(1,3) (or SL(2,C)) instead of SU(2). The basic idea is that next to the connection, the time normal field, often called x or χ in the Spin Foam literature, becomes a multiplication operator since it Poisson-commutes classically with the connection. Since the physical degrees of freedom of Loop Quantum Gravity formulated in terms of the usual SU(2) connection and its conjugate momentum are orthogonal to the time normal field, one performs projections in the spin networks from the full gauge group SO(1,3) to a subgroup stabilising the time normal. Since the projector transforms covariantly under SO(1,3), a (gauge invariant) projected spin network is already defined by its evaluation for a specific choice of the time normals, and the resulting effective gauge invariance is only SU(2), which exemplifies the relation to the usual SU(2) formulation in the time gauge $x^I (= N^I) = (1,0,0,0)$. Despite its close relation to the techniques used in this paper and its merits for the four-dimensional treatment, there are several problems connected with using this approach in the canonical framework discussed in this series of papers, which we will explain now. While the extension of projected spin networks to different gauge groups has already been discussed in [51], there is a subtle problem associated with the part of the connection which is projected out by the projections, one that could not have been anticipated by looking at Loop Quantum Gravity in terms of the Ashtekar-Barbero variables. There, the physical information in the connection, the extrinsic curvature, is located in the rotational components of the connection. To see this, consider in four dimensions the 2-parameter family of connections discussed in [2], where γ corresponds to the Barbero-Immirzi parameter restricted to four dimensions and β is the new free parameter appearing in any dimension. $K_{aIJ}$ decomposes as [1,2] $K_{aIJ} = 2N_{[I}\bar K_{|a|J]} + \bar K^{\rm trace}_{aIJ} + \bar K^{\rm trace\,free}_{aIJ}$, (4.4) where $\bar K_{aIJ}$ means that $N^I \bar K_{aIJ} = N^J \bar K_{aIJ} = 0$ and the trace / traceless split is performed with respect to the hybrid vielbein. The extrinsic curvature which we need to recover from $A_{aIJ}$ is located in $\bar K_{aJ}$, whereas $\bar K^{\rm trace}_{aIJ}$ vanishes by the Gauß constraint and $\bar K^{\rm trace\,free}_{aIJ}$ is pure gauge with respect to the simplicity gauge transformations. Now setting β = 0 and $N^I = (1,0,0,0)$ in four dimensions, we recover the Ashtekar-Barbero connection and see that the physical information is located in the rotational components of $A_{aIJ}$. It thus makes sense to project onto this subspace in the projected spin network construction, i.e. we are not losing physical information. On the other hand, setting γ = 0 in four dimensions or going to higher dimensions, we see that a projection onto the subspace orthogonal to $N^I$ annihilates the physical components of the connection. This would not necessarily be an issue if one were to project the spin network only at the intertwiners, but it becomes one when trying to go over to fully projected spin networks as proposed in [50]. Then, since one would take a limit of inserting projectors at every point of the spin network, the physical information in the connection would be completely lost.
Next to this problem, there are other problems associated with taking an infinite refinement limit for projected spin networks, as discussed by Alexandrov [50] and Livine [51]: e.g., fully projected spin networks are not spin networks any more (since they only contain vertices and no edges) and, connected with this problem, the trivial bivalent vertex, the Kronecker delta, is not an allowed intertwiner. Similar problems have been encountered in section 3, i.e. while the vertex simplicity constraints could be solved by a construction very similar to projected spin networks, where one projects the incoming and outgoing edges at the intertwiner in the direction of the time normal $N^I$, imposing the linear simplicity constraint on the edges, one would have to insert "trivial" bivalent vertices of the form $N^I N^J$ at every point of the spin network, whereas one would need to insert the Kronecker delta $\delta^{IJ}$ to achieve cylindrical consistency while maintaining a spin network containing edges and not only vertices.
Thus, the main problem with using (fully) projected spin networks is connected to the fact that we do not know of an analogue of the Barbero-Immirzi parameter in higher dimensions which would allow us to put the extrinsic curvature also in the rotational components of the connection. In four dimensions on the other hand, this problem would be absent and one would be left with the issue of refining the projected spin networks, which is however also present in section 3 of this paper. Therefore, using projected spin networks in four dimensions with non-vanishing Barbero-Immirzi parameter is an option for the canonical framework developed in this series of papers and the known issues discussed above should be addressed in further research.
EPRL Model
The basic idea of the EPRL model is to implement the diagonal simplicity constraints as usual, but to replace the off-diagonal simplicity constraints by linear simplicity constraints which are implemented with a master constraint construction [24] or weakly [52]. Furthermore, the Barbero-Immirzi parameter is a necessary ingredient. We restrict here to the Euclidean model since its group theory is much closer to the connection formulation with compact gauge group SO(D+1). While the diagonal simplicity constraints give the well known relation between the self-dual and anti-self-dual spins $j^\pm$, the master constraint for the linear constraints selects [24], up to corrections, either $k = j^+ + j^-$ or $k = |j^+ - j^-|$, depending on the value of the Barbero-Immirzi parameter, where k is the quantum number associated to the Casimir operator of the SU(2) subgroup stabilising $N^I$. The EPRL intertwiner for SO(4) spin networks with arbitrary valence [29] is then constructed by first coupling the two SU(2) subgroups of the SO(4) holonomies in the representations $(j^+, j^-)$, calculated along incoming and outgoing edges at the intertwiner, to the k representation. Then, the k representations associated to each edge are coupled via an SU(2) intertwiner, and the complete construction is projected into the set of SO(4) intertwiners. An alternative derivation proposed by Ding and Rovelli [52] makes use of weakly implementing the linear simplicity constraints, i.e. restricting to a subspace $\mathcal H_{\rm ext}$ such that $\langle\phi|\,\hat C\,|\psi\rangle = 0\ \ \forall\ |\phi\rangle, |\psi\rangle \in \mathcal H_{\rm ext}$. (4.7) In this approach, one can also show that the volume operator restricted to $\mathcal H_{\rm ext}$ has the same spectrum as in the canonical theory, which is an important test to establish a relation between the canonical theory and the EPRL model.
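For orientation, a short consistency check; the parametrisation $j^\pm = \tfrac12|1\pm\gamma|\,k$ used below is the standard Euclidean EPRL one and is our assumption here, not a formula quoted from the text:

$0 < \gamma < 1:\ \ j^+ + j^- = \tfrac{k}{2}\big[(1+\gamma) + (1-\gamma)\big] = k, \qquad \gamma > 1:\ \ j^+ - j^- = \tfrac{k}{2}\big[(1+\gamma) - (\gamma-1)\big] = k,$

reproducing the two cases $k = j^+ + j^-$ and $k = |j^+ - j^-|$ selected by the master constraint.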
Closely related to what we already observed in the previous subsection on projected spin networks, the EPRL model makes heavy use of the fact that SO(4) splits into two SU(2) subgroups and that the Barbero-Immirzi parameter is available in four dimensions. Thus, we would have to restrict to four dimensions with non-vanishing γ if we wanted to use the EPRL solution to the simplicity constraints. One upside of this solution compared to our proposition for solving the quadratic constraints is that no choice problem occurs, i.e. if we map the quantum numbers of the EPRL intertwiners to SU(2) spin networks, a change of recoupling basis in the SU(2) spin networks results again in EPRL intertwiners solving the same simplicity constraints.
The problem of stability of the solution space $\mathcal H_{\rm ext}$ of the simplicity constraint under the action of the Hamiltonian constraint is however, to the best of our knowledge, not circumvented when using EPRL intertwiners.
Also, in order to use the EPRL solution in the canonical framework, one would have to discuss exactly what it means to use linear and quadratic simplicity constraints in the same formulation, i.e. whether one can freely interchange them and how continuity of the time normal field is guaranteed at the classical level if one changes from the quadratic constraints to linear constraints from one point on the spatial hypersurface to another. The mixed quantisation proposed in section 3.5 can be seen as an attempt to use both the time normal as an independent variable as well as quadratic simplicity constraints. In this case, the main difference is the presence of an additional constraint relating the time normal constructed from the generalised vielbeins to the independent time normal (which could be used in the linear simplicity constraints). In section 3.5, this additional constraint was regularised as a master constraint which acts only on vertices. Taking the point of view that one can freely change between using the quadratic constraints plus this additional constraint or the linear constraints, one could choose the linear constraints for vertices and the quadratic constraints for edges. Since we can use a factor ordering for the master constraint where a commutator between a holonomy and a volume operator is ordered to the right, the master constraint would vanish on edges and only the quadratic simplicity constraints would have to be implemented there, which are however not problematic. At vertices, we would be left with the linear constraints and could use the EPRL intertwiners. Thus, the EPRL solution seems to be a viable option in four dimensions. Whether one considers it natural or not to use both linear and quadratic constraints in the same formulation is a matter of personal taste. Nevertheless, it would be desirable to have only one kind of simplicity constraints.
A further comment is due on the starting point of spin foam models, which is a BF theory subject to simplicity constraints. It has been argued by Alexandrov [53] that the secondary constraints resulting from the canonical analysis, i.e. the $D^{ab}_M$-constraints on the torsion of $A_{aIJ}$ from our companion paper [2], should be taken into account also in spin foam models. In the present canonical formulation, these constraints were removed by the gauge unfixing procedure [2] and thus do not have to be taken into account here. The requirement for the validity of this step was to modify the Hamiltonian constraint by an additional term quadratic in the $D^{ab}_M$-constraints (the gauge unfixing term) which renders the simplicity constraints stable. While this ensures that we have to deal only with the non-commutativity of the (singularly smeared, or quantum) simplicity constraints in the present paper, the converse does not necessarily follow: Since the Hamiltonian constraint one obtains from the canonical analysis of BF theory subject to simplicity constraints, the classical starting point of spin foam models, is not the modified Hamiltonian constraint considered here, but the one which results in the secondary $D^{ab}_M$-constraints, it does not follow that these secondary constraints do not have to be taken into account in spin foam models. On the other hand, the present formulation hints that it might be possible to construct a spin foam model subject to simplicity constraints (and not $D^{ab}_M$-constraints) which coincides with the dynamics defined by the modified Hamiltonian constraint. In fact, it was recently shown that the transfer operator of spin foam models can be written as $T = \mu^\dagger W \mu$ (here for the EPRL model) [54], where µ projects onto the solution space of the simplicity constraint. Taking into account the philosophy of spin foam models to impose the simplicity constraint at every time step in order to ensure that the second class $D^{ab}_M$-constraints are satisfied, it is conceivable that the gauge unfixing term of the Hamiltonian constraint in [1,2] could emerge from these µ-projections when taking the continuum limit of the spin foam transfer operator. Thus, in the light of plausible arguments for both sides, only an explicit calculation will be able to decide this issue.
As a last remark, we point out that the non-commutativity of the linear simplicity constraints in the EPRL model results from using γ ≠ 0, and thus we are not faced with this problem in higher dimensions. Essentially, as discussed in more detail in remark 5 of section 3.4, while the rotations stabilising $N^I$ form an SO(D) subgroup of SO(D+1), the linear simplicity constraints in four dimensions with γ ≠ 0 and β ≠ 0 do not generate such a subgroup.
Discussion and Conclusions
Let us briefly discuss the results of this paper and judge the different approaches. First, the mechanism for avoiding the non-commutativity in the quadratic simplicity constraints discussed in section 2 is new to the best of our knowledge, and we do not see any indication that the solution space is identical to previous results (up to the fact that it has the same "size" as SU(2) spin networks). In the spin foam literature, the linear simplicity constraints are cornerstones of the new spin foam models and have been introduced since the quadratic simplicity constraints acting on vertices do not commute. While the methods for treating supergravity discussed in [5] necessarily need an independent time normal and thus suggest using linear simplicity constraints, there is no need for the linear constraints in pure gravity (except for the fact that they exclude the topological sector in four dimensions). Therefore, one should not dismiss the quadratic constraints, especially since the linear constraints come with their own problems in the canonical approach. The solution presented in section 2 is certainly not free of problems, most prominently the choice of the maximal commuting subset, but its close relation to the SU(2) based theory and the (natural) unitarity of the intertwiner map to SU(2) intertwiners make it look very promising.
The linear simplicity constraints come with their own set of problems, many of which were already known in the spin foam literature. While the results of section 2 would naturally lead us to consider the quadratic constraints, the connection formulation of higher dimensional supergravity developed in [5] makes it necessary to use an independent time normal as an additional phase space variable. This time normal would naturally point towards using linear simplicity constraints, although the mixed quantisation of section 3.5 could avoid this. Since there is no anomaly appearing when using the linear simplicity constraints (with γ = 0 in four dimensions), we should implement them strongly. However, this leads to a solution space very different from the SU(2) spin networks. At this point, it seems to be best to let oneself be guided by physical intuition, the results from the quadratic simplicity constraints, and the desired resemblance to SU(2) spin networks. Ad hoc methods for getting close to this goal have been discussed in section 3.4. We however stress that these methods are, as said, ad hoc, and they do not follow from standard quantisation procedures. The mixed quantisation discussed at the end of section 3 also does not seem completely satisfactory, especially since the master constraint ensuring the equality of the independent normal $N^I$ and the derived normal $n^I(\pi)$ is very complicated to solve. Nevertheless, in section 4.3, an application to EPRL intertwiners is outlined which could avoid this problem by using linear simplicity constraints for the vertices. The strength of the mixed quantisation is thus that it provides a mechanism to incorporate both the quadratic simplicity constraints as well as an independent time normal in the same canonical framework, which is what is done on the path integral side in the EPRL model.
A comparison to results from the spin foam literature, especially projected spin networks and the EPRL model, shows that many of the problems connected with using the linear simplicity constraints have already been known, partly in different guises. While using these known results in our framework seems to be a viable option in four dimensions, we are unaware of possible ways to extend them also to higher dimensions, since the main ingredients are a non-vanishing Barbero-Immirzi parameter as well as special properties of SO(4).
In conclusion, we reported on several new ideas of how to treat the simplicity constraints which appear in our connection formulation of general relativity in any dimension D ≥ 3 [1,2,3,4,5,6] and found that none of the presented ideas are entirely satisfactory at this point and further research on the open questions needs to be conducted. We hope that the discussion presented in this paper will be useful for an eventually consistent formulation. | 2013-02-12T19:09:54.000Z | 2011-05-18T00:00:00.000 | {
"year": 2011,
"sha1": "06970af17e4bde4ed13c794c0564fef50d0f3caf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1105.3708",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "06970af17e4bde4ed13c794c0564fef50d0f3caf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
74438359 | pes2o/s2orc | v3-fos-license | Antifungal susceptibility, risk factors and treatment outcomes of patients with candidemia at a university hospital in Saudi Arabia
Background: Candidemia is a major cause of morbidity and mortality in hospitalized patients. The spectrum of candidemia has changed, especially among critically ill patients, due to the emergence of non-albicans Candida (NAC) species. The increasing use of azole agents is suggested to be responsible for this epidemiological shift. NAC species are of special concern because of their high drug-resistance and increasing prevalence. The aim of this study was to detect antifungal-susceptibility patterns, treatment outcomes and associated risk factors in patients with candidemia who were admitted to King Abdulaziz University Hospital (KAUH), Jeddah, Kingdom of Saudi Arabia (KSA). Methods: This work represents a cross-sectional study done in the Clinical and Microbiology Laboratory at KAUH during the period from March 2012 till February 2014 on a total of 141 patients with candidemia. They were 31 (22%) Saudi and 110 (78%) non-Saudi patients, with age ranging from 1 day to 102 years. Blood cultures were collected for suspected cases of candidemia, followed by subculture on SDA. Identification was done by VITEK MS (MALDI-TOF MS), and confirmation of Candida isolates and antifungal-susceptibility testing were performed using the VITEK®2 system. Results: C. albicans isolates accounted for 39.7%, followed by C. tropicalis (21.3%), C. glabrata (18.4%) and C. parapsilosis (14.9%). Additionally, C. dubliniensis, C. krusei and C. famata represented 2.1%, 2.1% and 1.4%, respectively. All Candida isolates were 100% susceptible to amphotericin B. The best susceptibility to fluconazole was detected among C. dubliniensis and C. famata (100% each). All C. krusei isolates were resistant to fluconazole, while they were susceptible to other antifungal agents. All isolates were susceptible to flucytosine, except C. albicans and C. dubliniensis, which were 92.9% and 66.7% susceptible, respectively. All isolates were susceptible to itraconazole, except C. albicans and C. tropicalis, which were 94.6% and 96.7% susceptible, respectively. The percentage of deceased patients with candidemia was significantly higher than that of the survivors among the age group >64 years, particularly those who were mechanically ventilated and those under steroid therapy. The percentage of deceased patients was significantly higher than that of survivors among those admitted to adult ICUs (73.78% vs 26.23%).
Introduction
Candida is by far the most common fungal pathogen found in the bloodstream [1]. The incidence of candidemia has been increasing worldwide. The epidemiology of candidemia has changed in the past decades due to the use of immunosuppressive and cancer therapy, the AIDS epidemic, transplantation, and the increasing use of antibacterial drugs in hospital settings and even in the community [2]. Mucosal colonization by Candida species, indwelling vascular catheters such as central venous catheters, total parenteral nutrition, steroid therapy, abdominal surgery, and immunocompromised conditions are also risk factors associated with candidemia [3,4].
Candidemia is becoming a major cause of morbidity and mortality in hospitalized patients. The spectrum of candidemia has changed, especially among critically ill patients, due to the emergence of non-albicans Candida (NAC) species, including C. tropicalis, C. parapsilosis, C. krusei, and C. glabrata. The increasing use of azole agents is suggested to be responsible for this epidemiological shift [5,6].
NAC species are of special concern because of their high drug-resistance and increasing prevalence in invasive candidiasis [7]. Several studies of candidemia in hospitals of Saudi Arabia have been reported over the years, using different designs (prospective vs. retrospective) and different patient groups (ICU vs. non-ICU) [8][9][10][11][12].
The aim of this study was to detect antifungal-susceptibility patterns, treatment outcomes and associated risk factors in patients with candidemia who were admitted to King Abdulaziz University Hospital (KAUH), Jeddah, Kingdom of Saudi Arabia (KSA).
Study design and setting
A prospective cross-sectional study was done in the Clinical and Molecular Microbiology Laboratory at KAUH from March 2012 through February 2014.
Candidemia was defined, according to the study of Leon et al. [13], as at least one positive blood culture for Candida spp. drawn from a peripheral line in patients with clinical features of infection.
Ethical consideration
The study was approved by the Research Ethics Committee, Unit of Biomedical Ethics, Faculty of Medicine, King Abdulaziz University (Reference Number: 830-12).
Subjects
This study included 141 hospitalized patients who were admitted to different units of KAUH. Their ages ranged from 1 day to 102 years, with a mean ± SD of 37.85 ± 31.65 years; 63 (44.7%) were males and 78 (55.3%) were females. They were 31 (22%) Saudis and 110 (78%) non-Saudis. Candidemia was diagnosed by isolation of Candida spp. from the blood culture of each patient.
Inclusion criteria
All candidemia cases considered nosocomial infections were included in the study; for each patient, a full history, clinical examination, laboratory investigations, and an assessment of risk factors and underlying diseases were obtained. Candidemia cases referred from other hospitals and patients with a second attack of candidemia were excluded from this study.
Methods
Blood cultures were performed using an automated blood culture system (BacT/Alert, Organon Teknika, USA). A total of 10 ml of each patient's blood was inoculated into each bottle of the blood culture system, one for aerobic and another for anaerobic growth. For pediatric patients, up to 5 ml of blood was inoculated into a single pediatric bottle. Culture bottles were loaded into the BacT/Alert system and kept until flagged positive or for a maximum incubation time of 5 days. All bottles flagged positive were smeared for Gram stain. Culture bottles positive for yeast cells were subcultured on Sabouraud dextrose agar (SDA) (Saudi Prepared Media Laboratories, Riyadh, KSA), and the yeasts were identified with the VITEK MS on the same day if there was sufficient growth on SDA. The identification (ID) of Candida species was confirmed by using the VITEK®2 system for ID and antifungal-susceptibility testing (bioMérieux, Inc., France) [14].
Yeast identification and anti-fungal susceptibility testing by VITEK-2
Isolated pure colonies were selected from SDA, and a purity plate was done to ensure that a pure culture was used for testing. A total of 3 ml of 0.45% sterile saline was aseptically added to a sterile plastic test tube. A sufficient number of morphologically similar colonies was transferred by a sterile loop to the saline tube, and the density of the suspension was checked using the VITEK 2 DensiChek; it should be equivalent to a 2.0 McFarland standard. The suspension tube was then placed in the cassette, followed by an empty tube; the card for identification of yeast was placed in the suspension tube and the card for AST (AST-YS07) was placed in the empty tube. When the sample cycle was finished, the cassettes and the tubes were discarded. The minimal inhibitory concentration (MIC) was calculated and interpreted as susceptible, intermediate or resistant [15].
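To make the last step concrete, the sketch below shows one way such MIC-to-category interpretation can be expressed in code. This is purely illustrative: the breakpoint numbers are placeholders chosen for the example, not values quoted from this study, and real interpretation must use current CLSI/EUCAST breakpoints per species-drug pair.

```python
# Illustrative sketch only: mapping an MIC (ug/mL) to S/I/R categories.
# The breakpoints below are PLACEHOLDERS for illustration, not CLSI/EUCAST values.
BREAKPOINTS = {
    # (species, drug): (susceptible_max, resistant_min)
    ("C. albicans", "fluconazole"): (2.0, 8.0),
}

def interpret_mic(species: str, drug: str, mic: float) -> str:
    s_max, r_min = BREAKPOINTS[(species, drug)]
    if mic <= s_max:
        return "S"   # susceptible
    if mic >= r_min:
        return "R"   # resistant
    return "I"       # intermediate (or dose-dependent, depending on the drug)

print(interpret_mic("C. albicans", "fluconazole", 4.0))  # -> "I"
```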
Statistical analysis
Data were analyzed using the Statistical Package for Social Sciences (SPSS) software, version 18. The Chi-square test was used to test for associations and/or differences between categorical variables, with Yates's correction applied when appropriate. Odds ratios and 95% confidence intervals were calculated. Continuous variables were presented as mean, standard deviation and range. A P value less than 0.05 was considered statistically significant.
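As an illustration of this analysis (a minimal sketch, not the authors' SPSS workflow; the 2x2 counts below are invented for the example), the same quantities can be computed as follows:

```python
# Minimal sketch: chi-square test (with Yates correction), odds ratio and
# Woolf 95% CI for a 2x2 table, e.g. a risk factor vs. death/survival.
import math
from scipy.stats import chi2_contingency

# Hypothetical counts (rows: factor present/absent; cols: died/survived).
table = [[45, 17],
         [40, 39]]

chi2, p, dof, expected = chi2_contingency(table)  # Yates correction is on by default for 2x2

(a, b), (c, d) = table
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)      # standard error of ln(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"chi2={chi2:.2f}, p={p:.4f}, OR={odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```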
Results
All Candida isolates were 100% susceptible to amphotericin B. The best susceptibility to fluconazole was detected among C. dubliniensis and C. famata (100% each). C. krusei isolates were 100% resistant to fluconazole, but were 100% susceptible to the other antifungal agents. All isolates were susceptible to flucytosine, except C. albicans and C. dubliniensis, which were 92.9% and 66.7% susceptible, respectively. All isolates were susceptible to itraconazole, except C. albicans and C. tropicalis, which were 94.6% and 96.7% susceptible, respectively (Table 2).
The percentage of deceased patients was significantly higher among those who were mechanically ventilated (72.6% vs 27.4%) and among those who received steroid therapy (90.9% vs 9.1%). There was an increased risk of mortality in candidemic patients who were on dialysis, had a CVC, chronic renal impairment, heart disease or diabetes mellitus, and in those with a long hospital stay (>20 days) or who were infected/colonized with Candida (Table 3). The percentage of deceased patients was significantly higher in patients aged ≥64 years than in younger ages (76.9% vs 23.07%) (Table 4).
The percentage of deceased patients in adult ICUs was significantly higher than that of the survivors (73.78% vs 26.23%) (Table 4). There were no significant differences between deceased and surviving candidemia patients infected with different Candida species (Table 5).
Discussion
Candida bloodstream infection (CBSI) represents an important problem in critically ill hospitalized patients. CBSI is often a consequence of long-term use of broad-spectrum antibacterial therapy, complex surgical procedures and invasive medical devices. The epidemiology of candidemia is changing, with an increase in the proportion of NAC [16]. During the current study, the overall mortality associated with candidemia was 60.3% (Table 1). Other researchers reported results similar to ours, with overall crude mortality among their patients with invasive candidiasis or candidemia in the range of 40-60% [17][18][19]. A previous Saudi study reported that approximately two-fifths (40.6%) of the patients died within 30 days after isolation of Candida species from their sterile body sites [20]. Kumar et al. [21] in Pakistan reported a lower mortality rate (23.4%) in their study, and Playford et al. [22] reported a higher mortality rate associated with candidemia due to NAC (80%) among non-neutropenic critically ill patients admitted to the ICU.
The current study showed that C. albicans was the predominant isolate in patients with candidemia (39.7%), while all other Candida species (NAC) were responsible for 60.3% of the cases, and the most common NAC species was C. tropicalis (21.3% of all isolates) (Table 2). However, an earlier study done by Akbar and Tahawi [9] at the same hospital (KAUH) reported that C. albicans was the most frequently isolated species (71%), followed by C. tropicalis (13%) and C. parapsilosis (13%). Both studies showed that the most common species was C. albicans, followed by C. tropicalis, whereas the study of Bukharie et al. [8] in Saudi Arabia found that C. albicans caused 19% of candidemia cases.
Al-Thaqafi et al. [23] demonstrated that the total number of candidemia cases at King Abdulaziz Medical City (KAMC) in Jeddah during an 8-year period (2002-2009) was relatively higher than previously reported from other regions of Saudi Arabia.
Eksi et al. [24] in Turkey found that 47.7% of their isolates from candidemia cases were C. albicans, followed by C. parapsilosis (36.9%), with other Candida species representing 15.4%. Chi et al. [7] in Taiwan reported that C. albicans represented 43.5%, while non-albicans species, including C. glabrata, C. tropicalis, C. parapsilosis, C. krusei and C. haemulonii, were responsible for 56.5% of candidemia cases [7]. Montagna et al. [13] in Italy reported that 59.8% of their studied candidemia cases were caused by NAC, with C. parapsilosis the most common species. Also, the study of De Luca et al. [25] in Italy found that C. albicans represented 48% of isolates in candidemia cases, while all other NAC represented 52%, with C. glabrata the most common species. The highest prevalence of NAC was found by Kumar et al. [21] in Pakistan, where NAC species were isolated in 90.9% of candidemic patients, including C. parapsilosis (36.4%), C. lusitaniae (29.9%), C. tropicalis (20.8%) and C. glabrata (3.9%); only 7 patients (9.1%) had C. albicans. These findings show that C. albicans is less common in certain countries than in others and that there is an etiological shift towards higher isolation of NAC species in most candidemic patients, as also observed in our study. Antifungal susceptibility is highly important for the management of patients with candidemia. The results of this study showed that all isolated Candida species were susceptible to amphotericin B, in agreement with the results of many other studies [24][25]. However, the increased usage of antifungal agents may contribute to an increased occurrence of resistance in some Candida species more than in others [26].
In general, our antifungal susceptibility results were comparable to those reported by Pfaller et al. [27], who observed that fluconazole resistance was extremely rare (1%) among blood isolates of C. albicans, C. tropicalis and C. parapsilosis, while C. lusitaniae and C. glabrata exhibited 2% and 8% resistance to fluconazole, respectively.
Al-Thaqafi et al. [23] in Saudi Arabia found that fluconazole susceptibility was 38.5% for C. albicans and 52.5% for other Candida species, while an Egyptian study by Mohtady et al. [28] reported fluconazole-resistant C. dubliniensis and C. albicans at 55.6% and 49%, respectively. Omrani et al. [20] found that more than 90% of C. albicans, C. parapsilosis and C. tropicalis isolates in their study were susceptible to fluconazole and caspofungin. Moreover, the vast majority of their Candida isolates were susceptible to voriconazole and amphotericin B, while 33.3% of C. krusei isolates were resistant to caspofungin. These data were comparable to our results, especially as all Candida isolates in this study were susceptible to voriconazole and caspofungin.
In agreement with our overall results, De Luca et al. [25] found that all Candida species were susceptible to amphotericin B, whereas the susceptibility of C. albicans and C. parapsilosis to fluconazole was 100%, with decreased susceptibility of C. glabrata (76.5%). In addition, Ajenjo et al. [29] indicated that 88.8% of their Candida spp. isolates were fluconazole-susceptible.
The increasing incidence of Candida infections contributes to greater usage of antifungals and further development of resistance in Candida species [26]. Therefore, early and adequate empirical antifungal treatment plus early removal of central catheters are considered the main factors for reducing antifungal drug use, morbidity and mortality. It is necessary to implement guidelines for empirical antifungal treatment in patients at high risk of developing candidemia [30].
The role of intravascular catheters in causing candidemia has been documented, and removal of vascular catheters has been advocated as an adjunctive strategy for treating patients with catheter-related candidemia [31]. This study showed that most patients who developed candidemia had central venous catheters (CVCs) (73%) or had received antibacterials (67.4%) (Table 3). A previous study done at the same hospital by Akbar and Tahawi [9] found that candidemia was associated with CVCs (77% of patients) and broad-spectrum antibacterial therapy (87%). The study of Chander et al. [6] reported results similar to ours regarding the risk factors associated with candidemia.
Other risk factors for candidemia observed in this study included age <1 year or >64 years, urinary catheterization, mechanical ventilation, dialysis, steroid therapy, and prolonged hospital stay (>20 days), as shown in Table 3. The study of Montagna et al. [16] reported similar risk factors in their patients, especially the use of Hickman catheters and the length of stay in the ICU.
The present study shows that the incidence of C. albicans and C. glabrata was higher among patients with solid malignancies than among those with hematological malignancies, while the incidence of other non-albicans Candida species was higher among patients with hematological than with solid malignancies. Al-Thaqafi et al. [23], in their 8-year study at King Abdulaziz Medical City, Jeddah, reported that malignancy was significantly associated with the development of non-albicans Candida species.
The study of Kontoyiannis et al. [32] concluded that immunocompromised patients, including those affected by solid tumors or hematological malignancies, are at high risk of developing Candida infection. Additionally, the widespread use of fluconazole prophylaxis in hematological and stem cell transplant settings might be responsible for a decreased incidence of invasive Candida infections in these populations.
This study found that mortality was higher than survival among patients with candidemia due to C. albicans (64.3% vs 35.7%) or NAC (57.6% vs 42.4%), with no significant difference between the two groups (Table 5). However, a study in Greece found the overall mortality to be significantly higher in patients with bloodstream infections due to NAC species than due to C. albicans (90% vs 52.8%) [33]. Kelvay et al. [34] recorded that mortality associated with C. albicans and C. glabrata candidemia was 44% and 41%, respectively. Other studies reported higher rates of mortality in association with NAC species, especially C. krusei, C. glabrata and C. tropicalis [35][36][37][38].
This study concludes that there is an epidemiological shift towards higher isolation of NAC species in candidemic patients, and that all Candida isolates, including NAC, were susceptible to amphotericin B. Additionally, increased mortality was observed in patients older than 64 years, in those receiving steroid therapy or mechanical ventilation, and in those admitted to the adult ICU.
"year": 2015,
"sha1": "49007201b9a8878d2003ec9eb351719112c5052e",
"oa_license": "CCBY",
"oa_url": "http://imed.pub/ojs/index.php/IAJAA/article/download/1410/1082",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "22da19d163d52a54140b5bd83563f541183c76d3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Types of Social-Media and Academic Performance of Students in Primary School Teachers' Training Colleges in Vihiga County, Kenya
The study examined the influence of types of social media on the academic performance of students in primary school teachers' training colleges in Vihiga County, Kenya. The Technological Determinism Theory by Marshall McLuhan (1964) guided the study. A correlation survey research design was utilized with the aid of a mixed-methods approach. The study involved 6 Teachers' Training Colleges. The target population of 1584 consisted of 6 principals, 1478 students and 100 tutors. Purposive sampling was used to select the 6 colleges and the 6 principals; simple random sampling was used to sample 306 students and 80 tutors, giving a total sample size of 392. Questionnaires and an interview guide were used to gather data. Piloting was done in 2 teachers' training colleges in Kakamega County to test validity and reliability. The validity of the tools was also assessed by experts at Kisii University. Cronbach's alpha coefficient was used to test the reliability of the tools; the questionnaires issued to tutors and students gave acceptable values of 0.78 and 0.80, respectively. Frequencies and percentages were used to analyze quantitative data descriptively. The data were also analyzed inferentially using regression, ANOVA and the Pearson Product Moment Correlation to test for the existence of a correlation. The findings showed that a statistically significant positive correlation existed between the variables in the first null hypothesis, since the p-value obtained (0.000) was less than 0.05. The rejection of this hypothesis led to the conclusion that types of social media affect students' academic performance. The study recommended that students use the various types of social media positively to bolster their academic performance.
Introduction
A key focus of the education sector is the academic achievement of students. This is best attained through teachers who have been professionally trained to mould learners. Teachers' Training Colleges (TTCs) are among the institutions mandated to train teachers. These colleges facilitate learning by instilling in students the professional skills they use to teach learners since, according to DeMonte (2013), the education standards in any nation depend on the quality of training that teachers receive from these colleges. Hilburn and Ruth (2003) noted that the ability of learners to acquire key knowledge, attitudes and behaviours that they can use in society depends on the capacity of teachers to mould them and impart the necessary 21st-century skills through education. That is why Etkina (2011) noted that teachers must be equipped with the right attitudes and competencies to help them produce skilled and independent-minded people. One of these skills is socialization.
Social media has undergone many changes since its inception. According to Carly and Anna (2016), email was the first social medium to be used. The social media used in the late 1990s were chatrooms, instant messaging and instant communication platforms such as blog communities. They were followed by Facebook, Twitter, MySpace, Instagram, LinkedIn and Snapchat, which were developed in the early 2000s (Carly & Anna, 2016). Data reported by Mediabistro (2014) indicated that the top ten social media used for communication were Facebook, WhatsApp, Google+, LinkedIn, Twitter, Tencent QQ, Tencent Weibo, Qzone, WeChat and Tumblr. As such, the types of social media used by students in TTCs were an area of interest in the present research.
Social media is used by various people and institutions in different ways. For example, it is used for administrative and student support services at Anadolu University in Turkey (Bozkurt, Karadeniz & Kocdar, 2017). This university used it administratively to communicate with students, get feedback from them, run university marketing campaigns and make institutional announcements. It was also used by students to communicate among themselves, keep in contact with the university, and connect with friends or family. Students said Facebook was their favourite site, followed by Twitter, YouTube and Instagram, in that order (Bozkurt et al., 2017). This study concurred with the argument by Rap and Blonder (2017), who reported that social media are used for administrative purposes, and especially to hasten communication, in some learning institutions. Acheaw and Larson (2015) noted that the extent of social media use was high among university learners in Jordan. These learners use social media to communicate with other members of society, such as friends, siblings and parents. The study by Hameed, Maqbool, Aslam, Hassan and Anwar (2013) found a significant positive link between types of social media and the behaviour of university learners in Pakistan, who mostly use Facebook and Twitter. The study by Iffat (2016) revealed that Facebook was an integral part of the lives of women in Pakistan, since they access it many times daily to communicate, get information and interact; this was because their society does not allow females to mingle with males in gatherings. This agreed with Manasijević, Živković, Arsić and Milošević (2016), who argued that learners utilize Facebook in their learning process to communicate and interact with others. Similarly, the study by Aslam and Nazim (2016) revealed that most people use Facebook daily, followed by Twitter, Google+, MySpace, Flickr and Bebo, to communicate with other people. Archana and Jyotsna (2015) observed that learners in India are heavy users of social media such as Facebook, Twitter, Google+, WhatsApp, MySpace and gaming sites. Shukor, Musa and Shah (2017) reported that WhatsApp was the most popular social medium in Malaysia.
A study in Nigeria by Omoye (2014) explored how social media is used in advertising. It observed that Facebook, Twitter, blogs and YouTube were the platforms most used in the Nigerian advertising industry to communicate via the Internet. It also noted that online sites reduce the problem of distance and time by helping people who are far apart to advertise products, communicate and give instant feedback via social media. This was noted further by Moshi, Ndeke, Asatsa and Ngozi (2018), who showed that online sites have not gone without major consequences for the learning behaviour of secondary school students in Tanzania, since the irresistible attraction of social media has encroached on students' habits, making them less productive. It causes them to waste their study time downloading materials and chatting with friends and family members on Facebook and WhatsApp, as teachers and parents cannot control their use of the Internet and smartphones. They also use Twitter, Skype, Instagram and LinkedIn, but not very frequently.
A study by Mutua (2011) reported that over 35% of youths aged 7 to 24 years in the three main East African nations had access to the Internet, with Kenya leading at 49%, followed by Tanzania at 30% and Uganda at 26%. The study by Koross and Kosgei (2016) revealed that social media negatively affect the communication and academics of youths in Kenyan public universities because they use it more than television, newspapers, radio and face-to-face interaction. These Kenyan public university students use Facebook, WhatsApp, Instagram and YouTube. This concurred with the Internet World Statistics report for the year 2020 on Internet use in Africa, which showed that Internet use in Kenya was 85.2% (Internet World Stats, 2021). The 2019 report by the Social Media Lab Africa (SIMElab Africa, 2019) on social site use in Kenya showed that WhatsApp (88.6%) was most popular, followed by Facebook (88.5%), YouTube (51.2%) and Google+ (41.3%), in that order. LinkedIn (9.3%) and Snapchat (9.0%) are rarely used by Kenyans.
This literature reveals that much has been done in relation to the types of social media used by various people to communicate globally. However, there is a gap in the literature in relation to the same issue with reference to TTCs in Vihiga County, Kenya. The use of social media has been rising exponentially globally, Kenya included, and this rise is common among teenagers and youth, most of whom are students at different levels in various learning institutions. Records also show that students' academic performance in the Primary Teacher Examination (PTE) was low in the five years from 2015 to 2019. To this end, this study was justified by the scarcity of previous evidence on the subject of social media and the academics of students in TTCs. Hence, the current study addressed this literature gap via the study objective, examining the relationship between the types of social media used by students in TTCs to communicate and their effect on academic performance in Vihiga County. The study intended to reveal whether this low academic performance was partly attributable to the irresistible use of the various types of social media, with the aim of identifying whether social media encroaches on the study habits and learning time of learners, as most of them spend much of their study time on social media socializing and communicating.
The key objective which guided this study was to examine the relationship between the types of social media used for communication and the level of academic performance of students in TTCs. The null hypothesis (HO1), tested at the 0.05 significance level, stated that there was no significant relationship between the type of social media used for communication by students and the level of their academic performance in Teachers' Training Colleges. It was worthwhile to explore the effect of using various types of social media on the academic performance of students, as necessitated by the rapid exponential rise in the usage of social media globally, and in Kenya in particular. This would in turn guide the administrations of the various colleges in setting appropriate strategies on the use of social media in their respective colleges.
The study was anchored on the Technological Determinism Theory by Marshall McLuhan (1964). It states that technology, and specifically media, shapes the way people think, feel and act, and how societies operate and organize themselves.
McLuhan linked technologies to their effects on society. The theory proposes that technology is key to society, since its use affects the direction and pace of social change; social changes are caused by technological revolutions. Social media was the technology considered in this study, while the effect of technology on society was its influence on the academic performance of students in TTCs in the county, since technology varies the way users communicate and relate globally. For instance, social media has penetrated all aspects of human life, including students' academics. Rapidly evolving social media has led students to use various online sites, such as Facebook, WhatsApp and Twitter, to enhance their academic performance.
Methodology
A correlation survey research design was used, with the help of a mixed-methods approach, to assess the correlation between types of social media and academic performance (Creswell & Plano Clark, 2010). The target population was 1584, comprising 1478 students, 6 principals and 100 tutors in six TTCs. The sample size of 6 principals, 80 tutors and 306 students was obtained using the table developed by Krejcie and Morgan (1970). It was adequate, since a minimum sample of 100 suffices for a study (Kothari, 2014). Both probability and non-probability sampling techniques were used to sample the informants. The probability sampling procedures used were stratified and simple random sampling, owing to the heterogeneous nature of the target population and to give each member an equal chance of being represented in the study; the non-probability technique used was purposive sampling (Kothari, 2014). The samples of students and tutors were obtained using simple random sampling, while the colleges and principals were sampled purposively. A questionnaire and an interview guide were used to gather data for a better understanding of the study (Neubauer, 2019). Questionnaires were administered to students and tutors, while the principals were interviewed. The tools were piloted in two TTCs in Kakamega County to detect their limitations (Kombo & Tromp, 2011). The piloting sample size was 10% of the sample size used in the study (Ondiek, 2008): one principal, 9 tutors and 34 students, giving 44 respondents. Experts at Kisii University scrutinized the tools to establish validity and reliability. The reliability of the questionnaires was assessed using Cronbach's alpha (Ganti, 2020); the α-value obtained, 0.79, meant that the tool was reliable, since the reliability of a tool is adequate if the alpha value is over 0.7 (Plano and Ivancova, 2015).
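As an illustration of the reliability check described above, the following is a minimal sketch (not the authors' code) of how Cronbach's alpha can be computed from questionnaire responses; the array name and shape are assumptions made for illustration only.

```python
# Minimal sketch of the Cronbach's alpha computation described above.
# `items` is assumed to be a (respondents x questionnaire-items) array of
# numeric Likert responses; this is an illustration, not the study's code.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# A value above 0.7 is conventionally taken as adequate reliability, consistent
# with the 0.78 (tutors), 0.80 (students) and 0.79 values reported in the study.
```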
The data collected were analyzed quantitatively using SPSS Version 20. Descriptively, the data were analyzed using frequencies and percentages and presented in tabular form. Inferential statistics were used to test the null hypothesis and draw conclusions. The hypothesis was tested using regression analysis, the Pearson Product Moment Correlation and analysis of variance (ANOVA). The Pearson Product Moment Correlation indicated whether a correlation existed between the variables in the hypothesis, while ANOVA and regression were used to determine the effect of types of social media on academic performance. The Adjusted R Square value qualified the regression analysis, since it tells how well the data points fit the regression line. A multiple regression equation was used to test the hypothesis, and unstandardized beta (β) coefficients were obtained from the model. An unstandardized beta coefficient shows how much the dependent variable varies for a one-unit rise in an independent variable when all other independent variables are held constant; that is, it gives the expected change in academic performance for a one-unit rise in a social media variable. The significance level for testing the null hypothesis was 0.05, as this is the most preferred level in social science studies (Gunby & Schutz, 2016). The null hypothesis was rejected if the p-value was less than 0.05, leading to acceptance of the alternative hypothesis and implying that a significant correlation existed; if the p-value was over 0.05, the null hypothesis was accepted, and it was concluded that a significant correlation did not exist (Molina & Cameroon, 2015).
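The testing procedure described above can be sketched as follows. This is a hedged illustration in Python (the study itself used SPSS), and the DataFrame and all column names are hypothetical stand-ins, not the study's variables.

```python
# Illustrative sketch of the inferential procedure described above (the study
# itself used SPSS). DataFrame `df` and its column names are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

ALPHA = 0.05  # significance level for rejecting the null hypothesis

predictors = ["share_videos", "chat_text", "online_lectures", "ask_tutors",
              "admin_support", "post_images", "discuss_assignments"]

def test_hypothesis(df: pd.DataFrame) -> None:
    # Pearson Product Moment Correlation: does a correlation exist?
    r, p = stats.pearsonr(df[predictors].mean(axis=1), df["performance"])
    decision = "reject H0" if p < ALPHA else "fail to reject H0"
    print(f"Pearson r = {r:.3f}, p = {p:.3f} -> {decision}")

    # Multiple regression: unstandardized betas give the expected change in
    # performance for a one-unit rise in a predictor, others held constant.
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["performance"], X).fit()
    print(model.params)  # unstandardized beta coefficients
    print(f"Adjusted R^2 = {model.rsquared_adj:.3f}")
    # The regression ANOVA: overall F-test of the fitted model.
    print(f"F({int(model.df_model)}, {int(model.df_resid)}) = "
          f"{model.fvalue:.3f}, p = {model.f_pvalue:.4f}")
```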
Results and Discussions

Frequency Analysis of Types of Social-Media and Academic Performance
The study required the participants to indicate the types of social sites which they thought students use frequently. Their views are described in the analysis shown in Table 3.1. WhatsApp was the favourite and most popular social site among students; Facebook was second at 28%, followed by YouTube at 6.7% and Twitter at 4%, with Instagram the least used at 1.3%. Bozkurt et al. (2017) argued that Facebook was the most popular social medium among distance education students in Turkey, followed by YouTube and Instagram. The study by Aslam and Nazim (2016) noted that most library information professionals in India access Facebook, Twitter, Bebo, Google+, Flickr and MySpace daily to talk with other people. Also, Hameed, Maqbool, Aslam, Hassan and Anwar (2013) noted that social media is often used by university learners in Pakistan to share with their colleagues. The study by Shukor, Musa and Shah (2017) showed that Facebook was a popular social medium in Malaysia. This research also agreed with the findings of the survey by Balogun et al. (2017), which showed that the main social media platforms which most undergraduate university students in Nigeria used frequently were Facebook and WhatsApp. The same was said by Moshi, Ndeke, Asatsa and Ngozi (2018), who found that Facebook, WhatsApp, LinkedIn, Skype and Academia were the social media used by Tanzanian secondary school learners. The results of the present study were consistent with the 2019 report by the Social Media Lab Africa, which noted that WhatsApp was the most popular social medium in Kenya, followed by Facebook, YouTube, Google+, LinkedIn and Snapchat, in that order (SIMElab Africa, 2019). This is because WhatsApp is one of the biggest platforms for sharing real-time information in the world, being an interactive, Internet-driven site that can promote academics when used well.
Likert Scale Analysis on Types of Social-Media and Academic Performance
A five-point Likert scale was used in the questionnaires to determine whether students use social media to support their studies positively. In the scale, SA meant strongly agree; A meant agree; N meant not sure; D meant disagree; and SD meant strongly disagree. The degrees of agreement or disagreement of students are shown in Table 3.2. It is evident from this table that the majority of students strongly agreed or agreed that they use social media to do assignments and discuss with their colleagues; to ask tutors questions; to attend classes online; to seek support from the college administration; to text messages; to post images; and to share videos. It was therefore generally concluded that most students use social media to promote their studies positively, as supported by the average mean score of 1.45 and an average standard deviation of 0.559. This response agreed with the findings of Bozkurt et al. (2017), who reported that sharing learning content via social media raises the interest of learners in the courses they undertake by 49.5%.
Inferential Statistics on Types of Social-Media and Academic Performance
Inferential statistics were used to test the null hypothesis at a significance level of α = 0.05. The hypothesis would be rejected if the p-value was less than 0.05; the alternative hypothesis would then be accepted, implying that a significant correlation existed, and vice versa. The statistical tools used to make the inference were the Pearson Product Moment Correlation, regression analysis and analysis of variance (ANOVA). They are discussed below.
Correlation between Types of Social-Media and Academic Performance
The study sought to establish whether there was a correlation between the two variables in the objective. A correlation analysis was done using the Pearson Product Moment Correlation to aid in making this inference. The results of this correlation analysis from the SPSS output are shown in Table 3.3. The table shows a positive, statistically significant, moderate correlation between the social sites visited frequently by students and their performance in the end-of-semester examination (r = 0.774, n = 302, p = 0.000, two-tailed) at the 0.05 level of significance. This result means that the more students use the various types of social media sites, the more their academic performance improves. Based on these findings, it was evident that the types of social media used for communication by students correlate positively with the level of their academic performance. The null hypothesis (HO1) was therefore rejected, because the p-value obtained (0.000) was less than 0.05, and the alternative hypothesis was accepted, implying that a significant correlation existed. Consistent with the current results, Bozkurt et al. (2017) reported a positive correlation between the types of social media used by students and their academic performance. A linear regression analysis at a confidence level of 95% was then done to determine the effect of the types of social media used by students on the level of their academic performance, in order to establish the magnitude of the significant positive correlation between the independent and dependent variables. The results are shown in Table 3.4. The Adjusted R Square value of 0.207 shows that 20.7% of the variation in the dependent variable (academic performance) can be explained by the independent variables (use of social media to share videos, to chat/text, to attend lectures online, to ask tutors questions, to seek support from administration, to post images, and to discuss or do assignments with fellow students). This means that the remaining 79.3% of the variation is due to other factors not discussed in this study. Thus, the types of social media used by students to communicate had a fairly low effect on their academic performance; this weak relationship between the two variables was supported by the R value of 0.475. These facts further supported the rejection of the null hypothesis at a threshold of p < 0.05. However, in order to determine whether the types of social media used by students were indeed a significant predictor of academic performance, the researcher also tested the null hypothesis using analysis of variance (ANOVA), following Creswell (2014).
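As a consistency check (ours, not the paper's), the reported Adjusted R Square follows from the standard formula with $n = 302$ observations and $p = 7$ predictors:

$$\bar{R}^2 = 1 - \left(1 - R^2\right)\frac{n-1}{n-p-1} = 1 - \left(1 - 0.475^2\right)\frac{301}{294} \approx 0.207$$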
ANOVA Results on Types of Social-Media and Academic Performance
ANOVA was used to establish the effect of types of social media on academic performance. The ANOVA results are shown in Table 3.5, which indicates a statistically significant effect of the predictors (independent variables) on the dependent variable (academic performance) [F(7, 294) = 12.225, p = 0.000, Adjusted R Square = 0.207] at the 0.05 significance level. This output shows that the regression model statistically significantly predicts the dependent variable, since the p-value obtained (0.000) was less than 0.05. This further supported the rejection of the null hypothesis (HO1), leading to the acceptance of the alternative hypothesis (HA1), which stated that there was a significant relationship between types of social media and the level of academic performance of students in TTCs in Vihiga County.
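Again as a check on internal consistency (not from the paper), the reported F-statistic agrees with the model's $R^2 = 0.475^2 \approx 0.226$:

$$F = \frac{R^2/p}{(1-R^2)/(n-p-1)} = \frac{0.226/7}{0.774/294} \approx 12.2,$$

which matches the reported F(7, 294) = 12.225 up to rounding.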
Conclusion and Recommendations

Conclusion
The key purpose of this study was to find out whether a correlation existed between types of social media and the academic performance of students in Vihiga County. Based on the findings, it was concluded that the types of social media used for communication affect the level of academic performance of students in TTCs in the county, given the rejection of the null hypothesis (HO1). This was because the p-value obtained (0.000) was less than the significance level set (0.05), meaning that the two variables had a statistically significant positive correlation.
Recommendations of the Study
This study derived the following recommendations based on the conclusion made:
i. Since the social media used by learners affect their academics, learning institutions should enact regulations to govern the proper and positive use of the various types of social media sites among learners, in order to promote their academics.
ii. The government should formulate policies that will enhance the surveillance of the kinds of social or interpersonal interactions that occur on social media among learners.
"year": 2023,
"sha1": "208b3edbc3c7cd83cea904619d095c86fa8b835b",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/ajessr/article/view/251271/237497",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "35da8f83cafdbcf85fe60d85987d68c257e1862d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
Determinants of industry expertise outsourced IAF: Do company and auditor attributes affect the selection?
Abstract
Outsourcing the internal audit function (IAF) is a worldwide practice attractive to companies, practitioners and regulators because it is believed that providers of this function are objective and competent and can provide a high-quality audit. This study explores the potential influence of company and auditor characteristics on the selection of outsourced IAF providers with industry expertise. Using 334 observations for non-financial companies that outsourced this function to an external provider over the period 2010–2017, logistic regression suggests that company characteristics such as size, issue of new equity, age, and total accruals significantly determine the selection of industry-expertise outsourced IAF (IEOIAF) providers. We report similar findings when considering alternative approaches to measuring industry expertise, using a matched-sample method, and controlling for the potential effect of endogeneity. In additional analysis, we explore these determinants by classifying the IEOIAF providers into big4 and second-tier audit firms; we find that size, leverage, quick ratio, concentrated ownership, loss, assets turnover, age, total accruals, external auditor type, and audit fees are the major determinants of the choice of an IEOIAF provider. Our study is of interest to a variety of users and provides the first empirical evidence on the determinants of outsourced IAF providers with industry expertise.
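To illustrate the kind of determinant model the abstract describes, the following is a minimal sketch of a logistic regression in Python; the DataFrame and all variable names are hypothetical stand-ins, not the authors' data or code.

```python
# Illustrative sketch of a logistic determinant model of the type described in
# the abstract; `df` and its column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm

def fit_ieoiaf_model(df: pd.DataFrame):
    # Binary outcome: 1 if the outsourced IAF provider is an industry
    # expert (IEOIAF), 0 otherwise.
    y = df["ieoiaf"]
    determinants = ["size", "new_equity_issue", "firm_age", "total_accruals",
                    "leverage", "quick_ratio", "loss", "big4_external_auditor",
                    "audit_fees"]
    X = sm.add_constant(df[determinants])
    result = sm.Logit(y, X).fit()
    print(result.summary())  # coefficients and p-values for each determinant
    return result
```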
PUBLIC INTEREST STATEMENT
Outsourcing the internal audit function (IAF) is a form of assigning the IAF activities to a provider outside the company, such as an audit firm. Outsourcing IAF is a worldwide practice for companies that wish to have a higher quality of financial reporting and internal control. Accordingly, a number of studies have explored the importance of outsourcing IAF in different contexts, concluding that the quality of the provider plays a crucial role in ensuring a high-quality IAF. This study explores the determinants of selecting a high-quality provider. In particular, it examines whether company- and external auditor-specific characteristics affect the selection of an outsourced IAF provider with industry expertise. It finds that characteristics related to company size and complexity are the primary determinants. It also finds evidence linking the characteristics of the company and the external auditor with the type of industry-expertise outsourced IAF provider, whether a big4 or a second-tier audit firm. Overall, these findings will interest different stakeholders in capital markets.
Introduction
Over the last two decades, requiring public companies to establish an internal audit function (IAF) has become the norm in many capital markets worldwide. For example, effective from October 2004, the New York Stock Exchange (NYSE) in the USA mandated that listed companies incorporate IAF, either in-house or through outsourcing. Other capital markets have been influenced by this requirement, either mandating (e.g., Malaysia, China, Oman) or recommending (e.g., UK, Australia) the establishment of IAF for public companies. The premise is that IAF is a major and effective mechanism for solving or mitigating the agency problem (Abbott et al., 2016; Anderson et al., 1993; Sarbanes-Oxley Act, 2002). However, it is evident that companies choose different providers (internal or external) to perform the IAF activities. For example, the 2015 survey of the Institute of Internal Auditors (IIA) revealed that one-third of respondents worldwide outsourced their IAF activities partially or fully to a third party, with the majority expecting to maintain or expand the outsourcing in the future (Barr-Pulliam, 2016). Empirical research also reports variation among public companies in the sourcing of IAF activities, suggesting that many of them outsource this function to a third party such as an audit firm (Baatwah et al., 2019; Baatwah & Al-Qadasi, 2020; Mubako, 2019; Wan-Hussin & Bamahros, 2013). However, little is known about whether companies differentiate between the providers of outsourced IAF and why they select a particular provider. Thus, this study aims to explain the choice of IAF provider, and particularly the choice of external IAF provider.
The main objective of this study is to explore the determinants of outsourcing IAF. Specifically, it first examines whether company attributes drive companies to select an external IAF provider who is the dominant force in the industry. Another objective is to examine whether external auditor attributes have any influence on the choice of an external IAF provider with industry expertise. 1 These objectives are justified, first, by the worldwide trend of outsourcing IAF to external providers, because companies have difficulty in finding qualified IAF staff (Barr-Pulliam, 2016) and would have to invest heavily in developing a high-quality internal audit department (Mubako, 2019). Thus, as IAF is a cornerstone of monitoring, it is contended that outsourcing IAF can help companies to ensure high-quality monitoring at lower cost, because these providers are objective and possess the required human and technology resources (Caplan & Kirschenheiter, 2000; Carey et al., 2006; Mubako, 2019). However, it is apparent that selecting an external provider is not a random choice, and that companies have started to differentiate between providers based on their qualities in performing this function and the level of required monitoring. Thus, it is expected that retaining external IAF providers with industry expertise is becoming a priority, because they may improve monitoring quality at lower cost (Carey et al., 2006). To our knowledge, very limited research examines the determinants or factors influencing the choice of outsourced IAF provider (Baatwah & Al-Qadasi, 2020; Carey et al., 2006), and none of this research examines the factors associated with the choice of industry-expertise outsourced IAF (IEOIAF) providers. Accordingly, Mubako (2019) calls for more exploration of the determinants of outsourcing IAF and the types of provider.
Second, most audit firms incorporate IAF services in their business model (Selim & Yiannakas, 2000) and consider it as a major source of revenue (MarketWatch, 2020;Rittenberg & Covaleski, 1997;The Business Research Company, 2018). However, it is anticipated that there is fierce competition among audit firms to provide IAF, and that they might start to follow particular strategies, for example, industry knowledge, to differentiate their IAF services as a way of attracting clients. As a result, some audit firms identify themselves as a specialist provider of IAF; for example, PwC claims on their website that "With PwC's Internal Audit Solutions, you'll have a partner who thinks about risk in the context of your business", while Ernst & Young's website clearly states that "We are a market leader in innovative and transformative internal audit (IA) and internal controls (IC) services . . . Whatever your company's size, sector, geography or maturity, our IA services are flexible and scalable to help you". 2 This may be an indicator of sufficient incentive for the audit firms to allocate greater investment in industry knowledge and technologies, in order to develop IAF expertise and industry dominance. However, to the best of our knowledge, little is known about industry expertise in the context of IAF and whether audit firms recognise industry expertise at the level of IAF as a strategy differentiating their IAF service.
Finally, although outsourcing IAF is increasingly being studied, little attention has been paid to its determinants. For example, a growing number of researchers have investigated the determinants of outsourced IAF (e.g., Abbott et al., 2007; Abdolmohammadi, 2013; Carey et al., 2006; Sarens & Abdolmohammadi, 2011; Widener & Selto, 1999), but they concentrate predominantly on the factors explaining why companies opt to outsource their IAF activities to an external provider rather than keeping IAF in-house. This research is limited and needs further investigation to understand the phenomenon of outsourced IAF (Mubako, 2019). Further, although it has advanced our understanding of the determinants of outsourced IAF, little work has been conducted to expand our knowledge of outsourced IAF by type of provider. Baatwah and Al-Qadasi (2020) provide the first empirical evidence examining the factors associated with the selection of big4 or non-big4 audit firms as external IAF providers. They include a set of explanatory variables related to company attributes and external auditor attributes; the results suggest that the choice of big4 audit firms as IAF providers is significantly influenced by board and audit committee expertise, audit committee size, and type of external auditor, while the selection of non-big4 audit firms is significantly determined by board independence, CEO expertise, profitability, and ownership structure. These results indicate that selecting an independent and expert outsourced IAF provider is a stronger motive than cost saving. Therefore, it is interesting and important to explore what factors motivate companies to select an IEOIAF provider.
Investigating the factors influencing companies' demand for high-quality auditors, whether big4 audit firms or industry specialists, is crucial because relatively limited research has considered this area to date (DeFond & Zhang, 2014). In the current study, we concentrate on the determinants of selecting an external IAF provider with industry expertise. At the external audit level, this type of auditor is reported to provide more effective and efficient audit services because they understand the industry phenomena surrounding the client (e.g., business risks and accounting issues) and apply audit techniques and tests fitting the nature of the client (e.g., Balsam et al., 2003; DeFond & Zhang, 2014; Solomon et al., 1999). They accordingly ensure high-quality monitoring of the financial reporting process, the internal control system, and compliance with regulations. Thus, a growing number of publications have emerged examining why firms select external auditors with industry expertise, identifying various variables explaining the selection of an industry-expertise external auditor (e.g., Al-Qadasi et al., 2019; Chen et al., 2005; Darmadi, 2016; Ettredge et al., 2009; Huang & Kang, 2018; Kang, 2014; Srinidhi et al., 2014; Zhang et al., 2019). However, no research has explored the determinants of industry expertise in IAF providers, and this study seeks to fill this void.
Based on agency and signalling perspectives, we use a number of company-and external auditor-specific characteristics as the main determinants of IEOIAF providers. In particular, we consider measures for size, complexity, and business risk as company-related attributes, and audit firm type, industry expertise, and audit fees as external auditor-related attributes. According to these two perspectives, the incentives for selecting an IEOIAF provider might be explained by size, complexity, and business risks (DeFond & Zhang, 2014). Also, it is found that external auditors rely on the work of internal auditors, especially if the outsourced IAF provider is of high quality (Desai et al., 2011). Some of these characteristics have been considered in prior outsourcing IAF literature (e.g., Abdolmohammadi, 2013;Carey et al., 2006), but none has explored their influence in the context of the type of outsourced IAF provider. We acknowledge the recent findings of Baatwah and Al-Qadasi (2020) who explored some of these characteristics in the context of outsourced IAF types with an exclusive focus on big4 and non-big4 IAF providers.
Using a sample of companies that outsourced IAF over the period 2010–2017 in Oman, a setting where outsourcing IAF is common practice and data on outsourced IAF providers are publicly available, the major findings of our study show several company and external auditor attributes significantly explaining the choice of external IAF providers with industry expertise. Specifically, we find that company characteristics such as total sales, equity market-to-book value ratio, and age are positively associated with IEOIAF providers. On the other hand, we observe a negative association between these providers and company characteristics such as issue of new equity and total accruals. However, we find that company characteristics such as leverage, ownership structure, loss, quick ratio, and assets turnover, and external auditor characteristics such as size, industry specialism, and fees, are not significantly associated with IEOIAF providers. We verify these findings by conducting a variety of robustness tests and reach qualitatively similar conclusions. Expanding this analysis to the firm type of IEOIAF provider reveals that size, leverage, and quick ratio are important determinants for big4 audit firms, while concentrated ownership, loss, assets turnover, age, total accruals, external auditor type, and audit fees are the major determinants for second-tier audit firms.
The current study seeks to make a threefold contribution to the literature. First, while there is research examining why companies select an external auditor with industry expertise for the statutory audit (e.g., Al-Qadasi et al., 2019; Chen et al., 2005; Darmadi, 2016), the examination of industry expertise at the IAF level is novel in the audit literature. The industry-expertise auditor represents a main input for high-quality audit, high-quality financial reporting, and a lower level of agency problems (DeFond & Zhang, 2014). To the best of our knowledge, this paper is the first to examine the determinants of the type of outsourced IAF provider, such as an industry specialist. Second, we expand the determinants of outsourced IAF by including new factors (e.g., market-to-book value, quick ratio, assets turnover, total accruals, and external audit fees) that may play a crucial role in motivating companies to hire an industry-specialist IAF provider. Third, we build on recent research (Baatwah & Al-Qadasi, 2020) to expand the investigation of whether companies differentiate between the industry expertise provided by external IAF auditors based on their type: big4 or non-big4 audit firms. This new stream of research represents a timely response to calls for exploring the salient features of outsourcing IAF and the type of provider (Mubako, 2019).
We organise the remainder of this paper in six sections. The next two sections cover the background of the study setting and theoretical framework. The fourth section reviews prior research and the development of hypotheses. The research method is presented in the fifth section. The sixth section reports and discusses the main and additional results. Finally, we conclude the study and its implications in the seventh section.
Background to the setting of the study
This study employs data from Oman, as this setting provides a number of attributes enabling the examination of outsourcing IAF determinants. Oman is a developing market located on the southeastern coast of the Arabian Peninsula. It is a member of the Gulf Cooperation Council (GCC) and shares several cultural, political, and socioeconomic characteristics with the other member states. For example, the political system is a monarchy, and oil/gas is the mainstay of the economy. In the late 1980s, the country initiated several measures to diversify its economy and considered a solid financial market a major development. Accordingly, it established a securities exchange market, the Muscat Securities Market (MSM), as a regular marketplace for companies and investors trading securities, and a market regulator, the Capital Market Authority (CMA) (Baatwah et al., 2018). However, like most global capital markets, companies and investors in Oman have experienced capital market shocks resulting from corporate fraud and bankruptcy (Rehman & Hashim, 2020). Thus, responding to reforms in developed markets (e.g., SOX), Omani regulatory authorities took action to restore investors' trust in the capital market and to increase market efficiency.
One noticeable capital market reform in Oman is the code of corporate governance (CCG). Since 2002, all listed companies are required to implement the code articles and to disclose their compliance in their annual reports, including corporate governance report and auditor's report on corporate governance compliance (Capital Market Authority, 2002). This code was the first to be introduced in the Middle East and North Africa (MENA) region, and is considered to be sophisticated and compatible with codes in developed markets (e.g., USA; UK) (Al-Ebel et al., 2020;Hawkamah, 2006). It contains articles organising and managing the relationship between management, directors, auditors, and investors. For example, it requires the board of directors to be dominated by non-executive directors with at least a third of its membership being independent, and with its chair being independent and/or a non-executive director. Also, the audit committee is required to comprise at least three nonexecutive directors, the majority being independent; to hold four meetings a year; and to include at least one director with accounting expertise. Relatedly, in parallel with the code, the CMA requires all types of companies to establish an IAF, which can be outsourced (Capital Market Authority, 2020). However, in Oman, the financial statements' auditors are prohibited from providing IAF activities to their clients; the outsourced IAF provider is therefore an audit firm/auditor other than the incumbent external auditor. It is reported that 58 percent of Omani listed companies outsource their IAF to external providers (Baatwah et al., 2019), the majority of whom are from big4 and second-tier audit firms (Baatwah & Al-Qadasi, 2020).
Another important aspect in Oman is audit practice. Auditors in Oman are regulated by Commercial Companies Law 4/74, Accounting and Auditing Profession Law 77/86, and the CMA regulations, circulars and decisions (Al-Ebel et al., 2020; Baatwah et al., 2018). For example, auditors must apply international auditing standards when conducting the statutory audit and ensure their clients follow international accounting standards in preparing their financial statements. Further, they are required to finalise their audit of listed firms within 60 days after the annual closing date, and to be rotated after four consecutive years with a two-year cooling-off period. They are allowed to provide only three types of non-audit services to their clients, after audit committee approval: auditing-related services, tax advisory services, and investigation (Capital Market Authority, 2018). This may create more intense competition between auditors to differentiate their services. It is important to emphasise that the audit market in Oman is distinguished by some unique characteristics. First, big4 audit firms control the external audit market, auditing more than 65 percent of listed companies; on the other hand, non-big4 audit firms dominate the outsourced IAF market (Baatwah et al., 2018, 2019). Second, as indicated on the CMA and MSM websites for the year 2020, the number of companies listed on the MSM is small, around 111, and the number of accredited auditors is 20. Third, audit fees are very small compared with other capital markets, on average USD 30,260 (Baatwah et al., 2015, 2019). Overall, based on the above criteria, we argue that Oman is an appropriate setting in which to examine the determinants of outsourcing IAF. First, the financial reporting and audit regulatory frameworks in Oman may motivate auditors to provide high-quality outsourced IAF. Second, the nature of the audit market may also increase auditors' motivation to conduct distinctive IAF activities in order to penetrate the audit market and/or to compensate for lower external audit fees. Third, a large number of companies use external providers to conduct IAF activities; they are required to disclose in their annual reports the type of IAF provider (in-house; outsourced; co-sourced) and the name of any outsourced provider. This allows us to examine the factors associated with the selection of particular providers and whether companies consider the expertise of such providers. Finally, empirical evidence from Oman reveals that companies differentiate between outsourced IAF providers and consider the level of agency problem and the associated costs when selecting an outsourced IAF provider (Baatwah & Al-Qadasi, 2020).
Theoretical literature review
Although IAF has been mandated in various capital markets over the last two decades, its establishment remains largely voluntary in others. In recent practice, IAF is dedicated to reviewing the financial reporting process, internal controls, and compliance with regulations, and to providing advice on risks and operations (DeFond & Zhang, 2014; Dzikrullah et al., 2020; James, 2003; Jiang et al., 2020; Kabuye et al., 2019; Sarens et al., 2009; Savčuk, 2007). These activities can be assigned to employee(s) within the company or to a third-party specialist or audit firm (Caplan & Kirschenheiter, 2000; Mubako, 2019; Selim & Yiannakas, 2000). Given the difficulty of accessing highly qualified staff, the cost of investing in an in-house department, and the potential threat to objectivity, a primary provider of IAF is a third party, such as a well-known audit firm (Abdolmohammadi, 2013; Barr-Pulliam, 2016; Mubako, 2019). In general, external IAF providers are staffed with more qualified and experienced partners and teams and are more inclined to be independent of management (Caplan & Kirschenheiter, 2000; Carey et al., 2006; Selim & Yiannakas, 2000).
However, the choice of a particular provider of outsourced IAF is still largely in the hands of the company's decision makers. Prior research reports that companies outsource their IAF activities to big4 and second-tier audit firms as well as to other non-big4 audit firms (Baatwah & Al-Qadasi, 2020; Carey et al., 2006; Prawitt et al., 2012). In this study, we focus on the determinants of selecting an external IAF provider with industry expertise; as reported in the external audit literature, an industry-specialist auditor provides more effective and efficient audit services, resulting from an understanding of the industrial environment of the company (e.g., business risks and accounting issues) and from the application of audit techniques and tests fitting the nature of the company (e.g., Balsam et al., 2003; DeFond & Zhang, 2014; Hsin-Yi & Chen-Lung, 2011; Krishnan, 2003; Liu et al., 2017; Solomon et al., 1999). This implies that the industry specialist can ensure high-quality monitoring of the financial reporting process, the internal control system, and compliance with regulations. Thus, we employ agency theory and signalling theory as theoretical lenses explaining the selection of an external IAF provider with industry expertise. These two theories are common in studies of auditor selection and provide consistent predictions in relation to selection (Firth & Smith, 1992; Morris, 1987).
According to agency theory, a business characterised by large-scale economics and investments has, in most cases, been forced to separate management from ownership (Berle & Means, 1932). Thus, owners or shareholders delegate to managers all strategic and operational decisions, assuming that these decisions will always maximise the wealth of shareholders. However, managers and shareholders have different preferences, and managers' self-interest results in the agency problem (Jensen & Meckling, 1976). This problem is exacerbated by information asymmetry where managers have better knowledge about the company than shareholders (Healy & Palepu, 2001). Thus, shareholders, regulators, and scholars have suggested several mechanisms to reduce this problem (Healy & Palepu, 2001;Jensen & Meckling, 1976). Agency theory suggests that internal auditors can eliminate the adverse selection and reduce information asymmetry as they contribute to the quality of monitoring and disciplining the opportunistic behaviours of managers (Anderson et al., 1993;Widener & Selto, 1999). This monitoring is more likely to be strengthened if the provider is an external body possessing greater industry and technology expertise (Baatwah & Al-Qadasi, 2020;Carey et al., 2006). Thus, according to this theory, companies with a higher agency problem which they are seeking to reduce are more likely to select an external IAF provider with industry expertise.
Another perspective we use to explain this selection is signalling theory. According to this theory, the presence of information asymmetry between managers and external users is anticipated (Morris, 1987;Spence, 1973). To convey their capabilities and/or the quality of their work, managers can signal these capabilities and/or qualities to the market, differentiating themselves from other managers (Morris, 1987;Whelan & Demangeot, 2014). This theory also suggests that managers/directors may select high-quality auditors (e.g., industry specialist) to signal their commitment to shareholders (Morris, 1987); thus, external IAF providers with industry expertise are chosen to signal their good performance.
Empirical literature review and hypothesis development
Studying the factors motivating companies to demand high-quality auditors such as big4 firms or industry specialists is an interesting area of research because the evidence is relatively limited (DeFond & Zhang, 2014). Much of the work on auditor selection has been conducted in the context of external audit (see Habib et al., 2019, for a recent literature review on auditor choice). The focus of the literature has been on the selection of auditors with a known brand name, such as the big4 audit firms (e.g., Abbott & Parker, 2000; Beasley & Petroni, 2001; Ettredge et al., 2009; Simunic & Stein, 1987). While this research has theoretically and empirically advanced our knowledge of the incentives for choosing high-quality auditors, a number of studies have emerged exploring why companies select auditors with industry expertise (e.g., Al-Qadasi et al., 2019; Chen et al., 2005; Darmadi, 2016; Ettredge et al., 2009; Huang & Kang, 2018; Kang, 2014; Srinidhi et al., 2014; Zhang et al., 2019); explanations include ownership structure, business risk, complexity, and corporate governance mechanisms. However, little research has investigated the determinants of the selection of external IAF providers with industry expertise. Our study fills this gap and responds to the call for exploring the types of outsourced IAF provider and the factors associated with the choice of one provider over another (Mubako, 2019).
Most studies on IAF focus on explaining why companies choose between in-house and outsourced arrangements. For example, one stream of research examines whether company attributes (e.g., size; loss; growth; profitability) drive companies to outsource IAF activities (Abbott et al., 2007;Caplan & Kirschenheiter, 2000;Carey et al., 2006). Another explores the characteristics of IAF providers and their influence on the decision to outsource some or all of their IAF (Abdolmohammadi, 2013;Carey et al., 2006). These characteristics include competencies, age, degree, professional membership, and interaction with the audit committee. A further stream explores how the characteristics of corporate governance mechanisms (board of directors, audit committee, and CEO) affect the decision (Abbott et al., 2007;Abdolmohammadi, 2013;Baatwah & Al-Qadasi, 2020); other characteristics include size, independence, expertise, meetings, tenure, and authority. While the majority of this research reports interesting findings, a limited number of determinants are considered and the results are not conclusive (Baatwah & Al-Qadasi, 2020). Further, the differences between the outsourced providers are ignored, although each type of provider has its own competencies and abilities (Mubako, 2019).
To date, the study by Baatwah and Al-Qadasi (2020) appears to be the only one to differentiate outsourced IAF providers, whether big4 or non-big4. It reports that board expertise, the expertise and size of the audit committee, and the external auditor's type are significantly associated with selecting a big4 audit firm. It also shows that audit committee independence, CEO expertise, concentrated ownership and profitability are associated with non-big4 audit firms. We extend this stream of research by examining how the company and external auditor's characteristics play a role in choosing an external IAF provider with industry expertise.
Company-specific characteristics
According to the theories adopted by this study (agency and signalling), several factors related to the company may explain the incentives for selecting an industry-expertise external IAF provider. These characteristics, reflecting size, complexity, and risks, are reported to influence the selection of high-quality auditors (DeFond & Zhang, 2014), and the decision to internalise or externalise this function (e.g., Abbott et al., 2007;Baatwah & Al-Qadasi, 2020). The following subsections discuss these characteristics and propose the expected direction of association.
Company size
The size of the company is a common determinant of several auditing and accounting measures. Larger companies tend to appoint a high-quality auditor with industry expertise to scrutinise their financial reports and internal controls (Abbott & Parker, 2000;Beasley & Petroni, 2001). Referring to agency theory, larger companies have more shareholders and tend to be associated with greater agency problems (Jensen & Meckling, 1976). Fama and Jensen (1983) contend that agency cost is a function of company size, implying that as the company grows larger, its agency costs become higher. Therefore, larger companies use high-quality auditors to mitigate the agency problem (DeFond, 1992;Francis & Wilson, 1988;Simunic & Stein, 1987). Similarly, signalling theory suggests that larger companies are prone to greater information asymmetry, given their complexity and business diversity (Beasley & Petroni, 2001;Zhang et al., 2019). Also, they have the financial and nonfinancial resources that enable them to signal their monitoring quality (Girella et al., 2019) by appointing high-quality auditors, and so to reduce information asymmetry. Audit research tends to conclude that company size is positively associated with the industry expertise of external auditors. For example, Abbott and Parker (2000) find a positive association between the size of a company and the selection of an external auditor with industry expertise. Consistent with this finding, Beasley and Petroni (2001) conclude that larger companies are more likely to hire a high-quality auditor such as one with industry expertise. More recent research continues to assert the positive association between client size and the selection of an external auditor with industry expertise (e.g., Al-Qadasi et al., 2019;Ettredge et al., 2009;Hall et al., 2020;Zhang et al., 2019).
In the context of IAF, limited research also documents that company size is associated with the IAF sourcing decision. For instance, Carey et al. (2006) report that large companies have a greater propensity to outsource IAF activities to an external provider. Conversely, Baatwah and Al-Qadasi (2020) find a negative association between company size and outsourcing IAF activities. However, this research fails to differentiate between the types of outsourced IAF provider, except for Baatwah and Al-Qadasi (2020), who find no significant association between company size and the choice of high-quality IAF providers such as big4 audit firms. Drawing on agency and signalling theories and on the external auditor literature discussed above, we propose that larger companies which outsource IAF to external providers are more likely to select those with industry expertise. In other words, larger companies face greater agency costs and information asymmetry, and they could appoint an IEOIAF provider to reduce these problems because this provider is more likely to be associated with a high-quality IAF and is sufficiently capable of conveying this quality to external parties. Therefore, the following hypothesis is stated: H1: Company size is positively associated with industry-expertise outsourced IAF providers.
Following research on external auditor selection (e.g., Abbott & Parker, 2000;Ettredge et al., 2009;Zhang et al., 2019), we employ two proxies for company size. The first is total sales, measured as the natural log of total sales. This proxy reflects the size of the company in terms of its operational and business diversity. Thus, companies with large sales revenue are considered larger and are expected to have a positive association with IEOIAF providers. The second proxy for company size is the market to book value ratio, which reflects greater information asymmetry between investors and managers and, in turn, an increased cost of external funds (Girella et al., 2019). Thus, we also expect a positive association between this proxy and the selection of IEOIAF providers. 3
Company risk
Companies with a high proportion of operational and financial risk are more likely to suffer from the agency problem (Jensen & Meckling, 1976). Thus, shareholders and market stakeholders require credible information to assess the ability of the company to control or reduce these risks and to ensure the company's long-term existence. This will motivate shareholders and other users to demand a high-quality auditor (Abbott & Parker, 2000;Huang & Kang, 2018). Further, companies may want to signal to market players that they are credible and have strong control over business risks and high-quality information, employing highly qualified external auditors to signal these qualities (Chen et al., 2005;Huang & Kang, 2018). Consistent with these arguments, the external audit literature predicts that high-risk companies, for example, those with large debt or poor performance, are more likely to select high-quality auditors (Abbott & Parker, 2000;Ettredge et al., 2009). However, the empirical findings are not consistent. For example, Abbott and Parker (2000) find that factors associated with financial risk (e.g., leverage, profitability) are not significantly associated with the selection of an external auditor with industry expertise. Chen et al. (2005) report similar results. Using an international sample, Ettredge et al. (2009) find that risk factors such as loss and leverage are significantly associated with the choice of an industry-expertise external auditor. Recent literature also reports inconclusive findings regarding the association between risk factors and the choice of industry expertise (Al-Qadasi et al., 2019;Huang & Kang, 2018).
Indeed, few IAF studies have used proxies for risk as a determinant of IAF arrangements (in-house or outsourced) (Abbott et al., 2007) or of the type of outsourced IAF provider (Baatwah & Al-Qadasi, 2020). Abbott et al. (2007) find that companies with financial troubles outsource IAF activities to a third party. However, they report an insignificant association between profitability and outsourcing IAF. Baatwah and Al-Qadasi (2020) document inconsistent findings on the association between risks and outsourcing IAF. Overall, this literature presents inconclusive evidence on whether company risk plays a significant role in selecting an IAF provider. Thus, we draw on agency and signalling theories and on the arguments advanced by the external audit literature positing that companies with high risk are more likely to appoint an industry-expertise auditor. Also, we follow the analytical model of Caplan and Kirschenheiter (2000), which argues that the motivation for outsourcing IAF increases when risk is high and suggests that high-quality outsourced IAF providers are qualified to reduce these risks. Therefore, we suggest that appointing an external IAF provider with industry expertise can enhance the monitoring quality over financial reports and internal control systems and, accordingly, boost the confidence of market users in the company's ability to manage risk. In other words, companies with a high level of risk are more likely to outsource IAF to an external provider with industry expertise. Thus, we formulate the following hypothesis: H2: Company risk is positively associated with industry-expertise outsourced IAF providers.
We employ four proxies to measure the extent to which company risk motivates the selection of a given type of outsourced IAF provider, consistent with several auditor selection studies that focus on the external auditor (e.g., Chen et al., 2005;Huang & Kang, 2018;Zhang et al., 2019). The first proxy is leverage; a high proportion of debt used to finance company operations and assets indicates a high risk of bankruptcy or financial distress. Thus, we expect a positive association between leverage and selecting an IEOIAF provider. The second proxy is poor performance as measured by loss. Incurring a loss signals to the market the potential risk of company bankruptcy and financial difficulties. Thus, companies with poor performance have a greater incentive to engage high-quality outsourced IAF providers to signal that the loss was incurred in the normal course of business and that they maintain high standards of monitoring to remedy this performance. This implies a positive association between loss and selecting IEOIAF providers.
The third proxy is the quick ratio, an indicator of financial risk: the higher this ratio, the lower the financial risk. This suggests that companies with a high quick ratio are less likely to appoint external IAF providers with industry expertise; thus, a negative association between the quick ratio and choosing such a provider is predicted. Finally, assets turnover is the fourth proxy for risk, intuitively similar to the quick ratio. Companies with higher assets turnover are less likely to suffer financial difficulties and so are less likely to select IEOIAF providers. Thus, a negative association is predicted between assets turnover and IEOIAF providers.
Company complexity
Complexity is another determinant used in previous models of external auditor choice (DeFond & Zhang, 2014), associated with high-quality auditors such as industry specialists (e.g., Abbott & Parker, 2000;Chen et al., 2005;Huang & Kang, 2018;Zhang et al., 2019). Indeed, companies with greater complexity can suffer a greater agency problem because complexity is associated with greater information asymmetry and managerial discretion, increasing the opportunity for managers to maximise their interests at the expense of shareholders (Jensen & Meckling, 1976). Further, shareholders and potential investors lack sufficient information on complex business operations and, consequently, may require additional credible action to reduce this asymmetry. These scenarios may provide companies with the incentive to engage high-quality auditors with industry expertise (Beasley & Petroni, 2001;DeFond, 1992). Consistent with this argument, empirical research finds that more complex companies engage high-quality external auditors such as those with industry expertise. For example, Beasley and Petroni (2001) report that complexity, proxied by geographic dispersion of the business, is associated with big4 audit firms that are industry specialists. Chen et al. (2005) find complexity is positively associated with industry-expertise external auditors. Other studies (e.g., Abbott & Parker, 2000;Al-Qadasi et al., 2019;Huang & Kang, 2018) report complexity as a major predictor of selecting an auditor with industry expertise, although the results are not consistent.
Although studies have investigated the factors associated with complexity in the context of selecting external auditors, using them to explain IAF sourcing arrangements or the type of outsourced IAF provider is rare. For example, Rönkkö et al. (2018) find complexity positively associated with the establishment of an IAF. More closely related, Baatwah and Al-Qadasi (2020), among the pioneering studies that consider complexity factors in outsourcing IAF, observe inconsistent results for the association between complexity measures and the IAF sourcing arrangement but a positive association between complexity and outsourcing IAF if the provider is a non-big4 audit firm. Although the literature on outsourcing IAF provides new insights into its determinants, little is known in the context of industry expertise. Thus, we make our prediction for the association between complexity factors and outsourcing IAF to an industry-expert provider based on agency and signalling theories. In particular, we anticipate that companies with greater complexity are more likely to hire external IAF providers with industry expertise. In other words, such providers may be used to reduce agency problems or to signal the company's quality in managing complex business operations and controls. We therefore propose the following hypothesis: H3: Company complexity is positively associated with industry-expertise outsourced IAF providers.
Following research on external auditor choice (e.g., Beasley & Petroni, 2001;Chen et al., 2005;Darmadi, 2016;Ettredge et al., 2009;Huang & Kang, 2018), we use four proxies for company complexity. The first is ownership structure as proxied by concentrated ownership. Concentrated ownership indicates a lower level of complexity because shareholders can monitor and observe managers' behaviours, updating their information. Also, the complexity associated with voting and cash-flow rights is smaller under concentrated ownership. Thus, companies with concentrated ownership structures are less likely to select IEOIAF providers, suggesting a negative association between concentrated ownership and selecting such providers. The second proxy is the issuance of new equity. This implies greater complexity because issuing new equity is more likely to be associated with an increased number of shareholders, who may demand additional monitoring or extra information. This indicates that companies that issue new equity are more likely to select a high-quality auditor to reduce this complexity and information asymmetry. Thus, a positive association is predicted between issuing new equity and IEOIAF providers. The third proxy is company age. Age indicates less complexity because well-established companies have more experience and are associated with sophisticated internal control systems that are effective in reducing complexity. Thus, mature companies are less likely to appoint high-quality auditors, suggesting a negative association between company age and IEOIAF providers. The fourth proxy is total accruals. Greater complexity is more likely to be associated with a greater amount of accruals, because accruals involve uncertainty and subjective judgements in assessing uncertain conditions. Thus, companies with more accruals are more likely to select high-quality auditors, suggesting a positive association between total accruals and selecting IEOIAF providers.
External auditor-specific characteristics
Several studies on IAF examine how the sourcing arrangements affect the quality of financial reporting (Abbott et al., 2016;Prawitt et al., 2012) or reliance on the external auditor (Desai et al., 2011;Glover et al., 2008). However, few have examined the determinants of the sourcing arrangements or the type of outsourced IAF provider using the characteristics of the external auditor (Baatwah & Al-Qadasi, 2020). It is vital to note that one important input for the external audit is the assessment of internal control systems, the major responsibility of the IAF. External auditors also rely on the work of internal auditors, especially if the IAF provider is of high quality (Desai et al., 2011). Thus, we include the characteristics of the external audit as additional determinants: audit firm type, industry expertise, and audit fees. The following subsections discuss these characteristics and state the expected association with IEOIAF providers.
Audit firm type
Research into external auditors' reliance on internal auditors provides insight into the potential association between the type of external auditor and IEOIAF providers (Desai et al., 2011;Glover et al., 2008;Trotman & Duncan, 2018). This research shows that external auditors rely on the work of the IAF when conducting their audit, and this reliance is greater if the IAF provider is external (Baatwah & Al-Qadasi, 2020). External auditors, who play a crucial role in the credibility of financial reports and the effectiveness of internal controls, are more likely to be associated with high-quality outsourced IAF providers. Companies use external auditors to mitigate agency problems and to reduce information asymmetry (Anderson et al., 1993;Jensen & Meckling, 1976). However, research differentiates between external auditors in terms of their size and audit quality and considers the big4 audit firms to be of higher quality than non-big4 audit firms (DeAngelo, 1981;DeFond & Zhang, 2014), because they are strongly motivated to increase their credibility and reputation and to avoid litigation costs. Thus, a lower agency problem and less information asymmetry are predicted for companies hiring big4 audit firms as external auditor.
Baatwah and Al-Qadasi (2020) argue that companies are more likely to complement the monitoring role of a high-quality external auditor (e.g., a big4 audit firm) by outsourcing the IAF to external providers. They add that high-quality external auditors are more likely to intervene in the IAF arrangement decision and to support the outsourcing decision. However, little research considers the type of external auditor as a determinant of outsourcing IAF. Baatwah and Al-Qadasi (2020) is among the limited research incorporating external audit characteristics into an empirical model of outsourcing IAF. They report that big4 audit firms as external auditors are positively associated with outsourced IAF providers, in particular with big4 providers, suggesting that the big4 tend to work with or recommend selecting high-quality IAF providers. Consistent with this study and with agency and signalling theories, we assume that external auditors such as big4 audit firms are more likely to be associated with IEOIAF providers. This may represent complementary mechanisms to reduce agency costs and/or to signal high-quality monitoring and control to the capital markets. Thus, we formally state this in the following hypothesis: H4: Big4 audit firms are positively associated with outsourced IAF providers who have industry expertise.
Audit firm industry expertise
Many studies assert that the industry expertise of external auditors adds value because it enhances their ability to detect and report irregularities in financial reports and controls (Balsam et al., 2003;Solomon et al., 1999;Zalata et al., 2020). Further, this expertise strengthens the external auditor's incentive to protect its reputation and avoid the financial and litigation costs that arise from audit failure. Thus, companies use industry-specialist auditors to reduce the agency problem and signal their quality to external stakeholders (DeFond & Zhang, 2014). However, IAF studies rarely consider the industry expertise of the external auditor as an explanatory variable for the sourcing or the type of IAF provider. Baatwah and Al-Qadasi (2020) argue that external auditors with industry expertise are more likely to recommend that their clients outsource their IAF to external providers because they rely on them in assessing controls and documentation. However, this study fails to find a significant association between industry-specialist auditors and outsourced IAF activities, or between these auditors and high-quality outsourced IAF providers such as the big4. To our knowledge, the study by Baatwah and Al-Qadasi (2020) is the only one to examine industry specialisation as a determinant of IAF sourcing arrangements and of the type of outsourced IAF provider. Thus, we expect industry-specialist external auditors to behave similarly to big4 audit firms in relation to the selection of IEOIAF providers, as high-quality external auditors tend to rely on the work of high-quality outsourced IAF providers and may intervene in the decision on IAF sourcing. In other words, we assume a positive association between industry specialists and the choice of IEOIAF providers. We therefore formulate the following hypothesis: H5: External auditors with industry expertise are positively associated with outsourced IAF providers who have industry expertise.
Audit fees
As indicated by DeFond and Zhang (2014), the fees paid for the external audit are among the indicators of a high-quality audit. However, there is little evidence on the association between audit fees and the choice of external auditor, the outsourcing of IAF, or the type of outsourced IAF provider. Indeed, companies pay higher audit fees to their external auditors to ensure that the financial reports are credible and contain information that represents the company's true performance and value. This, in many cases, contributes to reducing the agency problem and information asymmetry (DeFond & Zhang, 2014;Jensen & Meckling, 1976). Further, companies hire high-quality auditors such as big4 audit firms, which usually charge high fees, to signal their credibility and good performance (Al-Qadasi et al., 2019;Fan & Wong, 2005;Huang & Kang, 2018). In doing so, external auditors have to spend a correspondingly large amount of time and effort testing and verifying documents and the internal control system (Simunic, 1980;Zalata et al., 2020). Prior research indicates that external auditors use internal auditors or rely on their work in planning and testing management's assertions on financial statements and internal controls (Desai et al., 2011;Glover et al., 2008). However, in many cases, they require internal auditors to have the required competence and objectivity, and they place more reliance on internal auditors from outsourced IAF providers (Desai et al., 2011). Thus, if the external auditor relies on the work of high-quality IAF providers, the client is more likely to be charged lower fees as a result of less audit work.
Given the lack of prior research linking external audit fees with industry expertise, we build our arguments on the association between audit fees and outsourced IAF providers with industry expertise based on our proposed theories and on the arguments that outsourced IAF is a cost-effective and high-quality function (Mubako, 2019). More specifically, we assume that companies are more likely to assign IEOIAF providers as a way of mitigating the agency problem or signalling quality and, at the same time, reducing external audit fees. In other words, higher external audit fees may motivate companies to outsource the IAF to a provider with industry expertise because this provider will increase the quality of the function and the degree of reliance of external auditors. We therefore state the following hypothesis: H6: External audit fees are positively associated with outsourced IAF providers who have industry expertise.
Research model
To the best of our knowledge, no research has yet examined the determinants of IEOIAF providers. Thus, following prior research on external auditor selection (e.g., Abbott & Parker, 2000;Beasley & Petroni, 2001), we employ the following logistic regression to test our hypotheses, because this method is more appropriate when the dependent variable is dichotomous, as in our case, and it is a common method in accounting research, specifically in audit selection research (Ge & Whitmore, 2010). This regression is a pooled panel data-based analysis. We consider the possible influence of heteroscedasticity and autocorrelation by using firm- and year-clustered robust standard errors to correct these issues. Further, we winsorize all continuous variables to reduce the influence of outliers. All these statistics were implemented using STATA 14 software. The following regression represents the empirical model of this study:

INDSOIAF_it = β0 + β1 LNSALE_it + β2 MB_it + β3 LEV_it + β4 LOSS_it + β5 QUICK_it + β6 ATURN_it + β7 OWCCO_it + β8 NWEQTY_it + β9 TACC_it + β10 LNAG_it + β11 ADFSIZE_it + β12 ADFIND_it + β13 LNADFEE_it + β14-20 YFIX_t + β21-24 INDFIX_i + ε_it (1)

where INDSOIAF denotes the dependent variable; LNSALE, MB, LEV, LOSS, QUICK, ATURN, OWCCO, NWEQTY, LNAG, and TACC denote companies' characteristics; ADFSIZE, ADFIND, and LNADFEE denote external auditors' characteristics; YFIX and INDFIX denote the fixed effects of time and industry respectively; i denotes the cross-section dimension and t the time dimension. Table 1 presents definitions of these variables and the data source for each.
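For readers who wish to reproduce this specification outside Stata, the sketch below estimates the pooled logit of Eq. (1) in Python/statsmodels. It is a minimal illustration, not the authors' code: the input file, the firm identifier (firm_id), and the year and industry columns are assumptions, and only one-way clustering by firm is shown (the paper clusters by both firm and year).

```python
# Minimal sketch of the pooled panel logit in Eq. (1); data layout is hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("oman_iaf_sample.csv")  # assumed input: one row per firm-year

formula = (
    "INDSOIAF ~ LNSALE + MB + LEV + LOSS + QUICK + ATURN + OWCCO"
    " + NWEQTY + TACC + LNAG + ADFSIZE + ADFIND + LNADFEE"
    " + C(year) + C(industry)"  # YFIX and INDFIX entered as dummy sets
)

# Cluster-robust standard errors by firm (one-way shown for simplicity).
result = smf.logit(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
)
print(result.summary())
```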
Measurement of outsourced IAF industry expertise
Following prior research (e.g., Abbott & Parker, 2000;Balsam et al., 2003;Chen et al., 2005;Ettredge et al., 2009), we use a market share approach to identify IEOIAF providers. The process begins by identifying companies that outsource part or all of their IAF activities to external providers (439 observations). Then, we broadly classify these companies into two groups, industrial and service industries, because following the 2(3)-digit SIC industry classification would result in very few observations for each sector in each year. Thus, our classification considers at least 30 observations for each industry and year. For companies with available data, we compute the total sales for each industry in each year, and the total sales for the clients of each outsourced IAF provider in the given industry and year. We use clients' sales as the basis for the market share computation because audit fees in relation to outsourcing IAF are disclosed by only very few of the sampled companies. Finally, we consider a provider as having IAF industry expertise if it has at least a 30 (20) percent market share (INDSOIAF30 and INDSOIAF20). 4 After this identification process, we assign a value of one to outsourced IAF providers designated as having industry expertise, and zero otherwise.
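As an illustration of this classification rule, the following sketch flags a provider as an industry expert when its clients' sales reach the chosen share of an industry-year's total sales. The column names and input structure are assumptions, not the authors' data layout.

```python
# Sketch of the client-sales market-share rule (30% or 20% threshold).
import pandas as pd

def flag_industry_experts(df: pd.DataFrame, threshold: float) -> pd.DataFrame:
    # Expects one row per outsourcing client-year with columns:
    # provider, industry ('industrial'/'service'), year, client_sales.
    out = df.copy()
    industry_total = out.groupby(["industry", "year"])["client_sales"].transform("sum")
    provider_total = out.groupby(["industry", "year", "provider"])["client_sales"].transform("sum")
    out["mkt_share"] = provider_total / industry_total
    out["expert"] = (out["mkt_share"] >= threshold).astype(int)
    return out

# INDSOIAF30 and INDSOIAF20 correspond to thresholds of 0.30 and 0.20:
# flagged30 = flag_industry_experts(sample, 0.30)
# flagged20 = flag_industry_experts(sample, 0.20)
```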
Measurement of company-related characteristics
Several company-specific factors are considered to proxy the three main company characteristics: size, risk, and complexity. Following common measures used in prior external audit research (e.g., Abbott & Parker, 2000;Ettredge et al., 2009;Zhang et al., 2019), this study measures these factors as follows. We proxy company size by total sales (LNSALE) and the market to book value ratio (MB), measured respectively as the natural log of total sales and the common share market value divided by the common share book value. For company risk, we employ leverage (LEV), loss (LOSS), quick ratio (QUICK), and assets turnover (ATURN). These proxies are respectively measured by: total debt scaled by total assets; an indicator variable equal to one if the company incurred a loss in the current year, zero otherwise; current assets minus inventory scaled by current liabilities; and current sales/revenues divided by total assets. 5 For company complexity, we employ concentrated ownership structure (OWCCO), issuance of new equity (NWEQTY), age (LNAG), and total accruals (TACC). These proxies are respectively measured by: the percentage of common shares held by large shareholders (≥10%); an indicator variable equal to one if a company issues new common shares during the year, zero otherwise; the natural log of the number of years since the establishment of the company; and the difference between sales/revenues and operating cash flow scaled by total assets.
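These proxies can be constructed directly from raw financials. The sketch below follows the definitions just given; the raw column names are hypothetical.

```python
# Sketch of the company-characteristic proxies defined above.
import numpy as np
import pandas as pd

def build_company_proxies(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["LNSALE"] = np.log(out["total_sales"])
    out["MB"] = out["equity_market_value"] / out["equity_book_value"]
    out["LEV"] = out["total_debt"] / out["total_assets"]
    out["LOSS"] = (out["net_income"] < 0).astype(int)
    out["QUICK"] = (out["current_assets"] - out["inventory"]) / out["current_liabilities"]
    out["ATURN"] = out["total_sales"] / out["total_assets"]
    out["OWCCO"] = out["pct_shares_large_holders"]  # holders with >= 10% stakes
    out["NWEQTY"] = (out["new_shares_issued"] > 0).astype(int)
    out["LNAG"] = np.log(out["years_since_establishment"])
    out["TACC"] = (out["total_sales"] - out["operating_cash_flow"]) / out["total_assets"]
    return out
```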
Measurement of external auditor-related characteristics
Following the literature (e.g., Baatwah & Al-Qadasi, 2020;Balsam et al., 2003;DeFond & Zhang, 2014), we consider three important characteristics of high-quality external auditors: auditor size, expertise, and fees. First, audit firm size (ADFSIZE) is measured by an indicator variable equalling one if the external auditor is a big4 audit firm, zero otherwise. Second, auditor industry expertise (ADFIND) is also measured by an indicator variable equalling one if the external auditor is designated as having industry expertise, zero otherwise. We use the market share approach, based on external audit fees, to classify industry expertise; an external auditor with at least a 10 percent market share is considered as having industry expertise. The final proxy is audit fees (LNADFEE), the natural log of total fees paid to the external auditor for the statutory audit of financial reports.
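A parallel construction applies to the external-auditor measures. In the sketch below the big4 membership set and the fee-based market share column are assumptions; the 10 percent rule follows the text.

```python
# Sketch of the external-auditor proxies; BIG4 and column names are hypothetical.
import numpy as np

BIG4 = {"Deloitte", "EY", "KPMG", "PwC"}

def build_auditor_proxies(df):
    out = df.copy()
    out["ADFSIZE"] = out["auditor_name"].isin(BIG4).astype(int)
    out["ADFIND"] = (out["auditor_fee_mkt_share"] >= 0.10).astype(int)  # 10% rule
    out["LNADFEE"] = np.log(out["statutory_audit_fees"])
    return out
```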
Control variables
We control for two important factors. First, we control for the industry-specific effects (INDFIX) by including four indicator variables representing four sectors: industrial, material, consumer staples, and consumer discretionary. The energy industry is used as the basis for comparison. The second set of control variables are year-specific effects indicators (YFIX). Using 2010 as the benchmark for comparison, this set has seven indicator variables representing the years 2011 to 2017.
Sample selection and data
In line with the objectives of this research, the study population includes all companies listed on the Omani capital market during the period 2010-2017 which outsourced their IAF activities either partially or fully to external providers. Accordingly, we begin the data collection process by identifying 935 year-observations for all companies listed on the Omani capital market during the period 2010-2017. Then, we exclude 271 observations from financial and investment companies because of their unique structure and regulatory framework and because they are required to have an in-house IAF. Further, we delete 225 observations which use in-house IAF providers. This reduces the number of observations to 439. We collect data for these companies from several sources. For example, we use corporate governance reports to identify the name of the outsourced IAF provider and audit fees. We use DataStream and annual financial reports to collect data related to company characteristics. We then remove a further 105 observations with missing data, resulting in a final sample of 334 observations for testing our hypotheses. Table 2 reports the sample selection process.
As noted, we use data for companies listed on the Omani capital market, where public disclosure of information on IAF providers, whether in-house or outsourced, is required. This setting has advantages over several others for conducting this study because the application of IAF activities has been required since 2002, reflecting the maturity of companies' application and of the IAF profession. Further, outsourcing the IAF to an external provider is the preferred sourcing arrangement for the majority of companies in Oman (Baatwah et al., 2019;Baatwah & Al-Qadasi, 2020). This might increase the motivation of auditors to develop and strengthen the quality of their service. Finally, Oman is an emerging market with more sophisticated accounting and corporate governance regulatory frameworks than many other emerging markets, and is comparable to developed markets in these frameworks (see Baatwah et al., 2019, for more review). As for the sample period, note that we opt to use data for the period 2010-2017 because 2010 was the first year after the financial crisis that hit most capital markets in 2008, with after-effects in 2009. This should reduce any effect of the crisis on our estimates. 2017 supplied the most recent data when the study was initiated.

Table 3 presents the results of descriptive analysis. We observe that the mean of INDSOIAF30 (INDSOIAF20) is 0.243 (0.305), suggesting that 24 (31) percent of companies with outsourced IAF use those with industry expertise. For company characteristics, namely total sales, equity market to book value ratio, leverage, loss, quick ratio, asset turnover, ownership structure, new equity, total accruals, and age, the means are 9.065, 23.963, 0.466, 0.159, 1.901, 0.788, 60.556, 0.138, −0.043, and 2.999 respectively. Using INDSOIAF30 to classify the sample into specialist and non-specialist, we find significant differences between companies with industry-specialist and non-specialist outsourced IAF providers in terms of company characteristics. For example, companies with industry-specialist outsourced IAF providers are larger in size and have a higher growth rate than those employing non-specialist outsourced IAF providers. We also observe that companies with industry-specialist outsourced IAF providers have less concentrated ownership, issue new shares less frequently, are less likely to report a loss, and have a lower quick ratio. For other variables, such as leverage, assets turnover, total accruals, and age, companies with industry-specialist and non-specialist outsourced IAF providers tend to share quantitatively similar characteristics. As for external auditor characteristics, we observe that the mean for audit firm size is 0.617, suggesting that 62 percent of our sampled companies use big4 audit firms as external auditor. We also find that 32 percent of the sampled companies hire industry-specialist auditors, and pay remuneration of around 8.754 (natural log of fees), on average USD 18,750. In terms of differences between industry-specialist and non-specialist outsourced IAF providers, we find that the former use big4 audit firms more than the latter. Further, companies with industry specialists pay higher fees for external auditors than companies with non-specialist outsourced IAF providers. In regard to employing industry-specialist external auditors, there are similar numbers of companies with industry-specialist and non-specialist outsourced IAF providers. Table 4 is the correlation matrix.
This analysis provides initial results in relation to most of the factors associated with outsourced IAF industry specialists, and insight into the presence of a multicollinearity problem among our independent variables. We observe that total sales, the equity market to book value ratio, assets turnover, big4 audit firms, and audit fees are positively and significantly associated with both measures of IEOIAF providers. We also find that concentrated ownership, new equity issuance, loss, and quick ratio are negatively and significantly associated with IEOIAF providers. Leverage, total accruals, age, and industry-specialist external auditors are not significantly associated with either measure of industry expertise. In terms of the correlation between the independent variables, we observe that the highest correlations are between total sales and audit fees (0.67) and between audit firm size and audit fees (0.55). However, this degree of correlation is lower than 0.70, suggesting that there is no multicollinearity problem. We supplement this analysis by calculating the Variance Inflation Factor (VIF) and, in untabulated results, observe that VIF values are less than 3, again indicating no multicollinearity problem (Gujarati & Porter, 2009).

Multivariate results

Table 5 shows the results of the pooled panel data logistic regressions of IEOIAF providers on a set of explanatory variables related to company and external auditor characteristics. Columns 3 and 4 report results for IEOIAF providers using the 30 percent market share measure. Columns 5 and 6 show results for IEOIAF providers using the 20 percent market share measure. We observe that the estimated models are significant at the p < 0.0001 level, and the explanatory variables explain 29 (34) percent of the variation in selecting IEOIAF providers. These results indicate that the models are well fitted and sufficiently explain industry expertise selection in the context of outsourced IAF.
Table 5 also shows that company characteristics such as total sales (LNSALE) (Estimate = 1.278 (1.811); T.stat = 4.227 (4.448)), equity market to book value ratio (MB) (Estimate = 0.169 (0.238); T.stat = 1.805 (2.282)), issuing new equity (NWEQTY) (Estimate = −1.315 (−1.526); T.stat = −1.976 (−2.093)), total accruals (TACC) (Estimate = −6.024 (−6.223); T.stat = −2.692 (−3.197)), and age (LNAG) (Estimate = 1.366 (2.619); T.stat = 2.058 (3.996)) are significantly associated with both measures of IEOIAF providers, at least at the p < 0.05 level, except for the association between the equity market to book value ratio and the 30 percent market share measure (p < 0.10). These predictors represent company size and complexity, suggesting that variables measuring company risk are not significantly associated with selecting IEOIAF providers (p > 0.10). In particular, total sales and the equity market to book value ratio are positively associated with IEOIAF providers, suggesting that large companies are more likely to select IEOIAF providers. This result is consistent with agency and signalling theories, which suggest that larger companies have greater incentives to mitigate the agency problem and signal their quality externally by hiring an industry-expertise auditor (DeFond, 1992). Also, this result is in line with prior IAF studies suggesting that large companies have a greater propensity to employ high-quality providers of IAF (Carey et al., 2006). Overall, we find empirical support for our first hypothesis, suggesting larger companies are more likely to choose an outsourced provider of IAF with industry expertise. With regard to measures of complexity, the coefficients on issuing new equity and total accruals are negative while the coefficient on age is positive, indicating that companies that issued new shares and those with a higher proportion of accruals opt for non-industry-specialist outsourced IAF providers, while mature companies, despite presumably experiencing less complexity, are more likely to select IEOIAF providers. We also observe that concentrated ownership (OWCCO) is not significantly associated with selecting outsourced IAF providers with industry expertise, suggesting that ownership structure as proxied by concentrated ownership is not a major determinant of an IEOIAF provider. These findings are not consistent with the agency and signalling theories (Beasley & Petroni, 2001;DeFond, 1992) or with the prior outsourcing IAF literature, which suggest that companies with greater complexity have an incentive to engage high-quality auditors such as those with industry expertise. Thus, we offer the following explanations to justify these results. In relation to age, we believe that mature companies perceive IEOIAF providers as a way to signal that they manage and control risks and complexity using a competent and objective provider. Alternatively, IEOIAF providers are used as a management training ground or as consultants in managing risk and complexity. Thus, mature companies differentiate between outsourced IAF providers by selecting a provider with industry expertise.
In relation to the results for issuing new equity, we argue that companies may consider investment in a high-quality external auditor a worthier signal at the time of issuing new shares than investment in IEOIAF providers. Consistent with this, we observe, in unreported results, that 74 percent and 33 percent of companies that issued new equity employ big4 or industry-expertise external auditors, respectively. Another explanation for this result is that issuing new equity increases managers' incentive to manage earnings to attract new investors by reporting good performance. Thus, managers of these companies will try to avoid selecting IEOIAF providers, as they are more competent and objective and are likely to discover and report earnings manipulation. For the result on total accruals, we suggest that a large proportion of accruals is not always an indicator of low-quality accruals. Thus, employing IEOIAF providers would not add value to the company. We also suggest that IEOIAF providers would limit managers' accounting flexibility and, in turn, their chance to manipulate earnings. Thus, as managers are involved in the outsourced IAF decision, they are more likely to select a less competent outsourced IAF provider such as a non-industry specialist. Overall, these findings reject our third hypothesis.
As for the company characteristics related to risk, leverage (LEV), loss (LOSS), quick ratio (QUICK), and assets turnover (ATURN) have insignificant associations with the measures of outsourced IAF industry expertise (p > 0.10), suggesting that companies with a high proportion of debt, poor performance, a high quick ratio, and/or a high proportion of assets turnover do not differentiate between the expertise of outsourced IAF providers, and consider both industry- and non-industry-specialist outsourced IAF providers as of high quality. These findings are not consistent with the second hypothesis, which argues that companies with a high proportion of operational and financial risk are more likely to suffer from the agency problem and information asymmetry and to use high-quality auditors to reduce these issues (Caplan & Kirschenheiter, 2000;Jensen & Meckling, 1976). They are, however, consistent with the limited outsourcing IAF literature, which reports that risk factors are not crucial drivers for demanding a high-quality outsourced IAF provider (Abbott et al., 2007;Baatwah & Al-Qadasi, 2020).
In relation to external auditor characteristics such as size (ADFSIZE), industry expertise (ADFIND), and fees (LNADFEE), we observe an insignificant association with the measures of IEOIAF providers (p > 0.10). This result is not consistent with our hypotheses in relation to the characteristics of high-quality external auditors, and suggests that companies are less likely to select IEOIAF providers if they are already associated with high-quality external auditors. Also, high-quality external auditors will not push their clients to select IEOIAF providers because the only criterion for using the work of the IAF provider might be outsourcing the function to an external provider, without specifying a particular provider. This finding is consistent with Baatwah and Al-Qadasi (2020), who report that external auditors with industry expertise are not significantly associated with outsourcing IAF or with either type of provider. Overall, our results support the first hypothesis but are not consistent with any of the others.
Robustness analysis
To test the robustness of our main results, we conduct a number of sensitivity analyses. First, we use industry total assets, instead of revenues, to determine the market share of outsourced IAF providers, using the same thresholds to classify industry and non-industry outsourced IAF specialists. This analysis checks the sensitivity of our measure of IEOIAF providers, and the approach has been used by other audit researchers (e.g., Balsam et al., 2003). We also check whether the significant predictors related to company characteristics are sensitive to the measurement of the industry expertise of outsourced IAF providers. Prior external audit research (Abbott & Parker, 2000;Ettredge et al., 2009;Huang & Kang, 2018) uses a variety of thresholds to designate a provider as having industry expertise. In the main analysis, we adopted a 30 (20) percent market share as the threshold for considering IAF providers as industry experts. Here, we consider the largest-provider and 10 percent market share thresholds to designate an IEOIAF provider. In untabulated results, we observe that the findings for these alternative measures are qualitatively similar to the main findings.
Another sensitivity test is a matched-sample analysis, using the 30 percent market share threshold and industry and company size as matching criteria. Non-specialist companies with the same industry classification and the closest size in terms of revenue are chosen to match companies with specialists. This procedure results in 156 matched observations. The final robustness test addresses simultaneity, one type of endogeneity problem (Larcker & Rusticus, 2010). Under this concern, the selection of IEOIAF providers may be a function of company and auditor characteristics in earlier years. Thus, we use a lead-lag approach to check the effect of this issue by re-estimating the main equation using a one-year lag for all explanatory variables. In untabulated results, we find that the coefficients and the levels of significance for all variables are quantitatively similar to the results reported in Table 5, indicating the robustness of our main findings.
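The lead-lag check can be implemented by shifting all regressors back one year within each firm before re-estimating Eq. (1). A minimal sketch, reusing the hypothetical df, formula, and column names from the regression sketch above:

```python
# Sketch of the lead-lag robustness test: lag all regressors by one firm-year.
import statsmodels.formula.api as smf

x_cols = ["LNSALE", "MB", "LEV", "LOSS", "QUICK", "ATURN", "OWCCO",
          "NWEQTY", "TACC", "LNAG", "ADFSIZE", "ADFIND", "LNADFEE"]

lagged = df.sort_values(["firm_id", "year"]).copy()
lagged[x_cols] = lagged.groupby("firm_id")[x_cols].shift(1)
lagged = lagged.dropna(subset=x_cols)  # each firm's first sample year has no lag

result_lag = smf.logit(formula, data=lagged).fit(
    cov_type="cluster", cov_kwds={"groups": lagged["firm_id"]}
)
```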
Does the type of industry-expertise outsourced IAF provider matter?
We also examine whether companies consider the audit firm type of the IEOIAF provider when selecting such a provider. Accordingly, we classify IEOIAF providers into two types of audit firm, big4 and second-tier. This analysis is interesting because recent research demonstrates that big4 and second-tier firms deliver similar audit quality (Baatwah et al., 2019;Baatwah & Al-Qadasi, 2020;Boone et al., 2010;Cassell et al., 2013). 6 Thus, results indicating that companies differentiate between industry-expertise outsourced IAF providers based on firm type will advance our knowledge of big4 and second-tier providers in the context of outsourced IAF. This investigation also responds to the call for exploring whether the type of outsourced IAF provider influences the outsourcing decision (Mubako, 2019). Table 6 reports the results of this analysis using the 30 percent market share threshold for measuring industry expertise, although the 20 percent threshold suggests similar findings. Columns 2 and 3 show results for the determinants of big4 IEOIAF providers, while columns 4 and 5 report results for the determinants of second-tier providers. We find that, in terms of sales, large companies are significantly associated with big4 IEOIAF providers. However, we observe negative and significant associations of the equity market to book value ratio, leverage, and quick ratio with the selection of big4 IEOIAF providers, indicating that companies that are highly leveraged, have a high market to book value ratio, or are more liquid are less likely to select big4 IEOIAF providers. For second-tier audit firms, we find that the coefficients on leverage, concentrated ownership structure, assets turnover ratio, age, and big4 as external auditor are positive and significant, suggesting that companies with a high debt ratio, concentrated ownership, high assets turnover, greater maturity, and big4 audit firms as external auditors are more likely to select second-tier outsourced IAF providers with industry expertise. However, we find negative and significant coefficients for loss, total accruals, and audit fees, indicating that companies with poor performance, a large proportion of total accruals, and higher audit fees are less likely to appoint second-tier outsourced IAF providers with industry expertise. Results for other variables are not significant for either classification. Overall, we conclude that companies do consider the type of audit firm when selecting industry-expertise outsourced IAF providers.
Summary and conclusion
Considerable attention is being paid to outsourced IAF providers because they are more likely to be competent and objective. Thus, it is suggested that companies may use external IAF providers to reduce agency conflict or to signal their effective monitoring and high-quality financial reports. This study is an empirical investigation of the determinants of selecting outsourced IAF providers. Since such research is scarce and rarely focuses on the type of provider (Baatwah & Al-Qadasi, 2020;Mubako, 2019), this study empirically examines the factors associated with selecting IEOIAF providers. We adopt factors related to company and external auditor characteristics as the determinants, since research on external auditor choice documents that these characteristics play a significant role in selecting an auditor with industry expertise.
With the analysis of 334 observations from the Omani capital market of companies that outsourced IAF to external providers, we observe in Tables 5 and 6 a number of company and external auditor characteristics that are potential determinants of IEOIAF providers. In terms of company characteristics, we document that size, in terms of sales and the equity market to book ratio, issuing new equity, age, and total accruals are major determinants in selecting IEOIAF providers. These results are robust under a variety of tests. However, we find insignificant results for the association between the selection of IEOIAF providers and risk-related factors (leverage, loss, quick ratio, and assets turnover) and external-auditor attributes (size, expertise, and fees). In further analysis, we find that companies that are more leveraged, have concentrated ownership and higher assets turnover, are mature, and hire big4 firms as external auditors are more likely to select an IEOIAF provider from second-tier audit firms, while companies with poor performance, higher total accruals, and higher external audit fees are not associated with these providers. We also observe that larger companies in terms of sales are associated with an IEOIAF provider if this provider is a big4 audit firm, while companies with a higher market to book ratio, higher leverage, and a higher quick ratio are less likely to engage a big4 IEOIAF provider.
The overall results offer interesting theoretical and practical contributions. By extending the results of previous studies which focus on the role of industry expertise in reducing agency problems or signalling the quality of financial reports in the context of external audit, our study shows that outsourcing IAF to a third party with industry expertise is a measure for mitigating agency problems or reducing information asymmetry. It also expands outsourced IAF research by considering several company and auditor characteristics that have been ignored in earlier models. Our findings provide initial evidence that market to book value, quick ratio, assets turnover, total accruals, and external audit fees are relevant to the decision to outsource IAF. Further, we extend the recent call by Mubako (2019) and the empirical evidence of Baatwah and Al-Qadasi (2020) on the types of outsourced IAF provider by examining the factors motivating companies to hire industry specialists. In practice, our results also provide companies, audit firms, and regulators with an indicator of the development of outsourced IAF. In particular, companies can recognise that the quality of outsourced IAF providers differs, and that those with industry expertise represent a choice for improving or signalling the quality of their financial reporting and internal controls. As for audit firms, our findings indicate a new orientation for them to use industry expertise as a means to differentiate themselves from other providers. Thus, this study may help them to develop and maintain appropriate strategies and programmes to ensure that the quality of outsourced IAF meets the expectations of their clients. Finally, our study provides interesting inputs to regulatory authorities such as the IIA and the Public Company Accounting Oversight Board on the role of outsourced IAF in current practice, encouraging them to establish a common framework for outsourced IAF. In the current IAF framework and IAF best-practice recommendations, neither regulatory authority appears to support or encourage outsourced IAF.
Despite these contributions, this study is subject to theoretical and practical limitations which might be translated into avenues for future research. First, our study focuses on two theories, agency and signalling, as the main motivation for companies in selecting IEOIAF providers. Future research may consider other explanations when formulating and testing new hypotheses. Additionally, this study takes into consideration only basic company and auditor characteristics. Thus, future research may consider additional determinants related to other characteristics such as governance or ownership. A final limitation is that data on audit fees related to outsourced IAF are unavailable, which forces us to use clients' sales to measure the market share of outsourced IAF providers. It would be interesting for future research to collect data on outsourced IAF fees to identify providers' market share. | 2021-08-27T16:46:21.186Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "1a18f008398f1c846ffa7adddb22ee1a321a9543",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311975.2021.1938931?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "a2b827ea722e7dcd9283666f9b80b9c663804664",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
115196035 | pes2o/s2orc | v3-fos-license | Relationships between body mass index, lifestyle habits, and locomotive syndrome in young‐ and middle‐aged adults: A cross‐sectional survey of workers in Japan
Abstract Objectives Although many studies have examined locomotive syndrome (LS) among elderly people, few studies have examined LS in young‐ and middle‐aged adults. This study aimed to provide basic data on the epidemiological characteristics of LS, including in young‐ and middle‐aged adults. Method We conducted a cross‐sectional survey of a nonrandom sample of 852 adults aged 18–64 (678 males, 174 females) working in five companies in Japan, between December 2015 and February 2018. LS stage was determined using the criteria proposed by the Japanese Orthopaedic Association (JOA). LS stage 0 was defined as No‐LS, and stages 1 and 2 were defined as LS. Multiple logistic regression analysis was used to investigate the independent relationship between LS and sociodemographic factors, smoking, alcohol drinking (AD), frequency of breakfast consumption (FBC), dietary variety score (DVS), and the University of California Los Angeles (UCLA) activity score, after adjusting for age and sex. Results We found that 23.1% of participants were evaluated as LS, including 21.5% of males and 29.3% of females (P < 0.05). Participants aged ≥45 years exhibited higher rates of LS (males: 23.1%, females: 43.6%) compared with those aged <45 years (P < 0.05). Logistic regression analysis revealed that age, body mass index (BMI), AD, UCLA activity score, and FBC were also related to LS. Conclusion Education initiatives about LS should be targeted not only to elderly populations but also to young‐ and middle‐aged adults in the workplace.
| INTRODUCTION
In 2015, the average life expectancy in Japan was 80.8 years for men and 87.0 years for women, and both are predicted to rise in the future. Moreover, people >65 years of age comprised 27.3% of the entire Japanese population in 2016, and this proportion is expected to increase to 38.4% by 2065. No country has previously experienced such a long life expectancy, and it is clear that Japan is a rapidly aging society. 1 Locomotion difficulties affect activities of daily living in older people. In 2007, the Japanese Orthopaedic Association (JOA) proposed the concept of locomotive syndrome (LS) to refer to the risk of elderly individuals becoming bedridden because of reduced function of the locomotive organs (eg, muscles, bones, and joints). LS is caused by reduced muscle strength and balance associated with aging and locomotive pathologies such as osteoporosis, osteoarthritis (OA), and sarcopenia. 2-4 In 2013, the JOA proposed using the following three tests for assessing the risk of LS: the two-step test, the stand-up test, and the 25-question Geriatric Locomotive Function Scale (GLFS-25). 5 The JOA determined the clinical decision limits of these three indices for assessing the risk of LS in 2015, and Yoshimura et al 6 reported that all of these indices could significantly and independently predict a decline in mobility and, according to general population data, high scores on these indices may exponentially increase the risk of immobility.
The following factors have been proposed as causes of LS: lack of regular exercise, excessive exercise and injury, being underweight or overweight, and reduced physical activity. 7 All of these factors are closely related to lifestyle, dietary, and exercise habits acquired from a young age. Therefore, it is important for individuals, and for Japanese society as a whole, to effectively cope with the expected restrictions in walking ability that become present after middle age. Although some studies investigating physical function and degenerative disorders, such as lumbar canal stenosis and OA, have reported evidence of an improvement in LS in older adults, 8-10 little is known about the actual state of LS in young- and middle-aged adults. Thus, differences in LS status among different age groups are important to understand. We have recently reported the prevalence of LS and the LS level among young- and middle-aged adult workers. 11 However, the relationships between lifestyle habits such as dietary habits and LS are unknown.
The current study sought to provide basic data on the epidemiological features of LS in young- and middle-aged adults, including lifestyle habits such as dietary habits and physical activity. These data can provide a scientific basis for preventing and treating LS.
2 | METHODS
A cross-sectional study was conducted with the employees of five companies, who were recruited by the public health department of Mie Prefecture, Japan, between December 2015 and February 2018. These companies consisted of various white-collar (74.6%) and blue-collar (25.4%) departments at two drug companies (Company A, 250 day-shift employees; Company B, 124 day-shift employees), a chemical company (Company C, 275 day-shift employees), an office equipment manufacturer (Company D, 258 day-shift employees), and an electronics company (Company E, 215 day-shift employees). Employees were not eligible to participate in the survey if they: (a) could not walk without aids (T-cane, crutch, wheelchair, etc.); (b) had an injury that hindered exercise at the time of the survey; or (c) were not able to participate in all of the activities used to evaluate LS. Each company notified its employees about the eligibility criteria in advance and enrolled only voluntary participants. All participants provided written informed consent prior to participating in the study, which was approved by the Institutional Review Board of Suzuka University of Medical Science (approval no. 241). This study was conducted in accordance with the principles of the Declaration of Helsinki.
Body weight was measured using an Inner Scan® 50V (BC-622; TANITA Co., Tokyo, Japan). Body mass index (BMI) was calculated as weight (kg)/height (m)². Data on LS stage, demographic factors, socioeconomic status, lifestyle habits, and physical activity were collected using the LS scales or paper-based questionnaires. The independent variables included age (<45 or ≥45 years), BMI (<18.5, 18.5-24.9, or ≥25.0), sex (male or female), education (<13 or ≥13 years), occupation (white collar or blue collar), income (<5 or ≥5 million yen/year), smoking (none, past smoker, or current smoker), alcohol drinking (AD; none, a few times/month, a few times/week, or daily), the University of California Los Angeles (UCLA) activity score (<5 or ≥5 points), 12 frequency of breakfast consumption (FBC; <6 or ≥6 days/week), and the dietary variety score (DVS; <3, 3-5, or ≥6 points). 13 Physical activity was assessed using the UCLA activity score, 12 a simple scale that ranges from 1 to 10. Participants indicated their most appropriate activity level, with 1 defined as "no physical activity, dependent on others" and 10 defined as "regular participation in impact sports." Dietary variety was assessed using the DVS developed by Kumagai et al. 13 The DVS is a food-based composite score calculated from the consumption frequencies of 10 food items (fish and shellfish, meats, eggs, milk and dairy products, soybeans and soybean products, green and yellow vegetables, seaweed, potatoes, fruits, and fats and oils) in the week before the questionnaire was administered. To reflect differences in consumption patterns and to simplify scoring, a score of 1 was given if a food item was consumed every day; otherwise the score was zero. Dietary variety therefore improves as the DVS approaches 10. Questionnaires were distributed to all day-shift employees at each company four weeks before physical measurements were obtained and LS stage was determined.
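For concreteness, the DVS scoring rule described above can be restated as a short sketch (in Python; the item keys and data layout are our illustrative assumptions, not part of the original questionnaire):

# Sketch of the DVS rule: 1 point per food item eaten every day of the
# preceding week, 0 otherwise, for a maximum possible score of 10.
DVS_ITEMS = [
    "fish_and_shellfish", "meats", "eggs", "milk_and_dairy",
    "soybean_products", "green_yellow_vegetables", "seaweed",
    "potatoes", "fruits", "fats_and_oils",
]

def dietary_variety_score(days_consumed):
    """days_consumed maps each item to the number of days (0-7) it was eaten."""
    return sum(1 for item in DVS_ITEMS if days_consumed.get(item, 0) >= 7)

# Example: only eggs and fruits were eaten daily, so DVS = 2, which falls
# into the study's lowest DVS category (<3 points).
example = {item: 3 for item in DVS_ITEMS}
example.update(eggs=7, fruits=7)
print(dietary_variety_score(example))  # prints 2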
To evaluate LS, we used the two-step test, 14 the stand-up test, 14 and the GLFS-25. 15 In the two-step test, participants start in a standing posture and then take two steps forward with maximum stride while being careful not to lose balance. The stand-up test is performed with stools that are 10, 20, 30, and 40 cm in height; subjects are requested to stand up from each stool using one leg or both legs. The GLFS-25 is a self-reported comprehensive measure consisting of 25 questions referring to the preceding month: four questions regarding pain, 16 questions regarding activities of daily living, three questions regarding social functioning, and two questions regarding mental health status. We determined the risk of LS as follows: if the two-step score was <1.3, LS risk was 1, and if it was <1.1, LS risk was 2; if the participant could not stand up from a height of 40 cm on either leg, LS risk was 1, and if the participant could not stand up from a height of 20 cm using both legs, LS risk was 2; if the participant received ≥7 points on the GLFS-25, LS risk was 1, and if the participant received ≥16 points, LS risk was 2. We used the highest risk level among these three LS evaluation tests. For example, if the LS risk from the two-step test was 0, the risk from the stand-up test was 2, and the risk from the GLFS-25 was 1, the overall LS risk of 2 was adopted. These LS risk scores were evaluated according to the "How to determine your risk level" section of the official JOA website for LS. 7 Participants were classified as No-LS (LS risk stage 0) or LS (LS risk stage 1 or 2), and the independent variables were compared between groups. Differences in continuous variables between the No-LS and LS groups were evaluated using t-tests, and chi-square tests were used to evaluate categorical variables.
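Because the overall stage is simply the worst result among the three tests, the rule can be summarized compactly (a sketch in Python; the function and argument names are ours):

# Sketch of the JOA LS risk staging rule described above.
def ls_risk_stage(two_step_score, can_stand_40cm_one_leg,
                  can_stand_20cm_both_legs, glfs25_points):
    """Return the overall LS risk stage (0, 1, or 2), the worst of three tests."""
    # Two-step test: <1.1 -> stage 2; <1.3 -> stage 1; otherwise stage 0.
    two_step = 2 if two_step_score < 1.1 else (1 if two_step_score < 1.3 else 0)
    # Stand-up test: failing 20 cm with both legs -> stage 2; otherwise,
    # failing 40 cm on one leg -> stage 1.
    stand_up = 2 if not can_stand_20cm_both_legs else (
        0 if can_stand_40cm_one_leg else 1)
    # GLFS-25: >=16 points -> stage 2; >=7 points -> stage 1.
    glfs = 2 if glfs25_points >= 16 else (1 if glfs25_points >= 7 else 0)
    return max(two_step, stand_up, glfs)

# The worked example from the text: test stages (0, 2, 1) give overall stage 2.
print(ls_risk_stage(1.4, can_stand_40cm_one_leg=True,
                    can_stand_20cm_both_legs=False, glfs25_points=10))  # 2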
Multiple logistic regression analysis was used to investigate the relationships between dietary habits, lifestyle, physical activity, and LS after adjusting for age and sex. The independent variables included sex, age, BMI, occupation, income, smoking, AD, UCLA activity score, FBC, and DVS. Adjusted odds ratios (OR) and 95% confidence intervals (CI) were calculated. All statistical analyses were conducted using JMP 9.0.2 (SAS Institute Inc., Cary, NC).
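The analysis itself was run in JMP; for readers who wish to reproduce the modeling step, an equivalent adjusted logistic model could be specified in Python with statsmodels as follows (the file and column names are hypothetical):

# Hypothetical re-implementation of the adjusted logistic model
# (the study itself used JMP 9.0.2); column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ls_survey.csv")  # one row per participant (hypothetical file)

model = smf.logit(
    "ls ~ C(age_group) + C(sex) + C(bmi_group) + C(occupation) + C(income)"
    " + C(smoking) + C(alcohol) + C(ucla_group) + C(fbc_group) + C(dvs_group)",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals, as in Table 5.
or_ci = np.exp(pd.concat([model.params.rename("OR"), model.conf_int()], axis=1))
print(or_ci.rename(columns={0: "2.5%", 1: "97.5%"}))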
3 | RESULTS
In total, 852 participants (678 males and 174 females) responded to the questionnaire and had their physical attributes measured (mean age ± standard deviation = 44.4 ± 10.2 years; range = 18-64 years). The sociodemographic characteristics, lifestyle habits, dietary habits, and physical activity of all participants are shown in Table 1. The mean BMI was 23.7 ± 3.3. With regard to occupation and income, 634 (74.4%) were white collar, and 698 (81.9%) had a salary of ≥5 million yen. Of the 852 employees, 515 (60.4%) were nonsmokers and 280 (32.9%) were non-drinkers. The mean UCLA activity score, FBC, and DVS were 5.1 ± 2.4, 6.1 ± 2.0, and 2.8 ± 1.8, respectively. Table 2 shows the age-stratified prevalence of LS stages 1 and 2 and the corresponding indices in the Short Test Battery for LS. LS stage 1 or 2 was present in both males and females in all the age-stratified groups (Table 2). The percentage of LS in females gradually increased with age, except in the 60- to 65-year group. Table 3 shows the prevalence of LS in the two age groups, determined using the criteria proposed by the JOA. Of all participants, 23.1% were evaluated as LS (21.5% of males and 29.3% of females; P < 0.05). Participants aged ≥45 years showed significantly higher percentages of LS (males: 23.1%, females: 43.6%) than those aged <45 years (males: 19.6%, females: 17.7%). Among females, participants aged ≥45 years showed a significantly higher percentage of LS than those aged <45 years (aged <45: 33.3%, aged ≥45: 66.7%; P < 0.001), whereas there was no significant difference between the male age groups (aged <45: 41.1%, aged ≥45: 58.9%; P = 0.27).
The distributions of the participant characteristics in the No-LS and LS groups are shown in Table 4. Age, sex, AD, UCLA activity score, and FBC differed significantly between groups, whereas BMI, occupation, income, smoking, and DVS did not. Table 5 shows the results of the logistic regression analysis of factors associated with LS in all participants.
4 | DISCUSSION
The current study investigated the relationships between LS in young- and middle-aged adults and demographic factors, socioeconomic status, lifestyle habits, dietary habits, and physical activity. The results revealed that age, BMI, AD, UCLA activity score, and FBC were significantly associated with LS. Overall, this cross-sectional observational study of lifestyle habits, dietary habits, and physical activity in young- and middle-aged adults in Japan revealed that moderate alcohol consumption, better physical activity, and a higher FBC were significantly associated with lower levels of LS. To the best of our knowledge, this is the first report to demonstrate a relationship between lifestyle habits, dietary habits, and physical activity and the LS classification proposed by the JOA to assess the risk of the disorder in young- and middle-aged adults.
The decline of locomotive function typically begins in the late 40s. Functional decline of movement often progresses without detection, particularly in modern societies with widespread motorized transportation. 16 Regarding the prevalence of the indices of LS risk stages 1 and 2, we found that they were present in all of the age groups, and that LS risk was increased in participants over the age of 45. Our data indicated a gradual increase with advancing age, consistent with our previous report. 11 Moreover, there were no significant differences between the male age groups, whereas among females, participants aged ≥45 years showed a significantly higher percentage of LS than those aged <45 years. One possible cause of this sex difference is the effect of testosterone, which promotes protein anabolism and the development of muscle mass. It has been reported that males have more muscle mass than females because testosterone rises from puberty onward, and that the rate of muscle mass reduction increases with decreasing testosterone after middle age. There is also a report suggesting that muscle mass reduction in females is related to female hormones. 17,18 In any case, it is clear that there are sex differences in muscle mass and its age-related change, and it is necessary to examine both males and females when studying muscle mass. In general, females have more diseases of the locomotive organs, such as knee osteoarthritis (KOA), than males. 19,20 Therefore, many cases of LS in females are likely to be highly related to diseases of the locomotive organs.
Several previous reports have indicated that BMI is positively correlated with KOA, a potential cause of LS. 21,22 Jiang et al 23 concluded that obesity is a risk factor for KOA after systematically analyzing the correlation between BMI and KOA in 21 independent reports. Moreover, Liu et al 24 also reported that populations with a high BMI showed a significantly increased risk of KOA. Obesity is a risk factor for these disorders because the pressure exerted on the articular cartilage is increased, accelerating degeneration. In addition, Nakamura et al 25 reported that BMI, particularly BMI ≥ 23.5 kg/m², was significantly associated with LS in Japanese women over 60 years of age, and that BMI was an important measure for the detection of LS. In this study, the BMI of the LS group was significantly higher than that of the No-LS group in the multivariate analysis. This result suggests that a high BMI may be a useful screening indicator for LS prevention in young- and middle-aged adults in Japan.
TABLE 4 Characteristics of the No-LS and LS groups

Patel et al 26 reported that the amount of physical activity performed in middle age is associated with mobility in older adulthood. Moreover, several studies reported that sports participation and a high level of total physical activity were associated with a reduced decline in physical function, indicating that a higher level of physical activity may protect against impairments in activities of daily living. 27,28 In Japan, Akune et al 29 reported that exercise habits during middle age were associated with a lower prevalence of sarcopenia, which is one of the main causes of LS. Furthermore, Nishimura et al 30 recently reported that exercise habits during middle age contribute to preventing LS in old age. The current results revealed that UCLA scores in the No-LS group were significantly greater than those in the LS group, suggesting that lower levels of physical activity can contribute to LS, even in young- and middle-aged adults.
The LS group had a significantly lower FBC than the No-LS group. Some previous studies have suggested that skipping breakfast is associated with lower levels of physical activity. For example, individuals who skip breakfast have been observed to participate in less exercise and exhibit lower levels of physical activity than those who eat breakfast. 31,32 Moreover, Sakurai et al 33 reported that a higher frequency of skipping breakfast was associated with a higher likelihood of current smoking and a lower likelihood of habitual exercise. We did not collect comprehensive data on these variables, which is a limitation of the current study; nevertheless, some lifestyle factors closely associated with skipping breakfast may also affect the onset of LS. Therefore, it is important to encourage people who skip breakfast to change their lifestyle and form reliable breakfast-eating habits. Several previous studies have suggested a dose-response relationship between alcohol consumption and levels of physical activity, [34][35][36] indicating that as drinking increases, so does physical activity level. According to a review by Piazza-Gardner et al 37 examining the association between alcohol consumption and levels of physical activity, alcohol consumers of all ages were more physically active than their nondrinking peers. However, alcohol consumption is also reported to be associated with osteoporosis, which is a potential cause of LS. Importantly, moderate alcohol consumption has been shown to have a protective effect against osteoporosis, while heavy alcohol intake is conversely associated with an increased risk of osteoporosis. 38 Akahane et al 39 reported that alcohol consumption a few times/month or a few times/week was inversely related to the presence of LS in a cross-sectional study using an Internet panel survey. The current results indicated a similar tendency. Regarding the association between alcohol consumption and LS, however, we may have observed only the effect of alcohol consumption in healthy workers. In general, alcohol consumption is known to have a complex association with health, and some studies link its consumption to acute and chronic diseases. [40][41][42] Other research suggests that low levels of alcohol consumption can have a protective effect against ischemic heart disease, diabetes, and several other outcomes. [43][44][45] However, it is possible that those who drink heavily and have impaired locomotive function could not work effectively and therefore could not participate in our study.
This study involved several limitations that should be considered. First, the study design was occupational-field based, not population based. Moreover, because the participants were workers and the ratio of males to females varied widely between companies, we pooled the data for all companies for analysis. Therefore, caution should be exercised when generalizing these results to the general population of the same generation. Second, employees who were hindered from exercising at the time of the survey were excluded in advance for safety considerations. Moreover, because participation was voluntary rather than compulsory, it is possible that people with LS were less likely to take part, since many locomotive organ diseases, such as OA and spinal canal stenosis, are associated with pain and limited movement. This could represent a potential bias. Furthermore, convenience sampling may have contributed to selection bias, with most of the sample consisting of healthy voluntary participants who were white-collar workers with relatively high incomes. A replication of the study with a random sample of diverse employees (eg, employees with diverse injury or illness histories, poorer socioeconomic backgrounds, and a greater variety of occupations) is recommended to improve the generalizability of the results. Finally, female employees were more difficult to recruit for this study. Future research should be conducted using a larger and more diverse sample, including all genders.
In conclusion, we conducted a cross-sectional study to investigate the relationship between LS and demographic factors, socioeconomic status, lifestyle habits, dietary habits, and physical activity in young- and middle-aged adults at five local companies. We found that age, gender, BMI, alcohol consumption, physical activity, and frequency of breakfast consumption were significantly associated with LS. Health education addressing lifestyle habits, dietary habits, and physical activity in the workplace is important for reducing the risk of LS. Therefore, education initiatives about LS should be targeted not only to elderly populations but also to young- and middle-aged adults in the workplace. However, since this was a cross-sectional study, causal relationships cannot be determined. In the future, we must examine whether lifestyle habits, dietary habits, and physical activity are causes or consequences of LS. | 2019-04-16T13:21:58.022Z | 2019-04-13T00:00:00.000 | {
"year": 2019,
"sha1": "a41f05f8ae2f1cc97c2f96297f6e21c6f5ec4e4a",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/1348-9585.12053",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a41f05f8ae2f1cc97c2f96297f6e21c6f5ec4e4a",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195444816 | pes2o/s2orc | v3-fos-license | A study to assess the effect of articular congruency and displacement in treatment of fracture of lateral condyle of humerus in children
Background: Lateral condyle fractures of the distal humerus are the second most common fractures at the elbow in the paediatric population, which, if not properly treated, lead to non-union, deformity, and other complications. Operative intervention with closed reduction and percutaneous pinning or open reduction and internal fixation is indicated for a malaligned articular surface and/or an unstable fracture. Intraoperative arthrograms help when the congruency of the joint surface is in doubt and in assessing joint reduction. We present our results of treatment of fracture of the lateral condyle of the humerus in 24 patients and the effect of fracture displacement and congruency on functional and clinical outcome. Aim: The aim of our study was to assess the functional and clinical results of fracture of the lateral condyle of the humerus in children based on articular congruency. Materials and Methods: This was a retrospective study of 24 patients with an isolated fracture of the lateral condyle of the humerus. Open fractures and cases with other comorbidities were excluded. Functional and clinical outcome was evaluated according to the Hardacre criteria. Results: There were 24 cases in the study with an average age of 7.54 years; 19 were male and 5 were female. According to the Jacob classification, 6 were type I, 6 were type II, and 12 were type III. Arthrography helped in decision making in cases of doubtful joint congruency. Excellent results were achieved in 100% of type I and type II fractures, whereas 85% of type III fractures achieved excellent to good results. Increased operative time was associated with spur formation. Conclusion: An appropriate plan of treatment, decided on the basis of articular congruency and displacement with the help of arthrography, gives excellent results in fracture of the lateral condyle of the humerus.
Introduction
Lateral condyle fractures of the distal humerus are the second most common fractures at the elbow in the paediatric population, usually between the ages of 6 and 10 years. The diagnosis can be difficult both radiologically and clinically because of extension into the articular surface, which, if not treated properly, leads to complications such as non-union, malunion, and fishtail deformity. [1] These fractures need perfect anatomical reduction and articular alignment to prevent complications. Most of the fractures occur between 4 and 10 years of age, with a peak incidence at 5-6 years. [2] Controversy exists regarding the management of non-displaced and minimally displaced (less than 2 mm) fractures, which account for 33% to 69% of lateral humeral condylar fractures. [3][4][5] It is generally accepted that open reduction of the fracture and internal fixation with screws or wires constitutes the method of choice. The Milch classification is widely used: fractures are type I or type II according to whether the fracture line exits through the capitellar-trochlear groove or through the trochlea, respectively. [6] The Jacob classification dictates whether surgical intervention is required: a Jacob type I fracture is non-displaced, type II is displaced by more than 2 mm but not malrotated, and type III is displaced with malrotation. The aim of lateral humeral condyle fracture treatment is to ensure healing of the fracture and to prevent pseudoarthrosis, malunion, deformities, and functional disorders. [7] Traditionally, undisplaced stable fractures were treated with cast immobilisation and observation.
Articular fractures that have a hinge may be treated with closed reduction and percutaneous pinning. In certain situations, an arthrogram or an MRI scan may help define articular congruity and the adequacy of the reduction. [7,8] Fractures that are unstable, malrotated, or displaced by over 2 mm usually undergo open reduction and internal fixation, usually with wires, smooth pins, or screws. Although recent methods of diagnosis and a proper understanding of the anatomy and treatment have reduced complications, there is a constant need for improvement in early diagnosis and adequate management that is not overly demanding. In this study, we examine proper classification and treatment of these fractures.
Materials and Methods
A retrospective study of 24 cases of lateral condyle fractures was performed. Initial assessment of the patients was performed in the Emergency Department of our institution. The injured limb was examined for deformity, wounds, and neurovascular integrity. Antero-posterior, oblique, and lateral radiographs of the elbow were routinely performed. The exclusion criteria were: (a) open fractures; (b) injuries involving more than the lateral condyle and/or associated with a dislocation; and (c) fixation performed more than two weeks after the injury. All fractures were classified by the following system: a Type I fracture is displaced less than 2 mm; in a Type II fracture there is >2 mm of displacement with intact articular cartilage, as demonstrated by arthrogram; in a Type III fracture there is >2 mm of displacement and the articular surface is not intact. The acceptable displacement for conservative management in an above-elbow plaster of Paris (POP) cast was up to 2 mm. Patients who were treated conservatively were closely followed up with radiographs every week to ensure that the fracture had not displaced. The POP cast was removed upon radiological union, typically between 4 and 6 weeks, and physiotherapy commenced. The surgical technique was as follows. First, fracture displacement was evaluated. If there was uncertainty as to the congruity of the articular surface, an arthrogram was performed. If the articular surface was congruent, then closed reduction and pinning were performed. If the articular surface was not congruent, an open reduction and internal fixation was performed. The fracture was identified and reduced via a dorsolateral approach to the distal humerus, through the interval between brachioradialis and triceps. The articular surface was directly visualized and reduced, and either 2 or 3 K-wires were placed in a divergent pattern to stabilize the fracture. Subsequently, an above-elbow POP cast in neutral position was applied. Fluoroscopy was used intraoperatively to help assess fracture reduction and pin placement. Postoperatively, the first radiographic assessment was at 1 week after surgery to ensure that the fracture reduction was maintained. All patients were then monitored until they showed solid radiographic healing, regained their motion, became asymptomatic, and had no residual problems. Follow-up evaluation was performed using functional and overall grading according to the Hardacre criteria, [9] which consider both function and cosmesis. An excellent result means no loss of motion, no alteration in carrying angle, and no symptoms. A good result is characterized by a satisfactory functional range of motion, lacking no more than 15 degrees of complete extension, with no arthritic or neurological symptoms. Poor results included disabling loss of motion, conspicuous alteration of the carrying angle, arthritic symptoms, ulnar neuritis, and radiographic findings of non-union or avascular necrosis.
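The decision pathway just described can be summarized as a sketch (in Python; the function and argument names are ours):

# Sketch of the treatment algorithm used in this study.
from typing import Optional

def plan_treatment(displacement_mm, articular_congruent: Optional[bool]):
    """Map fracture displacement and articular congruity to a treatment plan."""
    if displacement_mm < 2.0:                      # Type I
        return "above-elbow POP cast, weekly radiographs until union"
    if articular_congruent is None:                # congruity uncertain
        return "intraoperative arthrogram to assess articular congruity"
    if articular_congruent:                        # Type II
        return "closed reduction and percutaneous K-wire pinning"
    return "open reduction and internal fixation, divergent K-wires"  # Type III

print(plan_treatment(1.5, None))    # conservative management
print(plan_treatment(3.0, True))    # closed reduction and pinning
print(plan_treatment(3.0, False))   # open reduction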
Results
We studied 24 cases of fracture of the lateral condyle of the humerus in children at our hospital and made the following observations regarding the factors affecting the fracture, the plan of management, and the results for each fracture type. Of the 2 cases with poor results, one had a full range of motion but a progressive valgus deformity, while the other had a disabling loss of motion but an inconspicuous carrying angle. This configuration is relatively stable and can be treated by closed reduction and percutaneous pinning. In our study we found that a proper understanding of articular congruency on X-ray and arthrography [10,11] helped us to treat these patients by closed means and obtain excellent results. This treatment modality avoids operative complications such as avascular necrosis and non-union, which result from excessive soft-tissue dissection. In the 14 cases of open reduction, we observed peroperatively that none of the fractures appeared to traverse the ossified portion of the capitellum (Milch type I). On X-ray, 4 of these 14 cases were Milch type I, but on peroperative examination they were found to be Milch type II; the intraoperative findings therefore did not correlate with the preoperative radiographic diagnosis. Three of the 7 cases of undisplaced fracture (type I) had a Milch type II pattern on X-ray. Milch type II fractures are generally treated by open reduction, but we treated these patients conservatively and obtained excellent results. The Milch classification thus does not provide useful clinical information for planning management, so we classified these fractures according to displacement. We agree with the work of Mirsky and Karas, [12] who observed the same findings. The K-wire is the most reliable implant for treating lateral condyle fractures, as a threaded screw or compression plate can cause absorption of the epiphysis and, later, valgus and fishtail deformity. In some cases one transverse K-wire through the physis is helpful to prevent rotation, and 2 mm is the proper wire size for fixing these fractures. There is a definite association between operative time and spur formation, as we found in 4 cases: more operative time entails more soft-tissue dissection and fragment manipulation, which lead to periosteal stripping and spur formation. The usual time to union of a fracture of the lateral condyle is 6 weeks, and implant removal should be considered at that time. In one case, early removal of the K-wire at 4 weeks resulted in delayed union; the same findings were observed by Cardona and Riddleo. [13] Although physiotherapy was started earlier after operative treatment, it had to be given for a longer duration than after conservative treatment; postoperative fibrosis may be responsible for this. After operative treatment, physiotherapy can be started as early as 4 weeks, which reduces elbow stiffness and the number of visits. During the physiotherapy schedule we found that elbow flexion was achieved earlier than extension. All 3 cases of loss of movement were associated with type III fractures. In one case the carrying angle was affected by 15° although movement was full; this patient had a progressive cubitus valgus deformity without any involvement of the ulnar nerve at the final follow-up of 2 years. In all 14 cases of open reduction there was a loss of about 3-4° of hyperextension compared with the normal side. All 4 cases with various deformities were likewise associated with type III fractures.
Lateral condyle overgrowth was noted in 10 of the 14 cases of open reduction; it has no functional significance, but parents should be told about this bony bump before surgery, as it may otherwise cause concern. The cases with poor results also had complications, but their final range of motion was good compared with the preoperative range; the average increase in range of motion in these cases was 80°. Complications such as pin-tract infection and spur formation were also noted, but they did not alter the final outcome. There was not a single case of non-union, but there were 2 cases of delayed union. Results were evaluated according to the Hardacre criteria, which we modified for the carrying angle: minimal alteration (0-5°), inconspicuous alteration (6-10°), and conspicuous alteration (>10°). Minimal to inconspicuous alteration of the carrying angle did not alter the range of motion, which is the functional requirement of the patient. All fresh cases had excellent results when the specific treatment modality was chosen according to the fracture type.
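The modified grading can be encoded as follows (a sketch in Python; the argument names, and the collapsing of the arthritic, neurological, and radiographic findings into a single flag, are our simplifications):

# Sketch of the modified Hardacre grading used above.
def hardacre_grade(extension_deficit_deg, carrying_angle_change_deg,
                   disabling_loss_of_motion, adverse_findings):
    """Grade the outcome as excellent, good, or poor."""
    # adverse_findings covers arthritic symptoms, ulnar neuritis, and
    # radiographic non-union or avascular necrosis.
    if disabling_loss_of_motion or carrying_angle_change_deg > 10 or adverse_findings:
        return "poor"
    if extension_deficit_deg == 0 and carrying_angle_change_deg <= 5:
        return "excellent"
    if extension_deficit_deg <= 15:               # functional range of motion
        return "good"
    return "poor"

print(hardacre_grade(0, 3, False, False))    # excellent
print(hardacre_grade(10, 8, False, False))   # good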
Conclusion
In conclusion, appropriate treatment should be chosen according to the displacement of the fracture, and minimally displaced fractures with articular congruency should be fixed with percutaneous K-wiring, thereby reducing complications and operative burden.
Conflict of Interest
The authors declare no conflict of interest.
Fig 1: The joint surface was accurately reduced with minimal dissection of soft tissues from the distal fragment in order to reduce the risk of avascular necrosis of the capitellum. | 2019-06-26T13:40:53.144Z | 2019-04-01T00:00:00.000 | {
"year": 2019,
"sha1": "15c76262a3baba0edd5858d6a95214278dff4788",
"oa_license": null,
"oa_url": "http://www.orthopaper.com/archives/2019/vol5issue2/PartD/5-1-45-807.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7895234ac0670c5bd34de9814371d81021d89667",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21469624 | pes2o/s2orc | v3-fos-license | Quantum Smectic and Supersolid Order in Helium Films and Vortex Arrays
A flux liquid can condense into a smectic crystal in a pure layered superconductor with the magnetic field oriented nearly parallel to the layers. Similar order can arise in low temperature $^4$He films with a highly anisotropic periodic substrate potential. If the smectic order is commensurate with the layering, this periodic array is {\sl stable} to quenched random point--like disorder. By tilting and adjusting the magnitude of the applied field, both incommensurate and tilted smectic and crystalline phases are found for vortex arrays. Related variations are possible by changing the chemical potential in the helium system. We discuss transport near the second order smectic freezing transition, and show that permeation modes in superconductors lead to a small non--zero resistivity and a large but finite tilt modulus in the smectic crystal. In helium films, the theory predicts a nonzero superfluid density and propagating third sound modes, showing that the quantum smectic is always simultaneously crystalline and superfluid.
I. INTRODUCTION
The discovery of high temperature superconductors, with their broad fluctuation regime, has emphasized the inadequacy of the conventional mean-field description of critical behavior. 1 Early attempts to apply field theoretic methods to the Ginzburg-Landau (GL) free energy, however, are of limited applicability in low dimensions. 2,3 Instead, superconducting fluctuations in low dimensions are now understood in terms of vortices, which emerge as the low energy degrees of freedom of the Ginzburg-Landau theory. The phases of the superconductor within this picture are analogous to states of conventional matter, except that they are composed of flux lines instead of molecules. In fact, because the vortices are extended objects, the system most closely resembles a collection of quantum bosons. 4 As the thickness of the superconductor approaches infinity, the effective "temperature" of this bosonic system goes to zero, and interesting strongly-correlated phases can emerge.
One difference between the flux line array and true assemblies of bosons is that in the former, the effects of the embedding medium are more often dramatic. Indeed, without careful preparation, most high temperature superconducting samples are dominated by internal defects which tend to disorder the vortex array. Only at reasonably high temperatures, when the fluxons are best described as a liquid, can the effects of these random pinning centers be neglected. 5 At lower temperatures, disorder may induce subtle types of glassy order, [6][7][8][9] or simply force the system to remain a liquid with very sluggish dynamics. 10 The layered structure of the copper-oxide materials itself provides a non-random source of pinning. 11 At low temperatures, the c-axis coherence length ξ_c0 ≈ 4 Å ≲ s ≈ 12 Å, where s is the lattice constant in this direction. Vortex lines oriented in the ab plane are attracted to the regions of low condensate electron density between the CuO₂ layers. Such a periodic potential for true two-dimensional bosons could be induced by an anisotropically corrugated substrate, possibly leading to the observation of the effects described here in ⁴He films.
Previous work on intrinsically pinned vortices has focused on the low temperature fluctuationless regime, in which the vortices form a pinned elastic solid. Near T_c, however, when thermal fluctuations are important, entirely different phases can exist. These thermally fluctuating states are particularly interesting experimentally because hysteretic effects are weak and equilibrium transport measurements are more easily performed than at low temperatures. Our research is motivated by the recent experimental work of Kwok et al., 12 who observed a continuous resistive transition in YBa₂Cu₃O₇ for fields very closely aligned (θ < 1°) with the ab plane. A preliminary version of our results appeared in Ref. 13.
To explain the experiments, the interplay between inter-vortex interactions and thermal fluctuations must be taken into account in an essential way. The experiments of Ref. 12 seem to rule out conventional freezing, which is first order in all known three-dimensional cases. The additional observation of a strong first order freezing transition for θ ≳ 1° suggests that point disorder is relatively unimportant at these elevated temperatures (strong point disorder would destroy a first order freezing transition). In addition, an attempted fit of the data to a dynamical scaling form yielded exponents inconsistent with vortex or Bose glass values. 12 Instead, we postulate freezing into an intermediate "smectic" phase between the high temperature flux liquid and a low temperature crystal/glass. Such smectic freezing, as discussed by de Gennes for the nematic-smectic-A transition, 14 can occur via a continuous transition in three dimensions. The vortex smectic state is richer than its liquid crystal counterpart, however, for two reasons. First, the existence of a periodic embedding medium (i.e. the crystal lattice) in the former leads to commensurability effects not present in the liquid crystal. 15 In addition, the connectedness of flux lines leads to constraints with no analog for pointlike molecules. As we show below, the onset of smectic order should be accompanied by a steep drop in the resistivity and a rapid increase in the tilt modulus for fields which attempt to tip vortices out of the CuO₂ planes (see Fig. 10). The smectic phase may also be distinguished experimentally using neutron scattering, which measures the Fourier transform of the magnetic field two-point correlation function (see Fig. 1). We assume a magnetic field along the y-axis and CuO₂ layers perpendicular to ẑ.
The vortex liquid structure function shows the usual diffuse liquid rings, as well as delta-function Bragg peaks at q_z = 2πn/s (for integral n), representing the "imposed" vortex density oscillations from the CuO₂ layers. On passing to the smectic state, additional peaks develop at wavevectors q_z = 2πn/a, interlacing between those already present in the liquid. The new peaks represent the broken symmetry associated with preferential occupation of a periodic subset of the layers occupied by the vortices in the liquid. At lower temperatures in the vortex solid, further peaks form for q_x ≠ 0, producing the full reciprocal lattice of a two-dimensional crystal.
Our analysis leads to the phase diagrams shown in Fig. 2. Upon lowering the temperature for H_c = 0 and a commensurate value of H_b, the vortex liquid (L) freezes first at T_s into the pinned smectic (S) state, followed by a second freezing transition at lower temperatures into the true vortex crystal (X). When H_c ≠ 0, tilted smectic (TS) and crystal (TX) phases appear. The TS-L and TX-TL transitions are XY-like, while the TS-S and TX-X phase boundaries are commensurate-incommensurate transitions (CITs). 15 At larger tilts, the TX-TS and TS-L phase boundaries merge into a single first order melting line. As H_b is changed, incommensurate smectic (IS) and crystal (IX) phases appear, again separated by CITs from the pinned phases, with an XY transition between the IS and L states. If the low-H_b commensurate smectic (S) phase corresponds to, say, 5 CuO₂ plane periodicities per vortex layer, the high-H_b S phase will represent a state with 4 CuO₂ plane periodicities per sheet. We show that the commensurate smectic order along the c axis is stable to weak point disorder, in striking contrast to the triangular flux lattice which appears for fields aligned with the c axis. 16 This stability should increase the range of smectic behavior relative to the (unstable) crystalline phases when strong point disorder is present. The high temperature flux liquid (for fields in the ab plane) is of some interest in its own right. Consider first one vortex line wandering along the y-axis, as shown schematically in Fig. 3. This line is subject only to thermal fluctuations and a periodic pinning potential along the z-axis, provided by the CuO₂ planes. 17 If thermal fluctuations are ignored, the vortex acts like a rigid rod and will be localized in one of the potential minima. 18 This localization assumption, however, is always incorrect in the presence of thermal fluctuations, provided the sample is sufficiently large in the y-direction. As L_y → ∞, the statistical mechanics of this single line, random-walking in the directions perpendicular to y, leads inevitably to equal probabilities that the vortex is in any of the many possible minima along ẑ.
On a more formal level, this probability distribution P(z) is given by the square of the ground state wave function of the Schrödinger equation in a periodic potential (see section III below). The jumps shown in Fig. 3 across CuO₂ planes are represented by quantum mechanical tunneling in imaginary time. According to Bloch's theorem, this tunneling leads to P(z) = |ψ_{k=0}(z)|², where the Bloch states in general have the form ψ_k(z) = exp(ikz)u(z), with u(z) a function with the periodicity of the pinning potential. The resulting probability distribution is shown schematically on the right side of Fig. 3.
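This mapping can be made explicit (a sketch in our normalization, with k_BT playing the role of ħ, the tilt stiffness ǫ_⊥ the role of the particle mass, and y the role of imaginary time):

-k_B T\,\partial_y \psi \;=\; \left[-\frac{(k_B T)^2}{2\epsilon_\perp}\,\partial_z^2 + V_P(z)\right]\psi,
\qquad
P(z) \;\propto\; |\psi_{k=0}(z)|^2,

where ψ_{k=0} is the lowest-band Bloch eigenstate of the periodic pinning potential.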
FIG. 3. Wandering of a single flux line (solid curve), leading to an extended probability distribution P(z) given by a k = 0 Bloch wavefunction. Other vortex trajectories (represented by the dashed curve) will generate similar probability distributions, unless interactions lead to crystalline or smectic order.

Now suppose an additional line is added to the system. As suggested by the trajectory of the dashed curve in Fig. 3, it too will wander from plane to plane. Although the two flux lines interact repulsively, they can wander and still avoid each other by using the x-coordinate or by never occupying the same minimum at the same value of the "imaginary time" coordinate y. Thus both flux lines generate a delocalized probability distribution and occupy the same k = 0 Bloch state. At high temperatures or when the lines are dilute, we expect for similar reasons macroscopic occupation of the k = 0 Bloch state in the equivalent boson many-body quantum mechanics problem, similar to Bose-Einstein condensation. In this sense, the flux liquid is indeed a "superfluid." The presence of numerous "kinks" in the vortex trajectories ensures a large tilt response for fields along ẑ and a large resistivity for currents along x̂. The various symmetry-breaking crystalline or smectic states which appear at low temperatures or higher densities arise because of the localizing tendency of the interactions. The density of kinks is greatly reduced in these phases.
The remainder of the paper is organized as follows. In section II, several models are introduced which will be used to analyze the layered superconductor. Sections III and IV discuss the effect of intrinsic pinning on the liquid and crystal phases, respectively, and show how smectic ordering is encouraged on approaching the intermediate regime from these two limits. A Landau theory for the liquid-smectic transition is introduced in section V, and the critical behavior is determined within this model. The nature of the commensurate smectic phase itself is explored in section VI, through a computation of the response functions. In section VII, it is shown to have "supersolid" order, similar to the supersolid crystal phase recently proposed at high magnetic fields along the ĉ axis. 19 The additional phases which arise for large incommensurate fields are described in section VIII. Section IX details the modifications of the phase diagram when weak point disorder is present and, in particular, demonstrates the stability of the smectic state. Concluding remarks and the implications of these results for helium films on periodically ruled substrates are presented in section X.
II. MODELS
At a fundamental level (within condensed matter physics), layered superconductors may be modeled as a positively charged ionic background and a collection of conduction electrons, which can pair via the exchange of phonons, excitons, magnons, etc. Since such a microscopic theory of high temperature superconductors is lacking, we must resort to more phenomenological methods. There are, nevertheless, a variety of differing levels of description available, several of which will be used in the remainder of this paper.
A. Static Models
The most basic of our models is the familiar Ginzburg-Landau theory, which is an expansion of the free energy of the superconductor in powers of the order parameter field Ψ_GL (Eq. 1). Here H is the applied external magnetic field. To establish notation, we choose x, y, and z respectively along the a, b, and c axes of the underlying cuprate crystal. The diagonal components of the effective mass tensor are then m_x = m_y ≡ m and m_z = M. For convenience in what follows, we also define the anisotropy ratio γ ≡ √(m/M) = λ_ab/λ_c = ξ_c/ξ_ab ≪ 1.
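For orientation, a standard anisotropic GL functional consistent with these definitions reads (our normalization; not necessarily identical to the paper's Eq. 1):

F_{GL}[\Psi_{GL}] = \int d^3r\,\Bigg[\alpha\,|\Psi_{GL}|^2 + \frac{\beta}{2}\,|\Psi_{GL}|^4
+ \sum_{\mu=x,y,z}\frac{\hbar^2}{2m_\mu}\left|\left(\partial_\mu - \frac{2\pi i}{\phi_0}A_\mu\right)\Psi_{GL}\right|^2
+ \frac{(\nabla\times\mathbf{A}-\mathbf{H})^2}{8\pi}\Bigg].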
Eq.1 provides a powerful means of understanding superconducting behavior, including the effects of anisotropy. In layered materials, however, the theory must be modified to allow for coupling of the superconducting order to the crystalline lattice. One such description, in which the superconductor is regarded as a stack of Josephson-coupled layers, is the Lawrence-Doniach model. For our purposes, however, it is sufficient to consider a "soft" model for the lattice effects, in which the coupling α is allowed to be a periodic function of z with period s equal to the periodicity of the copper-oxide planes.
Because the high temperature superconductors are strongly type II, it is appropriate to use London theory over a large range of the phase diagram. In this limit, variations of the magnitude of the order parameter are confined to a narrow region within the core of each vortex. Because the resulting London equations are linear, a complete solution can be obtained for the free energy of an arbitrary vortex configuration. 20 For our purposes, it is sufficient to consider an approximate form (Eq. 2) in which the tilt moduli are local and the interactions between vortices occur at equal y. 21 The stiffness constants obtained from anisotropic Ginzburg-Landau (GL) theory are ǫ_∥ = ǫ₀γ and ǫ_⊥ = ǫ₀/γ, with a periodic pinning potential V_P(z)
taking into account the effects of the layering. Either Eq. 1 or Eq. 2 may be used at finite temperature by calculating the partition function Z = Tr exp(−F/k_B T) (Eq. 3), where the trace is a functional integral over Ψ_GL or over the set of vortex trajectories {r_i(z)}, for the Ginzburg-Landau and London limits, respectively. In the London case, this trace may be formally performed by recognizing Eq. 3 as mathematically identical to the Feynman path integral for the first-quantized imaginary-time Green's function of interacting bosons. Within this boson analogy, 4 the Green's function may also be calculated using a coherent-state path-integral representation. The "action" for these bosons (Eq. 4) is a functional of the complex coherent-state boson field ψ, with n(r) = ψ†(r)ψ(r). It is convenient to rescale x → (ǫ_⊥/ǫ_∥)^{1/2} x and z → (ǫ_∥/ǫ_⊥)^{1/2} z to obtain the isotropic Laplacian ∇_⊥² ≡ ∂_x² + ∂_z². Eq. 4 then becomes Eq. 5, with ǭ ≡ (ǫ_∥ǫ_⊥)^{1/2} and the rescaled potentials Ṽ_P(z) = V_P(zγ²) and Ṽ(r_⊥) = V(x/γ², zγ²). The action is used via Eq. 6 to calculate the grand canonical partition function Z at chemical potential µ per unit length of vortex line. Eqs. 6 and 5 may also be obtained directly from a limiting case of Eq. 1 via a duality mapping. 23
B. Dynamical Models
Calculation of dynamical response functions such as the resistivity requires a model for the time dependence of the superconductor. We will do this within the London framework, treating the vortex lines as the dynamical degrees of freedom. In the overdamped limit, the appropriate equation of motion is Γ ∂_t r_i = f_i (Eq. 7), where Γ is a damping constant and f_i is the force on the i-th flux line, including both external forces and random thermal noise. Eq. 7 is useful in describing the properties of small numbers of vortices. To understand the bulk behavior of dense phases, however, we need an extensive theory. Such a model for the liquid phase was constructed on physical and symmetry grounds in Ref. 24 in the hydrodynamic limit. Because we intend to go beyond simple linearized hydrodynamics, we require some knowledge of the non-linear form of the bulk equations of motion.
We proceed by first defining the hydrodynamic fields n and τ (Eqs. 8 and 9). Conservation of magnetic flux is embodied in a constraint relating them. Eq. 2 may be rewritten as a function of n and τ, formally as F[n, τ]; specific forms for F[n, τ] will be used as needed. Eq. 7 completely specifies the dynamics of n and τ. Differentiating Eqs. 8 and 9 then leads immediately to hydrodynamic equations of motion for the density and tangent currents (details are given in Appendix A). Because the constitutive relation for the tangent current includes τ_βγ, Eqs. 13-17 do not form a closed set. Vortex hydrodynamics, however, leads us to expect that n and τ provide a complete long-wavelength description of the system. We therefore adopt the truncation scheme τ_βγ → ⟨τ_βγ⟩ (i.e., τ_βγ is preaveraged at equilibrium). Averaging via Eq. 2 gives the required equilibrium value.
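For reference, the hydrodynamic fields defined above and the flux-conservation constraint take the standard form (a sketch; normalizations may differ from the paper's Eqs. 8-10):

n(\mathbf{r}) = \sum_i \delta^{(2)}\!\big(\mathbf{r}_\perp - \mathbf{r}_i(y)\big),\qquad
\boldsymbol{\tau}(\mathbf{r}) = \sum_i \frac{d\mathbf{r}_i}{dy}\,\delta^{(2)}\!\big(\mathbf{r}_\perp - \mathbf{r}_i(y)\big),\qquad
\partial_y n + \nabla_\perp\!\cdot\boldsymbol{\tau} = 0,

which expresses the fact that flux lines cannot start or stop inside the sample.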
To complete the dynamical description, we must specify Γ and f. Matching to the liquid hydrodynamics of Ref. 24 relates Γ to the Bardeen-Stephen friction coefficient via γ_BS = n₀Γ, and gives the driving force in terms of the applied transport current density J and a random thermal noise η(r).
III. INTRINSIC PINNING IN THE VORTEX LIQUID
To better understand the interplay of thermal fluctuations and layering of the superconductor, it is useful to first consider the behavior at high temperatures in the liquid state. As discussed by Marchetti, 25 the physics of a single vortex line in the liquid is well described by a hydrodynamic coupling to the motion of other vortices. In the dense limit, this medium is approximately uniform and does not significantly affect the wandering of an isolated vortex. To estimate the effects of intrinsic pinning, it is thus appropriate to consider a single vortex line oriented along the a-b plane. For ξ_c ≳ s, the copper-oxide planes act as a smooth periodic potential on the vortex, whose magnitude U_p per unit length is given by Eq. 20. 26 For a single vortex, Eq. 2 reduces to Eq. 21, in which the periodic potential is a smooth periodic function of magnitude of order unity. The x-displacement decouples in Eq. 21 and may be integrated out to yield free transverse diffusion with D_x = k_B T/ǫ_∥. The z-dependent part of Eq. 21 is identical to the Euclidean action of a quantum particle of mass ǫ_⊥ in a one-dimensional periodic potential V_P(z), with y playing the role of imaginary time. The single flux line partition function, with fixed endpoints, maps to the Euclidean Green's function for the particle, with k_B T replacing ħ.
In the quantum-mechanical analogy, the particle tunnels between adjacent minima of the pinning potential, leading, as discussed in the Introduction, to completely delocalized Bloch wavefunctions even for extremely strong pinning. The "time" required for this tunneling maps to the distance, L_kink, in the y-direction between kinks in which the vortex jumps across one CuO₂ layer; the WKB approximation gives Eq. 24. Eq. 24 may also be obtained from simple scaling considerations. The energy of an optimal kink is found by minimizing over its width (in the y direction) w, giving w* ∼ s(ǫ_⊥/U_p)^{1/2} and f₁* ∼ s(ǫ_⊥U_p)^{1/2}. Such a kink occurs with a probability proportional to exp(−f₁*/k_B T). When the sample is larger than L_kink along the field axis, the flux line will wander as a function of y, with a "diffusion constant" D_z ≈ s²/L_kink. For s(ǫ_⊥U_p)^{1/2} ≲ k_B T, the pinning is extremely weak, and the WKB approximation is no longer valid. Instead, the diffusion constant is D_z ≈ k_B T/ǫ_⊥, as obtained from Eq. 21 with U_p = 0. At much lower temperatures, when ξ_c ≪ s, the energy in Eq. 20 must be replaced by the cost of creating a "pancake" vortex 27 between the CuO₂ planes. In this regime, L_kink ∼ ξ_ab(s/ξ_c)^{ǫ₀s/k_B T}.
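A minimal sketch of the scaling argument, balancing the elastic cost of a jump of height s over a width w against the pinning energy of the displaced segment (prefactors are ours):

f_1(w) \;\sim\; \epsilon_\perp\,\frac{s^2}{w} \;+\; U_p\, w
\quad\Longrightarrow\quad
w^* \sim s\sqrt{\epsilon_\perp/U_p},\qquad
f_1^* \sim s\sqrt{\epsilon_\perp U_p},\qquad
L_{\rm kink} \sim w^*\, e^{f_1^*/k_B T}.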
For T ≈ 90 K, as in the experiments of Kwok et al., 12 ξ_c/s ≈ 2.3, and Eq. 20 gives s(ǫ_⊥U_p)^{1/2}/k_B T ≪ 1, indicative of weakly pinned vortices in the liquid state. The transverse wandering in this anisotropic liquid is described by a boson "wavefunction" with support over an elliptical region of area k_B T L_y/(ǫ_∥ǫ_⊥)^{1/2}, with aspect ratio ∆x/∆z = γ⁻¹ ≈ 5 for YBa₂Cu₃O₇. For L_y ≈ 1 mm, a typical sample dimension along ŷ, the dimensions of this ellipse are of order microns. Since typical vortex spacings at the fields used in Ref. 12 are of order 400 Å, these flux lines are highly entangled.
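A rough numerical check of these orders of magnitude is easily made (a sketch; the material parameters below are typical literature values for YBa₂Cu₃O₇, not taken from Ref. 12):

import math

# Back-of-envelope check of the wandering-ellipse estimate (CGS units).
phi0   = 2.07e-7        # flux quantum [G cm^2]
lam_ab = 1.4e-5         # in-plane penetration depth, ~1400 A [cm] (assumed)
gamma  = 1.0 / 5.0      # anisotropy ratio, so the aspect ratio is 1/gamma = 5
T      = 90.0           # temperature [K]
kB     = 1.38e-16       # Boltzmann constant [erg/K]
L_y    = 0.1            # sample size along the field, ~1 mm [cm]

eps0 = (phi0 / (4 * math.pi * lam_ab)) ** 2     # line energy scale [erg/cm]
# With eps_par = eps0*gamma and eps_perp = eps0/gamma, the geometric mean
# sqrt(eps_par * eps_perp) is simply eps0.
area = kB * T * L_y / eps0                      # ellipse area [cm^2]
dz = math.sqrt(area * gamma)                    # semi-axes with dx/dz = 1/gamma
dx = dz / gamma
print(f"dx ~ {dx*1e4:.2f} um, dz ~ {dz*1e4:.2f} um")
# Output: dx ~ 0.67 um, dz ~ 0.13 um -- tenths of a micron to a micron,
# consistent with the 'order microns' estimate in the text.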
To understand the bulk properties of the vortex liquid, it is useful to employ the hydrodynamic description of section II B. In the liquid, the appropriate form of the free energy 24,28 is Eq. 27, where δn = n − n₀, with n₀ = B_y/φ₀ the mean density.
Here the compression modulus c₁₁ and the tilt moduli c₄₄,⊥ and c₄₄,∥ are regular functions of q with finite values at q = 0.
On physical grounds, we expect intrinsic pinning to enter Eq. 27 both through an increase in c₄₄,⊥, which decreases fluctuations perpendicular to the layers, and through the Ṽ_P term, which tends to localize the vortices near the minima of the periodic potential.
The former effect only acts to increase the anisotropy of the liquid. The latter term, however, explicitly breaks translational symmetry along the z axis, inducing a modulation of the vortex density (Eq. 28). This modulation corrects the static structure function (Eq. 29), where S₀(q) is the static structure function for V_P = 0. Since V_P[z] is a periodic function, the correction term shows peaks at the discrete reciprocal lattice vectors q_z ẑ for which q_z is an integral multiple of 2π/s. The situation is somewhat analogous to applying a weak uniform field to a paramagnet, inducing a proportionate magnetization. Unlike the magnetic case, however, the layering perturbation leaves a residual translational symmetry under shifts z → z + s. It is the breaking of this discrete group which we will identify with the freezing of the vortex liquid.
IV. THE CRYSTAL PHASE
Considerable work already exists on intrinsic pinning in vortex crystals. 11 We review the essential ideas here and discuss their implications for thermal fluctuations at low temperatures.
A. Zero temperature properties
To study the effect of layering upon the vortex state, we first consider the limit of a weak periodic modulation of the order parameter along the z-axis. In this case, the resulting (zero temperature) configuration is only slightly perturbed from the ideal lattice predicted by Ginzburg-Landau (or London) theory. The free energy in this case may be written in terms of the phonon coordinates u(r) (Eq. 30), where i and j = x, y while α and β = x, y, z, and the layering potential contributes Eq. 31 to the free energy. In general the elasticity theory is quite complex due to the anisotropy and the wavevector dependence on the scale of λ. Rather than work with a specific form of the elastic moduli, we will obtain general expressions in terms of an unspecified set of K_ij(q) ≡ K_{αβij} q_α q_β. To obtain the correct continuum limit of Eq. 31, we consider the possible commensurate states of the layers and the vortex array. The triangular equilibrium lattice in this orientation is described by the two lattice vectors a₁ = Cγ⁻¹ ẑ and a₂ = (Cγ⁻¹/2)(ẑ + √3 γ² x̂), with C² = 2φ₀/(√3 B_y). Commensurability effects occur when the minimum z-displacement between vortices, C/(2γ), equals (n/m)s, where n and m are integers, chosen relatively prime for definiteness. This gives the commensurate fields (Eq. 32). For simplicity we consider here only the integral states with m = 1. In this case we can straightforwardly take the continuum limit (Eq. 33). Such an expression may be derived explicitly from the Abrikosov solution for fields near H_c2, 11 in which case the pinning potential is given by Eqs. 34 and 35. Here κ = λ_ab/ξ_ab is the usual Ginzburg-Landau parameter, and β_A ≈ 1.16. From Eqs. 30 and 33, it is clear that for these commensurate fields, the ground state is unchanged, i.e. u = 0. Away from these fields, however, the fate of the lattice is less obvious. Ivlev et al. 11 have shown that, for a small deviation from a commensurate field, it is energetically favorable for the vortex lattice to shear in order to remain commensurate with the copper-oxide plane spacing. Because such a distortion requires some additional free energy, it will generally be favorable, in addition, for the internal magnetic induction B to deviate from the applied field H to allow a better fit to the crystal. This Meissner-like effect will be discussed in more detail in section VIII.
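Combining the two relations just quoted gives a compact reconstruction of the commensurate fields (our algebra, which should reproduce Eq. 32 up to conventions):

\frac{C}{2\gamma} = \frac{n}{m}\,s,\qquad C^2 = \frac{2\phi_0}{\sqrt{3}\,B_y}
\quad\Longrightarrow\quad
B_{n,m} = \frac{\phi_0\, m^2}{2\sqrt{3}\,\gamma^2 n^2 s^2}.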
For strong layering, such as that described by the Lawrence-Doniach model, the pinning effects are much more pronounced. When ξ c ≪ s, the magnetic field remains essentially confined between the CuO 2 layers, and the vortex array is thus automatically commensurate at all applied fields. Although such strong confinement of vortices can lead to interesting non-equilibrium states, 29 we will confine our discussion to equilibrium.
B. Thermal Fluctuations about the commensurate state
Thermal fluctuations of the vortex lattice are described by the partition function (Eq. 36). In three dimensions, phonon fluctuations are small, and expanding the pinning potential around its minimum for small u gives the quadratic free energy (Eq. 37), where ∆ ≡ −V″_P[z = 0]. The displacement field fluctuations can be calculated from Eq. 37 by equipartition, yielding the general result (Eq. 38), where BZ indicates an integral over the Brillouin zone.
The effect of the periodic potential is thus to uniformly decrease the fluctuations of u_z at all wavevectors. For ∆ ≳ B²/λ², this decrease is substantial over the entire Brillouin zone, and the fluctuations take the form of Eq. 39. In stronger fields, for ∆ ≲ B²/λ², only the contributions from q ≲ √∆/B are strongly suppressed. For 1 − B/H_c2 ≪ 1, Eqs. 34 and 35 can be combined to give the ratio ∆λ²/B². At lower fields and temperatures, one expects the mean-field estimate above to break down and ∆λ²/B² to increase, possibly settling down to a constant value at low temperatures.
For magnetic fields oriented along the c axis, the Lindemann criterion has been used to estimate the melting point of the vortex lattice 4 by requiring that the rms displacement reach a fixed fraction of the lattice spacing. As is clear from Eqs. 38-39, once layering is included, the increased stiffness for u_z makes the two ratios ⟨u_x²⟩/a_x² and ⟨u_z²⟩/a_z² unequal. Indeed, the second ratio is strongly suppressed relative to the first. Extending the Lindemann criterion to this situation suggests that the strains in u_x might be alleviated by a partial melting of the lattice without affecting the broken symmetry leading to the u_z displacements. Such a scenario corresponds to the unbinding of dislocations with Burgers vectors along the x axis. The phase in which these dislocations are unbound is the smectic.
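For reference, the criterion takes the standard form (the Lindemann number c_L ≈ 0.1-0.2 is a typical literature value, not specific to this paper):

\langle |\mathbf{u}|^2 \rangle_{T=T_m} \;\simeq\; c_L^2\, a^2,\qquad c_L \approx 0.1\text{--}0.2.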
C. Strongly Layered Limit
To further elucidate the nature of the smectic phase, it is helpful to discuss the limit of very strong layering. In this case, the vortex lines are almost completely confined within the spaces between neighboring CuO2 layers. For moderate fields, occupied layers will be separated by several unoccupied ones, and the interactions between vortices in different layers may be considered weak. Because of the strong layering, the out-of-plane component of the displacement field, u_z, is suppressed, so that the free energy of the system may be written, to a first approximation, as F₀ ≈ (1/2) Σ_n ∫ d²q_⊥/(2π)² (K_x q_x² + K_y q_y²) |u_n(q_⊥)|² (Eq.43), where q_⊥ = (q_x, q_y) and u_n(q_⊥) = u_x(q_⊥, ns). For a qualitative discussion of smectic ordering, it is sufficient to take K_x and K_y independent of q. Eq.43 neglects both inter-layer interactions and hopping. The former are included perturbatively via the free energy F_IL = −v_IL Σ_n ∫ d²r cos[2π(u_{n+1} − u_n)/a] (Eq.44), where v_IL is an inter-layer interaction energy and a is the lattice spacing in the x direction. The periodic form of the interaction is required by the symmetry under lattice translations u → u + a within each layer.

Once hopping of flux lines between neighboring occupied layers is included, u_n is no longer single-valued within a given layer. In fact, a configuration in which a single line hops from layer n to layer n + 1 corresponds to a dislocation in layer n paired with an anti-dislocation in layer n + 1, since ∮ du = ±a for a contour surrounding the hopping point (see Fig.4). Such dislocation-antidislocation pairs, which we will refer to as large kinks, can be created in neighboring layers with a dislocation fugacity y_d = exp(−E_lk/k_B T), where the core energy E_lk is estimated from Eq.25 with s → ms. Note that the dislocation and anti-dislocation must have the same x and y coordinates, since misalignment is accompanied by an energy cost proportional to the extra length of vortex between the occupied layers.

The full theory described by Eqs.43 and 45, plus dislocations, can be studied using a perturbative renormalization group (RG) expansion in v_IL and y_d, using techniques developed for the XY model in a symmetry-breaking field. 30 For y_d = v_IL = 0, the Gaussian free energy of Eq.43 describes a fixed line of independently fluctuating vortex layers, parameterized by the dimensionless ratio √(K_x K_y) a²/k_B T. To characterize the order along this fixed line, we define a translational order parameter (characterizing correlations along the x axis) within the n-th layer by summing over vortex lines, essentially the Fourier component of the in-plane density at wavevector 2π/a, where x_k^(n)(y) is the coordinate of the k-th vortex line in layer n at a length y along the field direction. The correlation function C_T(x, y, n) ≡ ⟨ρ(x, y, n) ρ*(0, 0, 0)⟩ is then evaluated by inserting x_k^(n)(y) = ka + u_n(ka, y) and converting the sum to an integral via Σ_k → ∫ dx/a. One finds algebraic decay of C_T (an estimate of the exponent is sketched below), i.e. quasi-long-range order within the planes.
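For a single decoupled layer, the Gaussian theory gives logarithmic displacement fluctuations and hence power-law decay of C_T. A standard sketch (our own reconstruction, with K ≡ √(K_x K_y) after rescaling to isotropic in-plane coordinates):

\[
\langle [u_n(\mathbf{r}) - u_n(\mathbf{0})]^2 \rangle \simeq \frac{k_B T}{\pi K}\,\ln\frac{r}{a}
\;\;\Longrightarrow\;\;
C_T \sim r^{-\eta(T)},
\qquad
\eta(T) = \frac{2\pi k_B T}{K a^2}.
\]

This exponent is consistent with the stability boundaries quoted below: the inter-layer cosine, whose correlator involves two independent layers and hence carries dimension η, is irrelevant when η > 2, i.e. for k_B T > Ka²/π.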
This fixed line is always unstable, either to inter-layer couplings, to dislocations, or to both perturbations. The stability is determined by RG flows linear in y_d and v_IL (a reconstruction is sketched below), where K ≡ √(K_x K_y), and l = ln(b/a) is the logarithm of the coarse-graining length scale b.
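The displayed flow equations did not survive extraction. A reconstruction whose coefficients are fixed by the stability boundaries quoted in the next two paragraphs (and by the pairing of dislocations across neighboring layers, which doubles the usual Kosterlitz-Thouless core stiffness) reads:

\[
\frac{dy_d}{dl} = \left(2 - \frac{K a^2}{2\pi k_B T}\right) y_d,
\qquad
\frac{dv_{IL}}{dl} = \left(2 - \frac{2\pi k_B T}{K a^2}\right) v_{IL}.
\]

The first eigenvalue changes sign at k_B T = Ka²/4π and the second at k_B T = Ka²/π, reproducing the three regimes discussed below.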
When k_B T < Ka²/4π, dislocations are irrelevant at the fixed line, so that y_d decreases under renormalization.
In this regime, however, v_IL increases with l, so that interactions between the layers are important for the large-distance physics. To study this regime, one may therefore expand the cosine of Eq.44 in u_{n+1} − u_n, obtaining a discrete version of the usual three-dimensional elastic theory. In this limit, C_T(x, y, n) approaches a non-zero constant for large |x|, |y|, or |n|, i.e. true long-range order. At high temperatures, when k_B T > Ka²/π, v_IL scales to zero while the fugacity y_d is relevant. Unbound dislocations on long length scales therefore invalidate the elastic theory of Eq.43. Following standard arguments, the translational correlation function in such an unbound vortex plasma becomes exponentially small, i.e. C_T decays exponentially with distance.
For temperatures in the intermediate range Ka²/4π < k_B T < Ka²/π, both y_d and v_IL are relevant operators. The eventual nature of the ordering at long distances presumably takes one of the two above forms, though the critical boundary at which the system loses long-range translational order in the x direction is not accessible by this method.
It remains to discuss translational order along the layering axis. Such order is characterized by the parameter ρ_⊥ ∝ Σ_k exp(2πi z_k^(n)/a_z) (Eq.53), where a_z is the distance between occupied vortex layers (and therefore an integral multiple of the CuO2 plane spacing, i.e. a_z = ms). Because we have, by construction, confined the vortices to these layers, however, z_k^(n) = na_z for every k, and the exponential in Eq.53 is always unity. Both phases described above, regardless of the relevance of v_IL and y_d, therefore retain long-range order along the z axis.
Within the strongly layered model, there are still excitations which can destroy this transverse ordering. These are configurations in which a flux line hops out of an occupied layer into one of the (m − 1) unoccupied intermediate planes between it and the next occupied layer. Such an excursion costs an energy proportional to the length of the vortex segment in the unoccupied layer, so that only short intermediate segments occur at low temperatures. These out-of-plane hops reduce the amplitude ρ ⊥ , but do not drive it to zero. At very high temperatures, entropy may counterbalance this energy and drive the free energy cost for such intervening vortices negative. Once this occurs, translational order will be lost along the z axis as well, and the system will be a true liquid. Nevertheless, at intermediate temperatures above the unbinding transition for dislocations in the u field but below the temperature at which infinite vortices enter the intermediate copper oxide planes, we expect the system to sustain "one-dimensional" long-range order along the z axis, i.e. a smectic state.
V. CRITICAL BEHAVIOR
Having established the possibility of a smectic phase, approached both from the crystalline and from the liquid limits, we now focus on the critical behavior near the putative liquid-smectic transition, using a Landau order-parameter theory. A closely related Landau theory, which describes a low-temperature smectic-crystal transition, is discussed in Appendix B. The natural order parameter to describe the smectic ordering is ρ_⊥, defined in Eq.53. To simplify notation, we define a new field Φ = ρ_⊥, so that, in the continuum notation (i.e. outside the strongly layered limit), the density takes the form n(r) ≈ n₀ Re[1 + Φ(r) e^{−iqz}], where n₀ is the background density and q = 2π/a_z is the wavevector of the smectic layering. The complex translational order parameter Φ(r) is assumed to vary slowly in space.

The superconductor is invariant under translations and inversions in x and y, and has a discrete translational symmetry under z → z + s, where s is the CuO2 double-layer spacing. From Eq.54, these periodic translations correspond to the phase shifts Φ → Φ e^{−iqs}. We continue to assume, as in the previous section, that a_z = ms, with m an arbitrary integer. The most general free energy consistent with these symmetries is Eq.55 (a reconstruction of its form is displayed below), where the coordinates have been rescaled to obtain an isotropic gradient term. The "vector potential" A represents changes in the applied field δH = δH_b ŷ + H_c ẑ, with A_x = 0, A_y = qH_c/H_b, and A_z = qδH_b/H_b. The form of this coupling follows from the transformation properties of Φ. 14,31

Eq.55 assumes a local form of the free energy. Additional non-local interactions arise due to interactions with long-wavelength fluctuations in the density and tangent fields. The most relevant (near the critical point) of these couplings is F_γ = γ ∫ d³r δn(r)|Φ(r)|² (Eq.56), where the correlations of δn are determined from Eqs.27 and 11. When δH = A = γ = 0, Eq.55 is the free energy of an XY model with an m-fold symmetry-breaking term. A second-order freezing transition occurs within Landau theory when v > 0 and r ∝ T − T_s changes sign from positive (in the liquid) to negative (in the smectic). The renormalization-group (RG) scaling dimension λ_m of the symmetry-breaking term is known approximately in three dimensions, λ_m ≈ 3 − 0.515m − 0.152m(m − 1). 32 For m > m_c ≈ 3.41, the field g is irrelevant (λ_m < 0), and the transition is in the XY universality class. 33 The magnetic fields used by Kwok et al. 12 correspond to m = 9-11, 34 well into this regime. The static critical behavior is characterized by the correlation-length exponent ν ≈ 0.671 ± 0.005 and algebraic decay of order-parameter correlations at T_s, with η ≈ 0.040 ± 0.003. 35

To study the effects of coupling to long-wavelength fluctuations when γ ≠ 0, we first satisfy the constraint of Eq.11 by defining an auxiliary "displacement"-like field w. After this change of variables, Eq.27 becomes a simple quadratic form in w, where we have taken the q = 0 limits of the elastic moduli to study the critical behavior, and dropped the V_IP term, which only couples to w at finite q_z. Eq.56 then becomes Eq.60, an anisotropic form of a coupling studied previously in the context of the compressible Ising model, in which w describes the phonon modes of a compressible lattice on which the spins reside. 36 As shown in appendix C, the techniques developed for that problem give the renormalization-group eigenvalue λ_γ = α/2ν for this coupling at the critical point. Since α = 2 − 3ν ≈ −0.01 is negative, the long-wavelength density fluctuations are irrelevant for the critical behavior.
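For reference, a form of Eq.55 consistent with all of the properties used in the text (an XY model with an m-fold field for δH = A = γ = 0, a gauge-like coupling to A, r ∝ T − T_s, and v > 0); the numerical conventions are our own reconstruction:

\[
F = \int d^3r\,\Big[\tfrac{K}{2}\,\big|(\nabla - i\mathbf{A})\Phi\big|^2
+ \tfrac{r}{2}\,|\Phi|^2 + \tfrac{v}{4}\,|\Phi|^4
- \tfrac{g}{2}\,\big(\Phi^m + \Phi^{*m}\big)\Big].
\]

Note that the m-fold term is invariant under Φ → Φe^{−iqs}, since e^{−imqs} = e^{−2πi} = 1. With these conventions, |Φ|² = |r|/v deep in the ordered phase, and substituting Φ = √(|r|/v) e^{2πiu/a} with a = ms reproduces the sine-Gordon coefficients κ = 4π²|r|K/(a²v) and g̃ = g(|r|/v)^{m/2} quoted in section VI A.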
VI. PROPERTIES OF THE SMECTIC PHASE

A. Static Behavior
Deep in the ordered phase (r < 0), amplitude fluctuations of Φ are frozen out. Writing Φ = √(|r|/v) e^{2πiu/a}, Eq.55 becomes, up to an additive constant, the sine-Gordon free energy F = ∫ d³r [ (κ/2)(∇u − Ã)² − g̃ cos(2πu/s) ] (Eq.61), where a = ms, κ = 4π²|r|K/(a²v), g̃ = g(|r|/v)^{m/2}, and the reduced vector potential is Ã = A/q. The displacement field u describes the deviations of the smectic layers from their uniform state. The sine-Gordon term is an effective periodic potential acting on these layers. As is well known from the study of the roughening transition, 37 such a perturbation is always relevant in three dimensions. The smectic state is thus pinned at long distances (i.e. the displacements u of each smectic layer are localized in a single minimum of the cosine).

To further characterize the smectic phase, we consider the transverse magnetic susceptibility ∂B_c/∂H_c, whose inverse defines the macroscopic tilt modulus c_44,⊥ (Eq.62). The field H_c attempts to tilt the smectic layers. However, A ∝ H_c is an irrelevant operator in the smectic phase (as can easily be seen by replacing the periodic potential by a "mass" term ∝ u²). This implies that the smectic layers do not tilt under weak applied fields, i.e. ∂⟨∂_y u⟩/∂H_c|_{H_c=0} = 0. Naively, this implies an infinite tilt modulus. A more careful treatment shows that c_44,⊥ actually remains finite in the smectic phase. To compute c_44,⊥ from first principles, we use the thermodynamic relation ∂B_c/∂H_c = −4π ∂²f/∂H_c² (Eq.63), where f is the full free energy of the system, including a smooth part f₀ not involving Φ and not included in Eq.55, i.e. f = f₀ + f_Φ.
To evaluate Eq.63, we need to consider in detail the dependence of the free energy and of the coefficients in Eq.55 on H_c. This dependence arises in two ways, because an applied H_c can be decomposed into a rotation and a scaling of the full field H. If the system were fully rotationally invariant, the rotational part would enter F purely through the "gauge-invariant" coupling to A of Eq.55. However, anisotropy breaks this invariance, leaving instead only an inversion symmetry under z → −z. The inversion symmetry allows for a quadratic dependence of r and of f₀ on H_c. 38 The scaling part also contributes a quadratic dependence, which may be combined with the previous effect. Taking both into account, and matching to the tilt modulus of the liquid phase (i.e. with Φ = 0), leads to Eq.65 for the transverse response, where c_44,⊥0 is the tilt modulus obtained from anisotropic GL theory (without accounting for the discreteness of the layers) and r″ ≡ ∂²r/∂H_c²|_{H_c=0}.

Eq.65 has a simple physical interpretation. The first term is the contribution to B_c from tilting of the layers (described by a phase shift of Φ). This term is zero for small fields H_c, due to the cosine pinning potential. Even when the layers retain a fixed orientation perpendicular to the c axis, however, the transverse field can penetrate via the second term. Such motion arises microscopically from a non-zero equilibrium concentration of vortices with large kinks extending between neighboring smectic layers, as suggested in section IV C. Eq.65 predicts a non-divergent singularity c_44,⊥(T) − c_44,⊥(T_s) ~ |T − T_s|^{1−α} at the critical point, where α is the specific-heat exponent.
At low temperatures in the smectic phase, we can estimate the tilt modulus in terms of the properties of kinks. In zero field, the concentrations of large kinks carrying magnetic field in the +ẑ and −ẑ directions are equal, leading to zero net field along the c direction. For H_c ≠ 0, the energy of a kink depends upon its orientation, due to the −B_c H_c/4π term in the GL free energy, yielding kink energies E_± = E_lk ∓ w_lk H_c for the two orientations. The difference in the concentrations of up and down kinks then takes the activated form n_+ − n_− ∝ exp(−E_lk/k_B T) sinh(w_lk H_c/k_B T), where w_lk ~ (ε_⊥/U_p) ms, estimated from Eq.25 with s → ms. Since B_c = (n_+ − n_−) m s φ₀, Eq.62 yields an exponentially large, but finite, tilt modulus.
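Assembling these pieces (a sketch with our own prefactor and normalization conventions, linearizing the sinh for small H_c):

\[
B_c \approx \frac{2 m s \phi_0\, n_{lk}\, w_{lk}}{k_B T}\, H_c,
\qquad
c_{44,\perp} \sim \frac{B^2}{4\pi}\left(\frac{\partial B_c}{\partial H_c}\right)^{-1}
\propto \frac{k_B T}{n_{lk}} \propto e^{+E_{lk}/k_B T},
\]

where n_lk ∝ exp(−E_lk/k_B T) is the equilibrium large-kink concentration. The activation energy in the exponent is what makes the modulus exponentially large, but not infinite.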
B. Dynamical Behavior
Very similar phenomena occur in the dynamics of the smectic phase. To study them, we need the equation of motion for Φ. On the basis of symmetry, and the lack of obvious conservation laws, a natural conjecture is that of overdamped "model A" 39 dynamics. Indeed, a careful treatment using the general formalism of section II B gives Eq.69 (see appendix D), where μ = qφ₀n₀/c and η̃(k) = i n₀ q η_z(qẑ + k). Eq.69 is remarkably similar to the model E dynamics 39 for the complex "superfluid" order parameter Φ, where now J_x plays the role of the "electric field" in the Josephson coupling. 40 The actual electric field is E_x = j_{v,z} φ₀/c, leading via Eq.15 to Eq.70 (see appendix D), where ρ_xx,n is the normal-state resistivity in the x direction, whose appearance in the last term follows from the relation (n₀φ₀/c)²/γ_BS ≈ (B/H_c2) ρ_xx,n.
FIG. 6. Sliding of a kink (thick curved line) along the field direction, viewed along the x axis for the case m = 2. Dashed lines indicate the copper-oxide layers. As the kink moves along the y axis, net vorticity is transported in the z direction. Such motion produces a finite resistivity in the smectic phase.
Eq.70 is interpreted in close analogy with Eq.65. In the absence of pinning due to the periodic potential in Eq.61, an applied force induces a uniform translation of the layers, and thus a net transport of vortices. In the ordered phase, where Φ = √(|r|/v) e^{2πiu/a}, the first term in Eq.70 becomes proportional to the velocity ∂_t u. The second term contributes even when the layers are constant. It results microscopically from the motion of equilibrium vortex kinks, which can slide unimpeded along the y axis and thereby transport vorticity along the z axis (see Fig.6). Such flow at "constant structure" is analogous to the permeation mode in smectic liquid crystals. 14 The presence of this defective motion implies a small but non-zero resistivity at the L-S transition. Near T_s, Eq.70 predicts a singular decrease of the form ρ_xx(T) − ρ_xx(T_s) ~ |T_s − T|^{1−α}, similar to the behavior of the tilt modulus. At lower temperatures (but still within the S phase), transport occurs via two channels. The permeation mode gives an exponentially small linear resistivity, ρ_xx ∝ exp(−E_lk/k_B T).

Non-linear transport occurs in parallel to the above linear processes, via thermally activated liberation of vortex droplets, inside which u (or u_z in the crystal phase) is shifted by s (see Fig.7). Such a droplet costs a surface energy, due to the creation of a domain wall between smectic regions shifted by s. The domain-wall surface tension σ₀ is estimated from σ₀ ~ κs²/w + g̃w (Eq.71), where w is the width of the domain wall. The first term represents the elastic cost of the shift in u, while the second is the pinning energy. Minimizing Eq.71 gives w ~ s√(κ/g̃) and σ₀ ~ s√(κg̃). This surface energy must be balanced against the Lorentz force in the interior, so that the energy of a droplet of linear size L is E(L) ~ σ₀L² − (JBs/c)L³ (Eq.72). Eq.72 gives a critical droplet size L_c ~ √(κg̃) c/(JB) and an energy barrier E_B ~ (c/JB)²(κg̃)^{3/2} s. Thermal activation therefore gives an electric field E ∝ exp[−(J_c/J)²], where J_c ~ (c/B)(κg̃)^{3/4}(s/k_B T)^{1/2}. Similar non-linear IV relations have been obtained previously for vortex/Bose glasses, 6,7 but our result is more closely related to surface mobility below the roughening transition on crystal surfaces. 37 Unlike these proposed glass phases, the smectic should always exhibit a nonzero linear resistivity as J → 0.

FIG. 7. A two-dimensional cut through a droplet configuration of the smectic layers, drawn for the case m = 2. The full three-dimensional droplet is roughly spherical. Inside the droplet (outlined in gray), the layers are shifted by one CuO2 double-layer spacing, u → u + s.
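The intermediate step between Eq.72 and the quoted barrier runs as follows (our sketch, dropping numerical factors):

\[
\frac{dE}{dL}\Big|_{L_c} = 0 \;\Longrightarrow\;
L_c \sim \frac{\sigma_0 c}{J B s} \sim \frac{\sqrt{\kappa \tilde{g}}\, c}{J B},
\qquad
E_B \sim \sigma_0 L_c^2 \sim \left(\frac{c}{J B}\right)^{2} (\kappa \tilde{g})^{3/2}\, s,
\]

using σ₀ ~ s√(κg̃). Setting E_B/k_B T ≡ (J_c/J)² then reproduces J_c ~ (c/B)(κg̃)^{3/4}(s/k_B T)^{1/2}.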
VII. SUPERSOLID ORDER AND THE SMECTIC TO CRYSTAL TRANSITION
A. Supersolid Nature of the Smectic Phase

In sections VI A and VI B, we have seen that the response functions in the smectic phase retain many of the features of the vortex liquid. Both the tilt modulus and the conductivity remain finite, despite the pinning of the smectic density wave by the CuO2 layers. As discussed earlier, both phenomena are explained by the existence of an equilibrium concentration of large vortex kinks extending between successive occupied vortex layers. These kinks facilitate both transverse magnetic penetration and dissipation for currents along the x axis.
This behavior is strikingly similar to the picture of "supersolid" vortex arrays recently proposed in Ref. 19, for fields parallel to the c-axis. In the supersolid, a finite concentration of interstitials or vacancies is present in the vortex lattice, and both the tilt modulus and the conductivity in the presence of weak pinning are finite. Such a supersolid phase is distinct from the Abrikosov solid in that long-range crystalline order coexists with a finite expectation value of the boson order parameter ψ, i.e. ⟨ψ(r)ψ*(0)⟩ → const. (74) as r → ∞. The supersolid must occur at sufficiently high magnetic fields, but its existence elsewhere in the phase diagram seems unlikely. 19 Using this characterization of broken U(1) symmetry (under ψ → ψe^{iθ}), the vortex smectic is always in a supersolid phase. As in Ref. 19, this can be seen by considering the correlation function of ψ's. Note that ψ(r) destroys a vortex line at position r and ψ*(r) creates a line in the coherent-state path-integral formalism. Because there is always a finite probability of finding a kink connecting the points 0 and r, Eq.74 is indeed satisfied. With this understanding, the second terms in Eqs.70 and 65 have an additional, complementary interpretation. They correspond to the contributions from the "superfluid fraction" of a two-fluid system with "superfluid" (kink) and "normal" (smectic) parts.
In addition, the concept of symmetry breaking implies that a continuous transition from the flux-liquid state (with ⟨ψ⟩ ≠ 0) to a smectic (translationally ordered in one direction) phase must necessarily retain supersolid order. For the smectic phase to appear with ⟨ψ⟩ = 0 would require simultaneous breaking of the discrete translation group and restoration of the U(1) symmetry. Such a double critical point can only occur by tuning two parameters (one in addition to the temperature) or through a first-order transition. The physical arguments of sections IV C and VI A-VI B, of course, imply the stronger condition that the smectic phase must be supersolid at all temperatures.
B. Consequences for Further Transitions at Low Temperatures
At lower temperatures, provided point disorder remains negligible, the vortices will order along the x axis as well. 41 What is the nature of this two-dimensionally ordered phase?
The different possibilities may be classified by the order in which the symmetries are broken. At the lowest temperatures, we expect the system to prefer a true solid phase, with broken translational order in both directions (in particular ρ ≠ 0, where ρ is the amplitude for periodic density variations along the x-axis; recall that ρ_⊥ ≡ Φ is the amplitude for density waves along z) and a restored U(1) symmetry (i.e. no interstitials). 42 To connect this state with the smectic phase, in which ρ = 0 and ⟨ψ⟩ ≠ 0, requires two changes of symmetry.
Three monotonic choices of symmetry breaking are shown in Fig.8. In scenario (a), upon lowering the temperature from the smectic phase, the U(1) symmetry is first restored, and the translational symmetry along the x axis is broken at a lower temperature. As remarked in the previous section, however, the intermediate non-supersolid smectic phase that appears in this sequence is impossible, so this sequence cannot occur.
Two physical choices remain. The smectic may go directly to the normal solid in a first-order transition which breaks the translational symmetry and restores the U(1) invariance simultaneously, as shown in Fig.8(b). The last possibility, illustrated in Fig.8(c), is that of an intermediate supersolid phase between the smectic and the interstitial-free solid. In this case, both of the low-temperature phase transitions may be second order. The supersolid-solid critical behavior is described in Ref. 19. The smectic-supersolid transition is once again a freezing transition at a single wavevector, and is potentially describable by a Landau theory like Eq.55. Because the modulating effect of the underlying crystal lattice is much weaker in the x direction, we expect g ≈ 0 to be a good approximation in this case, which leads to pure XY behavior.
VIII. INCOMMENSURATE PHASES
As is well known from the study of the sine-Gordon model, 15 a large incommensurability can be compensated for by energetically favorable "solitons", or walls across which u → u + s (see Fig.9). Solitons begin to proliferate when their field energy per unit area, σ_field ~ −κÃs, exceeds in magnitude their cost at zero field, σ₀ ~ s√(κg̃) (estimated from Eq.61).
Physically, these solitons correspond to extra/missing flux-line layers and to walls of aligned "jogs", for δH along the b and c axes, respectively (see Fig.9). In the former case, this leads to an incommensurate smectic (IS) phase, whose periodicity is no longer a simple multiple of s. For δH ∥ ẑ, the solitons induce an additional periodicity along the y axis. This tilted smectic (TS) phase has long-range translational order in two directions. 43 The analogous tilted crystal (TX) phase is qualitatively similar, but has long-range order in three directions.
For larger H_c, as the angle between the field and the CuO2 layers becomes large, intrinsic pinning and anisotropy no longer favor the smectic state. As shown in Fig.2, we therefore expect the L-TS and TS-TX phase boundaries to merge in this regime. The direct L-TX transition is necessarily first order.

In conventional commensurate-incommensurate transitions (CITs), entropic contributions generate additional interactions between domain walls, which actually dominate over the bare energetic repulsions when the inter-soliton spacing ℓ → ∞. To estimate their magnitude here, we use the well-known logarithmic roughness of a 2d interface, 37 ⟨[h(x) − h(0)]²⟩ ∝ k_B T ln|x| (Eq.75), where h is the height of the interface and the coordinate x parameterizes its position in the base plane. For solitons spaced by ℓ, collisions between neighbors generally occur only once h ≳ ℓ, so that the size of roughly independently fluctuating regions is x ~ exp(ℓ²). The entropy loss due to this constraint scales with the number of collisions, (L/x)², so that the areal free-energy cost per wall is ~ k_B T/x² ~ k_B T exp(−ℓ²) in suitable units. Since the energetic interactions in the smectic scale exponentially (like exp(−ℓ/w)) at long distances, the collision free energy is actually negligible as ℓ → ∞, unlike the situation for lines in 1 + 1 dimensions. 15 The free-energy density in the incommensurate phases is thus f(ℓ) = [σ + ∆ exp(−ℓ/w)]/ℓ (Eq.77), where σ ≡ σ_field + σ₀ < 0 is the total areal free energy of the soliton; ∆ and w set the energy and length scales of the soliton interactions. At low temperatures we expect w ~ λ and ∆ ~ ε₀/a_x, while near T_s, Eq.61 gives w ~ s√(κ/g̃) (c.f. Eq.71) and ∆ ~ g̃w. Minimizing Eq.77 gives a soliton separation ℓ ~ w ln(∆/|σ|) near the CIT.
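The minimization runs as follows (our sketch, with the form of Eq.77 reconstructed above):

\[
\frac{\partial f}{\partial \ell} = 0
\;\Longrightarrow\;
\Delta\, e^{-\ell/w}\left(1 + \frac{\ell}{w}\right) = |\sigma|
\;\Longrightarrow\;
\ell \simeq w \ln\frac{\Delta}{|\sigma|}
\quad (|\sigma| \ll \Delta),
\]

so the wall spacing diverges only logarithmically as the CIT is approached, in contrast to the power-law divergence familiar from commensurate-incommensurate transitions of lines in 1 + 1 dimensions.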
In the TS phase, net vortex motion along the c axis occurs by sliding soliton walls along the b direction. The resulting electric field is proportional to J and to the soliton density, leading to an additional contribution to the resistivity which vanishes at the CIT like ρ_xx^soliton ~ ρ₀^soliton/ln(∆/|σ|). A single soliton wall in the IS phase, because it is parallel to the CuO2 layers, experiences a periodic potential along z. From studies of the roughening transition, 37 it is known that such a periodically pinned wall may be in either a rough or a smooth phase. If the walls are individually smooth, thermal fluctuations are negligible, and the assembly of solitons is well described by an effectively one-dimensional elastic chain in a periodic potential. 15 Because they are pinned separately into minima of the potential, they do not contribute to ρ_xx. If they are rough, they wander logarithmically and eventually interact with their neighbors. The appropriate coarse-grained description beyond this interaction length scale is an elastic stack of domain walls. The configuration of such a stack is described by a second displacement field u_dw, with a free energy of the same form as Eq.61 (but with different values of κ and g̃). The statistical mechanics of the u_dw field is thus equivalent to that of the original u variable, and the preceding analysis must then be repeated within the new effective free energy.
Because of the aforementioned complexity of the one-dimensional problem, we have not attempted to determine the true long-distance behavior of the soliton array in the IS phase. Because the permeation mode in the commensurate smectic already provides a finite tilt modulus and a non-zero resistivity, however, we expect that these more subtle effects will have only weak experimental implications.
As the temperature is increased within the IS or TS phases, the system melts into the liquid. To study such transitions, we perform the dilation Φ → Φ exp(iA · r). Only the g term is not invariant under such a gauge-like transformation. It becomes oscillatory and therefore does not contribute to the critical behavior at long wavelengths. The IS-L and TS-L phase transitions are thus XY-like.
The shape of the CIT phase boundary is of particular experimental interest. In the mean-field regime, this is obtained from the condition σ = 0 as δH ~ |r|^Υ, with Υ_MF = (m − 2)/4. By the usual Ginzburg criterion, mean-field theory breaks down for |r| ≲ (k_B T v/K^{3/2})². To determine the shape of the phase boundary in this critical regime, we follow the RG flows out of the critical region and repeat the preceding analysis with the renormalized couplings, determined by matching when |r| is of order one. Then δH_R ~ ξ^{λ_H} δH and g_R ~ ξ^{λ_m} g, with ξ ~ |r|^{−ν}. Rotational invariance at the rescaled fixed point (g = 0) implies that the field exponent is exactly λ_H = 1 (see appendix E). Using these renormalized quantities, we find a strongly enhanced exponent Υ (sketched below) for the fields used in Ref. 12.
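One consistent way to complete the matching (our reconstruction; the displayed result did not survive extraction): setting σ = 0 at the matching scale, with Ã_R ~ (g_R/κ_R)^{1/2}, κ_R = O(1), g_R ~ ξ^{λ_m} g and δH_R ~ ξ δH, gives

\[
\delta H \sim g^{1/2}\, \xi^{\lambda_m/2 - 1} \sim |r|^{\Upsilon},
\qquad
\Upsilon = \nu \left(1 - \frac{\lambda_m}{2}\right),
\]

which, with λ_m large and negative for m = 9-11, yields Υ ≈ 5-7: a strongly cusped phase boundary.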
The IS-L and TS-L phase boundaries are non-singular, and are determined locally by the smooth δH dependence of r. In particular, for small H_c, the TS-L critical temperature is T_c(H_c) ≈ T_s − (r″/2r′) H_c², where r′ ≡ ∂r/∂T|_{T=T_s, δH=0}.
IX. INFLUENCE OF DISORDER
Lastly, we consider the effects of weak point disorder, which couples to the density of vortices according to F_d = ∫ d³r V_d(r) δn(r), where V_d(r) is a random potential, which, for point impurities, is short-range correlated in space and narrowly (e.g. Gaussian) distributed at each point. Using Eq.54, F_d can be rewritten, up to less relevant terms, as F_d ≈ n₀ ∫ d³r V_d(r) Re[Φ(r) e^{−iqz}], both in the smectic phase and in the liquid sufficiently near T_s. To bring this into a more standard form, we define a complex random field Ṽ_d ≡ V_d e^{iqz}, in terms of which F_d = n₀ ∫ d³r Re[Ṽ_d*(r) Φ(r)] (Eq.82). Because of the oscillatory e^{iqz} factor, Ṽ_d and Ṽ_d* are essentially uncorrelated at long wavelengths. Eq.82 is the simplest "random-field" XY perturbation of Eq.55, and the resulting model is known in statistical mechanics as a random-field XY model with an m-fold symmetry-breaking term.
Before discussing the critical behavior of such a theory, it is natural to consider the effect upon the ordered state. In the smectic phase, using Φ = √(|r|/v) exp(2πiu/a), Eq.82 becomes a random potential acting on the layer displacements, F_d = n₀√(|r|/v) ∫ d³r |Ṽ_d| cos(2πu/a − φ_d), with φ_d(r) a random phase. If the disorder is weak relative to the periodic potential (i.e. |Ṽ_d| ≪ g̃), it is naively justified to replace the cosine in Eq.61 by the "mass" term (2π/s)² g̃ u²/2. Such a mass term gives a large penalty for excursions of the layers with u ≳ s, so the randomness in F_d appears irrelevant.

By ignoring the periodicity of the cosine, the above approach does not consider the possibility of disorder-induced solitons. To study the stability of the smectic to such topological defects, consider a region of size L in which the displacement field u is shifted by s, so as not to incur any bulk energy cost from the intrinsic pinning. Within this region, F_d contributes an energy of random sign, of order |V_d| L^{d/2} in d dimensions. On the boundary of the region, however, the cosine does contribute, costing an energy ~ g̃ L^{d−1}. For d > 2, the boundary energy grows more rapidly with L, and the net energy is always positive, provided g̃ > |V_d| s^{1−d/2}. Thus we see that the smectic phase remains stable to weak disorder, even once solitons are taken into account.
Note that this result is in strong contrast to the Larkin-Ovchinnikov argument that the Abrikosov lattice is unstable to arbitrarily weak pinning. 16 Physically, the instability is prevented, at least for weak disorder, by the periodic pinning potential, which increases the stiffness of the smectic displacement field. The result can be understood, however, on more general symmetry grounds. For non-zero g, the system does not have a true continuous translational symmetry in the z direction, but only the discrete symmetry under translations by s. Because the symmetry is discrete, there is no Goldstone mode in the ordered (smectic) phase, i.e. phonons are massive. It is now well known that for random-field models with discrete symmetries (e.g. the random-field Ising model, to which our model corresponds when m = 2), the ordered phase survives above two dimensions. 44 Indeed, the argument given above for stability against droplet solitons is a restatement of the Imry-Ma argument first used for the random-field Ising model. 45

In the incommensurate (IS and TS) phases, where the periodic pinning g̃ is effectively zero, the Imry-Ma argument no longer applies. In these phases, the original Larkin-Ovchinnikov picture holds, and the distortions ⟨|u(r)|²⟩ must grow on long length scales, destroying the long-range translational order of the layers. The nature of the resulting phase is unclear: it may be a "smectic glass", analogous to the proposed vortex-glass phase for more isotropic systems, or it may simply be a strongly correlated liquid, with slow relaxation times. The same considerations hold for the more ordered phases at low temperatures, since the translational order along the x axis lacks the intrinsic pinning required to prevent the Larkin-Ovchinnikov instability.
Turning to the critical behavior of the L-S transition, the analysis becomes more subtle. The random-field perturbation in Eq.82 is a relevant perturbation at the XY fixed point (and indeed at any O(n) fixed point), so the critical behavior is certainly altered. Naively, a 6 − ε expansion may be made for the critical behavior of the O(n) random-field model, which has a zero-temperature critical point. 46 Within such a perturbative expansion near 6 dimensions, the symmetry-breaking term appears irrelevant. There are two potential problems with this approach. Firstly, if the symmetry-breaking term indeed remains irrelevant at the new critical point, it would be an example of a three-dimensional random-field XY critical point. The random-field XY model, however, because it has a continuous symmetry, does not even have a stable ordered phase in three dimensions. Although there may not be an obvious contradiction involved in this scenario, the physical meaning is certainly unclear. One possible resolution is that the symmetry-breaking term becomes relevant at some higher dimension (greater than 4). The second problem is the status of the 6 − ε expansion itself, which has been proven to break down, at least via non-perturbative corrections (but possibly more strongly), for the case of the random-field Ising model. 47 The consistency of the 6 − ε expansion, even perturbatively, has not yet been determined. Regardless of the success or failure of this theoretical approach, experimental work on random-field Ising systems has demonstrated the subtle types of behavior possible for such zero-temperature critical points. Fortunately, as discussed in sections VI A-VII, the supersolid nature of the smectic (with its relatively fast permeation mode) implies that slow dynamics of the smectic ordering will not have a strong impact on transport and magnetization experiments.
Finally, consider the L-IS and L-TS transitions in the presence of disorder. Disorder strongly affects the behavior at such CITs, because it modifies the wandering of a single domain wall. 48,49 Fortunately, the effects of random-field disorder on a single interface are known exactly. 50,51 In contrast to Eq.75, the height fluctuations of the interface grow like h ~ |x|^ζ, where ζ = (5 − d)/3 = 2/3 in three dimensions. Also unlike the pure interface, the free-energy fluctuations within a region grow with length scale, so that the cost per collision scales like |x|^θ, where θ = d − 3 + 2ζ = 4/3. The areal collision free energy per wall is thus ~ ∆_d/ℓ, which dominates over the exponential energetic interactions for large ℓ. The full soliton free energy is thus f(ℓ) = σ/ℓ + ∆_d/ℓ² (Eq.86), where ∆_d measures the strength of the disorder-induced collision interactions. Minimization of Eq.86 gives ℓ ~ ∆_d/|σ| ~ 1/|σ|.
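The scaling chain behind the ∆_d/ℓ² term (our reconstruction): collisions occur when h ~ ℓ, i.e. at in-plane separations x ~ ℓ^{1/ζ}, and each collision costs ~ x^θ, so that per unit wall area

\[
V_{\mathrm{coll}} \sim \Delta_d\, x^{\theta - 2}\Big|_{x \sim \ell^{1/\zeta}}
= \Delta_d\, \ell^{(\theta - 2)/\zeta}
= \frac{\Delta_d}{\ell},
\qquad
f(\ell) = \frac{\sigma}{\ell} + \frac{\Delta_d}{\ell^2}
\;\Longrightarrow\;
\ell \sim \frac{\Delta_d}{|\sigma|},
\]

using ζ = 2/3 and θ = 4/3, so that (θ − 2)/ζ = −1. In the disordered case the inter-wall spacing thus diverges as a power law at the CIT, rather than logarithmically as in the pure system.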
X. CONCLUSIONS AND APPLICATIONS TO HELIUM FILMS
We have studied the behavior of vortex arrays subjected to a one-dimensional periodic potential transverse to the magnetic field. Such a potential, which is induced by the layered structure of the high-temperature copper-oxide superconductors for fields oriented in the a-b plane, favors an intermediate smectic phase between the vortex lattice and the flux liquid. The commensurate smectic state is supersolid, and in consequence has a nonzero but finite resistivity and a finite tilt modulus, despite being pinned by the periodic potential. Including incommensurability effects leads to the rich phase diagrams of Fig.2. The experimental signature of the smectic is the appearance of Bragg peaks in the structure function along a single ordering axis, interleaving the trivial peaks induced by the layering. We also expect a greatly reduced resistivity for currents transverse to both the layering and magnetic-field axes, and a cusped phase boundary describing the response to small fields perpendicular to the layering direction. The qualitative behavior of the tilt modulus and resistivity near the liquid-to-smectic transition, in a field aligned perfectly with the ab-plane, is shown in Fig.10.

FIG. 10. Tilt modulus and resistivity at T_s for a perfectly aligned commensurate field. Not to scale: the resistivity could appear to drop to zero, and the tilt modulus could appear to diverge to infinity, in all but the most precise experiments. The open circles denote |T − T_s|^{1−α} singularities, where α is the specific-heat exponent.
Using the boson mapping, 4,23 these results can be extended to real two-dimensional quantum-mechanical bosons at zero temperature. The analogous quantum smectic phase might be studied in helium on a periodically ruled substrate. Such a substrate might be approximated by crystalline facets exposing a periodic array of rectangular unit cells with large aspect ratio. There are, however, a number of difficulties inherent in this extension. In particular, the interaction between helium atoms is not purely repulsive; it is reasonably well described by a Lennard-Jones potential with a minimum at an interatomic spacing of a few Angstroms. Obtaining a substrate with a small enough period to affect the physics on these length scales is an experimental challenge. Were these interactions purely repulsive, one could probably overcome this difficulty by working with a dilute system. With an attractive tail to the potential, however, a low-density helium film would likely phase-separate into helium-rich and helium-poor regions, making intermediate densities inaccessible. Appropriate experimental conditions may nevertheless be achievable for small values of m (the number of periods of the potential per period of the smectic density wave). The depth of the minimum in the effective pair potential, moreover, could be reduced somewhat by a careful choice of substrate.
The case m = 2 has been explored numerically in a Bose-Hubbard model in Ref. 52. These authors indeed find a smectic phase, which they denote a "striped solid", with order in reciprocal space at q = (π, 0). Their results for the structure function and superfluid density appear to be in good agreement with the predictions of section VI (note that the superfluid density of the boson system maps onto the inverse tilt modulus c₄₄⁻¹ of the flux lines). Unfortunately, a detailed comparison of the critical behavior with the theory is beyond the resolution of the available data. At a more general level, the liquid-smectic transition treated here appears to be the only known case of continuous quantum freezing in 2 + 1 dimensions. The smectic phase, moreover, is perhaps the simplest example of a quantum phase intermediate between solid and liquid. The techniques developed here may be useful in understanding other quantum phases of mixed liquid/solid character. One particularly intriguing example is the "Hall solid" proposed in Ref. 53. Through the Chern-Simons mapping, 54 it can be related to a supersolid phase of composite bosons (electrons plus flux tubes). 55 This phase, an analogous "Hall smectic" and "Hall hexatic", and the modifications of the current theory needed to account for the long-range Coulomb and Chern-Simons interactions are discussed in Ref. 55.
ACKNOWLEDGMENTS
It is a pleasure to acknowledge discussions with George Crabtree, Daniel Fisher, Matthew Fisher, Randall Kamien, Wai Kwok, Leo Radzihovsky, and John Reppy. This research was supported by the National Science Foundation, through grant No. DMR94-17047 and in part through the MRSEC program via grant DMR4-9400396. L.B.'s work was supported at the Institute for Theoretical Physics by grant no. PHY89-04035.
APPENDIX A: DERIVATION OF HYDRODYNAMIC EQUATIONS
The continuity equation (Eq.13) follows from differentiation of Eqs.8; Eq.14 is derived analogously, giving the tangent current tensor of Eq.A3. Eqs.A2 and A3 are completely general, and do not depend upon the detailed dynamics of the vortex system. This additional physics is included in the constitutive equations (Eqs. 15 and 16). To derive them, we need the equation of motion, Eq.7. Inserting this into Eq.A2 gives Eq.A4. The functional derivative with respect to r_⊥i can be transformed via the chain rule

δF = ∫ d³r [ (δF/δn(r)) δn(r) + (δF/δτ(r)) · δτ(r) ],   (A5)

where the variations δn and δτ are

δn(r) = −∇_⊥ · Σ_i δ(r_⊥ − r_⊥i) δr_⊥i(y),
δτ(r) = −∂_α Σ_i δ(r_⊥ − r_⊥i) (dr_⊥i/dy) δx_i^α(y).   (A6)

Substituting Eqs.A5 and A6 into Eq.A4 then gives Eq.15. The constitutive equation for j_βα is derived analogously, by premultiplying Eq.7 with ∂x_i^β/∂y and carrying out the same steps as before.

APPENDIX B: LANDAU THEORY OF THE SMECTIC-CRYSTAL TRANSITION

We assume that H is in the ab-plane and that commensurate smectic order is already well established, and ask how a two-dimensional vortex modulation then arises at a lower temperature. The modulated vortex density now takes the form

n(r) = n₀ Re[ 1 + Φ(r) e^{−iqz} + ψ₁(r) e^{−iG₁·r} + ψ₂(r) e^{−iG₂·r} ],   (B1)

where Φ(r) is the (large) smectic order parameter, and G₁ and G₂ are reciprocal lattice vectors lying in the (x-z) plane, with G_1x = −G_2x ≠ 0, satisfying G₁ + G₂ = −qẑ (so that the cubic term in Eq.B3 below is translationally invariant). The six vectors ±qẑ, ±G₁, and ±G₂ form a distorted hexagon of minimal reciprocal lattice vectors. All other reciprocal lattice vectors in the crystalline phase are linear combinations of this set, which reflects an anisotropic vortex lattice in real space. The corresponding set of reciprocal lattice vectors for a square lattice is illustrated in Fig.1. The complex amplitudes ψ₁(r) and ψ₂(r) are small near the transition, and the Landau free-energy difference δF between the smectic and crystalline phases takes the form

δF = ∫ d³r [ (K̃/2)|∇ψ₁|² + (K̃/2)|∇ψ₂|² + g̃ ∇ψ₁ · ∇ψ₂ + (r̃/2)(|ψ₁|² + |ψ₂|²) + w̃(Φψ₁ψ₂ + Φ*ψ₁*ψ₂*) + ··· ].   (B3)

We have equated the coefficients of gradients in all three directions for simplicity. Within mean-field theory, crystalline order can arise via a continuous phase transition whenever r̃ < 0. The neglected higher-order terms fix the magnitudes of ψ₁ and ψ₂, |ψ₁| = |ψ₂| ≡ ψ₀, below the mean-field transition temperature. To study the true transition (which occurs for r̃ = r̃_c < 0 due to thermal fluctuations), we set ψ₁ = ψ₀ e^{iθ₁(r)}, ψ₂ = ψ₀ e^{iθ₂(r)}.
Upon neglecting a constant, the free energy becomes

δF = ∫ d³r [ (K/2)|∇θ₁|² + (K/2)|∇θ₂|² + g ∇θ₁ · ∇θ₂ − 2w cos(θ₁ − θ₂) ],   (B5)

where K = K̃ψ₀², g = g̃ψ₀², w = w̃|Φ|ψ₀² (the argument of the cosine is our reconstruction), and we have assumed that the phase of the smectic order parameter Φ is locked to zero by the periodic pinning potential. Eq.B5 represents two coupled XY models with phases locked by the cosine. This term forces θ₁ ≈ θ₂ ≡ θ, and the phase transition falls in the universality class of a three-dimensional XY model, with effective free energy

δF_XY ≈ (K + g) ∫ d³r |∇θ|².   (B6)
APPENDIX E: THE FIELD EXPONENT λ_H

Under the dilation Φ → Φ exp(iA · r), the term involving A may be completely removed from F. The free energy is thus independent of A. After integrating out short-wavelength modes, no dependence on A can appear, so precisely the same transformation must eliminate the field dependence in the renormalized free energy. This operation is Φ → Φ exp(iA · rξ), using the rescaled r, so the renormalized vector potential must be A_R = ξA, which implies λ_H = 1.
"year": 1995,
"sha1": "98b836e80f626500dd8dddeac50ce802fe242879",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9503084",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "99338e55b9751a9588ebef97ef6da0bf12c89af4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics",
"Medicine"
]
} |
Here, we report new mechanistic insight into how chronic hypoxia increases ‘stemness’ in cancer cells. Using chemical inhibitors, we provide direct experimental evidence that ROS production and mitochondrial biogenesis are both required for the hypoxia-induced propagation of CSCs. More specifically, we show that hypoxic CSCs can be effectively targeted with i) simple mitochondrial anti-oxidants (Mito-TEMPO) and/or ii) inhibitors of mitochondrial biogenesis (Doxycycline). In this context, we discuss the idea that mitochondrial biogenesis itself may be a primary driver of “stemness” in hypoxic cancer cells, with metabolic links to fatty acid oxidation (FAO). As Doxycycline is an FDA-approved drug, we propose that it could be re-purposed to target hypoxic CSCs, either alone or in combination with chemotherapy, i.e., Paclitaxel. For example, we demonstrate that Doxycycline effectively targets the sub-population of hypoxia-induced CSCs that are Paclitaxel-resistant, overcoming hypoxia-induced drug-resistance. Finally, anti-angiogenic therapy often induces tumor hypoxia, allowing CSCs to survive and propagate, ultimately driving tumor progression. Therefore, we suggest that Doxycycline could be used in combination with anti-angiogenic agents, to actively prevent or minimize hypoxia-induced treatment failure. In direct support of this assertion, Paclitaxel is already known to behave as an angiogenesis inhibitor.
INTRODUCTION
Hypoxia in the tumor microenvironment is a critical negative prognostic factor that ultimately promotes cancer progression, tumor recurrence and distant metastasis, as well as chemo- and radio-resistance [1,2]. Therefore, many medicinal chemists, pharmacologists, biologists and clinicians have sought to invent new "hypoxia-specific" therapeutics to target the hypoxic tumor microenvironment [3,4]. However, these "hypoxia-specific" strategies and anti-cancer drugs still remain elusive.
Perhaps another approach would be to better understand the basic cellular factors that make hypoxia such a strong driver of a lethal tumor microenvironment. For example, it is known that hypoxia can also induce stem cell characteristics in cancer cells [1-4]. Thus, an increase in "stemness" may actually explain the clinical association of hypoxia with a poor prognosis and drug-resistance [1-7].
Here, we provide further mechanistic evidence to support the role of hypoxia in driving CSC propagation. In addition, we show that chronic hypoxia in CSCs induces
a specific stress response, which leads to i) increased ROS production and ii) elevated mitochondrial biogenesis. Remarkably, treatment of these hypoxic CSCs with ROS scavengers (Mito-TEMPO) [8] or an FDA-approved inhibitor of mitochondrial biogenesis (Doxycycline) [9-11] effectively blocks their propagation.
We discuss the possibility that Doxycycline could be re-purposed as an anti-cancer agent to specifically target hypoxic CSCs, either alone or in combination with other chemo-therapeutic agents, such as Paclitaxel. Moreover, we directly demonstrate that Doxycycline effectively targets the sub-fraction of hypoxic CSCs that are Paclitaxel-resistant.
Chronic hypoxia stimulates mitochondrial biogenesis and CSC propagation
In this report, we systematically examine the role of chronic hypoxia and oxidative stress in the propagation of breast CSCs, using a human tumor cell line (MCF7 cells) as a model system (summarized in Figure 1).
As a first step, MCF7 cell monolayers were subjected to hypoxia (1% oxygen) for increasing periods of time (0, 6, 24, 48, 72 and 96 hours). MCF7 cell monolayers were then trypsinized and subjected to FACS analysis with MitoTracker Deep-Red-FM as a probe, to estimate their mitochondrial mass. Figure 2 shows that 6, 24 and 48 hours of hypoxia have little or no effect on mitochondrial mass. In contrast, chronic hypoxia increased mitochondrial mass by up to ~2-fold, at 72 and 96 hours of oxygen deprivation. A representative FACS tracing is shown in Figure 2F, demonstrating a clear shift to the right. This was confirmed by immunoblot analysis with TOMM20, an established marker of mitochondrial mass. Therefore, it appears that chronic hypoxia, for 3 to 4 days, preferentially stimulates mitochondrial biogenesis.
We next performed a time course of hypoxia versus CSC activity, using the mammosphere assay as a readout. Initially, MCF7 cell monolayers were cultured under conditions of acute and chronic hypoxia. Then, the cells were trypsinized and re-seeded into low-attachment plates, to detect and quantitatively measure 3D mammosphere forming activity.
Remarkably, Figure 3A shows that acute hypoxia (6 hours) actually inhibits mammosphere formation by >60%. In contrast, Figures 3B and 3C demonstrate that chronic hypoxia (72 and 96 hours) clearly stimulates mammosphere formation, by >1.5-fold. As such, chronic hypoxia appears to drive both mitochondrial biogenesis and an increase in cancer stem cell activity, suggesting that these two processes may be functionally linked.
Doxycycline, an inhibitor of mitochondrial biogenesis, targets and halts the propagation of hypoxia-induced CSC activity
To test the hypothesis that mitochondrial biogenesis is required for hypoxia-induced CSC propagation, we next used the FDA-approved antibiotic Doxycycline. Doxycycline inhibits protein synthesis in bacteria by targeting their ribosomes [6,7,9]. However, because of the conserved structural similarities between bacterial and mitochondrial ribosomes, Doxycycline also inhibits mitochondrial biogenesis, as an off-target side-effect in mammalian cells [6,7,9]. Importantly, Figure 4 shows that Doxycycline inhibits mammosphere formation after both normoxic and hypoxic pre-treatment, and is actually more effective after chronic hypoxia. Therefore, Doxycycline could be re-purposed to target the propagation of hypoxic CSCs, which are normally strongly resistant to conventional chemotherapy.
Figure 1: Experimental approach.
To understand the mechanism(s) underpinning the effects of hypoxic stress on CSC propagation, we used an unbiased metabolic phenotyping approach. Briefly, MCF7 cells were first subjected to acute hypoxia (for 6, 24 and 48 hrs) or chronic hypoxia (for 72 and 96 hrs), grown as monolayers. Then, the hypoxic cells were harvested and subjected to anchorage-independent growth assays (mammosphere formation), to measure cancer stem cell activity. A variety of other metabolic parameters were also quantitated, to pinpoint possible targets for therapeutic interventions.

Note that 3 mitochondrial ribosomal proteins (MRPL4, MRPS35 and MRPL47) were all up-regulated in response to chronic hypoxia. Other proteins related to mitochondrial biogenesis were also up-regulated, including: HYOU1, YARS2, NDUFV2, LONP1, POLRMT, COQ9, SARS2, HSPA9, HSPD1, ATP5J, and ATPAF1. A specific inhibitor of mitophagy, namely LRPPRC, which prevents the autophagic digestion of mitochondria, was also up-regulated.
Doxycycline increases the sensitivity of hypoxic CSCs to conventional chemotherapies, such as Paclitaxel
We next investigated the implications of our findings for clinical treatments with chemotherapy. Hypoxic CSCs are known to be highly resistant to conventional chemotherapies, such as Paclitaxel [1-4,12]. We were also able to demonstrate this drug-resistance in the context of hypoxia. Figure 5 directly demonstrates that a significant fraction of CSCs are clearly resistant to treatment with Paclitaxel, and that this chemo-resistance is exacerbated after MCF7 cells are exposed to chronic hypoxia. If we use 0.1 μM Paclitaxel, approximately 50% of the hypoxic CSCs remain Paclitaxel-resistant (Figure 5B). Remarkably, the addition of as little as 2 μM Doxycycline removes 50% of the Paclitaxel-resistant CSC activity; similarly, the addition of 10 μM Doxycycline inhibits >75% of the Paclitaxel-resistant CSC activity (Figure 5C). Therefore, we propose that Doxycycline could be used as an add-on to Paclitaxel treatment, to combat the Paclitaxel drug-resistance normally induced by the hypoxic tumor microenvironment.
Metabolic phenotyping and proteomic profiling of cancer cells exposed to chronic hypoxia
To better assess the metabolic state after chronic hypoxia (96 hours), we next subjected MCF7 cell monolayers to metabolic flux analysis, using the Seahorse XFe96. Interestingly, oxygen consumption rates (OCR), measured back under normoxia, were severely impaired after 96 hours of hypoxic treatment, resulting in a ~60% reduction in ATP production (Figure 6). Similarly, glycolysis rates, as measured by ECAR (extracellular acidification rate), were also dramatically reduced, by >60% (Figure 7). Therefore, after chronic hypoxia, MCF7 cells appeared to be in a more quiescent metabolic state.
Consistent with these functional metabolic observations, unbiased proteomics analysis revealed the up-regulation of 45 mitochondrial-related metabolic proteins. This is most likely related to a hypoxia-induced stress response, driving increased mitochondrial biogenesis.
Interestingly, HYOU1 (Hypoxia up-regulated protein 1) was over-expressed by >170-fold. Importantly, HYOU1 (a.k.a., Orp150) is a mitochondrial chaperone protein that belongs to the heat shock protein 70 family, which is known to be involved in mitochondrial protein folding and confers cyto-protection under hypoxic conditions [13,14]. In addition, other proteins that are part of the OXPHOS complexes were also up-regulated, such as NDUFV2, which was increased by nearly 7-fold.
Chronic hypoxia induces CSC markers, such as ALDH
ALDH activity is now a well-established CSC marker for detecting and enriching CSC activity by flow cytometry [15-17]. ALDH activity can be measured by FACS analysis, using the Aldefluor fluorescent probe [15-17]. Consistent with our current observations that chronic hypoxia increases mammosphere formation by >1.5-fold, we also observed a >1.5-fold increase in the number of ALDH(+) cells by FACS (Figure 8).
Chronic hypoxia induces oxidative stress: Mito-TEMPO, a mitochondrial anti-oxidant, halts mammosphere formation
To better understand the mechanism(s) underpinning how chronic hypoxia drives an increase in cancer stem cell activity, we hypothesized that oxidative stress might be the 'root cause'. To test this hypothesis directly, we quantitatively measured ROS production after acute hypoxia (6 hours) and after chronic hypoxia (96 hours), using the probe CM-H2DCFDA, by FACS analysis. Interestingly, Figures 9A-9D show that chronic hypoxia induces a >1.5-fold increase in ROS production, whereas there is no increase in ROS production after acute hypoxia.
To determine whether oxidative stress drives the observed hypoxia-induced increase in 'stemness', we also examined whether simple anti-oxidants can inhibit mammosphere formation. For this purpose, we used TEMPO derivatives that behave as membrane-permeable SOD-mimetic agents, which scavenge superoxide anions and other free radicals. Figure 9E demonstrates that both i) 4-hydroxy-TEMPO and ii) Mito-TEMPO effectively inhibited mammosphere formation by >70%, at a concentration of 100 μM. Importantly, Mito-TEMPO is a mitochondrially-targeted form of TEMPO, which contains a chemical mitochondrial targeting signal [8]. Therefore, we conclude that mitochondrial oxidative stress appears to be one of the key underlying causes of hypoxia-induced 'stemness'.

Chronic hypoxia activates p44/42-MAPK signalling, without modifying HIF-1α expression

HIF1-alpha is a well-known transcriptional mediator of the acute effects of hypoxia, but its functional role in chronic hypoxia is less well defined [18-22]. Therefore, we monitored the expression levels of HIF1-alpha in this context, by immunoblot analysis. We also measured p-ERK-1/2 activation, using phospho-specific antibody probes, for comparison purposes.
Interestingly, Figure 10 shows that HIF1-alpha is strongly up-regulated during acute hypoxia, as expected, but remains undetectable during chronic hypoxia. Conversely, the levels of activated phospho-ERK-1/2 were unchanged by acute hypoxia, but were significantly elevated by chronic hypoxia. Thus, these two very different signalling molecules may contribute to metabolic signalling at different phases of the hypoxia-induced stress response. The activation of ERK-1/2 by chronic hypoxia may provide a key stimulus for enhancing anchorage-independent growth.
Investigating the role of fatty acid oxidation (FAO) in mitochondrial biogenesis and CSC propagation
Fatty acid oxidation (FAO) is the process by which fatty acids are catabolized in mitochondria and peroxisomes, to generate Acetyl-CoA, which can then enter the TCA/Krebs cycle. In this process, oxidation of each Acetyl-CoA molecule yields 1 GTP and 11 ATP molecules.
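For reference, the arithmetic behind these numbers (assuming the classical P/O ratios of 3 ATP per NADH and 2 ATP per FADH2; more recent consensus values are somewhat lower):

\[
\underbrace{3\,\mathrm{NADH} \times 3}_{9\ \mathrm{ATP}}
+ \underbrace{1\,\mathrm{FADH_2} \times 2}_{2\ \mathrm{ATP}}
= 11\ \mathrm{ATP},
\]

plus 1 GTP from substrate-level phosphorylation (succinyl-CoA → succinate), per Acetyl-CoA entering the TCA cycle.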
Further bioinformatics analysis of the high-resolution proteomics data presented in Table 1 reveals that a significant number of metabolic enzymes related to mitochondrial FAO are up-regulated during chronic hypoxia. More specifically, twelve mitochondrial proteins involved in FAO were induced by chronic hypoxia, including: HIBADH, ACADSB, ACAD9, ACADVL, HADH, PCCB, DECR1, ACOT9, ACADM, ACSM2B, SUCLG2 and CPT2. This is summarized in more detail in Table 2.
These results suggest that FAO may be intimately related to mitochondrial biogenesis and CSC propagation. To test this hypothesis independently of hypoxia, we next used another, more direct, stimulus to drive increased FAO and mitochondrial biogenesis. For this purpose, MCF7 cells were treated with Valproic acid, an FDA-approved drug commonly used for the treatment of epilepsy. In this context, Valproic acid is thought to behave as a fatty acid, stimulating FAO. In fact, Valproic acid is chemically classified as a branched short-chain fatty acid. Figure 11A shows that treatment with increasing concentrations of Valproic acid (0, 1, 2.5 and 5 mM) is indeed sufficient to stimulate mitochondrial biogenesis, resulting in an up to 3-fold increase in mitochondrial mass. Similarly, the addition of Valproic acid increased mammosphere formation, by up to 2-fold (Figure 11B). As the effects we observed were near maximal at 2.5 mM, and some toxicity was observed at 5 mM, we decided to perform all subsequent experiments with 2.5 mM Valproic acid. At 2.5 mM, Valproic acid increased ALDH activity by >1.5-fold, consistent with an increase in 'stemness' (Figure 11C, 11D).

To validate the idea that Valproic acid increased CSC propagation by a metabolic mechanism, we used two distinct inhibitors of FAO, namely Etomoxir and Perhexiline, both of which specifically target the enzyme CPT (carnitine O-palmitoyltransferase).

Figure 11: MCF7 cells were treated with Vehicle or increasing concentrations of Valproic Acid (1 to 5 mM) for 72 h. Then, treatments were removed and cells were incubated in regular medium for an additional 72 h, before seeding in low-attachment plates for 5 days in the presence of treatments. C. ALDEFLUOR activity, an independent marker of CSCs. MCF7 cells were treated with vehicle or Valproic Acid (2.5 mM) for 72 h. Each sample was normalized using diethylaminobenzaldehyde (DEAB), a specific ALDH inhibitor, as a negative control. D. The tracing of representative samples is shown. In panels A-C, data shown are the mean ± SEM of 3 independent experiments performed in triplicate. (***) p < 0.001.
To validate the idea that Valproic acid increased CSC propagation by a metabolic mechanism, we used two distinct inhibitors of FAO, namely Etomoxir and Perhexiline, both of which specifically target the enzyme CPT (carnitine O-palmitoyltransferase). Interestingly, Perhexiline is used clinically in New Zealand and Australia, as a preventative treatment for ischemic heart disease. Figure 12A, 12B shows that Etomoxir (200 μM) and Perhexiline (0.1, 1 and 10 μM) effectively inhibit both basal and Valproic acid-augmented CSC propagation. Similar results were also obtained with Doxycycline (50 μM), which functions to inhibit mitochondrial biogenesis (Figure 12A).

(Figure 11 legend: MCF7 cells were treated with Vehicle or increasing concentrations of Valproic Acid (1 to 5 mM) for 72 h. Then, treatments were removed and cells were incubated in regular medium for an additional 72 h before seeding in low-attachment plates for 5 days in the presence of treatments. C. ALDEFLUOR activity, an independent marker of CSCs. MCF7 cells were treated with vehicle or Valproic Acid (2.5 mM) for 72 h. Each sample was normalized using diethylaminobenzaldehyde (DEAB), a specific ALDH inhibitor, as a negative control. The tracing of representative samples is shown in D. In panels A-C, data shown are the mean ± SEM of 3 independent experiments performed in triplicate. (***) p < 0.001.)
Glycolysis is normally required to provide additional TCA cycle intermediates for the mitochondrial processing of Acetyl-CoA. Consistent with this idea, treatment with glycolysis inhibitors (2-DG or Vitamin C (ascorbic acid)) was also sufficient to inhibit Valproic acid-augmented CSC propagation (Figure 12C). Under these conditions, the IC-50 for 2-DG was 10 mM, while the IC-50 for Vitamin C was ~0.5 mM. Compared with our previously published results under basal conditions [10], Vitamin C was twice as potent under Valproic acid-augmented conditions.
In summary, it appears that both mitochondrial biogenesis and CSC propagation are metabolically linked to FAO, which can be functionally stimulated by Valproic acid (an FDA-approved drug) and inhibited by CPT inhibitors, such as Etomoxir and Perhexiline. Interestingly, the use of doxycycline or glycolysis inhibitors (2-DG and Vitamin C) was also sufficient to override the stimulatory effects of Valproic acid on CSC propagation.
Repositioning the FDA-approved antibiotic Doxycycline to target hypoxic CSCs
Here, we report a new mechanism underlying how prolonged hypoxia drives the onset of enhanced stem cell characteristics in cancer cells. This prolonged or chronic hypoxia leads to elevated ROS production and oxidative stress, which in turn drives increased mitochondrial biogenesis as a stress response. More specifically, this hypoxic stress response fosters an increase in overall mitochondrial mass in CSCs. Based on these new mechanistic observations, we next employed two well-established small molecules to directly target oxidative stress and mitochondrial protein synthesis in hypoxic CSCs. As a consequence, we now demonstrate that i) Mito-TEMPO (a mitochondrial anti-oxidant) [8] and ii) Doxycycline (an antibiotic that inhibits mitochondrial protein translation) [9][10][11] can both be used to functionally target hypoxic CSCs (summarized in Figure 13).
Therefore, we propose that Doxycycline is a nontoxic FDA-approved antibiotic that could be re-positioned to specifically eradicate hypoxic CSCs. We envision that Doxycycline would be used alone or in combination with other chemotherapy agents, such as Paclitaxel. In fact, we show here that Doxycycline can be used to target the Paclitaxel-resistant sub-population of hypoxic CSCs.
Implications of Doxycycline for combating antiangiogenic therapy resistance
Over the last decade, anti-angiogenic therapies have emerged as promising anti-cancer agents, based on their ability to target tumor blood vessels, depriving cancer cells of essential nutrients [23][24][25][26]. However, clinical and pre-clinical data now questions the long-term benefits of anti-angiogenic therapies. For example, administration of anti-angiogenic agents has been shown to actually increase tumor invasiveness and metastasis [23][24][25][26]. The mechanistic explanation for the failure of angiogenesis inhibitors is their ability to generate intra-tumoral hypoxia, which then stimulates CSC survival and propagation [23][24][25][26].
Indeed, environmental stressors like chronic hypoxia activate a complex response that includes the stimulation of "emergency" biochemical and biological programs. Under these conditions, cancer cells survive by entering into a transient quiescent state, which is reversed when appropriate environmental conditions are sufficient to support cell proliferation and, ultimately, metastasis [27]. As such, the effectiveness of anti-angiogenic agents could be improved by using combination strategies aimed at inhibiting both cancer and cancer stem-like cells. Based on our current observations, we suggest the combined use of Doxycycline with angiogenesis inhibitors, such as Bevacizumab (Avastin). This new proposed combination therapy would effectively block both: i) blood vessel formation and ii) CSC propagation, ultimately making anti-angiogenic therapy more effective. Additionally, our data indicate that Doxycycline's inhibitory action on CSC propagation is particularly efficient after a hypoxic stress as compared with normoxic conditions, suggesting that anti-angiogenic agents, by generating intratumoral hypoxia, could sensitize the dormant CSCs to the inhibitory effects of mitochondria-targeting agents. Although further validation of this therapeutic strategy is needed, the use of mitochondrial biogenesis inhibitors could ultimately allow anti-angiogenic drugs to fulfill their therapeutic potential.
In order to eradicate hypoxic cancer cells, many other investigators have sought to generate novel therapeutics that specifically interfere with HIF1-alpha function, the major hypoxia-induced transcription factor [18,19]. However, based on our current findings, HIF1-alpha is preferentially expressed only during the acute phase of hypoxia (6 hours), but is virtually undetectable or completely absent, under conditions of chronic hypoxia (96 hours). Instead, we find that phospho-ERK-1/2 is hyper-activated during chronic hypoxia, but not during acute hypoxia, showing just the opposite activation pattern to HIF1-alpha. Based on these findings, we conclude that HIF1-alpha inhibitors may not be that clinically relevant for the targeting of hypoxic CSCs, especially under chronic conditions, since the actions of HIF1-alpha appear to be largely confined to acute hypoxia.

(Figure 14 legend: Here, we show that mitochondrial biogenesis and the propagation of CSCs can be stimulated by Valproic acid (VA, an FDA-approved drug), which is a branched short-chain fatty acid. Moreover, we demonstrate that VA-induced CSC propagation can be blocked by inhibitors of i) FAO (Etomoxir and Perhexiline), ii) mitochondrial biogenesis (Doxycycline) and/or iii) glycolysis (2-DG and Vitamin C). Importantly, Perhexiline and Doxycycline are used clinically for other medical indications, so they could be re-purposed. Vitamin C is a micronutrient that is also readily available (over-the-counter) and can be used safely, at relatively high oral or i.v. dosages.)
Linking mitochondrial biogenesis to CSC propagation and asymmetric cell division
During our hypoxia experiments, we observed that MCF7 cells subjected to chronic hypoxia (for 96 hours) have dramatically reduced metabolic activity, characterized by very low levels of both i) mitochondrial oxygen consumption and ii) glycolysis; this finding is suggestive of a more quiescent metabolic state. In contrast, chronically hypoxic MCF7 cells showed marked increases in mitochondrial mass, a surrogate marker of mitochondrial biogenesis. Therefore, increased mitochondrial biogenesis may be a primary driver of increased "stemness", rather than augmented cell metabolism.
In direct support of this 'working hypothesis', Sabatini, Weinberg and colleagues have shown that new mitochondrial biogenesis is absolutely required for asymmetric cell division in CSCs [28]. More specifically, they observed that during asymmetric cell division, new CSCs retain the newly-generated mitochondria, while the old mitochondria are transferred during cytokinesis to the non-stem cells (or daughter cells) [28]. Therefore, inhibition of mitochondrial biogenesis should block asymmetric cell division, reducing "stemness". This mechanistic interpretation would directly explain why Doxycycline is so effective at halting mammosphere formation, which is strictly dependent on asymmetric cell division in CSCs.
Dissecting the functional role of FAO in mitochondrial biogenesis and CSC propagation
Since we observed that hypoxia specifically induced >10 mitochondrial enzymes associated with FAO (Table 2), we used Valproic acid as an independent and experimentally convenient means to activate FAO.
Although Valproic acid has been shown to elicit anti-proliferative and pro-apoptotic actions in diverse cancer cell systems [33][34], its actual effectiveness as an anti-cancer agent is still a controversial and hotly-debated topic, as suggested by several recent clinical trials [35][36][37]. On the other hand, Valproic acid has been shown to increase the CSC population in glioblastoma cells and to "reprogram" differentiated triple-negative breast cancer cells to become quiescent stem-like cancer cells [38][39]. Supporting these studies, an increase in cell migration and invasion and a switch towards the epithelial-mesenchymal transition (EMT) phenotype have been detected in various cancer cells treated with Valproic acid [40][41].
Interestingly, we observed here that treatment of MCF7 cells with Valproic acid was indeed sufficient to increase both i) mitochondrial biogenesis and ii) CSC propagation. The Valproic acid-induced increase in CSC propagation that we observed could be halted by using well-established inhibitors of FAO, namely Etomoxir and Perhexiline, which target the enzyme CPT (summarized in Figure 14). Importantly, Etomoxir and Perhexiline also inhibited basal CSC propagation, which was not induced by Valproic acid.
Therefore, based on the above experimental evidence, we believe that FAO is also a critical source of metabolic energy for fueling the propagation of CSCs. Perhexiline is used clinically, in New Zealand and Australia, as a prophylactic anti-anginal agent. As such, Perhexiline could be re-purposed to inhibit FAO in CSCs. Similarly, treatment with Doxycycline, another FDA-approved drug, was sufficient to combat the stimulatory effects of Valproic acid on CSC propagation.
CONCLUSIONS
In summary, we provide new functional evidence that Doxycycline-mediated inhibition of mitochondrial biogenesis may indeed be sufficient to eliminate hypoxic CSCs. Based on these new findings, we believe that future Phase II clinical trials may be warranted, to re-purpose Doxycycline as an anti-cancer agent targeting chronic hypoxia.
Cell cultures
MCF7 breast cancer cells were obtained from ATCC and cultured in DMEM (Sigma Aldrich). For hypoxic stimulation, MCF7 cells were cultured in low-glucose DMEM (Sigma Aldrich) in a multi-gas N2/CO2 hypoxic chamber at 1% pO2; in parallel, MCF7 cells were cultured in low-glucose DMEM at 21% O2 to serve as a normoxic control.
Mammosphere formation
A single cell suspension of MCF7 cells previously exposed to Normoxia (21% O2) or Hypoxia (1% O2) for 6 h, 72 h or 96 h was prepared using enzymatic (1x Trypsin-EDTA, Sigma Aldrich) and manual disaggregation (25-gauge needle) [42]. Cells were then plated at a density of 500 cells/cm2 in mammosphere medium (DMEM-F12/B27/20 ng/ml EGF/PenStrep) under non-adherent conditions, in culture dishes coated with poly(2-hydroxyethylmethacrylate) (poly-HEMA, Sigma), in the presence of treatments, where required. Cells were grown for 5 days and maintained in a humidified incubator at 37°C, at atmospheric pressure, in 5% (v/v) carbon dioxide/air. After 5 days of culture, spheres > 50 μm were counted using an eyepiece graticule, and the percentage of cells plated which formed spheres was calculated and is referred to as percent mammosphere formation. Mammosphere assays were performed in triplicate and repeated three times independently.
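The reported 'percent mammosphere formation' reduces to a simple ratio; a hypothetical helper illustrating the calculation (function and variable names are ours, not from the paper) might read:

```python
def percent_mammosphere_formation(spheres_counted: int,
                                  cells_plated: int) -> float:
    """Percentage of plated single cells that formed spheres > 50 um.

    spheres_counted: number of mammospheres (> 50 um) counted after 5 days
    cells_plated:    number of single cells originally seeded
    """
    return 100.0 * spheres_counted / cells_plated

# Example with made-up numbers: 40 spheres from 1000 plated cells -> 4.0 %
print(percent_mammosphere_formation(40, 1000))
```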
Evaluation of mitochondrial mass and function
To measure mitochondrial mass by FACS analysis, cells were stained with MitoTracker Deep Red (Life Technologies), which localizes to mitochondria regardless of mitochondrial membrane potential. Cells were incubated with pre-warmed MitoTracker staining solution (diluted in PBS/CM to a final concentration of 10 nM) for 30-60 min at 37 °C. All subsequent steps were performed in the dark. Cells were washed in PBS, harvested, re-suspended in 300 μL of PBS and then analyzed by flow cytometry (Fortessa, BD Bioscience). Data analysis was performed using FlowJo software (Tree Star Inc.). Extracellular acidification rates (ECAR) and real-time oxygen consumption rates (OCR) for MCF7 cells were determined using the Seahorse Extracellular Flux (XFe-96) analyzer (Seahorse Bioscience). After exposure to Normoxia (21% O2) or Hypoxia (1% O2) for 96 h, 15,000 MCF7 cells per well were seeded into XFe-96 well cell culture plates for 24 h. Then, cells were washed in pre-warmed XF assay media (or, for OCR measurement, XF assay media supplemented with 10 mM glucose, 1 mM pyruvate and 2 mM L-glutamine, adjusted to pH 7.4). Cells were then maintained in 175 µL/well of XF assay media at 37 °C, in a non-CO2 incubator, for 1 hour. During the incubation time, 5 µL of 80 mM glucose, 9 µM oligomycin and 1 M 2-deoxyglucose (for ECAR measurement) or 10 µM oligomycin, 9 µM FCCP, 10 µM Rotenone and 10 µM antimycin A (for OCR measurement) were loaded in XF assay media into the injection ports in the XFe-96 sensor cartridge. Data sets were analyzed by the XFe-96 software after the measurements were normalized to protein content (SRB). All experiments were performed three times independently.
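As a sketch of the normalization step just described (the actual analysis used the XFe-96 software; the well values below are invented for illustration), each well's raw reading is divided by its SRB protein signal before conditions are compared:

```python
import numpy as np

# Illustrative values only, not the authors' data: raw per-well OCR
# readings and the matching SRB absorbances used as a protein proxy.
ocr_raw = np.array([120.0, 135.0, 98.0])   # pmol O2/min per well
srb = np.array([0.85, 0.92, 0.78])         # SRB absorbance per well

# Well-by-well normalization to protein content, as described above.
ocr_normalized = ocr_raw / srb             # pmol O2/min per unit SRB signal
print(ocr_normalized.round(1))
```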
ALDEFLUOR assay and separation of the ALDH-positive population

ALDH activity was assessed by FACS analysis in MCF7 cells cultured for 72 h in Normoxia (21% O2) or Hypoxia (1% O2), as well as in MCF7 cells treated with Vehicle or Valproic Acid for 72 h. The ALDEFLUOR kit (StemCell Technologies) was used to isolate the population with high ALDH enzymatic activity by FACS (Fortessa, BD Bioscience). Briefly, 1 × 10⁵ MCF7 cells were incubated in 1 ml ALDEFLUOR assay buffer containing ALDH substrate (5 μl/ml) for 40 minutes at 37°C. In each experiment, a sample of cells was stained under identical conditions with 30 μM of diethylaminobenzaldehyde (DEAB), a specific ALDH inhibitor, as a negative control. The ALDEFLUOR-positive population was established according to the manufacturer's instructions and was evaluated in 3 × 10⁴ cells. Data analysis was performed using FlowJo software (Tree Star Inc.).
Label-free semi-quantitative proteomics analysis
Cell lysates were prepared for trypsin digestion by sequential reduction of disulphide bonds with TCEP and alkylation with MMTS. Then, the peptides were extracted and prepared for LC-MS/MS. All LC-MS/MS analyses were performed on an LTQ Orbitrap XL mass spectrometer (Thermo Scientific, San Jose, CA) coupled to an Ultimate 3000 RSLC nano system (Thermo Scientific, formerly Dionex, The Netherlands). Xcalibur raw data files acquired on the LTQ-Orbitrap XL were directly imported into Progenesis LCMS software (Waters Corp., Milford, MA, formerly Non-linear Dynamics, Newcastle upon Tyne, UK) for peak detection and alignment. Data were analyzed using the Mascot search engine. Five technical replicates were analyzed for each sample type [10,11].
Evaluation of reactive oxygen species
Reactive oxygen species (ROS) production was measured by FACS analysis using CM-H2DCFDA (C6827, Life Technologies), a cell-permeable probe that is non-fluorescent until oxidation within the cell. MCF7 cells were cultured under Normoxia (21% O2) or Hypoxia (1% O2) for 6 h or 96 h. Thereafter, cells were washed with PBS and incubated at 37°C for 20 min with 1 μM CM-H2DCFDA, diluted in PBS/CM. All subsequent steps were performed in the dark. Cells were rinsed, harvested, re-suspended in PBS/CM and then analyzed by flow cytometry (Fortessa, BD Bioscience). ROS levels were estimated by using the mean fluorescence intensity of the viable cell population. The results were analyzed using FlowJo software (Tree Star Inc.).
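A minimal illustration of how such an estimate can be derived (the per-cell intensities below are invented placeholders, not the authors' data): the mean fluorescence intensity of the gated viable population is compared between conditions:

```python
import numpy as np

# Hypothetical per-cell CM-H2DCFDA intensities for gated viable cells.
normoxia_mfi = np.array([410.0, 395.0, 430.0, 405.0])
hypoxia_mfi = np.array([620.0, 655.0, 640.0, 610.0])

# ROS level is estimated from the mean fluorescence intensity (MFI);
# the ratio gives the fold-change between conditions.
fold_change = hypoxia_mfi.mean() / normoxia_mfi.mean()
print(f"ROS fold-change (hypoxia / normoxia): {fold_change:.2f}")
```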
Statistical analysis
Data are represented as the mean ± standard error of the mean (SEM), taken over ≥ 3 independent experiments, with ≥ 3 technical replicates per experiment, unless otherwise stated. Statistical significance was measured using the t-test. P ≤ 0.05 was considered significant.
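For illustration, an unpaired two-sample t-test of the kind described above can be run as follows (a sketch with placeholder numbers, not the authors' data):

```python
from scipy import stats

# Means of >= 3 independent experiments per condition (placeholders).
control = [4.1, 3.8, 4.3]   # e.g. % mammosphere formation, vehicle
treated = [7.9, 8.4, 7.6]   # e.g. % mammosphere formation, treatment

# Unpaired two-sample t-test; p <= 0.05 is considered significant.
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```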
Author contributions
Professor Michael Lisanti and Dr. Federica Sotgia conceived and initiated this collaborative project.
ACKNOWLEDGMENT AND FUNDING
We are grateful to the University of Manchester, which allocated start-up funds and administered a donation, to provide all the necessary resources required to start and complete this drug discovery project (to MPL and FS). Dr. Ernestina M. De Francesco was supported by a fellowship from the Associazione Italiana per la Ricerca sul Cancro (AIRC), co-funded by the European Union. The Lisanti and Sotgia Laboratories are currently supported by private donations, and by funds from the Healthy Life Foundation (HLF) and the University of Salford (to MPL and FS). We also wish to thank Dr. Duncan Smith, who performed the proteomics analysis on whole cell lysates, within the CRUK Core Facility. MM was supported by the Associazione Italiana per la Ricerca sul Cancro (AIRC, IG 16719). | 2018-01-24T17:25:33.048Z | 2017-06-12T00:00:00.000 | {
"year": 2017,
"sha1": "c92912fd6a68ba05dd6fcc6457412bc304179a07",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=18445&path[]=59279",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43b49f7aebf674244a2a87858ed07650977b93dd",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257110965 | pes2o/s2orc | v3-fos-license | Spinal Motion Segments – I: Concept for a Subject-specific Analogue Model
Most commercial spine analogues are not intended for biomechanical testing, and those developed for this purpose are expensive and yet still fail to replicate the mechanical performance of biological specimens. Patient-specific analogues that address these limitations and avoid the ethical restrictions surrounding the use of human cadavers are therefore required. We present a method for the production and characterisation of biofidelic, patient-specific, Spine Motion Segment (SMS = 2 vertebrae and the disc in between) analogues that allow for the biological variability encountered when dealing with real patients. Porcine spine segments (L1–L4) were scanned by computed tomography, and 3D models were printed in acrylonitrile butadiene styrene (ABS). Four biological specimens and four ABS motion segments were tested, three of which were further segmented into two Vertebral Bodies (VBs) with their intervertebral disc (IVD). All segments were loaded axially at 0.6 mm·min⁻¹ (strain-rate range 6×10⁻⁴ s⁻¹ – 10×10⁻⁴ s⁻¹). The artificial VBs behaved like biological segments within the elastic region, but the best two-part artificial IVDs were ~15% less stiff than the biological IVDs. High-speed images recorded during compressive loading allowed full-field strains to be produced. During compression of the spine motion segments, IVDs experienced higher strains than VBs, as expected. Our method allows the rapid, inexpensive and reliable production of patient-specific 3D-printed analogues, which morphologically resemble the real ones and whose mechanical behaviour is comparable to that of real biological spine motion segments, and this is their biggest asset.
Introduction
Patient-specific analogues are needed in the modern fields of forensic and injury biomechanics [1][2][3] because human cadaver specimens are variable and difficult to preserve for biomechanical testing [4] and their use is subject to stringent ethical considerations. Accordingly, mammalian quadruped spines or spine analogues are used instead [5]. Several spine analogues are currently available, but most are not intended for biomechanical testing. They are used for training, for drilling and implant fixation trials, and to demonstrate the range of motion by handling and manipulation [1,3,6].
Replicating the mechanical behaviour of real spines is particularly important in relation to fixing and testing implants. Standardised tests such as ASTM F1717 simply fix implants on blocks of material typically constructed from ultra-high-molecular-weight polyethylene (UHMWPE), without a "spine segment" being present in parallel with the construct [7][8][9]. These are intended for both static and dynamic implant testing but make no attempt to evaluate the implants under loading in the presence of a spine (matching the in vivo environment) and do not replicate the shape, architecture or geometry of the vertebrae in the experimental design. For example, although standardised test methods have been proposed for the reproducible comparison of the stiffness or strength of implants, it is doubtful whether these are indicative of the in vivo behaviour of the spine.
The biofidelity of the standard testing environments must be improved to allow the comparison of devices for orthopaedic applications [9]. A few commercially available spine analogues are similar both geometrically and biomechanically (only in so far as the range of motion is concerned) to human cadaveric spines. For example, a generic biomechanical spine model (Sawbones, USA) is available as a full section (T12-sacrum), a small section (L2-L5) or a single motion segment (L3-L4). However, in these the Vertebral Body (VB) is a smooth block and, as for the bone, no effort is made to replicate its internal structure [3]. Spinal implants have previously been usefully tested in conjunction with these models as an alternative to human or animal cadavers [8]. Its advantages include the low variability of the model properties and the long testing life, but limiting factors include the cost, lead time (process time), and lack of patient specificity. In addition, any destructive testing such as the installation of implants cannot be reversed. There is also a commercial option to order a patient-specific analogue, but it only replicates the spine shape and form, not its properties, and of course this is even more expensive. A more accessible spine analogue is needed to (i) make biomechanical testing readily available, (ii) at a low cost, (iii) provide reproducible samples and a number of them, (iv) be bespoke for a patient (patient specific), and (v) biofidelic as far as load response is concerned; for all these advantages the answer may be provided by using 3D printing [2,10,11].
The analogue model proposed in this paper is patient-specific and was produced from micro-CT (computed tomography) data by producing a 3D model of the vertebrae in acrylonitrile butadiene styrene (ABS). The intervertebral disc (IVD) was constructed using topographical data from the endplates. Once the distance between the endplates was determined, liquid polyurethane was injected to form the IVD. Both the biological spinal motion segment and its man-made analogue counterpart were mechanically tested in parallel in axial compression. The stiffness was produced in the linear elastic region from data from virtual and actual extensometers. Surface displacements and strains were determined by Digital Image Correlation (DIC) analysis for each segment. The present approach proposes a methodology using accessible protocols and equipment to create a novel 3D-printed analogue spinal motion segment model, which can easily be applied in future research projects focusing on the prediction and modelling of bone behaviour, either on its own or in conjunction with implants, and in both healthy and diseased conditions.
Biological specimens
A porcine spine (from a specimen less than 12 months old, intended for the food supply) was obtained from a local butcher. Four motion segments (L1-L4), i.e. two VBs with their adjoining IVDs from the lumbar region, were prepared from this material. Porcine spine samples were chosen because they are already deemed suitable substitutes for human cadavers and are similar both in geometric and biomechanical properties [12,13]. From the fresh spine, we measured the Shore hardness [14] of an intact IVD (lateral to medial) and a sectioned IVD (superior to inferior), giving values of 63.2 (14, 8.4) and 72.7 (10, 7.4), respectively. This led to the choice of PT Flex 70 (Polytek Development Corp., USA), a two-part polyurethane with a Shore A hardness of 70, as the IVD simulant.
Three motion segment samples were sectioned to each produce two VB samples and one IVD sample, and one motion segment was left whole (Fig. 1). Part of the pre-CT sample preparation involved placing samples in a water bath to remove any external tissue. Biological VBs 1-4 were maintained at 40 ˚C for 90 min, which has little to no effect on bone properties and thus does not affect compressive stiffness [15][16][17]. Biological VBs 5 and 6 were immersed at ~80 ˚C for 16 h to determine if there was a noticeable change in bone properties. IVDs were sectioned within the endplate to ensure the disc was intact. All unnecessary tissue was then removed. The superior and inferior planes of the IVD and VB samples were then ground on a polisher with constant cooling to produce parallel surfaces for mechanical testing. The whole and sectioned motion segment samples were then scanned (voxel size 0.0412 mm, at 70 kV and 90 µA) using an XT H225 CT scanner (Nikon Metrology UK Ltd, UK) and reconstructed using CT Pro 3D (Nikon Metrology UK Ltd).
Analogue preparation
All analogue components were printed on a uPrint SE 3D printer (Stratasys Inc., USA) using ABSplus-P430, a production-grade thermoplastic ABS. The stereolithography file (.stl) for each vertebral section was generated by importing the CT volume file (.vol) into ScanIP (Synopsys Inc., USA) and manipulating the data; the .stl was then imported into the printer software, Stratasys CatalystEX v4.5 (Stratasys Inc.). Specifically, the background data were duplicated and resampled from 32-bit to 8-bit in an effort to reduce the file size. The applied threshold was based on the distinct histogram peaks and included small trabeculae while minimising soft tissue. All values were selected in order to maintain the morphology while reducing the number of elements and any errors in the resulting model.
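As an illustration of the two preprocessing steps just described, here is a generic NumPy sketch (ours; the actual pipeline used ScanIP, and the threshold level below is arbitrary):

```python
import numpy as np

def to_uint8(volume: np.ndarray) -> np.ndarray:
    """Rescale a higher-bit-depth CT volume linearly onto 0-255 (uint8)."""
    v = volume.astype(np.float64)
    v = (v - v.min()) / (v.max() - v.min())   # normalize to [0, 1]
    return (v * 255).astype(np.uint8)

def threshold_bone(volume_8bit: np.ndarray, level: int) -> np.ndarray:
    """Binary bone mask; 'level' would be read off the histogram peaks."""
    return volume_8bit >= level

vol = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in volume
mask = threshold_bone(to_uint8(vol), level=140)      # 140 is illustrative
print(mask.sum(), "voxels above threshold")
```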
After the .stl was generated, an ABS sample described hereafter as an Analogue Motion Segment (AMS) was printed for each Real (biological) Motion Segment (RMS). An additional sample was produced from RMS4, designated AMS4ii. The origins of all segments, both biological and analogue, are summarised in Table 1. The setting used for model interior was "solid" and support fill "basic". Any support materials deposited during the printing process were removed manually and with the use of a support cleaning apparatus (PADT Inc., USA) combined with WaterWorks P400SC (Stratasys Inc.).
Earlier tests conducted on 3D-printed ABS cubes revealed their compressive strength. These data showed that the layer orientation influenced the elastic modulus by less than 5.4% from maxima to minima, yet when comparing similar orientations the effect was negligible [18,19]. Because all the ABS samples in this study were printed and loaded in the same orientation, we assumed there was no significant difference in stiffness due to the directionality of the ABS layers.

(Table 1. Origins of all biological and analogue segments. Columns: Motion segment | Biological segment | ABS segment. RMS = Real (biological) Motion Segment; AMS = Analogue Motion Segment; RVB = biological vertebral body; RVD = biological intervertebral disc; AVB = Analogue Vertebral Body; AVD = analogue intervertebral disc.)

The IVD of each AMS was formed from PT Flex 70. The design priority for the analogue IVD was to use a suitable rubber compound which, by adjusting its constitution, could be matched to the properties of the biological IVD, initially on the basis of its Shore A hardness value. Inevitably, this kind of rubber analogue would only provide a uniform layer, because the inner design and fibrous architecture of natural IVDs (with their woven collagen fibre layout) is too complex to replicate. This was also true when considering the nucleus pulposus, this being a mucoid protein that allows for distribution of forces. Due to its complex structure and nature, but also the difficulty of printing graded multi-modulus materials in a single run, the IVD was replicated as a single-phase component with a uniform loading response.
PT Flex 70 was chosen because of its rapid curing time and the Shore A hardness value which matched that of an IVD. An individualised cast was built around the superior and inferior endplates of the IVDs using Sugru mouldable glue (FormFormForm Ltd, UK). The correct height of the IVD (based on CT reconstructions) was ensured by placing PT Flex 70 struts at three points on the endplates. PT Flex 70 was then prepared and injected into the moulds (Fig. 2).
A speckle pattern was applied to all samples for DIC analysis [20][21][22][23][24]. White high-contrast paint was used as a base coat on the ABS samples and a black speckle pattern was then applied manually to all samples, with speckle size ranging from 0.35 mm to 6.35 mm in diameter (Fig. 3). The optimal speckle size was 3–5 pixels [25,26], which was equivalent to 0.78 mm – 1.3 mm. All biological samples were removed from the freezer to thaw 6 h before testing, as previously recommended [27,28].
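The quoted pixel-to-millimetre correspondence implies a fixed image scale, which a two-line check makes explicit (a sketch assuming the stated 3 px = 0.78 mm equivalence):

```python
# 3-5 px corresponds to 0.78-1.3 mm, implying ~0.26 mm per pixel.
mm_per_px = 0.78 / 3
print(round(mm_per_px * 3, 2), round(mm_per_px * 5, 2))  # -> 0.78 1.3
```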
Loading experiments
An Instron 5567 tensile testing machine (Instron, UK) fitted with a 10-kN load cell was used to compress each sample at a quasi-static loading rate of 0.6 mm·min⁻¹ (strain-rate range 6×10⁻⁴ s⁻¹ – 10×10⁻⁴ s⁻¹). All samples were subjected to a 10 N – 50 N preload before compression to reduce contact errors [28]. The top platen featured a spherical joint to further minimise contact errors and bending moments, and to ensure consistent loading across the sample. The experimental setup is shown in Fig. 4.
Phantom V1212 and V2010 high-speed cameras (Vision Research Inc., USA) fitted with 50 mm f/1.4 lenses (Nikon, Japan) recording at 1000 fps were used to obtain DIC data throughout the loading process. The cameras were positioned 25˚ apart and 450 mm from the sample. Calibration was performed with a simple 175 mm × 140 mm panel and ARAMIS software (GOM GmbH, Germany). PCC software (Vision Research, USA) was used to interface with the cameras. The DIC analysis facet size was 9 pixels, with a facet step of 3 pixels for all sections. The small facet step increased the measuring point density and also the computational time required, bringing the analysis time per specimen to more than 120 min. The strain calculation method was selected to match the non-uniform thickness of the specimens. Artificial lighting was provided by four LED light sources (Cree, USA), which produced negligible heat. All the natural light sources within the testing area were covered to produce consistent illumination for all tests. All DIC data collected by the high-speed cameras were analysed using ARAMIS software. Due to the large number of frames, high-speed image data were simplified by selecting one in every 10 images.
We performed 31 tests, 20 on ABS samples and 11 on biological samples. All samples were compressed at 0.6 mm·min −1 . Each test was constrained to a total displacement calculated based on the height of the sample (Eq. (1)). The compression tests on all sections are summarised in Table 2.
Total displacement = (Height of vertebral bodies × 0.05) + (Height of intervertebral disc × 0.10)    (1)

RMS4 was compressed to 1.75 mm due to load constraints; greater compression was unnecessary because a sufficient portion of the elastic region was measured, and the yielding of samples was not required.
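A direct transcription of Eq. (1) into code (a sketch; the example heights are illustrative, not measurements from the paper) would be:

```python
def total_displacement(vb_height_mm: float, ivd_height_mm: float) -> float:
    """Target compression per Eq. (1): 5% of the combined vertebral-body
    height plus 10% of the intervertebral-disc height (both in mm)."""
    return vb_height_mm * 0.05 + ivd_height_mm * 0.10

# Illustrative only: two 20-mm VBs (40 mm combined) and a 7-mm IVD.
print(total_displacement(vb_height_mm=40.0, ivd_height_mm=7.0))  # -> 2.7 mm
```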
Isolated segments -vertebral bodies
Compression (load vs displacement) data for each analogue section were compared directly to the corresponding data for each biological section. Yielding and plastic deformation were observed in the biological vertebral body (RVB) samples but not the Analogue Vertebral Body (AVB) samples at the tested constraints, as shown in Figs. 5 and 6. Stiffness was calculated from the linear elastic region of the curves for all samples. RVB samples had a mean stiffness of 8892.6 N·mm⁻¹ (N = 6, σ = 3375.7) and AVB samples had a mean stiffness of 9720 N·mm⁻¹ (N = 6, σ = 2614.7).
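The stiffness values above come from the slope of the linear-elastic region of each load/displacement curve; a minimal sketch of that fit (with invented data points and hand-picked region bounds) is:

```python
import numpy as np

# Invented load/displacement samples; the slope of the linear region
# is the stiffness in N/mm.
displacement = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])  # mm
load = np.array([0.0, 450.0, 910.0, 1370.0, 1815.0, 2260.0])   # N

i0, i1 = 1, 5  # indices bounding the chosen linear-elastic region
slope, intercept = np.polyfit(displacement[i0:i1 + 1],
                              load[i0:i1 + 1], deg=1)
print(f"stiffness = {slope:.0f} N/mm")
```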
The two samples left in the hot water bath (~ 80 ˚C) for 16 h (RVB5 and 6) had a noticeably lower stiffness, probably due to the effect of the high temperature. There is a direct relationship between the loss of collagen and alterations to bone mineral structure caused by the boiling of bovine bone tissue, resulting in a three-fold increase in micrometre-scale porosity and an 18% reduction in bulk density [15,16] . Such an effect would thus reduce the magnitude and breadth of the CT-histogram.
Overall there was a very good one-to-one similarity between each RVB and its synthetic analogue counterpart, as shown for a representative RVB in Fig. 7. The two load/deformation curves followed each other until the start of the plastic region (for the biological samples), after which the analogue specimen tolerated an increasing load whereas the biological specimen yielded and was crushed. The difference in stiffness in five out of six cases between biological and man-made analogue samples was less than 10% (Fig. 8).

Fig. 9 shows the response of intervertebral discs (RVD) and their physical models (AVD). The three real discs behaved similarly and exhibited a J-shaped curve with an ever-increasing stiffness due to the presence of the collagen fibres in the annulus fibrosus. The PT Flex 70 analogue specimens were slightly softer (Fig. 9 shows the envelope of the mean behaviour ± one SD of three curves) and exhibited the typical first stage of an elastomer load/deformation (F/d) curve. These are S-shaped curves, which show a softening behaviour in the early region and then stiffen up later on. S-shaped and J-shaped curves cannot be made to match each other throughout the whole range of F/d values, only within certain regions. In our case, we chose to match the curves in the initial F/d region, starting with the selection of a compound of similar Shore A hardness, which seemed to work well for loads below 1000 N.

Motion segments

Fig. 10 shows the load/displacement data for two RMSs and their analogue counterparts (two VBs and an artificial IVD in between). The analogues were on the whole softer than the real motion segments. For example, the stiffness of RMS4 was 4585.49 N·mm⁻¹ (2,262), whereas the mean stiffness of AMS4_1 and AMS4ii_1 was 1867.6 N·mm⁻¹ (4, 82.1). The discrepancy between AMS models and the biological segments from which the models were created mainly reflects the imperfect matching of the IVD properties in models and biological specimens. This is because the much lower stiffness of IVDs compared to VBs means that much of the compression strain is concentrated in the IVD regions.
To demonstrate this effect, we used DIC to focus on the strains for VBs, IVDs and the total strain across the whole motion segment (Figs. 11 and 12). This was done via the use of markers along the length of the component tested. DIC was used over extensometers because 3D DIC can better react to off-plane movements, such as the ones experienced in IVD bulging under compression. In addition, the final output was filtered using Bézier interpolation, which uses Bernstein polynomials to weight the points.
Calibration was performed using ARAMIS before the test sequence. The static error was ± 4.2% as calculated using Eq. (2).
Static error = (Displacement maximum − Displacement minimum) / Length    (2)
DIC was performed on all biological and analogue samples with displacement measured between two points (virtual extensometer). Lines were drawn vertically over each VB (upper and lower) section and IVD section. Three sets of points were chosen on each motion segment: the top and bottom of each VB, the top and bottom of the IVD, and the top and bottom of the entire motion segment. The average major strain of each part was represented within each specific strain stage (capture image). Figs. 11 and 12 show representative data obtained for RMS4 and AMS4, with the highest strain on the IVD and the lowest strain on the upper and lower VB. As expected, the total full-field strain was between the maxima and minima.
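A virtual extensometer of this kind reduces to tracking the separation of two points over the captured frames; a sketch with illustrative coordinates (not real DIC output):

```python
import numpy as np

# Tracked y-coordinates (mm) of two markers over four captured frames.
top_y = np.array([30.00, 29.97, 29.93, 29.88])     # upper marker
bottom_y = np.array([10.00, 10.01, 10.02, 10.02])  # lower marker

gauge_length = top_y - bottom_y                    # separation per frame
# Engineering strain relative to the first frame; compression is negative.
strain = (gauge_length - gauge_length[0]) / gauge_length[0]
print((strain * 100).round(3))                     # % strain per frame
```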
Benefits and drawbacks of the model
Reliable and inexpensive patient-specific analogues are needed in the fields of forensic and injury biomechanics, but it has been challenging to develop models which are both straightforward and accurate. We used micro-CT data to develop analogue models of VBs, IVDs and spinal motion segments and then tested them by compression, comparing like for like. The stiffness of the RVBs and AVBs was similar in magnitude. Each RVB behaved in a similar manner to the corresponding AVB section on an individual basis. However, the AVB samples tended to be less variable than the RVB samples, with standard deviations of 2614.7 N·mm⁻¹ (9720 ± 2614.7, N = 6) and 3375.7 N·mm⁻¹ (8892.6 ± 3375.7, N = 6), respectively. This is consistent with previously tested spinal analogues [9,27,28]. The higher variability of the RVB specimens probably reflects the fact that biological samples vary in both material properties and structural architectural design. The AVB samples were made from the same grade of industrial material (ABS), with only the structure matching the natural one.
The differences in F/d behaviour between the biological and analogue specimens were more pronounced once the biological specimens were taken beyond the yield point. Whereas the RVB samples yielded, the much stronger AVB samples remained within the elastic region. Shearing was observed in one sample (AVD1), which resulted in delamination of the disc from the endplate and thus a lower stiffness than AVD 2 and AVD 3. We can deduce that facet-less motion segments tend to be less stiff than the complete comparative segments, as previously reported [29].
Strains over the motion segment samples were taken in four different areas: Upper Vertebral Body (UVB), Lower Vertebral Body (LVB), IVD and Total. The strains over the IVD were measured from the superior to the inferior endplate because the surface layer of high-contrast medium bearing the fiducial marks was prone to delamination/deterioration during the loading of the AVD (analogue IVD). These four strains were plotted for motion segment testing and each section was plotted for all segments. All four strains correlated: LVB and UVB experienced the least amount of strain and the IVD experienced noticeably more strain, with the total strain between these values. Strains measured over the UVB followed a similar trend for both AMS and RMS, which was also true of the LVB measurements (Figs. 11 and 12). Strains measured over the IVD were noticeably different, with the AMS samples experiencing significantly more strain than the RMS samples.
Not all micro-CT scans were conducted at the same time and marginal differences in greyscale and, more importantly, the shading correction may therefore be present. These differences may cause minor changes in thresholding values during the manipulation and generation of the .stl files. To reduce the size of the .stl files, resampling the data to 0.1 mm was necessary, as well as several other manipulations described in the methods section. These manipulations affect small morphologies in the samples. A higher-resolution printer might achieve a more detailed representation of the internal structure of the sample, which could generate more accurate results. Furthermore, the moulding of the IVD analogue produced small bubbles within the polyurethane which also may affect the mechanical characteristics of the disc.
Another effect not considered here was the testing of the biological discs under hydrated conditions. Research using sheep vertebrae revealed that the stiffness of ovine IVDs differs significantly when tested in a saline bath environment compared to air alone, with this being true in most loading modes such as torsion, flexion and bending: the IVDs were stiffer in air and more pliant in a saline environment [30] .
The quality of DIC was limited by the lenses available because the minimum focus distance produced a large viewing field which was suboptimal for data collection. The high-contrast media applied to the ABS disc samples delaminated in some cases during compression and then folded. This delamination and folding affected how much coverage was received from DIC during the later stages of compression.
In the future, further work should be conducted on the stiffness of the motion segment by varying the polyurethane that makes up the IVD and the construction of the facet joints. Polyurethane with a higher Shore hardness value should produce a motion segment with greater stiffness. Facet joints could be made more realistic by adding a cartilage analogue. If the stiffness of AVD and AMS samples can be improved, the method could be applied to human spinal motion segments with a higher degree of agreement.
Conclusion
The method described in this article produced artificial SMS in which the VB analogues had stiffness values similar to biological VBs, with the added advantage of no biological variability. The polyurethane material chosen for the IVD analogue was significantly less stiff than the material of the biological IVDs because it was originally chosen to match only the Shore A hardness values. Further work is needed to find a more suitable IVD analogue, allowing the production and validation of more accurate SMS analogues. DIC data revealed that the biological and the analogue specimens deformed in the same manner, with IVDs naturally deforming more than the VBs because of their lower material stiffness. However, the biological IVDs deformed significantly less than the analogue IVDs because the material analogue for the IVD did not match the biological IVD closely. The new methodology proposed here produces a simple SMS analogue with a biofidelic behaviour within the elastic regions and in quasi-static axial compressive loading. Further efforts to precisely match the IVD material (analogue to match the biological) will inevitably expand the scope and the usefulness of the man-made analogue introduced here. This product overall promises to improve our ability to build and use accurate patient-specific models for biomechanical testing.
University, and Jolyon Cleaves of Vision Research for providing the high-speed cameras. Ethical approval was granted by Cranfield University Research and Ethics committee (CURES) approval reference CURES/1014/2016. This paper is dedicated to one of the authors, Dr Mike Gibson, whose untimely death is a great loss to us all.
Data accessibility
Data for this manuscript is available through the Cranfield University CORD data depository and preservation system (https://cranfield.figshare.com). | 2023-02-24T14:42:30.594Z | 2020-06-24T00:00:00.000 | {
"year": 2020,
"sha1": "7bc8cc14109024b857fb57182f154c462acc76c1",
"oa_license": "CCBYNC",
"oa_url": "https://dspace.lib.cranfield.ac.uk/bitstream/1826/15522/4/Spinal_Motion_Segments-2020.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "7bc8cc14109024b857fb57182f154c462acc76c1",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": []
} |
181630546 | pes2o/s2orc | v3-fos-license | THE AWARENESS AND CONSCIOUSNESS OF YOUNG STUDENTS ABOUT THE THREAT OF RISK FACTORS OF DEVELOPMENT OF NON-INFECTIOUS DISEASES – MODERN STATUS OF THE PROBLEM
The awareness and consciousness of young students about the threat of risk factors of development of non-infectious diseases – modern status of the problem. Serdyuk A.M., Gulich M.P., Petrenko O.D., Lyubarskaya L.S., Koblyanskaya A.V. The purpose of this study was to analyze students' awareness and consciousness about the threat to health of risk factors of chronic non-infectious diseases development, to determine whether they have the skills of a healthy lifestyle, and to develop and scientifically substantiate the Algorithm for introducing health-saving educational technologies into the educational process of higher educational institutions of Ukraine. A sociological survey was conducted among students of higher educational institutions of Ukraine regarding the levels of awareness and consciousness about certain factors of non-infectious diseases development. 430 students of the Kiev National University of Trade and Economics and 216 students of Sumy State Pedagogical University were interviewed. A specially designed questionnaire was used. A high level of awareness of students about the main factors in the development of non-infectious diseases – poor nutrition, low physical activity, smoking and alcohol abuse – has been established. At the same time, students are not sufficiently conscious of the risk of developing diseases and are insufficiently motivated towards a healthy lifestyle. A significant difference in healthy-lifestyle indicators among students of institutions of different profiles is shown. The Algorithm for the introduction of health-saving educational technologies into the educational process of higher educational institutions has been developed; it is a scientifically substantiated system containing the main tasks, principles and measures aimed at raising the level of awareness and consciousness of young students about the threat to health of risk factors of chronic non-infectious diseases development. The data obtained are the basis for the improvement of measures for the prevention of non-infectious diseases among students in Ukraine.
The current public health problem in Ukraine is the rapid and steady growth of chronic non-communicable diseases (NCDs), which negatively affects demographic indicators, ability to work and disability of the population. A particular threat is the fact that the age of onset of non-communicable diseases is becoming younger [18]. According to the World Bank, only 81% of Ukrainians who reached the age of 15 in 2017 would reach 60 years of age [13]. This indicator clearly shows the threat of NCDs to the health and life of the population.
As an integral part of the European process for combating chronic non-communicable diseases, Ukraine has joined the main international initiatives for the preservation of the health of the population [3,6,15]. Supporting the WHO policy in Ukraine, based on international documents, the Sustainable Development Strategy "Ukraine 2020" and the "National Action Plan for Non-Communicable Diseases to achieve the Global Sustainable Development Goal for the period up to 2030" were developed and implemented [11,12,17].
Proceeding from the basic provisions of these documents, one of the most important factors in the formation and preservation of human health is the way of life, namely behavioral factors, which play a decisive role in the development of NCDs. These are, above all, irrational nutrition, low physical activity, tobacco smoking and alcohol abuse. According to the WHO strategy, prevention of NCDs by changing lifestyles is becoming increasingly important [3,15]. World practice has proved that, based on the doctrine of "risk factors", there is a real opportunity to prevent NCDs not only at the individual level, but also at the population level, especially among young people, due to the formation of their need for a healthy lifestyle.
Focusing the prevention of non-communicable diseases on young people preserves the potential of the nation's future health and offers an opportunity to significantly influence the formation of a healthy lifestyle, which gives prospects for the implementation of nationwide preventive measures against non-communicable diseases.
Recently, the state, society and scientists have paid greater attention to the problems of the formation and preservation of health of young students. It is a rather specific group of people, which is characterized by increased levels of mental stress and psycho-emotional stress, a sharp change in the lifestyle, a change in social relations, a tendency to risky behavior, etc. [1,8,20]. Of particular concern is the increasing incidence of NCDs among students. According to the research conducted, a significant part of the students at the end of professional training suffer from a number of cardiovascular, gastrointestinal diseases, etc. [2,5,19].
The best practices based on the latest evidence suggest that the primary focus of the preventive activities of public health facilities on NCDs is to increase awareness of the risk factors for the development of NCDs among different social groups. Undoubtedly, raising young people's awareness and understanding of NCD risk factors can be a significant component of preventive measures against chronic non-communicable diseases.
Therefore, one of the important tasks of students' training in higher education institutions should be the formation of a healthy lifestyle culture and the mastering of basic knowledge about the risk factors for the development of chronic non-communicable diseases. Unfortunately, the teaching of elements of health-saving disciplines in higher educational institutions is non-systematic and situational in character [2,7].
However, today, when the system of higher education in Ukraine is undergoing significant transformations, there are new opportunities for raising students' awareness and understanding of the basic principles of a healthy lifestyle and the risk factors for NCD development [2].
The purpose of this work was to analyze students' knowledge and awareness of the health risk factors for the development of NCDs, to establish their healthy lifestyle skills, and to develop and scientifically substantiate an Algorithm for introducing health-saving educational technologies into the educational process of higher educational institutions of Ukraine.
MATERIALS AND METHODS OF RESEARCH
The research was conducted on the basis of a standardized survey using the questionnaire method. The study used our own questionnaire, which takes into account WHO general principles and rules as well as international requirements for documents that can be used in epidemiological studies [4,9,10,14].
The sociological survey was conducted at two higher educational institutions of Ukraine: the Kyiv National University of Trade and Economics (KNUTE) and the Sumy State Pedagogical University (SumSPU). We studied the awareness of healthy lifestyles and NCD risk factors among future professionals who will implement healthy nutrition policy in the foodservice industry and among future teachers whose professional activities are aimed at passing on such knowledge and life priorities to future students.
In total, 430 questionnaires were distributed, received, and processed at KNUTE and 216 at SumSPU. All respondents participating in the survey received questionnaires with detailed instructions emphasizing the need to fill them in as accurately as possible. The survey evaluated, first of all, the prevalence of behavioral NCD risk factors among young students and their awareness of the factors leading to the development of non-communicable diseases.
The received personal data were processed using traditional statistical methods [16].
RESULTS AND DISCUSSION
Non-communicable chronic diseases are the main causes of mortality in Ukraine today, but they are not the inevitable result of socio-economic development. They can be avoided by transforming the social, economic, and physical environment that determines health-related behavior. The causes of NCDs are well known and can be effectively addressed, both by individuals and by groups of people. To do this, people must be informed and motivated to lead a healthy lifestyle in order to maintain their health and prevent the development of chronic non-communicable diseases.
A comparative analysis of the questionnaire results among the students of KNUTE and SumSPU determined the level of students' knowledge about the main factors of NCD development (fig. 1). It was established that the overwhelming majority of respondents are aware of the negative impact of NCD factors on health.
The negative impact of inappropriate nutrition is known to 87.4% of KNUTE students and 85.6% of SumSPU students. Only 12.6% of KNUTE respondents and 14.4% of SumSPU respondents do not know about this factor in the development of chronic non-communicable diseases. The proportion of surveyed students informed about the negative health impact of low physical activity also did not differ significantly between the two institutions: 80.9% of KNUTE students and 75.5% of SumSPU students are aware of this factor.
The indicators of students' awareness of the negative impact of tobacco smoking and alcohol consumption differ somewhat. Almost all KNUTE students surveyed indicated that they knew about tobacco smoking and alcohol consumption as factors of NCD development (91.6% and 92.2%, respectively). Among SumSPU students, 73.6% were aware of the impact of tobacco smoking and 72.2% of alcohol consumption. Thus, more than a quarter of the surveyed SumSPU students are unfamiliar with the negative health effects of tobacco smoking and alcohol consumption (26.4% and 27.8%, respectively). The above results indicate a rather high level of students' awareness of the main factors of NCD development, in particular inappropriate nutrition, low physical activity, tobacco smoking, and alcohol consumption. However, the fact that more than a quarter of SumSPU students are unaware of the risks of tobacco smoking and alcohol consumption clearly suggests the need to raise awareness among this contingent, as well as to introduce the harmful impact of tobacco smoking and alcohol consumption into the relevant syllabi of SumSPU.
Comparison of the survey data revealed some features of students' nutrition. About a quarter of the respondents studying at KNUTE (25.8%) and about a third of SumSPU students (31.5%) do not eat fresh fruits and vegetables at all. Sweet carbonated beverages are consumed by 75% of KNUTE students and by 82.9% of SumSPU students. At the same time, a large proportion of the KNUTE students consuming sweet carbonated beverages noted that they consume them only occasionally (41.2%), while most SumSPU students consume sweet carbonated drinks constantly (45.4%).
Comparing the data on salt intake, it was found that the relative numbers of students consuming salt at the levels of 5 and 25 grams per day did not differ significantly between the two institutions, making up 70.5% and 24.8% among KNUTE students and 73.6% and 25.0% among SumSPU students, respectively. However, among KNUTE students there is a small proportion of people consuming salt at the level of 35 g (3.5%), which is absent among SumSPU students. The proportion of students who consume sugar by adding it to hot drinks is 77.3% among SumSPU students, which is 16.4 percentage points higher than the corresponding rate among KNUTE students (60.9%).
We also analyzed the obtained data taking into account the awareness and gender of the students interviewed.
When the data were broken down, it was found that, compared with girls studying at KNUTE, girl students of SumSPU consume excessive amounts of sugar (table 1): the proportion of SumSPU girl students consuming sugar in excessive amounts (45.2%) exceeds the corresponding indicator among KNUTE girl students (29.3%) by more than 1.5 times. Assessing the data with regard to the awareness of young people, it should be noted that, compared with KNUTE students, SumSPU students consume smaller amounts of carbonated beverages, while the proportion of those who do not consume fresh vegetables and fruits daily is significantly (1.4 times) higher (table 2).
Table 1
Differentiation of girls' answers to the question about improper nutrition as a risk factor for NCD development, by awareness group, %

The most popular sports among students of KNUTE and SumSPU are working out in the gym, fitness (among girls), and running. It should be noted that among SumSPU students, especially boys, running is much more popular than among KNUTE students, probably because it is the most appropriate kind of sport (table 3).
Table 3
Comparative analysis of students who keep fit, by gender, %

According to the results of the questionnaire, we received information on the prevalence of such harmful habits as smoking and alcohol consumption. According to the data, smoking is most common among students of KNUTE (table 4). The largest relative number of smokers consume from 1 to 10 cigarettes a day. While among KNUTE students there were no significant gender differences regarding the use of electronic cigarettes, at SumSPU the share of girls who smoke electronic cigarettes is 3.8 times higher than the corresponding indicator among boys. SumSPU also has a significant proportion of students who indicate that they smoke but could not determine the frequency (sometimes, very rarely); among SumSPU boy students this indicator was three times higher, and among girls 5.1 times higher, than among KNUTE students. Alcohol consumption is an important negative factor affecting the health of young people. The students' answers made it possible to identify the peculiarities of this phenomenon, taking into account gender, awareness, university, and year of study. Informed students of both universities consume alcohol and low-alcohol drinks with a certain frequency less often than non-informed ones. However, the relative number of students who indicated that they consumed alcohol but could not indicate the frequency (very rarely, on holidays, etc.) is higher among the well-informed students at both higher educational establishments (table 5).
Table 5
Groups of students who use alcohol and low-alcohol drinks, %

Gender differentiation of those who consume alcoholic and low-alcohol drinks revealed the following. In both universities under study, the relative share of boys who consume alcohol was higher than that of girls.
However, while this difference was insignificant at KNUTE (9%), among the young men of SumSPU alcohol consumption was found to be 1.5 times more frequent than among girls (fig. 2).
The distribution of the relative number of students who consume alcohol by year of study has its own peculiarities at each university (fig. 3). Among first- and second-year students of SumSPU, the number of those consuming alcohol is the smallest, but in senior years it rises, reaching 66.7% at the end of training.
The relative number of KNUTE students who consume alcohol is highest in the first year (69.0%), which is 1.8 times higher than among SumSPU students. In the last year of study this number is 64%.
Fig. 2. Comparison of students who drink alcohol and low-alcohol drinks, by gender, %
Comparing the data on attitudes to drugs and drug abuse, it should be noted that KNUTE students have much more experience of consuming such substances than SumSPU students. Thus, 2.8% of SumSPU respondents reported experience of drug use, which is 3.7 times lower than the corresponding rate among KNUTE students (10.3%). In contrast to SumSPU students, 2% of KNUTE students stated a positive attitude towards drug use. The proportion of those who are indifferent to drug use at SumSPU is also significantly lower (10.6%) than among KNUTE students (18%). 75% of KNUTE students and 89.4% of SumSPU students have a negative attitude to drugs. The studies have shown that, despite the students' high awareness, far from all of them are conscious of it and motivated to act according to the available knowledge. It has been established that among student youth there are, in their opinion, a number of barriers, stereotypical ideas, and habits that impede a healthy lifestyle. During the interviews, students identified among such factors the lack of free time (43.8%), lack of desire (30.5%), and low income (30.2%).
We found that the educational process of these higher educational institutions includes disciplines that contain elements of knowledge about a healthy lifestyle. At KNUTE, such disciplines as "Hygiene and Sanitation", "Health Nutrition", and "Technology of Special Food Products" are taught, while SumSPU has such departments as the "Department of Health, Physical Therapy, Rehabilitation" and the "Department of Medical and Biological Foundations of Physical Culture". Thus, both higher educational establishments (HEEs) offer disciplines that contain elements of knowledge about a healthy lifestyle.
However, there are differences in the teaching of these disciplines. While KNUTE pays more attention to healthy nutrition as a risk factor for the development of diseases, SumSPU focuses more on physical development as a factor of a healthy lifestyle. This, in our opinion, is why the sociological indicators of the behavioral risk factors concerning smoking and alcohol use obtained in the two HEEs differ significantly.
Based on our own research and literature data, it has been established that there are significant differences between the curricula of higher educational establishments in the teaching of health-saving disciplines. In curricula for training specialists in fields of knowledge not related to the human sciences, such disciplines are often absent altogether. In most HEEs whose educational profile does not relate to human health, such disciplines are not included in the curriculum, or the relevant knowledge is presented only partly and one-sidedly, in accordance with the requirements of a particular profession. The consequence is complete or partial ignorance of healthy lifestyles and the risks of NCD development among a significant number of students.
There is no doubt that one of the important tasks of students' training in higher educational institutions should be the formation of a healthy lifestyle culture and the mastering of basic knowledge about the risk factors for the development of chronic non-communicable diseases.
These tasks may be addressed by developing and implementing health-saving educational technologies in the educational process of HEEs. The results of the conducted research and the analysis of the scientific literature on this problem allowed us to develop an algorithm for implementing health-saving educational technologies in the educational process of higher educational institutions of Ukraine, which is represented by a number of step-by-step tasks that ensure the effectiveness of the algorithm and its sensitivity to changes in the relevant sociological indicators (figure 4). The Algorithm was developed in accordance with and in support of the "National Plan of Measures for Non-Communicable Diseases to Achieve Global Sustainable Development Goals for the Period until 2030" (2018) and the "Communication Strategy for the Prevention of Non-Communicable Diseases in Ukraine until 2025", approved by the experts of the Ministry of Health and the WHO European Bureau in Ukraine.
To achieve the main goal of the Algorithm, namely to raise young people's awareness of the risk factors for NCD development and to form healthy lifestyle skills, eight successive tasks have been proposed. At the first stage of implementation, a sociological study is required to determine the students' awareness of the factors of NCD development, to identify groups with risky behavior, and to analyze the basic educational programs of higher educational institutions that contain components of health-saving knowledge.
On the basis of the obtained data, educational and information technologies on the risk factors of NCD development are then developed, taking into account both the specific sociological indicators and the availability of basic health-related educational programs at a particular university.
When developing such educational information technologies, it is also necessary to take into account the prevalence of stereotypical beliefs about the obstacles to a healthy lifestyle and to highlight the main motivational factors: life extension, improvement of appearance and health, the success of people who lead a healthy way of life, its influence on marital status, etc.
Increasing the level of awareness of the risks of NCD development, increasing students' motivation to lead a healthy lifestyle, and reducing risky behaviors are the main expected results of implementing the Algorithm. This takes place under the influence of the educational process, the organization of extra-curricular classes for students, media support, and improved availability of healthy eating and sports facilities.
Fig. 4. Algorithm for the implementation of health-saving educational technologies in the training process of higher education establishments of Ukraine
The effectiveness of Algorithm implementation is evaluated through a comparative analysis of sociological indicators of students' awareness of NCD development at the beginning of Algorithm implementation and at the end of the training course using the educational and information technologies. Monitoring of the Algorithm's performance indicators is ensured by the adaptability of the educational and information technologies regarding the risk factors for NCD development and by the possibility of promptly making changes to individual curricula and improving them.
CONCLUSIONS
1. It was established that the overwhelming majority of the surveyed students of both KNUTE and SumSPU are aware that inappropriate nutrition (87.4% and 85.6%, respectively), insufficient physical activity (80.9% and 75.5%, respectively), alcohol consumption (92.2% and 72.2%, respectively), and tobacco smoking (91.6% and 73.6%, respectively) are risk factors for NCD development. However, this knowledge does not translate into awareness of the threat these factors, recognized as leading ones in NCD development, pose to health.
2. It has been determined that student youth are not aware of the health risks and the increased risk of developing NCDs from the abuse of salt and sugar, sweet beverages, margarines and spreads, insufficient daily intake of fruits and vegetables, lack of physical activity, tobacco smoking, alcohol abuse, drug abuse, etc.
3. The results of the conducted research indicate that there are significant differences, depending on the profile of the HEE, in students' knowledge and awareness of the factors of NCD development, healthy lifestyle habits, the prevalence of risky behavior, etc. Such differences, in our opinion, are due to different approaches and directions in teaching health-preserving disciplines in the system of higher education and indicate the need for further refinement and improvement of information and training programs for young people and students, in order to increase their awareness of the danger of the main behavioral risk factors of NCD development for maintaining further health.
4. The developed Algorithm for introducing health-saving educational technologies into the educational process of universities will significantly increase student youth's awareness of the main behavioral risk factors for the development of NCDs in terms of preserving further health, as well as awareness of the need for a healthy lifestyle.
5. In order to increase the effectiveness of the developed Algorithm and to modify it, further research in this direction is needed, which will help orient the pedagogical process of higher school towards the formation of healthy lifestyle and health behavior skills in Ukrainian student youth. | 2019-06-07T23:04:11.202Z | 2019-04-02T00:00:00.000 | {
"year": 2019,
"sha1": "bdd1ddb876ec0ba2d1efe5067b2fd9768e49e901",
"oa_license": "CCBY",
"oa_url": "http://journals.uran.ua/index.php/2307-0404/article/download/162168/168092",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d539d10d3c959d6a276ebf59672fe0e2c820cf63",
"s2fieldsofstudy": [
"Sociology",
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
267467978 | pes2o/s2orc | v3-fos-license | CTISL: a dynamic stacking multi-class classification approach for identifying cell types from single-cell RNA-seq data
Abstract
Motivation: Effective identification of cell types is of critical importance in single-cell RNA-sequencing (scRNA-seq) data analysis. To date, many supervised machine learning-based predictors have been implemented to identify cell types from scRNA-seq datasets. Despite the technical advances of these state-of-the-art tools, most existing predictors were single classifiers, of which the performances can still be significantly improved. It is therefore highly desirable to employ the ensemble learning strategy to develop more accurate computational models for robust and comprehensive identification of cell types on scRNA-seq datasets.
Results: We propose a two-layer stacking model, termed CTISL (Cell Type Identification by Stacking ensemble Learning), which integrates multiple classifiers to identify cell types. In the first layer, given a reference scRNA-seq dataset with known cell types, CTISL dynamically combines multiple cell-type-specific classifiers (i.e. support-vector machine and logistic regression) as the base learners to deliver the outcomes for the input of a meta-classifier in the second layer. We conducted a total of 24 benchmarking experiments on 17 human and mouse scRNA-seq datasets to evaluate and compare the prediction performance of CTISL and other state-of-the-art predictors. The experiment results demonstrate that CTISL achieves superior or competitive performance compared to these state-of-the-art approaches. We anticipate that CTISL can serve as a useful and reliable tool for cost-effective identification of cell types from scRNA-seq datasets.
Availability and implementation: The webserver and source code are freely available at http://bigdata.biocie.cn/CTISLweb/home and https://zenodo.org/records/10568906, respectively.
Introduction
Single-cell RNA (scRNA)-seq techniques have been widely applied to profile transcriptomic data at the single-cell level (Tang et al. 2009). Identification of the cell types has therefore become a critical step in scRNA-seq data analysis (Huang and Zhang 2021). Traditionally, the types of cells were annotated based on their shapes, sizes, and other features observed via microscope anatomy, histology, and pathology (Arendt et al. 2016, Kiselev et al. 2019). The advances in scRNA-seq techniques have allowed the fast accumulation of sequencing data, which requires the assistance of computational and artificial intelligence-guided approaches for the accurate and robust annotation of cell types (Ma et al. 2021, Sun et al. 2022). Particularly, machine learning (ML)-based approaches have shown their great capability of handling large-scale datasets for annotating cell types. In recent years, various ML-based approaches have been developed to annotate cell types using scRNA-seq datasets. These methods can be broadly classified into two main categories: unsupervised and supervised approaches.
Some comprehensive reviews have thoroughly evaluated a number of supervised models for the identification of cell types (Abdelaal et al. 2019, Huang and Zhang 2021, Ma et al. 2021). Most supervised methods first use a scRNA-seq dataset with known cell types to train a multi-class classification model and then load the trained model to predict the type of each cell in a new scRNA-seq dataset. Although supervised methods generally outperform unsupervised cell clustering methods (Ma et al. 2021), most of them are based on a single classifier. On the other hand, it has been widely accepted that ensembles of multiple classifiers usually outperform a single learner (Zhou 2012). For example, scDetect (Shen et al. 2021) achieved outstanding cell-type identification performance by combining multiple k-Top Scoring Pairs classifiers with a weighted majority voting strategy to identify cell types from scRNA-seq data.
In this study, we developed a two-layer dynamic ensemble learning model, termed CTISL (Cell Type Identification by Stacking ensemble Learning), for accurate cell-type prediction from scRNA-seq data. In CTISL, cell-type identification is regarded as a multi-class classification task. A variety of features were extracted to build multiple base learners for the different cell-type categories. We selected the best-performing classifiers as base learners in CTISL according to our extensive experiments and a recent study evaluating the performance of various individual classifiers in cell-type identification using scRNA-seq datasets (Huang and Zhang 2021). Our extensive performance benchmarking experiments on scRNA-seq datasets of various species, tissues, batches, and protocols show that CTISL achieved outstanding performance and strong stability compared to state-of-the-art traditional ML and deep learning (DL) methods, demonstrating that the stacking ensemble learning technique is an effective approach to achieving more robust performance for cell-type annotation. Overall, our dynamic ensemble learning model provides a promising approach for accurately identifying cell types in scRNA-seq datasets, with the potential to advance biomedical research and clinical applications.
Dataset collection
We collected in total 17 scRNA-seq datasets of two species (Homo sapiens and Mus musculus) from various tissues, batches, and protocols to train and evaluate CTISL and the other benchmarked methods. Among these datasets, seven are derived from human peripheral blood mononuclear cell (PBMC) samples (Ding et al. 2020) and were used for inter- and intra-dataset evaluations. Several pairs of datasets with different batches were extracted from various cells, including human blood dendritic cells (Villani et al. 2017), namely Dendritic_batch1 and Dendritic_batch2, and mouse retinal bipolar cells (Shekhar et al. 2016), namely Retina(5)_batch1 and Retina(5)_batch2, and Retina(19)_batch1 and Retina(19)_batch2. Each of the three pairs was used for cross-batch evaluations. In addition, two pairs of datasets from human and mouse airway (Plasschaert et al. 2018) and pancreas (Baron et al. 2016) were extracted, namely "HumanAirway," "MouseAirway," "HumanPancreas," and "MousePancreas," respectively. Each pair of datasets is from the same tissue of the two species and was used for cross-species evaluations. A detailed description of these 17 scRNA-seq datasets is provided in Supplementary Table S1.
The CTISL framework
As illustrated in Fig. 1, the construction of CTISL consists of four steps: dataset pre-processing, feature selection, stacking model construction, and model evaluation.
Step 1. Dataset pre-processing: We employed the Scanpy package (Wolf et al. 2018) to perform scRNA-seq data pre-processing (Fig. 1A). In the cell normalization step, the expression value of each gene in each cell was divided by the total sum of gene expression values in that cell and then multiplied by a constant 10e4. Other pre-processing steps remain consistent with the previous study (Hu et al. 2020). Note that the dataset pre-processing step is performed separately for the training and testing datasets.
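To make the normalization concrete, the following is a minimal sketch of this step, assuming the constant denotes 10^4 (the Scanpy default) and that the raw counts are held in an AnnData object; the file name and variable names are illustrative, and the remaining pre-processing steps from Hu et al. (2020) are omitted:

```python
import scanpy as sc

# Hypothetical input file holding the raw count matrix (cells x genes).
adata = sc.read_h5ad("pbmc_raw_counts.h5ad")

# Per-cell normalization: divide each gene's count by the cell's total
# count and rescale so that every cell sums to 1e4, as described above.
sc.pp.normalize_total(adata, target_sum=1e4)
```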
Step 2. Feature selection: A scRNA-seq dataset D can be represented as an n × m matrix, where n and m are the numbers of cells and genes, respectively.
For a cell D_i and gene g_j, D_i(g_j) denotes the expression value of the j-th gene in cell D_i. As there is usually more than one cell type in D, cell-type identification is formulated as a multi-class classification task. Generally, D contains high-dimensional features (genes) and a small number of samples (cells). The high number of redundant genes might decrease the predictive performance of the model. To find a better gene subset for identifying cell types, we employed χ² (Forman 2003), a popular feature selection method, to select the genes related to cell types. Suppose D contains t cell types and C represents the set of cell types; the procedure of feature selection (Fig. 1B) is described as follows: (i) a dataset D′_k with two classes was constructed for each cell type C_k (1 ≤ k ≤ t): samples with the cell type C_k are regarded as the positive class of D′_k and the remaining samples as the negative class; and (ii) χ² was employed to select the top 300 genes with the strongest ability to identify C_k, with f_k denoting the set of these 300 genes. We then repeated the above two steps until each f_k (1 ≤ k ≤ t) was obtained as the set of informative genes for cell type C_k.
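A minimal sketch of this one-vs-rest χ² selection using scikit-learn is shown below; the function name and variable names are our own, and we assume the expression matrix is non-negative (a requirement of the χ² test):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

def select_informative_genes(X, y, k=300):
    """For each cell type C_k, build a binary one-vs-rest labeling and
    keep the top-k genes ranked by the chi-squared statistic.
    X: (n_cells, n_genes) non-negative matrix; y: array of cell-type labels."""
    gene_sets = {}
    for cell_type in np.unique(y):
        binary_labels = (y == cell_type).astype(int)   # positive class = C_k
        selector = SelectKBest(chi2, k=k).fit(X, binary_labels)
        gene_sets[cell_type] = np.flatnonzero(selector.get_support())
    return gene_sets  # f_k for every cell type, as column indices
```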
Step 3. Stacking model construction: Among traditional ML-based methods, SVM with the radial basis function (RBF) or linear kernel and LR classifiers achieved generally better performances than other classifiers in annotating cell types (Abdelaal et al. 2019, Alquicira-Hernandez et al. 2019, Huang and Zhang 2021, Ma et al. 2021). In this work, we employed the stacking strategy (Zhou 2012) to integrate SVM and LR classifiers as individual base learners. It has been demonstrated that the stacking approach can achieve powerful performance in various bioinformatic tasks, such as anti-cancer peptide identification (Liang et al. 2021), prokaryotic lysine acetylation site prediction (Basith et al. 2022), and long ncRNA subcellular localization prediction (Cao et al. 2018). In addition, to conduct a more comprehensive study, we integrated another two base learners, random forest (RF) (Breiman 2001) and the gradient boosting classifier (GBC) (Friedman 2001). We also built other variations of our CTISL framework, including χ² with a multilayer perceptron (Popescu et al. 2009) (χ²+MLP) and CTISL with marker genes. Refer to Supplementary Section S1 for more information. In this work, we constructed a two-layer stacking ensemble model. The first layer combines SVM and LR as the base classifiers via the stacking strategy, and the second layer employs LR as the meta-classifier fed by the outputs of the stacking layer. Details of the procedure are described as follows (Fig. 1C). In the first step, given a scRNA-seq dataset D with n cells and the set C with t kinds of cell types, we obtained the sets of informative genes f_k (1 ≤ k ≤ t) using the proposed feature selection strategy. Subsequently, we extracted the gene columns in f_k and removed the other genes from D to form a sub-dataset D(f_k). In the second step, to avoid overfitting, we implemented our ensemble strategy using the stacking cross-validation algorithm provided in the "mlxtend" package (Raschka 2018). In this procedure, D(f_k) was first split into 3 folds. The first-level SVM classifier with the RBF kernel was trained on 2 folds, and its prediction results (t probability values) on the remaining fold were used as new features of that fold. After three rounds, the same procedure was used to fit the first-layer LR classifier. Thus, D(f_k) was transformed into a new dataset D′(f_k) with 2t features through the first-layer classifiers. After repeating the above two steps for each f_k, the transformed sub-datasets were concatenated into a new dataset D′.
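The paper performs this step with mlxtend's stacking cross-validation; the sketch below reproduces the same idea with scikit-learn's cross_val_predict for a single pass over the gene subsets, and omits the refitting of the base classifiers on the full training data that would be needed at prediction time. All variable names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

def stack_meta_features(X, y, gene_sets):
    """First layer: for every cell-type-specific gene subset f_k, collect
    out-of-fold class probabilities from an RBF-kernel SVM and an LR model
    (2t meta-features per subset); concatenating over all t subsets gives D'."""
    blocks = []
    for genes in gene_sets.values():
        X_sub = X[:, genes]
        for clf in (SVC(kernel="rbf", probability=True),
                    LogisticRegression(max_iter=1000)):
            probs = cross_val_predict(clf, X_sub, y, cv=3,
                                      method="predict_proba")
            blocks.append(probs)          # shape: (n_cells, t)
    return np.hstack(blocks)              # D': (n_cells, t * 2t)

# Second layer: an LR meta-classifier trained on the stacked probabilities.
# D_meta = stack_meta_features(X_train, y_train, gene_sets)
# meta_clf = LogisticRegression(max_iter=1000).fit(D_meta, y_train)
```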
Step 4. Model evaluation: To effectively evaluate and compare the performance of CTISL and other state-of-the-art approaches, we used four evaluation strategies: intra-dataset, inter-dataset, cross-batch, and cross-species evaluation (Fig. 1D). For intra-dataset validation, we used 5-fold cross-validation on the seven PBMC datasets of H. sapiens. The 5 folds were stratified to preserve the ratios of samples/cells for each cell type. When validating the inter-dataset performance, we used the seven human PBMC datasets, which were generated by seven different protocols (i.e. 10Xv2, 10Xv3, CEL-Seq2, Drop-Seq, inDrop, SMART-Seq2, and SeqWell). To maximize the use of the existing datasets, we conducted seven inter-group experiments, each using one dataset as the testing set and the remaining six datasets for training. For cross-batch validation, we used three datasets: Dendritic, Retina(5), and Retina(19). For each dataset, we conducted experiments with one batch for training and the other for testing; in total, we conducted six cross-batch experiments. To assess the cross-species performance, we conducted experiments on pairs of datasets obtained from the same tissue of two different species by the same protocol. The model was trained on one dataset of the original species and predicted cell types in the dataset of the target species, and vice versa. In this work, we used two datasets generated by the inDrop protocol from the pancreas tissues of H. sapiens and M. musculus and two datasets generated by the inDrop protocol from the airway tissues of H. sapiens and M. musculus.
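For the intra-dataset setting, the stratified 5-fold split can be sketched as follows; `run_ctisl` stands in for the full training/prediction pipeline and is not a real function:

```python
from sklearn.model_selection import StratifiedKFold

# Stratification preserves the per-cell-type proportions in every fold,
# mirroring the intra-dataset 5-fold cross-validation described above.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    X_train, y_train = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]
    # predictions = run_ctisl(X_train, y_train, X_test)  # hypothetical
```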
Benchmarking against state-of-the-art ML-based cell-type prediction methods
In our benchmark experiments, we compared CTISL with nine state-of-the-art ML-based methods, including ACTINN (Ma and Pellegrini 2020), scCapsNet (Wang et al. 2020b), scDetect (Shen et al. 2021), TripletCell (Liu et al. 2023), scmap-cluster (Kiselev et al. 2018), scmap-cell (Kiselev et al. 2018), CellTypist (Domínguez Conde et al. 2022), scBERT (Yang et al. 2022), and SingleR (Aran et al. 2019). For ACTINN, scDetect, TripletCell, scmap-cluster, scmap-cell, CellTypist, scBERT, and SingleR, we used the pre-processing methods provided in their studies to process the original data and used the processed data as input for these models, while for scCapsNet, we used the method in this study to pre-process the data. All compared methods were trained and tested using the same training and testing datasets to ensure a fair comparison. We employed three popular performance evaluation metrics, namely accuracy, median F1-score, and macro F1-score (Supplementary Section S2), to evaluate and compare the predictive performance of CTISL with the state-of-the-art approaches. Accuracy is defined as the percentage of correctly predicted cells among all cells. The median F1-score is defined as the median of the F1-scores of all cell types, and the macro F1-score denotes the average of the F1-scores of all cell types; the macro F1-score is therefore suitable for scRNA-seq data with highly imbalanced proportions of cell types (Ma et al. 2021).
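The three metrics can be computed from per-cell-type F1-scores as in the sketch below; the helper name is ours:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred):
    per_type_f1 = f1_score(y_true, y_pred, average=None)  # one F1 per cell type
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "macro_f1": per_type_f1.mean(),       # mean over all cell types
        "median_f1": np.median(per_type_f1),  # robust to a few poor classes
    }
```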
Performance evaluation on selected feature genes
Most feature gene selection algorithms were designed to choose highly variable genes (HVGs). However, cell-type identification is a multi-class classification task, and HVGs might not be related to cell types. This section therefore aimed to evaluate the performance of the base classifiers on different feature gene sets, including feature genes selected by various methods and HVGs. The feature gene selection methods we evaluated in this study include the χ² method, limma (Ritchie et al. 2015), and GeneClust (Deng et al. 2023). While the χ² method and limma can select feature genes for each cell type, GeneClust selects a subset of highly representative genes that are relevant to each cluster. For each cell type, we used the χ² algorithm and limma to select the top k (k = 100, 200, 300, 400, and 500) genes related to cell types. Thus, for scRNA-seq data with t cell types, t × k genes were chosen as features. To evaluate the performance of the genes selected by our proposed feature selection method, we compared them with 2000, 3000, 4000, and 5000 highly variable genes, respectively. We trained our base classifiers on seven human scRNA-seq datasets using the selected genes via a 5-fold cross-validation test: the feature genes were selected using 4 folds and the performance was tested on the remaining fold. The performance comparison between SVM and LR using 300 χ²-selected genes and 5000 HVGs is shown in Fig. 2A and B, respectively. Detailed performance values for all numbers of selected genes and feature gene selection methods are shown in Supplementary Tables S2 and S3, respectively. In terms of average accuracy and macro and median F1-scores, the χ² method outperformed the other gene selection methods and HVGs. However, the performance did not always improve with an increase in the number of features selected by the χ² method. Therefore, we further evaluated the performance of the whole CTISL framework on various numbers of genes (from 10 to 2000) selected by the χ² approach for all experimental scenarios, including intra-dataset, inter-dataset, cross-batch, and cross-species. As illustrated in Supplementary Fig. S1, performance did not generally improve with the number of selected genes, posing the challenge of selecting a universally optimal number of genes. Specifically, we counted the number of times each number of selected genes achieved the highest accuracy and macro and median F1-scores. As a result, 2000 χ²-selected genes achieved the highest performance 20 times, followed by 1500 genes (18 times) and 300 genes (16 times). However, when using the selected 2000 and 1500 genes, CTISL took a significantly longer time to build the model. To balance running time and performance, we used 300 as the default number of top genes in our CTISL framework based on these experiments. We further used the 10Xv2 dataset as an example to explore the robustness of CTISL with respect to the different numbers of distinct genes selected for each cell type by the χ² approach (Supplementary Table S4 and Supplementary Fig. S2). Notably, we observed a reduction in the number of distinct genes for certain cell types as the number of selected genes increased. Moreover, when the number of selected genes was above 1000, the number of distinct signature genes for each cell type did not always increase with the number of genes selected by the χ² approach. Refer to Supplementary Section S1 for more details.
We also examined the genes selected by the χ² method and found that some of them are marker genes of the corresponding cell types according to the CellMarker 2.0 database (Hu et al. 2023). As shown in Supplementary Table S5, 31 out of 62 marker genes of B cells in the PBMC_10Xv2 dataset were selected by the χ² method. These findings confirm that the χ² method is capable of selecting indicative feature genes for cell-type identification. We then conducted a performance comparison of CTISL using the selected feature genes and the marker genes, respectively, and found that CTISL achieved better performance with the selected feature genes than with the marker genes (Supplementary Table S6). Although marker genes have a strong ability to identify cell types, the limited number of known marker genes for some cell types cannot help the model achieve satisfactory cell-type identification performance. Refer to Supplementary Section S2 for more information.
Performance evaluation of CTISL
We then evaluated and compared the performance improvement achieved by the stacking learning technique. We systematically compared CTISL (using stacking learning to ensemble LR and SVM) with selected genes and marker genes, respectively, χ²+MLP (multilayer perceptron) (Popescu et al. 2009), and CTISL with LR+SVM+RF+GBC (Friedman 2001) in intra-dataset, inter-dataset, cross-batch, and cross-species experiments. Refer to the methodological details of integrating MLP, RF, and GBC in Supplementary Sections S1 and S3. The performance of CTISL stacking LR and SVM using the 300 selected feature genes is demonstrated in Fig. 2, and all detailed performance values are given in Supplementary Table S6.
Although CTISL did not outperform the ensembled base learners on all three metrics in all 24 benchmarking experiments, the results demonstrate that CTISL achieved consistently robust and stable average performance across all cases, except for the comparative prediction performance in the cross-species setting. Overall, we conclude that the stacking learning technique can effectively improve the prediction performance of the base classifiers in the intra-dataset, inter-dataset, and cross-batch scenarios of cell-type identification. In addition, to examine the effectiveness of CTISL, we employed UMAP (McInnes et al. 2018) to visualize the cell types represented by the features extracted at each step of our model (Fig. 3). As can be seen from Fig. 3, all cells were mixed in the original dataset. After selecting the feature genes, cells of the same type began to form a blurry cluster, which then gradually separated after the first layer. After the output layer, cells of the same type were well clustered into the same group. These results confirm that CTISL can effectively extract informative features representing cell types, thereby achieving outstanding prediction performance.
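A minimal sketch of this kind of UMAP visualization with Scanpy follows, where `features` is whichever representation is being inspected (the selected genes, or the first-layer probability outputs) and `labels` holds the known cell types; both names are illustrative:

```python
import scanpy as sc

adata = sc.AnnData(features)            # features: (n_cells, d) array
adata.obs["cell_type"] = list(labels)

sc.pp.neighbors(adata, use_rep="X")     # build the k-NN graph on the features
sc.tl.umap(adata)                       # compute the 2D UMAP embedding
sc.pl.umap(adata, color="cell_type")    # color cells by their known type
```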
Model interpretation
In this section, we used the SHAP package (Lundberg and Lee 2017) to interpret the output of CTISL (Supplementary Fig. S3) and compared the selected feature genes with the marker genes from the CellMarker 2.0 database (Hu et al. 2023). We plotted the distribution of the impact of the top 20 selected genes on the output of CTISL. Positive SHAP values indicate the identification of the current cell type, while negative SHAP values indicate the identification of other types. Taking the B cells in 10Xv2 as an example (Supplementary Table S5 and Supplementary Fig. S3A), most genes with high expression achieved positive SHAP values. Among these top 20 selected genes, 8 are marker genes according to CellMarker 2.0, including CD79A, IGHM, MS4A1, TNFRSF13C, IGHD, IGKC, CD74, and CD79B. These genes have high expression values in most B-cell samples, and their SHAP values for the output of the model are positive, meaning that these genes are favorable for the identification of B cells. Similar results for other cell types can also be found in 10Xv2 (Supplementary Fig. S3). For example, IL7R and LTB are marker genes of CD4+ T cells according to CellMarker 2.0, and their high expression is favorable for the recognition of CD4+ T cells (Supplementary Fig. S3I); they were among the top 20 selected genes for this cell type according to the output of our CTISL. Similarly, the high expression of PF4, PPBP, TUBB1, and MYL9 positively impacted the identification of megakaryocytes (Supplementary Fig. S3F), and
they were marker genes for this cell type based on our SHAP analysis. In addition, the high expression of marker genes such as KLRF1, SPON2, KLRB1, GNLY, CCL4, FCGR3A, CD247, GZMB, CD7, KLRD1, and TRDC is more indicative of natural killer cells. These genes are all marker genes based on the annotations of CellMarker 2.0 and were also among the top 20 selected genes for this cell type based on the outputs of CTISL (Supplementary Fig. S3G). For the plasmacytoid dendritic cells (Supplementary Fig. S3H), only 38 of the 6188 samples in the PBMC-10Xv2 dataset are available. As such, there are only eight plasmacytoid dendritic cells in the test sub-dataset of a 5-fold cross-validation test on the 10Xv2 dataset, and consequently there are few data points with higher SHAP values depicted in Supplementary Fig. S3H.
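As an illustration of the kind of SHAP analysis described above, the following sketch uses the model-agnostic KernelExplainer on the fitted pipeline's probability output; the paper does not specify the exact explainer configuration, and `predict_fn`, `b_cell_index`, and the sample sizes are illustrative assumptions:

```python
import shap

# predict_fn maps a (n_samples, n_genes) matrix to class probabilities;
# a small background sample keeps the kernel estimation tractable.
background = shap.sample(X_train, 100)
explainer = shap.KernelExplainer(predict_fn, background)
shap_values = explainer.shap_values(X_test[:200])  # one array per class

# Beeswarm-style summary of the top 20 genes for one class (e.g. B cells):
shap.summary_plot(shap_values[b_cell_index], X_test[:200], max_display=20)
```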
Benchmarking CTISL against state-of-the-art methods in intra- and inter-dataset scenarios
We first performed 5-fold cross-validations on the seven human PBMC datasets and compared CTISL with nine state-of-the-art approaches in the intra- and inter-dataset scenarios. The evaluated approaches included ACTINN, scCapsNet, scDetect, TripletCell, scmap-cluster, scmap-cell, CellTypist, scBERT, and SingleR. Figure 4A demonstrates that CTISL outperformed all nine benchmarked methods in terms of accuracy on five out of seven intra-dataset experiments. Of the remaining two datasets, scmap-cell achieved the highest accuracy of 95.9% on the 10Xv3 dataset, where CTISL achieved the second-best performance of 95.1%, and CTISL achieved the third-best performance of 87.5% on the SeqWell dataset. In terms of macro F1-score (Fig. 4B), CTISL outperformed all nine benchmarked methods on four out of seven intra-dataset experiments; on the remaining three datasets, CTISL achieved the second-best performance of 0.949 on the 10Xv3 dataset, 0.911 on the CELSeq dataset, and 0.802 on the inDrop dataset. Similar results were observed in terms of median F1-score (Fig. 4C). To evaluate the performance generalization of CTISL, we further performed inter-dataset tests on the seven human PBMC datasets generated by different protocols. We first trained our model on six of the seven datasets and tested the model on the remaining dataset. Each dataset was used as the testing data once [in line with Wang et al. (2020b) and Yang et al. (2022)], and the evaluation therefore contained seven sub-tasks. As can be seen in Fig. 4A, CTISL achieved the best accuracy on two out of seven tests. On the remaining five datasets, CTISL achieved the second-best performance of 87.0% on the DropSeq test dataset, the second-best performance of 85.6% on the inDrop test dataset, the fourth-best performance of 90.1% on the 10Xv2 test dataset, the third-best performance of 92.4% on the 10Xv3 test dataset, and the seventh-best performance of 74.1% on the SeqWell test dataset. Additionally, CTISL and scmap-cell each achieved the highest macro F1-score on two of the seven tests (Fig. 4B), followed by scCapsNet, scDetect, and TripletCell on one test each. scCapsNet and scBERT achieved the highest median F1-score on two tests each, followed by CTISL, TripletCell, and scmap-cell on one test each (Fig. 4C). Overall, these comparison results demonstrate that CTISL is capable of accurately and robustly identifying cell types in both intra- and inter-dataset experiments.
Performance comparison in cross-batch and cross-species scenarios
To evaluate the performance of CTISL on different batches of datasets, we ran all 10 predictors on the 3 datasets with 2 batches each (Supplementary Table S1). We trained all models on one batch and assessed their performance on the other batch. In terms of accuracy (Fig. 4A), CTISL, ACTINN, and scmap-cell achieved the highest accuracy (100%) across one batch of the Retina(5) dataset (b2-b1). CTISL achieved the highest performance of 98.0% on the Retina(19) dataset (b2-b1) and the second-best performance of 96.8% (b1-b2). Similarly, CTISL achieved the highest performance of 98.1% on the Dendritic dataset (b2-b1) and the second-best performance of 97.1% (b1-b2). CTISL also achieved the highest macro and median F1-scores (Fig. 4B and C) on four and three out of the six experiments, respectively.
We then conducted cross-species cell-type identification using the human and mouse pancreas and airway datasets. CTISL achieved the fourth-best accuracy of 84.3% from human to mouse on the pancreas datasets (Fig. 4A). Except for CellTypist, which achieved the highest accuracy of 88.8%, all models performed poorly when trained on the mouse pancreas dataset to predict cell types in the human pancreas dataset. This is presumably because the cell types are not completely identical across human and mouse pancreas tissues. It is a common phenomenon in cross-species scenarios that some unique cell types exist only in the test dataset. For example, several cell types (e.g. acinar, epsilon, and mast) appear only in the human pancreas dataset and are not present in the mouse pancreas dataset. Similarly, two cell types (e.g. B_cell and immuse_other) appear solely in the mouse pancreas dataset and are not present in the human pancreas dataset (Supplementary Table S7).
This phenomenon greatly reduces the prediction performance of all the models compared. On the other hand, in the airway dataset, the human data comprised three cell types, Basal, Ciliated, and Secretory, with similar sample sizes of 252, 258, and 280, respectively. When using the human airway dataset to train the model to predict the cell types in the mouse airway dataset, all 10 models performed well, with CTISL ranked third in accuracy (Fig. 4A). However, the mouse airway dataset exhibited a huge imbalance in the sample sizes of the cell types Basal, Ciliated, and Secretory, with 6009, 1333, and 4792 cells, respectively. Therefore, we applied the "RandomUnderSampler" function from the "imblearn" library (Lemaître et al. 2017), a down-sampling strategy based on the method proposed by Laurikkala (2001), to balance the class distribution in the dataset (Supplementary Section S3). We set the sample size of each cell type to 1333 and trained the four models on the down-sampled data to predict the human data. Five rounds of down-sampling were conducted for each experiment and the average results were calculated. CTISL achieved the highest accuracy of 99.7% (Fig. 4A) and the highest macro (Fig. 4B) and median F1-scores (Fig. 4C) of 0.998 on the human airway dataset. In addition, in the cross-species experiments (Supplementary Table S6), χ²+MLP achieved overall better performance than CTISL, and CTISL with LR+SVM+RF+GBC outperformed CTISL in terms of average macro and median F1-score; we therefore recommend χ²+MLP for cross-species cell-type identification. Taken together, although χ²+MLP performed better than CTISL in the cross-species experiments, these results show that CTISL possesses strong predictive ability for cell-type identification in the intra-dataset, inter-dataset, and cross-batch experiment scenarios.
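The down-sampling step can be sketched with imblearn as follows, using the class sizes quoted above; the exact label strings are assumptions based on the cell-type names in the text:

```python
from imblearn.under_sampling import RandomUnderSampler

# Cap every cell type at 1333 cells (the size of the rarest class,
# Ciliated, in the mouse airway data) before training.
rus = RandomUnderSampler(
    sampling_strategy={"Basal": 1333, "Ciliated": 1333, "Secretory": 1333},
    random_state=0,
)
X_balanced, y_balanced = rus.fit_resample(X_train, y_train)
```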
Discussion and conclusions
For identifying cell types from highly sparse single-cell expression matrices, it is crucial to perform feature selection prior to model construction. Our feature selection method consists of multiple iterations, with each iteration selecting representative features for a specific cell type. We compared our selected features with publicly available cell-type marker genes and found that some of the selected marker genes could accurately identify the corresponding cell types and have significant biological relevance. Furthermore, we analyzed the performance of CTISL using various feature gene selection methods and different numbers of selected genes (Supplementary Tables S2 and S3, and Supplementary Section S1). It is worth noting that the prediction performance of CTISL is still limited by supervised learning: when training and testing CTISL across datasets, the cell types in the predicted results of CTISL are based solely on the types available in the training set, meaning that the model cannot identify new cell types in the test dataset. Although some existing models are able to flag new cell types (such as ACTINN, scCapsNet, scDetect, TripletCell, scmap-cell, and scmap-cluster), they may only classify them as a "novel"/"other" type without accurately determining their actual types. In the future, we will endeavor to incorporate more cell types by combining more single-cell omic data, such as scATAC-seq data, thereby enriching the prediction capacity of CTISL. CTISL utilizes LR and SVM as the default base learners for each cell type in the training dataset; however, other classifiers can also be integrated into CTISL as base learners (Supplementary Section S3). Moreover, as the number of cell types increases, CTISL dynamically increases the number of base learners (Supplementary Section S5). In addition, the performance of CTISL, like that of most state-of-the-art predictors, may suffer from imbalanced cell types in the training dataset when performing cross-species predictions. This issue can be mitigated to some extent by employing down-sampling techniques on the training dataset, although down-sampling does not work if the rarest cell type (i.e. the cell type with the fewest cells) in the training dataset has an insufficient number of cells. We also discuss the effect of cell-type imbalances in different experiment scenarios, such as cross-batch and intra-dataset, in Supplementary Section S4.
We developed a user-friendly web-based application for CTISL at http://bigdata.biocie.cn/CTISLweb/home to facilitate community-wide efforts to identify cell types in users' datasets. Additionally, users have the option to choose models other than LR and SVM as base learners, or to add additional models to the CTISL framework, via our webserver and locally runnable software (https://zenodo.org/records/10568906). Alternatively, given that χ²+MLP achieved better performance than the original CTISL in cross-species cell-type identification, we provide the MLP option on our webserver and GitHub repository for users to replace the stacking model. In conclusion, as a dynamic stacking ensemble learning-based model for robust multi-class classification of cell types using scRNA-seq data, CTISL achieved better or competitive, and more robust, performance compared to other state-of-the-art predictors in our extensive benchmarking experiments on intra- and inter-dataset, cross-batch, and cross-species datasets. Altogether, we anticipate that CTISL will serve as a prominent computational tool for the accurate identification of cell types from scRNA-seq data, thereby facilitating scRNA-seq data analysis and hypothesis generation.
where D′ is an n × (t · 2t) matrix. Finally, D′ was fed into the second-layer LR classifier. Note that as the number of cell types varies across different scRNA-seq datasets, CTISL dynamically combines different numbers of base classifiers (i.e. SVM and LR) on different scRNA-seq training datasets.
Figure 1. The construction of the CTISL framework includes four major steps: (A) dataset pre-processing, (B) feature selection, (C) dynamic stacking model construction, and (D) model evaluation.
Figure 2. Feature gene selection and performance comparison of the base learners and our CTISL framework (stacking LR and SVM). (A and B) Performance of SVM and LR using different feature gene selection approaches, including the χ² method (300 genes selected) and the 5000 highly variable genes used in the work by Huang and Zhang (2021). (C-F) Performance comparison among the base classifiers LR and SVM and the CTISL (stacking ensemble learning) model based on the 300 feature genes selected by the χ² method, in terms of accuracy, macro F1-score, and median F1-score, using the (C) intra-dataset, (D) inter-dataset, (E) cross-batch, and (F) cross-species evaluation strategies.
Figure 3. Visualizing the cell-type identification results on seven datasets: (A) 10Xv2, (B) 10Xv3, (C) CELSeq, (D) DropSeq, (E) inDrop, (F) SMARTSeq2, and (G) SeqWell. For each dataset, the four panels, from top to bottom, show the visualizations without feature selection, the results after feature selection, the output of the first layer of the model, and the output of the last layer of the model, respectively.
Figure 4. Performance evaluation and benchmarking between CTISL and nine state-of-the-art cell-type predictors in terms of (A) accuracy, (B) macro F1-score, and (C) median F1-score on seven human PBMC datasets in intra- and inter-dataset, cross-batch, and cross-species experiments. "b1 to b2" and "b2 to b1" mean that the model was trained on the first/second batch of the dataset and tested on the second/first batch, respectively, while "h to m" and "m to h" mean that the model was trained on the human/mouse dataset and tested on the mouse/human dataset, respectively. | 2024-02-06T17:10:12.980Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "48afd940912aeb7f6f32b1826b856ed6737bf019",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btae063/56588212/btae063.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f1bbff03c615d6423755f99bd2161d65b3c2486",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
238852748 | pes2o/s2orc | v3-fos-license | The Evaluation of the Social Studies Curriculum in Turkey: The Guiding Principle of Balance
This study investigates the social studies curriculum applied in the 4th, 5th, 6th, and 7th grades in Turkey according to the principle of balance, one of the guiding principles of curriculum development. The research was conducted as a case study, a qualitative research method, designed as a holistic case study. The research data were collected through interviews with social studies teachers, observations in 4th, 5th, 6th, and 7th grade classes, and a document review of the course objectives in the social studies curriculum, and were analyzed with the content analysis technique. The reliability of the observation data was calculated with the multi-degree Kappa coefficient based on the agreement between the observers, and that of the interview data with an opinion-agreement formula. According to the results of the study, the principle of balance was generally neglected: there is balance with regard to the past and the present, different cultures and the local culture, the close and distant environment, classroom and out-of-class learning, and the use of written, verbal, and visual materials, while there is no balance with respect to the other principles. The study consequently offers recommendations to curriculum development experts and teachers for ensuring the principle of balance.
The curriculum development process is a multidimensional and continuous process consisting of planning, design, pre-application, and evaluation stages (Khan & Law, 2015, p. 67). The first step of this process is to design the curriculum. According to Adıgüzel (2017), the purpose, content, educational attainments, and assessment dimensions to be included in the curriculum at the draft stage are arranged logically and systematically.
There are some principles in the curriculum design process. According to Yücel et al. (2017, p.707), these principles are scope, progressiveness, continuity, cohesion, balance, usability, and flexibility. Besides, Hewitt (2006) listed curriculum design principles as scope, progressivity, continuity, and balance. According to Ornstein and Hunkins (2016), curriculum design principles are scope, sequence, continuity, integration, harmony, and balance.
According to the principle of balance, students should be allowed to use and internalize what they have learned, taking into account their mental, personal, and social development (Doğanay, 2008, p. 23). Balance means adapting the sometimes complex curriculum to the students' developmental levels (Hewitt, 2006). When designing a curriculum, educators attempt to give equal weight to each feature of the design, as required by the principle of balance. In a balanced curriculum, students gain the opportunity to acquire and use knowledge in personal, social, and intellectual ways (Ornstein & Hunkins, 2016, p. 256).
The aim of designing a balanced curriculum in schools is, first, to develop students' social, sporting, and artistic sides (such as music) and discover their skills; second, to improve their academic performance; and third, to balance their learning outside the school (Kibet, 2016). A balanced curriculum has also been found to have positive effects on student achievement (Squires, 2013). Galton (2000, p. 16) stated that one of the four questions to be asked about the success of a curriculum implemented at the national level is how a broad and balanced curriculum can be ensured. Porter (1989) stated in his study titled "An unbalanced curriculum: example of primary school mathematics" that, in an unbalanced curriculum, teachers could not know what they were teaching and success was left up to chance.
A balanced curriculum is important for the multifaceted development of students. Akker (201, p. 42) offered three alternatives for creating balance in a curriculum: the first is that the learning objectives are aimed at basic concepts and skills, the second is to ensure a balance between in-school and out-of-school interaction, and the third is to strike a balance between subject-centered traditional education and meaningful, autonomous, activity-based education.
In the studies on design principles in the literature, the information and technologies curriculum (Geçitli & Bümen, 2020), the foreign-language-weighted 5th grade English curriculum (Canlıer & Bümen, 2018), and the primary and secondary English curricula (Yücel, Dimici, Yıldız, & Bümen, 2017) were previously investigated. When the studies on the social studies curriculum are examined, there are primarily studies focusing on opinions about the curriculum and on comparisons of previous social studies curricula with the current one (Çakmak, Kaçar, & Arıkan, 2017; Gürel, 2017; Taş & Kıroğlu, 2018; Sözen & Ada, 2018; Yıldız & Kılıç, 2018). There is no study examining the social studies curriculum in terms of design principles. In this study, the principle of balance, one of the principles of social studies curriculum design, was examined thoroughly to fill this gap in the literature. The research was conducted based on the 17 principles suggested by Oliva and Gordon II (2018, pp. 468-470) to achieve balance in the curriculum. These principles are as follows: 1-Student-centered and subject-centered curriculum; 2-Balance between the students' and society's needs; 3-Balance between general and customized education; 4-Balance between breadth and depth of content; 5-Balance between the cognitive, affective, and psychomotor development areas; 6-Balance between individualized education and general education; 7-Balance between innovation and tradition; 8-Balance between the logic of the subject and the student's learning psychology; 9-Balance between the needs of extraordinary and non-extraordinary students; 10-Balance regarding the needs of students with different levels of intelligence; 11-Balance between written, verbal, and visual techniques and materials; 12-Balance between near and far in terms of time and environment; 13-Balance between academic aspects, entertainment, and physical activities; 14-Balance between in-school and out-of-school learning; 15-Balance between disciplines; 16-Balance between curricula; 17-Balance within the disciplines.
Developing a balanced curriculum is extremely important, as balancing the curricula enables students to develop in many aspects. For this reason, this study evaluates the social studies curriculum based on the principle of balance and seeks answers to the following questions:
Problem Statement:
How is the principle of "balance" as one of the principles of curriculum design reflected in the social studies curriculum?
Sub-Problems:
How is the principle of balance reflected in the social studies curriculum from the following aspects?
1-Being student-centered or subject-centered; 2-Meeting the needs of students and society; 3-Balance among the sub-branches of the course (history, geography, citizenship); 4-Distribution of objectives across the cognitive, affective, and psychomotor domains; 5-Suitability for group learning and individualized learning; 6-Compliance of the content with the logic of the subjects and of the students; 7-Suitability to the students' intelligence levels (high, average, etc.); 8-Use of written, verbal, and visual techniques; 9-Inclusion of academic aspects, sports, entertainment, and physical activities; 10-Provision of opportunities for in-class and out-of-class learning; 11-Compliance with different disciplines; 12-Allowing different learning approaches; 13-Appropriateness to students' developmental level; 14-Being open to innovations versus being tradition-bound; 15-Compliance with the near and the far, and with past and current developments.
METHOD Research Model:
This study, which examined in depth the suitability of the social studies curriculum to the principle of balance, one of the principles of curriculum development, was conducted as a case study, a qualitative research method. Case studies are a qualitative approach concerning real life, in which the researcher deals with a situation and collects detailed and in-depth data on this case (Creswell, 2007, p. 97). The qualitative research model used to find answers to scientific questions, seen as a distinctive approach, is called a case study (Büyüköztürk et al., 2011, p. 273). If a single unit of analysis is used in a case study and its specific situations are studied, a holistic case study design is used (Yin, 2014). In this study, since the social studies curriculum is considered as a single curriculum and a single case is analyzed, the holistic case study design was used.
The case study design in the study includes the following stages, respectively: 1-Examination of the course objectives in the social studies curriculum; 2-Conducting semi-structured interviews with social studies teachers according to the balance principles determined by Oliva and Gordon II (2018); 3-Observations conducted with co-observers; 4-Analysis of the data obtained from the three data collection tools; 5-Interpretation of the obtained findings.
Data Collection Tools:
The research data were collected by using data triangulation with observation, interview, and document review techniques.
Semi-Structured Interview Form: The interview form consists of 15 open-ended questions prepared for primary school teachers teaching the 4th grade and for social studies teachers. The researchers prepared the questions within the framework of the principles determined by Oliva and Gordon II (2018) for ensuring balance in the curriculum. The questions were revised after being submitted to three experts in the field of curriculum and instruction. The interviews were conducted with 8 teachers.
Observation Checklist: The checklist was created by the researchers, considering the balance principles suggested by Oliva and Gordon II (2018), and was finalized after being submitted to three experts in the field of curriculum and instruction. The observation form consists of 18 items. The items were graded from 1 to 3, from insufficient to sufficient. The observations lasted for 4 weeks, in the form of 2 lesson hours for the 4th grade and 3 lesson hours for the 5th, 6th, and 7th grades.
Document: In the study, the 131 objectives included in the social studies curriculum updated in 2018 by the Ministry of National Education were examined as documents, in order to determine to what extent the principles of balance are reflected in the social studies course curriculum.
Data Analysis: Content analysis was used to analyze the opinions collected from the teachers. In the content analysis, the agreement among the researchers was determined using Miles and Huberman's (1994) opinion-agreement formula. In the analysis of the data obtained through observation, the observation forms created as a result of the observations made with the co-observer were analyzed using the multi-degree Kappa coefficient, and the documents were analyzed with the content analysis technique. The Kappa test is a statistical method that measures the reliability of the agreement between two or more observers regarding the phenomenon they observe. The Kappa coefficient formula is κ = (Pr(a) − Pr(e)) / (1 − Pr(e)), where Pr(a) refers to the observed agreement and Pr(e) to the probability of random agreement (Cohen, 1960, as cited in Kılıç, 2015, p. 142). In the observation form used in the study, the scoring was 3-graded (1-2-3), so the multi-degree Kappa coefficient was used instead of the two-degree Kappa coefficient. The Kappa formula applied in this method is the same as in the two-degree method; however, the data matrix is not 2x2, and its size is determined by the number of grades, e.g., 3x3 or 4x4 (Şencan, 2005, p. 488).
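For concreteness, the κ computation on a 3-graded observation form can be sketched as follows; this is a generic implementation of the formula above, and the rater arrays are hypothetical inputs (the paper reports only the resulting agreement rates).

```python
import numpy as np

def multi_degree_kappa(rater1, rater2, n_grades=3):
    """Cohen's kappa on a k-graded scale (k x k matrix instead of 2 x 2):
    kappa = (Pr(a) - Pr(e)) / (1 - Pr(e))."""
    m = np.zeros((n_grades, n_grades))
    for a, b in zip(rater1, rater2):          # gradings coded 1..n_grades
        m[a - 1, b - 1] += 1
    n = m.sum()
    pr_a = np.trace(m) / n                    # observed agreement Pr(a)
    pr_e = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2   # chance agreement Pr(e)
    return (pr_a - pr_e) / (1 - pr_e)
```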
Validity -Reliability:
Since the validity and reliability of case studies are subjects of criticism and are regarded as one of their weaknesses, data triangulation was used to increase the validity of the study. In this context, the data were collected through lesson observations with a co-observer, face-to-face interviews with the teachers, and an examination of the curriculum objectives using the document technique. The teacher interview form and observation form prepared by the researchers were presented to three experts from the field of curriculum and instruction, and the forms were revised accordingly to increase the reliability of the study. In addition, the observations of the learning-teaching process were made together with a co-observer who had received doctoral education in curriculum and instruction.
For the reliability of the data obtained from the teachers using the semi-structured interview technique, the opinion-agreement formula of Miles and Huberman (1994) was used. This agreement is formulated as "Opinion agreement = (Consensus / (Consensus + Disagreement)) × 100." The opinion agreement on the teachers' opinions in the study was calculated as 83.33%. According to Miles and Huberman (1994) and Patton (2002), the agreement between coders is expected to be at least 80%.
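As a quick check, the reported 83.33% is reproduced by, for example, 15 agreed and 3 disputed codings; these counts are hypothetical, since the paper does not report them.

```python
def opinion_agreement(consensus, disagreement):
    # Miles & Huberman (1994): Agreement = Consensus / (Consensus + Disagreement) * 100
    return 100.0 * consensus / (consensus + disagreement)

print(round(opinion_agreement(15, 3), 2))  # 83.33, matching the reported value
```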
According to the Kappa analysis, the observed agreement rates between the observers on the observation form are as follows: Pr(a) = 14/18 = .77 for the 4th grade, Pr(a) = 15/18 = .83 for the 5th grade, Pr(a) = 15/18 = .83 for the 6th grade, and Pr(a) = 14/18 = .77 for the 7th grade.
FINDINGS
In the interviews with the teachers, there are opinions that the curriculum is subject-centered (f=7). Some of the teachers' opinions are as follows: "Although it seems to be student-centered, when we take a general perspective, it is subject-centered, especially in the history course, which is a sub-branch of social studies." (T2) "The curriculum was designed to be student-centered, but the sequence and level of the content cannot catch the students' interest." (T5) "I think it is subject-centered. There is a lot of lecturing and content. The number of activities is very low." (T8) "The curriculum and activities are generally student-centered, but the teacher must have certain competencies to be able to do these activities." (T6)
When the curriculum objectives are examined, findings that confirm these opinions were reached. In the observation made with the co-observer, the observers marked the "insufficient" option in the checklist. In this sense, it is seen that the curriculum is more subject-centered in its focus and is not balanced in terms of being student- and subject-centered. In the interviews with the teachers, the view that the curriculum is oriented more towards the needs of society than towards the needs of the students is predominant (f=6). Some of the teachers' opinions are as follows: "The needs of the society were met in matters such as homeland, nation, and love of homeland, but it is not enough for the respect, responsibility, and values that individuals need." (T1) "In the curriculum, issues related to adaptation to society are taught by introducing our culture to students; while the needs of the society are met, the needs of individuals are neglected." (T6) "There are deficiencies in meeting the needs of the students. We cannot take teaching out of the classroom. This is at the discretion of the teacher. Social needs, on the other hand, were met to a great extent." (T8) When the social studies course objectives were examined, only one objective addressing the balance between subject and society needs was identified. As the social studies course is perceived as preparing students for society, it can be suggested that the needs of society are prioritized and that, in this sense, there is no balance between student and society needs. In the interviews with the teachers, the opinion that the curriculum is not balanced among the sub-branches of the social studies course, such as history, geography, and citizenship, is more prevalent (f=5). The following are some of the teachers' opinions: "There is absolutely no balanced distribution. Citizenship and democracy have little weight in general. History is weighted more than the others. The teaching of geography subjects is insufficient." (T5) "There is a main focus on citizenship. In particular, the field of geography is included in only one theme. I do not think it is balanced." (T6) "The content is not evenly distributed across the sub-branches. Citizenship subjects were mentioned in a limited way. There are unnecessary details in the field of geography that will not be useful for 4th-grade students." (T7) "The distribution seems sufficient in terms of the sub-branches of the course; although some sub-fields seem to come to the fore, when we look at the whole we can say that there is balance." (T4) When the objectives in the social studies curriculum are examined, it is seen that there are mostly objectives in the field of history and very limited objectives in the field of geography; the psychomotor domain is almost non-existent. In this sense, according to the teachers' opinions, the principle of balance in the objectives of the curriculum was violated (f=7). Some of the teachers' opinions are as follows: "In general, very little space was given to the psychomotor domain compared to the cognitive and affective levels." (T2) "More emphasis was placed on the cognitive domain, and not much on the affective and psychomotor domains." (T6) "The distribution of the curriculum across the cognitive and affective domains is generally good. However, the psychomotor domain is missing. In order to develop it, studies on individual motor skills should be included." (T7) "The cognitive domain is more dominant, but this is only at the level of knowledge. Even less place than the affective domain is given to psychomotor learning." (T8) Considering the curriculum's objectives, in which cognitive objectives predominate due to the characteristics of the field of social studies, the lesson observations showed that the educational situations were oriented more towards conveying information. According to the teachers' opinions, it can be said that the social studies curriculum is suited to group learning rather than individualized learning (f=6). Some of the teachers' opinions are as follows: "Some objectives were attempted to be given through student examples. Although this situation seems appropriate for individualized teaching, it can make it difficult to learn the subject in general terms." (T3) "I do not think that it is very suitable for students who need individualized teaching, because their interests, abilities, and needs differ." (T4) "There are some objectives and activities for individualized teaching, but they could be increased a little more." (T6) "I do not think it is very suitable for individualized learning; although it seems appropriate when we look at the curriculum, the situation is different in practice." (T5) When the objectives in the curriculum are examined, the objectives that include individualized education are limited. During the observations, the co-observers determined that the teachers could not include individualized education while teaching the objectives within the allotted time. In this sense, it can be said that the principle of balance was violated. Based on the opinions received from the teachers, it can be said that the content, one of the basic elements of the social studies curriculum, is prepared mainly by considering the features of the subject (f=6). In the observations, it was determined that the teachers tried to adapt the abstract points to student logic by supporting them with materials and examples. Some of the teachers' opinions are as follows: "Even if there is no problem in the compliance of the subjects with logic, the subjects can be made more accessible to the students' logic by increasing the examples according to the economic, social, and cultural environment of each student." (T1) "The content is given with a lot of unnecessarily extended detail. Instead, it could be adapted to the logic of the student with simpler and more concrete content." (T3) "The content items in the curriculum are in line with the logic of the subjects but do not appeal to the logic of the student. Due to the nature of the course, there are many different sub-branches, so the content can appear as a lot of stacking." (T4) "The content design is absolutely negative. There is no continuity between subjects. There is a sequence that skips from branch to branch. The student has a hard time making connections. Course hours should be increased, and the subjects of history, geography, and citizenship should be given systematically and chronologically." (T5) "I do not think there is any problem in terms of compliance with the logic of the students in general. Some of the concepts can remain abstract, and arrangements can be made to address them." (T7) According to the teachers' opinions, it can be suggested that although the social studies curriculum is sufficient to meet the needs of average- and upper-level students, it is not sufficient for the needs of lower-level students (f=6). "We can say that the student level in the social studies course is generally the closest to homogeneous. There is not much problem in meeting student needs." (T1) "There are appropriate objectives to meet the needs of both average and high-level students." (T6) "It meets the needs of average students sufficiently. But I think it is a bit lacking for high-level students." (T7) "Average and high-level students understand the basic-level concepts that they will need in daily life, but lower-level students have difficulties." (T8) When the objectives in the curriculum are examined, it can be stated that the different individual characteristics of students with different intelligence types are considered in the curriculum; however, in the observations conducted in this study, the teachers adopted a standard approach aimed mostly at students with average intelligence levels. According to the teachers' opinions, the social studies curriculum is a field where written, verbal, and visual techniques can be used together, and a balance was achieved in this regard (f=8). Some of the teachers' opinions on this issue are as follows: "Cognitive-level objectives, as well as affective-level gains, necessarily need written and visual support. It is positive that the textbook pictures are real pictures, which makes them concrete. Their use depends mostly on the research ability of the teacher." (T3) "It is appropriate in terms of written, verbal, and visual techniques, but needs to be further enriched in terms of visual techniques. Visual elements are more suitable for children at this age, especially for their level." (T4) "It is naturally not appropriate to use these techniques in every theme. However, if we consider the curriculum in general, it is balanced in terms of the use of each of these techniques for achieving the objectives." (T6) When the objectives in the social studies curriculum are examined, and in the observations made, it can be said that different written, verbal, and visual techniques are used by the teachers and that the principle of balance is included in the curriculum. According to the teachers' views, the social studies curriculum is not balanced in terms of academic aspects, sports, entertainment, and physical activities (f=7). Some of the teachers' opinions on this sub-problem are as follows: "The balance between the academic aspect and sports, entertainment, and physical activities is insufficient. If more opportunities were offered to schools, more active learning could occur." (T4) "Not only the social studies curriculum but all other curricula are lacking in this regard; there is an artificial understanding of art and limited sports activities." (T5) "Activities such as trips, observations, and research within the scope of the curriculum ensure the harmony between these activities." (T6) "There is content that supports these activities. However, in general, this fit could be further increased. In order for learning to be more permanent, physical activity should be given more place." (T7) When the objectives in the curriculum were examined, none was found that included academic aspects, sports, entertainment, and physical activities. Moreover, during the observations, it was observed that the teachers did not give much place to these activities. In this context, the curriculum does not satisfy the principle of balance in terms of academic aspects, sports, entertainment, and physical activity.
According to the teachers' opinions, although there is a balance in the social studies curriculum in terms of providing in-class and out-of-class learning, out-of-class learning depends on external factors rather than on the curriculum itself (f=6). In the observations, it was found that the teachers carried out out-of-class teaching mostly through homework. Some of the teachers' opinions on this sub-problem are as follows: "The content of the subjects can allow students to learn outside the classroom as well as in it. In fact, unlike many lessons, the content of the lesson lends itself to making visual use of events outside the classroom." (T1) "There are usually in-class activities. This situation depends on the socio-economic status of the school and the teacher's devotion rather than on the curriculum." (T2) "Actually, the curriculum provides many opportunities, but external factors such as space, economy, and socio-cultural structure are more decisive." (T5) "There are more in-class activities in the curriculum; if the teacher assigns individual out-of-class activities that support the students' in-class learning, the learning is carried outside the classroom." (T6) "The information in the content supports the students' inclination for research. In particular, it provides an opportunity for learning outside the classroom." (T7) According to the opinions obtained from the teachers, the social studies course is related to other disciplines, as it is a core field, and a balance was established between them (f=8). Some of the teachers' opinions on this sub-problem are as follows: "The social studies course is a flexible course, and by using this flexibility in the curriculum, this balance was achieved with other courses and disciplinary areas." (T5) "We can say that the objectives of the primary school 4th-grade social studies curriculum are compatible with the mathematics, science, and Turkish courses." (T6) "The social studies curriculum is associated with the Turkish, science, and mathematics courses. In this respect, there is a balance between them, but its subjects can sometimes be more abstract than those of other lessons." (T8) When the course objectives are examined, many objectives at all grade levels are compatible with different disciplines. During the observations, it was seen that, in their instructional practice, the teachers harmonized the lesson with both its sub-branches and other disciplines through various explanations and examples, and in this respect the principle of balance was achieved. According to the teachers' opinions, the social studies lesson allows different learning approaches because it has different themes and different contents related to its sub-disciplines and many fields (f=8). Some of the teachers' opinions on this sub-problem are as follows: "If we consider the social studies curriculum in our country with regard to citizenship transmission, social sciences, and reflective thinking, it accordingly allows different learning approaches for the structuring, prediction, and critical thinking dimensions." (T1) "The social studies course is suitable for using different learning approaches, as much of its learning comes from life." (T4) "The curriculum can help students gain different learning approaches in terms of the objectives and units within each theme."
(T6) In the curriculum, some objectives can be achieved by using different learning approaches, and in the observations made, it was found that the principle of balance was achieved in that the teachers tried to include different learning approaches in accordance with the course objectives and content. According to the teachers' opinions, the content of the social studies course is not suitable for, or balanced with respect to, the students' developmental level because of its abstract subjects (f=5). During the observations, it was observed that some of the objectives in the curriculum remained abstract and that teachers used materials such as smartboards to concretize them. Some of the teachers' opinions on this sub-problem are as follows: "The contents are suitable for the development of the students. Some subjects, for example, selection and elections, need to be addressed in more detail for 4th graders in terms of adopting and understanding democracy education." (T1) "I think some subjects remain abstract because they are not suitable for the developmental level of the students." (T4) "Absolutely not suitable. It needs to be simplified, and the content divided into relevant areas. Transitions between subjects and an emphasis on logic should be provided. For example, it is not very logical to teach Seljuk history after forests." (T5) "Generally, I find it appropriate. However, the fact that some subjects are full of abstract concepts in the 4th grade makes learning difficult." (T6) "There is straight narration; only the use of maps is included as material. More materials and tools should be used. We try to concretize the subjects with animations on the smartboard." (T8) According to the opinions collected from the teachers, the social studies curriculum is not balanced between innovation and tradition (f=5); the more traditional aspect of the curriculum prevails. Some of the teachers' opinions on this sub-problem are as follows: "Our curriculum is not very open to innovations and forces teachers into narrow molds. In this case, it brings along more traditionalism." (T1) "I can say that there is a partial balance. The culture and heritage learning area is informative about traditions." (T3) "I think it should be more open to innovations. It would be better if it covered the traditions of different cultures." (T4) "It is more traditional, as with all our curricula." (T5) "Although the curriculum is not completely geared to innovation, adaptation studies towards innovation have existed in recent years, although they are not very effective at this stage." (T6) When the objectives in the social studies curriculum are examined, it can be said that an attempt was made to ensure a balance between innovation and traditionalism; however, according to the observations and the teachers' opinions, this could not be achieved in practice, and in this sense the principle of balance is violated. According to the opinions taken from the teachers, the social studies curriculum established harmony and balance between the distant past and the present; additionally, it was found that attention was paid to this principle in most of the objectives (f=8). Some of the teachers' opinions on this sub-problem are as follows: "Social studies is not a field that varies much, except for the field of history. Advances in the technological field have some impact on the content of the curriculum. There is a balance between the distant past and the present in terms of updating." (T1) "Near, far, and current time are included, but the logic could be taught more properly. However, it can still be considered sufficient. It can even be said that the balance situation is good, especially in some subjects." (T5) "The curriculum is a little more limited regarding the distant past, but there is harmony and balance with current developments." (T7)
DISCUSSION, CONCLUSION AND SUGGESTIONS
In this study, the social studies curriculum was evaluated according to the principle of balance, one of the guiding principles of curriculum development. Reductive curriculum theory explains balance under laboratory-like conditions where everything is fixed; educational curricula, however, follow a complex theory (Morrison, 2010). There is no single right answer to a situation. For this reason, the data were collected and analyzed through observation, interviews, and document analysis, using data triangulation.
The social studies curriculum is more subject-centered than student-centered, especially because of the sequence and level of the content, and the principle of balance could not be achieved in this sense. This may be because the most crucial feature of the constructivist approach, namely that it puts the student at the center, is not fully reflected in the content. The teachers generally expressed that the content was excessive and that activities were included less often. According to Demirtaş and Erdem (2015), one of the criticisms brought against the curriculum is that the content is too boring, which supports this finding of the study. Another principle for which balance could not be achieved in the curriculum is that, between the needs of students and society, the needs of society are prominent. Similarly, Yücel et al. (2017) found in their study that the balance principle was violated regarding students' interests and needs in the English language curriculum. Cuban (1992) stated that schools cannot find a solution to every problem of society and should not feel obliged to do so. The social studies course includes sub-disciplines such as history, geography, and citizenship. However, when the objectives are examined, and based on the teachers' opinions, there is no balance among the sub-disciplines: the field of history is more prominent, and the field of citizenship is given less space. This result is similar to Gürel's (2017) finding that the disciplines of history and geography come to the fore among the sub-disciplines of the course in the social studies curriculum. All 131 objectives in the curriculum were examined, and it was determined that there was no balance in the distribution of the objectives across the cognitive, affective, and psychomotor domains; during the observations, mostly cognitive goals were addressed. The teachers asserted that cognitive and affective goals were included in the curriculum, while psychomotor goals were almost non-existent. According to Ediger (2007), there should be a balance between knowledge, skills, and attitudinal goals in terms of course objectives; otherwise, the student may have knowledge but be unable to use it. Merter et al. (2012) stated that the targeted course hours of the 2011 secondary education English curriculum were quite intensive and that the principle of balance was ignored. This finding may be closely related to the continuation of a success-oriented vision in education. It can be said that the social studies lesson involves more group learning than individualized learning. The teachers stated that the curriculum is not suitable for individualized education because the students' interests, abilities, and needs differ. In the observations, it was seen that the teachers, in lessons conducted mainly in groups, could not attend to students of different nationalities, especially because they did not know the language and did not have enough time and experience.
One of the basic elements of the curriculum is content. To satisfy the principle of balance in content, a balance must be struck between the logic of the subjects and the logic of the students. The principle of balance requires including a good variety of content to contribute to the development of the student (Tan, 2011). The teachers mentioned that the content remained too intensive and abstract due to the sub-disciplines of the course, and that balance could be achieved by multiplying the examples according to the students' economic, social, and cultural environments. In this respect, it can be said that the content of the curriculum was determined by considering the logic of the subject. This finding coincides with the conclusion of the first sub-problem, namely that the curriculum is subject-centered rather than student-centered. Taş and Kıroğlu (2018) asserted that the curriculum content is intense, which shows that balance could not be achieved in terms of content. In a study examining the balance between themes in the science curriculum in Lebanon, Boujaoude (2010) revealed that the curriculum violated the theme of "science as a way of knowing" and that there is no balance between themes. This finding is in the same direction as the results of this study. While preparing a curriculum, it is important to balance these differences by considering the characteristics of students with different intelligence types. When the curriculum objectives are examined, the characteristics of individuals with different intelligence types are considered, but in the observations made in practice, the teachers adopted a standard approach aimed mostly at students with average intelligence levels. In the interviews, the teachers underlined that the curriculum was mostly aimed at students with average and high intelligence. In this context, the principle of balance for different intelligence types was violated. This finding also points out that inclusive education practices are not fully reflected in the curriculum.
The use of written, verbal, and visual techniques in the curriculum is one of the principles of balance. According to Ornstein and Hunkins (2014), it is essential to give appropriate weight to each of the different techniques while designing the curriculum so as to bring learners to their goals. Both when the objectives were examined and in the observations made, it was determined that different written, verbal, and visual techniques were used by the teachers. The teachers expressed that the balance was achieved, suggesting that the lesson was the most appropriate one for using different written, verbal, and visual techniques. Some objectives in the curriculum addressed the academic side, sports, entertainment, and physical activities, but the observations of this study showed that the academic side of the curriculum was prioritized. This may be due to insufficient course time and the physical facilities of the schools. A sufficient number of objectives are included in the course to enable in-class and out-of-class learning. In the interviews, the teachers suggested that the curriculum is balanced in this regard. However, out-of-class learning primarily depends on external factors such as parents, the economy, and socio-cultural conditions. The observations showed that the teachers try to realize out-of-class teaching mostly through homework and activities. In addition, Jonyo and Jonyo (2019) emphasized the role of school principals in balancing and supervising in-class and out-of-class learning.
The social studies course is closely related to many different disciplines in terms of its content, which spans several sub-disciplines. When the objectives are examined, the balance of the course with different disciplines was achieved in many objectives at each grade level. The teachers stated that the course is especially compatible with the Turkish and science disciplines. During the observations, it was determined that, in their instructional practice, the teachers established relationships with both the sub-branches of the course and other disciplines, harmonizing them through various explanations and examples, and in this respect the principle of balance was ensured. Bayır, Köse, and Balbağ (2016) stated in their study that teachers' drawing on the intermediate disciplines of the lesson provides students with various knowledge, skills, and values. Turan (2019) concluded that the social studies curriculum is associated with different disciplines at the level of learning areas. These results support the results of the present study. Because of these features, it can be said that the course allows different learning approaches and that the principle of balance is implemented. Some objectives in the curriculum can be achieved by using different learning approaches, and in the observations made, it was observed that the teachers included different learning approaches, although not very diverse ones, in accordance with the objectives and content. The teachers stated that the lesson allows for different learning approaches because it has different themes and different contents related to its sub-disciplines and many fields. A balance could not be achieved in terms of the compatibility of the content of the social studies course with the students' developmental levels. Hewitt (2006) also noted that the balance between the developmental stages of learners and the complexity of the curriculum is important. It was determined that some of the objectives in the curriculum remained abstract and that the teachers mostly used smartboards to concretize them. In contrast to this result, Geçitli and Bümen (2020) found that the information technologies curriculum attempted to achieve the principle of balance by considering the students' ages and developmental levels. The balance between being open to innovations and commitment to tradition could not be achieved in the curriculum. When the course objectives are examined, it can be said that there is a balance between innovation and tradition only in the culture and heritage learning area. The teachers stated that adaptation studies were carried out in the latest curriculum, that they are insufficient at the moment, and that the curriculum stands out for its traditionalism. In the social studies curriculum, balance was reflected in many objectives in terms of adaptation to the near and far past and to current developments. The teachers indicated that both past and current developments are included in the course content.
Consequently, the principle of balance, one of the main principles of curriculum design, is satisfied in the social studies curriculum in terms of the use of written, verbal, and visual techniques, the provision of opportunities for in-class and out-of-class learning, the course's compatibility with different disciplines, allowing different learning approaches, and adaptation to the near and far past and to current developments. Nevertheless, the balance is neglected for the other principles.
Suggestions:
In light of the results obtained in the research, the following suggestions are made to ensure the principle of balance in the social studies curriculum:
Suggestions for Curriculum Developers:
1-The social studies curriculum is more subject-centered and was designed to address the needs of society and the logic of the subject. Accordingly, it can be suggested that the curriculum give more prominence to the student and be prepared in accordance with the logic of the student rather than the logic of the subject, balancing students' interests and needs with the needs of society.
2-History, which is one of the sub-branches of social studies, is given more space in the content. It may be suggested to increase the content related to Geography and Citizenship and to establish a balance of content within the sub-branches.
3-Cognitive learning is prominent among the targeted learning outcomes. Increasing the number of affective and psychomotor objectives can be suggested to achieve a balance among these learning domains.
4-The curriculum was prepared to appeal to average and high-level students. Considering lower-level and other different students, it can be suggested to create a balance between different groups and to make the curriculum suitable for individualized education.
5-It may be suggested to establish a balance between the academic side, sports, entertainment, and physical activities in the curriculum, considering the schools' facilities.
Suggestions for Teachers:
In light of the observations and findings made, the following suggestions can be made for teachers, who are the implementers of the curriculum: 1-A balance between student- and teacher-centered education should be established in lessons.
2-A balance should be provided by including not only students' cognitive learning but also affective and psychomotor learning.
3-A balance should be ensured between students with different levels, cultures, and socio-economic characteristics in the courses, with inclusive education in mind.
4-Based on the possibilities of the classroom, school, and environment, a balance should be implemented by appealing to many of the students' senses through academic aspects, sports, entertainment, and physical activity.
5-By using more than one learning method and technique in the lessons, a balance can be established between students' didactic and inquiry-based learning. | 2021-09-01T15:05:14.580Z | 2021-06-30T00:00:00.000 | {
"year": 2021,
"sha1": "a1637666c38c79126858604c47415d4f109e0ace",
"oa_license": null,
"oa_url": "https://doi.org/10.19044/ejesvv8no2a68",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7287a7590c46192bd23a028ef7f2faf8618edb83",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
235573121 | pes2o/s2orc | v3-fos-license | Modeling, Control System Design and Preliminary Experimental Verification of a Hybrid Power Unit Suitable for Multirotor UAVs
A key drawback of multirotor unmanned aerial vehicles (UAVs) with energy sources based solely on electrochemical batteries is the limited on-board energy. Flight autonomy is typically limited to 15-30 min, with a flight duration upper limit of about 90 min currently being achieved by high-performance battery-powered multirotor UAVs. Therefore, propulsion systems that utilize two or more different energy sources (hybrid power systems) may be considered as an alternative in order to increase the flight duration while retaining the key performance benefits of battery energy storage. The research presented in this work considers a multirotor UAV power unit based on an internal combustion engine (ICE) powering an electricity generator (EG) connected to the common direct current (DC) bus in parallel with a lithium-polymer (LiPo) battery, together with the modeling and identification of the individual power unit subsystems and the dedicated control system design. Experimental verification of the proposed hybrid power unit control system has been carried out on a custom-built power unit prototype. The results show that the proposed control system combines the two power sources in a straightforward and effective way, with subsequent analysis showing that a two-fold energy density increase can be achieved with the hybrid energy source, consequently making it possible to achieve higher flight autonomy of a prospective multirotor (hover load around 1000-1400 W) equipped with such a hybrid system.
Introduction
Advances in electronics, materials, and production techniques have allowed for the manufacture of powerful and compact electric motors, high power electrochemical batteries, integrated sensor arrays, and versatile flight controllers suitable for multirotor unmanned aerial vehicle (UAV) applications. However, the key downside of multirotor UAVs is their inherently limited flight endurance and useful range which are directly related to the aircraft take-off mass (aircraft and payload mass combined) and battery charge and energy capacity. A fully electric multirotor powered by batteries, currently being the most widespread multirotor design, is typically characterized by a flight autonomy between 15 and 30 min, with a flight duration of 90 min currently being achieved by high-performance battery-powered multirotor UAVs.
The initial motivation of this research was to alleviate the aforementioned drawbacks of energy sources based purely on electrochemical batteries by utilizing an alternative propulsion system combining an internal combustion engine (ICE) coupled with an electricity generator (EG) and augmented with a lithium-polymer (LiPo) battery, thus forming a hybrid power unit suitable for multirotor UAV use. Since the specific energy density of gasoline fuel (about 12 kWh/kg) is practically two orders of magnitude greater than that currently available with state-of-the-art lithium-polymer (LiPo) batteries (about 0.2-0.5 kWh/kg) [1], it is expected that the use of a hybrid power unit can facilitate considerable improvements in the flight endurance and useful range of such a multirotor UAV.
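A rough back-of-envelope calculation illustrates why such a hybrid unit can come out ahead despite conversion losses and the added engine-generator mass. All efficiency and mass figures below are illustrative assumptions, chosen only to show that an effective specific energy roughly twice that of a LiPo pack, consistent with the two-fold figure reported in the abstract, is plausible:

```python
# All numbers below are illustrative assumptions, not measured values.
E_FUEL = 12.0          # kWh/kg, gasoline specific energy (from the text)
E_LIPO = 0.25          # kWh/kg, mid-range LiPo specific energy (from the text)

eta_ice, eta_gen = 0.18, 0.85     # assumed small-ICE and EG+rectifier efficiencies
m_fuel, m_unit = 1.5, 3.5         # kg of fuel vs. kg of ICE + EG + tank (assumed)

e_hybrid = E_FUEL * eta_ice * eta_gen * m_fuel / (m_fuel + m_unit)
print(f"hybrid: ~{e_hybrid:.2f} kWh/kg vs. LiPo: ~{E_LIPO:.2f} kWh/kg "
      f"(ratio ~{e_hybrid / E_LIPO:.1f}x)")   # ~0.55 kWh/kg, roughly 2.2x
```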
Hybrid propulsion is nowadays typically found in road vehicles and rail transport, as shown by the analyses presented in [2,3]. Since land-based vehicle motion is two-dimensional, with gravity-related constraints being much less emphasized compared to those of aerial vehicles, hybrid propulsion system implementation for multirotor-based UAVs would pose a significantly greater challenge, which represents an additional motivation to investigate the proposed research topic.
Hence, the problem of multirotor UAV flight autonomy is an actively investigated field, and different power system configurations are currently being researched with the aim of satisfying low power consumption, low mass, and high output power density requirements. In the majority of cases, hybrid propulsion system research is conducted for standard fixed-wing and vertical takeoff and landing (VTOL) fixed-wing hybrids [4,5], and for some other types such as dirigible UAVs [6,7]. A detailed analysis of synergetic effects for various classes of UAVs is given in [8] by applying multi-objective optimization, where the results have shown that the hybrid-electric configuration has the potential to make a strong contribution to aircraft performance. Moreover, by investigating the behavior of different power sources within the UAV hybrid-electric propulsion system through simulations, in order to predict the hybrid electric power system behavior using bench tests, the feasibility and efficiency of the onboard UAV power system can be assessed before the final flight test phase [9]. Also, a comparison of five different UAV power-train options has been investigated in [9] using simulations. These alternatives included (i) a free-piston engine with an integrated linear EG, (ii) advanced lithium-ion batteries, (iii) an ICE with an embedded rotary EG, (iv) a parallel hybrid power-train configuration with an ICE, and (v) a proton exchange membrane fuel cell.
A hybrid-electric propulsion system using an ICE has been researched in [10], with reference [11] further analyzing some realistic challenges related to ICE use in hybrid-electric propulsion systems. These included acoustic noise and associated mechanical vibrations, engine cooling issues, and their implications for the operation of a compact power unit comprising a small-scale ICE equipped with a suitably sized EG. In particular, it has been shown that such a small ICE produces a powerful and complex vibration pattern, which cannot be easily related to the engine crankshaft rotation or the linear piston motion.
Reference [12] investigates the potential of hydrogen fuel cell stacks as an alternative power source to ICE in small UAVs. The investigation was based on the commercial "Aeropack" hybrid power supply consisting of a fuel cell stack and a battery pack. The functionality of such a fuel-cell battery hybrid power system has been successfully demonstrated during a flight test of the target prototype UAV.
References [13,14] show that photovoltaic power is mostly suitable for fixed-wing UAVs, primarily as an auxiliary power source.
Additionally, UAV hybrid power source research and development efforts have resulted in several patents [15][16][17], which, unfortunately, do not present detailed information about control system design, which is crucial for the implementation of such hybrid power sources. Some other literature sources, such as reference [18], present the development of a 1 kW hybrid electric power train along with its characterization and performance testing. An interesting approach is given in reference [19], dealing with a dual power design concept, where UAV motion control is performed through commonly used brushless motors driving a set of auxiliary propellers, while the major portion of UAV lifting power is provided by a gasoline engine-powered main propeller system. Taking into account the above issues related to specific energy density, mass, ease of operation and performance, and price and availability, hybrid propulsion of multirotor UAVs based on an ICE coupled to an EG and utilizing a battery energy storage system as an auxiliary power supply should provide many attractive research challenges in the field, while simultaneously opening new research frontiers which might ultimately improve knowledge levels in multiple research sub-areas [20].
Even though internal combustion engines are frequently used for light model aircraft propulsion, their controls are not a frequently discussed topic in the more recent research literature. In order to model the highly nonlinear nature of the internal combustion engine dynamics, the so-called mean-value engine model (MVEM) is typically used as a basis for engine speed controller design, wherein PI and PID controllers can be optimized for different engine speed-torque operating points [21].
For example, in [22] a PI controller with a feed-forward load compensator is considered for engine speed control within a UAV hybrid propulsion system; such a scheme may utilize the available generator-based measurements (current and voltage) to establish a speed-sensorless feedback loop, or, alternatively, the readily available low-precision Hall-sensor position measurements can be used for that purpose [23], with Kalman filtering used in both cases to provide a relatively smooth and precise engine speed estimate.
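A minimal sketch of such a PI speed controller with feed-forward load compensation is given below; the gains, sampling time, and throttle limits are placeholders rather than values from [22], and the speed estimate w_est is assumed to come from an upstream Kalman filter as discussed above.

```python
class EngineSpeedPI:
    """Discrete PI controller with feed-forward load compensation for ICE
    throttle control; a sketch of the scheme in [22], with placeholder gains."""
    def __init__(self, kp, ki, ts, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, w_ref, w_est, u_ff=0.0):
        # w_est may come from a Kalman filter fed by Hall-sensor or
        # generator voltage/current measurements
        e = w_ref - w_est
        u = self.kp * e + self.integral + u_ff        # PI + feed-forward term
        u_sat = min(max(u, self.u_min), self.u_max)   # throttle limits [0, 1]
        if u == u_sat:                                # conditional integration
            self.integral += self.ki * self.ts * e    # (anti-windup)
        return u_sat
```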
In reference [24], three different hybrid-electric power-plant configurations are considered and their dynamic models are derived, with flight dynamics performance testing carried out on a UAV prototype.
A good review of current hybrid-electric propulsion systems (HEPS) for fixed-wing aircraft can be found in [25]. A generic hybrid propulsion system based on a DC-AC inverter plus a PMSM machine has been presented and modeled in reference [26], whereas a BLDC generator plus an active front-end rectifier has been considered in [27] as a viable electric power source alternative for hybrid-propulsion UAVs. In the latter case, the use of PI and PID controllers has shown favorable engine speed and DC bus voltage control system performance.
An alternative to active rectification of the speed-dependent generator voltage has been researched in [28], wherein a parallelized array of low-cost DC/DC power converters and battery energy storage have been used in conjunction with a passive diode rectifier to maintain a fixed voltage at the common DC bus within the UAV.
In order to determine the optimal hybrid power-train configuration and the required hybrid propulsion system components, a parameter matching method has been proposed in [29], wherein the correlation between the rotor-based propulsion power demands and the hybrid power system requirements has been identified, with test results confirming the accuracy of the proposed parameter matching method.
Having the aforementioned in mind, this paper proposes a straightforward approach to modeling, identification, and control system design for a multirotor UAV power unit based on an ICE coupled with an EG and augmented with a LiPo battery energy storage system. To validate the proposed approach, a custom-built hybrid power unit prototype with a dedicated control unit has been constructed and used to conduct both the process model identification and the control system verification.
The paper is organized as follows: Section 2 gives an overview of the proposed hybrid propulsion system topology, along with the corresponding mathematical models of individual subsystems, in particular, the battery energy storage system, the ICE, and the EG set with a full-wave rectifier. Section 3 presents the control system design for the ICE-EG set based on the so-called damping optimum criterion, along with key simulation results. Section 4 presents the results of comprehensive experimental validation of the proposed hybrid propulsion system design. Discussion of key findings presented in the paper and the comparative assessment of hybrid vs. purely electrical propulsion system benefits is presented in Section 5, whereas concluding remarks are presented in Section 6.
Overview
The UAV power unit must balance two opposing requirements: (i) the need for sustained power production (i.e., energy capacity) in order to achieve flight endurance, and (ii) the need for peak load leveling to achieve satisfactory in-flight dynamic performance (in terms of maneuvering capability). Purely electrical propulsion, which is commonly used in multirotor UAVs, employs an electrochemical battery for energy storage and power production [30] and has the advantage of fast load compensation (response time within milliseconds), whereas electrical propulsion using electric motors coupled to propellers offers the distinct advantage of very precise and highly dynamic thrust control. This represents the key motivation for retaining the battery energy storage within the hybrid power unit, together with an electrical power transmission system using a common DC bus for power distribution and electric motors powering the propellers. The hybrid power system considered herein overcomes the key drawback of the purely electrochemical power source (battery only) by using the energy of the ICE and EG set to supply the bulk of the energy to the common DC bus via an appropriate AC-DC rectifier, thus maintaining the DC bus operating voltage level. The battery now represents an auxiliary power source connected in parallel to the common DC bus and is used primarily for peak load shaving during load transients, and for load leveling at very high hybrid power-train loads. In the former case (the peak load shaving role), the battery load response is faster than that of the engine-generator set by several orders of magnitude, so it can quickly take over and sustain peak load power delivery until the ICE-EG set can take over supplying the common DC bus. The main advantage of such a hybrid power unit with an electrical energy storage system (battery or possibly ultracapacitors) is the ability to support the common DC bus voltage under highly dynamic loading conditions, and therefore to fully utilize the fast response capability of the propellers' electric motor drives, which is the key prerequisite for high performance of the overall UAV flight control system, while simultaneously exploiting the rather large energy capacity of the fuel, thus achieving enhanced flight endurance [20][21][22][23][30][31][32].
The analyzed configuration of the hybrid propulsion system (shown in Figure 1) is suitable for any multirotor UAV comprising four or six propeller drives in a conventional configuration. The following assumptions have been made with respect to the hybrid power unit under examination:
• The ICE drives the brushless permanent magnet synchronous (BPMS) machine, used as an EG, which provides the quasi-steady-state load power supply (i.e., covers the power demands needed for hovering and light maneuvering), while the battery unit is used for peak load shaving;
• An energy recovery system is not used within the hybrid power unit, thus simplifying the power-train topology, meaning that the battery energy storage is only charged prior to the flying mission;
• The ICE angular speed is controlled by a throttle servo actuator based on a small DC servomotor, which positions the throttle valve according to the output of the engine-based DC bus voltage controller, requiring a suitable DC bus voltage reference target and DC bus voltage measurement-based feedback.
Figure 1 shows the hybrid power unit topology with the ICE directly connected to the EG shaft and supplying the common DC bus through a three-phase full-wave diode rectifier, thus acting as the DC bus power supply that is controlled by the engine throttle command. A battery with suitable charge capacity and a terminal voltage range matching the target DC bus voltage range is connected to the common DC bus through a blocking diode, used to prevent uncontrolled battery charging from the DC bus and the related battery current and voltage overloads.
When the DC bus voltage is lowered below the battery terminal voltage due to the engine-generator set slow dynamics in covering the sudden DC bus load change, the blocking diode can become forward biased, thus allowing the battery to be discharged and to supply the DC bus, with typical discharge rates of 50 Amperes becoming available within a few milliseconds. In order to facilitate the engine-generator set taking over after the initial DC bus voltage drop, the DC bus voltage control system is based on the engine-generator set throttle command equipped with the modified proportional-integral-derivative (PID) voltage controller, thus being able to match the ICE mechanical load imposed by the electrical torque demand (load) from the EG.
Battery Model
The modeling approach used herein is chosen based on the model precision requirements and other factors such as simulation speed. The approach is based on the quasi-static Thevenin model [33,34] (see Figure 2a), used herein to characterize the key aspects of battery operation, i.e., the quasi-steady-state terminal voltage and the overall inner resistance corresponding to heat power losses.
In the equivalent circuit shown in Figure 2a, the battery terminal voltage equals the open-circuit voltage reduced by the voltage drop across the internal resistance. Battery state-of-charge is defined in the following manner [34] (see Figure 2b):

SoC = SoC_0 − ∆Q_b/Q_max,

where ∆Q_b = −∫ i_b dt is the discharged battery charge and Q_max is the battery charge capacity (which may also be dependent on the discharge current rate and the battery temperature). It should be noted that the above battery model may also include the effects of battery electrolyte polarization, which are manifested as additional first-order lag dynamics in the battery voltage response under loading conditions [33,34]. Normally, these effects become visible only after a certain amount of time, with polarization voltage time constants typically in the range of tens of seconds, and with the voltage transient due to polarization effects typically lasting up to several minutes [34]. Since in this work the battery is primarily used as a power buffer that takes on short-duration load peaks, these polarization voltage transients are far less emphasized than the internal-resistance-related voltage drops which occur during the load peak shaving events. Thus, the presented quasi-static Thevenin model should capture the dominant battery voltage effects for the considered peak load shaving operating regimes.
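For illustration, the quasi-static Thevenin model and the coulomb-counting SoC definition above can be captured in a few lines of code. The sketch below is a minimal implementation under our own sign convention (positive current charges the battery) and with hypothetical lookup functions for the OCV and internal resistance characteristics; none of the identifiers come from the paper.

```python
import numpy as np

def soc_coulomb_counting(i_b, dt, soc0, q_max_ah):
    """Track state of charge by integrating the battery current.

    i_b: battery current samples [A], positive when charging
    dt: sample time [s]; q_max_ah: charge capacity [Ah]
    """
    dq_b = -np.cumsum(i_b) * dt / 3600.0   # discharged charge [Ah]
    return soc0 - dq_b / q_max_ah

def terminal_voltage(ocv, r_int, soc, i_b):
    """Quasi-static Thevenin model: u_b = OCV(SoC) + R_int(SoC) * i_b."""
    return ocv(soc) + r_int(soc) * i_b     # i_b < 0 on discharge -> voltage drop
```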
The experimental characterization procedure consists of two phases: (i) the initial intermittent discharge test intended for OCV vs. SOC curve identification, and (ii) the continuous discharge test used for the recording of the battery equivalent circuit series resistance vs. SOC characteristic over a wide range of battery state of charge values. Measurements were conducted on a developed test bench, consisting of appropriate sensors, data acquisition, and load with cooling propellers (see Figure 3). A network of parallel dissipation resistors (with 4 Ω in a single parallel branch) is used as a battery load, thus obtaining the required battery discharge current profiles, while honoring the safe range of battery operating voltage. OCV is defined as the battery terminal voltage at idle state, i.e., the battery is neither charged, nor discharged (so-called open-circuit condition). After the battery has been charged or discharged, the battery terminal voltage eventually settles to a steady-state value after a certain amount of time has passed in the open-circuit condition (typically one hour or more).
Subsequently, after chemical stabilization of the battery has been achieved (enough time has passed after charging or discharging), the battery terminal voltage is equal to the OCV, which is directly related to the amount of charge currently stored in the battery (i.e., it also corresponds to battery SOC) [34].
To relate the OCV to the battery SoC, an intermittent discharge test was conducted, consisting of the following steps:
• Initial full charging of the battery cells to 4.15 V per cell, followed by voltage stabilization (settling) to achieve electrochemical and temperature equilibrium;
• Partial battery discharging for a short period of time (i.e., 10 min);
• Allowing the battery to rest in the open-circuit condition for 3 h, in order to reach the terminal voltage steady state;
• Repeating the intermittent discharging steps until the OCV per cell is approximately 3.4-3.5 V (which corresponds to a fully discharged battery state).
By charging the battery at different current rates (0.25, 0.5 and 1 C), the maximum battery charge, related to the change from 3.5 V per cell to 4.15 V per cell (all after stabilization), is estimated to be 9800 mAh, and this defines the 100% SoC value. The intermittent discharging steps were repeated until the OCV per cell reached about 3.4-3.5 V (which corresponds to the fully discharged state of the cell, or 0% SoC per cell).
The internal resistance characterization test relies on a gradual (very slow) change in battery current while discharging under constant load (Figure 4), i.e., there are no sudden changes in the current magnitude. Thus, changes in the battery polarization voltage are primarily determined by the polarization resistance (the polarization capacitance does not affect the polarization voltage variations), which justifies the aforementioned choice of the quasi-static Thevenin battery model. The recorded discharge test exhibits a gradual battery voltage decrease, as shown in Figure 4a, and a relatively mild trend of battery internal resistance increase with the battery state-of-discharge (1−SoC), as shown in Figure 4c. The latter result is obtained by combining the battery voltage and current responses in Figure 4b with the battery OCV vs. SoC characteristic over the duration of the discharging test in Figure 4b.
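The internal resistance curve of Figure 4c can be reproduced from such slow-discharge records by combining them with the previously identified OCV vs. SoC characteristic. The following sketch assumes the measurements are available as NumPy arrays; the variable names and the interpolated OCV lookup are illustrative, not the authors' implementation.

```python
import numpy as np

def internal_resistance(u_b, i_dis, soc, soc_grid, ocv_grid):
    """Estimate R_int(SoC) from a slow constant-load discharge record.

    u_b: measured terminal voltage [V]; i_dis: discharge current [A] (> 0)
    soc: state of charge from coulomb counting
    (soc_grid, ocv_grid): identified OCV curve, soc_grid must be increasing
    """
    ocv = np.interp(soc, soc_grid, ocv_grid)
    return (ocv - u_b) / i_dis   # resistive voltage drop divided by current
```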
ICE Model
A nonlinear mean-value engine model (MVEM), described in detail in [35,36], is used as a basis for modeling a two-stroke engine for simulation studies and control system design. It covers the static characteristics and the dominant (low-frequency) dynamic phenomena within the engine; it does not include the high-frequency (fast) dynamics of cyclic/reciprocating piston operation, but it does include the associated torque development delay, i.e., the engine torque being unable to respond immediately to an increase in the manifold pressure.
The considered mean-value engine model possesses only two state variables, the intake manifold pressure p and the engine angular speed ω, whereas all other effects are modeled by means of three-dimensional static maps. Figure 5a shows the block diagram of the mean-value engine model according to reference [35]. In the case of a small-volume intake manifold, typically valid for a small, low-power engine, the intake manifold air filling dynamics are very fast and may thus be neglected. Hence, the MVEM can be simplified to a first-order model, which should still be valid for engine control applications if the bandwidth of the control system is relatively low [36]. By neglecting the intake manifold dynamics, a simplified model (with throttle dynamics) is obtained, as shown in Figure 5b.
The dynamics of the thus-simplified first-order model comprise just one state variable, i.e., the engine speed:

J dω/dt = τ_M(θ, ω) − τ_L,

where τ_M = f(θ, ω) is the net torque after losses, described by a static map, J is the total inertia of the rotating parts, and τ_L is the load torque. Such a map can be obtained from test bench measurements, i.e., by gradually increasing and subsequently decreasing the throttle angle and recording the quasi-steady-state values of the net torque τ_M corresponding to the imposed load (e.g., from the generator coupled to the engine shaft). Therefore, for each throttle angle, there is an engine speed with maximum torque output [36]. The simplified model represented by the block diagram in Figure 5b includes several subsystems that correspond to individual engine parts, i.e., the throttle servo-valve, the rotational dynamics, and the aforementioned torque development dead time (delay) T_d. For the purpose of control system design, the above nonlinear first-order model is linearized in the vicinity of the engine operating point corresponding to the steady-state values of throttle angle and speed for the case of maximum engine torque output.
The torque development dead time T_d can be approximated by the first two terms of the Taylor series expansion of the exponential term [35]:

e^(−T_d s) ≈ 1/(1 + T_d s),

provided that the closed-loop control system dynamics are much slower compared to the torque response. Namely, under such conditions, the engine torque response is expected to be gradual, thus justifying the above simple approximation of the torque development dead time. This dead time represents the mean time between mixture ignitions within the cylinders. For a two-stroke engine, combustion occurs at every full rotation of the crankshaft, so the dead time can be approximated as follows [20]:

T_d = 60/n,

where n is the crankshaft speed in revolutions per minute. Finally, the throttle actuator dynamics are approximated by a first-order lag system, as suggested in [20,21]. The final linearized process model used in the subsequent control system design is shown in Figure 5c.
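As a concrete illustration of the simplified model structure, the sketch below builds the linearized transfer function consisting of the throttle first-order lag, the rotational dynamics and the dead-time lag approximation, using SciPy. The parameter values are placeholders rather than the identified values from this work.

```python
import numpy as np
from scipy import signal

def torque_dead_time(n_rpm):
    """Mean time between ignitions of a two-stroke engine: one per revolution."""
    return 60.0 / n_rpm                         # [s]

T_theta = 0.05          # throttle servo lag [s] (placeholder)
K_mt, T_m = 2.0, 0.3    # linearized torque gain and mechanical lag (placeholders)
T_d = torque_dead_time(9500.0)                  # ~6.3 ms at the chosen operating point

# e^{-T_d s} ~ 1/(1 + T_d s): the dead time is treated as an extra first-order lag
den = np.polymul([T_theta, 1.0], np.polymul([T_d, 1.0], [T_m, 1.0]))
G_engine = signal.TransferFunction([K_mt], den)
```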
The ICE-EG set test bench is shown in Figure 6, while the technical parameters provided by the manufacturer are listed in Table 1.
A single torque-throttle-rpm map is obtained from the brake torque/power curve provided by the manufacturer (Figure 7a) and a suitable sizing methodology, as shown in [37]. The moment of inertia is estimated by using CAD models. The nonlinear torque-rpm-throttle map has been linearized (see Figure 7b) in the vicinity of the anticipated engine operating point, characterized by a throttle opening of approximately 70% of full scale and an engine speed of 9500 rpm, by using the related numerical tools within the MATLAB/Simulink™ software. It is generally recognized that the internal combustion engine within a hybrid powertrain should predominantly be operated in the region of its highest fuel efficiency, which is typically characterized by a narrow range of engine speeds and torques. When a hybrid drive is constrained to the lowest overall mass possible, it is of interest to draw as much current from the generator as possible, because this effectively determines the DC bus charging rate. This is achieved at the point of maximum torque developed by the ICE, since the engine torque is reflected as current in the generator (maximum of the blue line on the power/torque characteristics, Figure 7a), subject to the requirement that the DC bus voltage should be closely matched to the engine speed at the particular maximum-torque operating point.
In this work, the particular engine has shown the best torque performance around the engine speed of 9500 rpm [20], so the model derivation and subsequent testing have been carried out for this particular operating point.
Electrical Generator and Rectifier Model
The brushless permanent magnet synchronous (BPMS) generator consists of the rotor shaft with attached rotor permanent magnets, stator case with stator windings, optional Hall sensors (logic level Hall probes) for rotor angle detection, and stator winding phase connections [38].
According to the design that defines the shape of the electromotive force, these machines can be distinguished as trapezoidal back electromotive force (BEMF) machines, referred to as brushless direct current (BLDC) machines, and sinusoidal BEMF machines, referred to as permanent magnet synchronous machines (PMSM), as elaborated in [39]. The main differences between these two designs are related to the winding spatial distribution within the stator slots, the magnetic design (i.e., airgap and rotor tooth/slot geometry), the physical shape of the rotor magnets, and their magnetization profile [39,40]. Figure 8a shows the simplified electrical circuit of the three-phase BPMS EG with rotor permanent magnets and its connection to the three-phase full-wave diode rectifier, whereas Figure 8b shows the trapezoidal back-EMF waveforms of the individual EG phases. Each stator phase winding is characterized by its respective internal resistance and inductance parameters R_ph and L_ph, and the induced EMF per phase (Figure 8b), which is dependent on the rotor speed ω_g and the rotor magnetic flux spatial distribution ϕ_m [41,42]:

e_l = K_e ω_g ϕ_m(pα_g − 2π(l − 1)/3), (6)

where l = {1, 2, 3} is the phase number, p is the number of EG pole pairs, and α_g = ∫ω_g dt is the BPMS machine rotor position (mechanical angle). On the other hand, the total electromagnetic torque of the BPMS machine due to the individual phase currents i_l can be calculated as:

τ_g = (1/ω_g) Σ_{l=1,2,3} e_l i_l. (7)

According to Figure 8a,b, during each commutation sequence one of the three phases of the trapezoidal back-EMF BPMS EG is non-conducting, whereas the two remaining phases are connected to the DC bus through the rectifier diodes when the conditions for their conduction are established (i.e., their total BEMF is higher than the DC bus capacitor voltage augmented by the double diode forward-bias voltage). Therefore, for the case of phases 1 and 2 conducting with respect to the DC bus, i_1 = −i_2 = i_eq holds (based on the notation in Figure 8a).
In the above case, the BPMS machine may be regarded as a DC machine from the standpoint of the DC bus, with the equivalent inductance and resistance given as follows:

L_eq = 2L_ph, R_eq = 2R_ph + 2r_d, (8)

where the double value of the dynamic resistance r_d is added to account for the semiconductor switching element (i.e., diode) conduction losses. Similarly, the equivalent DC model electromotive force and torque gains are defined as double phase values (K_eq = 2K_e) for an approximately square rotor flux spatial distribution, resulting in the following EMF and EG torque equations [41]:

e_eq = K_eq ω_g = 2K_e ω_g, τ_g = K_eq i_eq = 2K_e i_eq, (9)

where i_eq is the previously defined instantaneous BLDC EG line current.
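A brief numerical sketch of the DC-equivalent generator model follows; it simply evaluates the doubled per-phase quantities described above. The parameter values are examples, not the identified values from Table 2.

```python
def bldc_dc_equivalent(R_ph, L_ph, r_d, K_e):
    """Two phases conduct at a time, so the DC-side model doubles the
    per-phase resistance, inductance and EMF/torque gain (plus two diodes)."""
    R_eq = 2.0 * R_ph + 2.0 * r_d   # winding resistance + diode dynamic resistance
    L_eq = 2.0 * L_ph
    K_eq = 2.0 * K_e                # e_eq = K_eq * w_g ; tau_g = K_eq * i_eq
    return R_eq, L_eq, K_eq

# Example values only (placeholders):
R_eq, L_eq, K_eq = bldc_dc_equivalent(R_ph=0.05, L_ph=30e-6, r_d=0.01, K_e=0.056)
```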
A lightweight, high-power outrunner brushless electrical motor (type 6374 from Maytech) is used as the electricity generator within the hybrid power unit setup (see Figure 9a), featuring embedded current/rpm sensors and characterized by a 170 KV EMF constant (170 rpm per 1 volt of induced electromotive force). The chosen generator specifications are listed in Table 2. The aforementioned motor constant was confirmed by measurements based on ramping the voltage throttle command from zero to maximum and recording the rotational speed (rpm) and phase voltage profiles (Figure 9b). Measurements were conducted on the ICE-EG set test bed (Figure 6), with the EG mechanically coupled to the ICE as the prime mover, with speed measurement based on Hall-effect sensors and armature voltage measurement based on a suitable resistor divider network and the analog-to-digital converter embedded in the microcontroller unit used for data acquisition. The electrical machine winding resistance and inductance were measured using a precise laboratory LCR meter. Phase-to-phase quantities were measured for each possible phase-to-phase combination, and the phase impedance components (resistance R_ph and inductance L_ph) were calculated as the average values obtained from these measurements.
DC bus Voltage Feedback Control
This section presents the control system design featuring an ICE-EG set voltage control (DC-bus voltage) system utilizing a proportional-integral-derivative (PID) feedback controller, with the controller tuning relying on the damping optimum criterion.
Damping Optimum Criterion
The control system design herein is based on the so-called damping optimum criterion, which belongs to the category of practical optima for linear dynamic systems. More precisely, this is a pole-placement-like analytical method for the design of linear continuous-time closed-loop systems, which results in analytical relationships for precise tuning of the closed-loop damping (see, e.g., [43,44]). The tuning procedure is based on the following closed-loop characteristic polynomial:

A(s) = D_2^(n−1) D_3^(n−2) · · · D_n T_e^n s^n + · · · + D_2 T_e^2 s^2 + T_e s + 1, (10)

where T_e is the closed-loop system equivalent time constant, and D_2, D_3, . . . , D_n are the so-called characteristic ratios. In the so-called "optimal" case D_i = 0.5, i = 2, . . . , n, the closed-loop system of any order n has a quasi-aperiodic step response characterized by an overshoot of approximately 6% (resembling a second-order system with damping ratio ζ = 0.707) and an approximate rise time of (1.8 . . . 2.1) T_e. For larger T_e values, the dominant closed-loop modes are characterized by a slower response and generally improved control system robustness and noise suppression ability.
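The damping optimum coefficients can be generated programmatically, which is convenient for checking the pole pattern for a given T_e. The sketch below encodes our reading of the coefficient pattern implied by Equation 10, namely a_k = T_e^k ∏_{i=2..k} D_i^(k−i+1) with a_0 = 1 and a_1 = T_e.

```python
import numpy as np

def damping_optimum_poly(T_e, D):
    """Coefficients of A(s), highest power first (numpy convention).

    D is a dict {2: D_2, 3: D_3, ...}; a_0 = 1, a_1 = T_e and
    a_k = T_e**k * prod_{i=2..k} D_i**(k - i + 1).
    """
    n = max(D)
    a = [1.0, T_e]
    for k in range(2, n + 1):
        coeff = T_e ** k
        for i in range(2, k + 1):
            coeff *= D[i] ** (k - i + 1)
        a.append(coeff)
    return a[::-1]

# "Optimal" case: all characteristic ratios set to 0.5
print(np.roots(damping_optimum_poly(0.1, {2: 0.5, 3: 0.5, 4: 0.5})))
```

For the optimal choice of characteristic ratios, the printed roots show the expected well-damped, quasi-aperiodic pole pattern.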
The aforementioned equivalent closed-loop system time constant represents the dominant dynamics of the closed-loop system tuned for a well-damped response. Hence, the closed-loop system can be approximated by the equivalent first-order lag term with the time constant T_e, which can simplify the design of the superimposed (upper-level) controller:

G_e(s) ≈ 1/(T_e s + 1).
DC Bus Voltage Feedback Control through ICE Throttle Command
DC bus voltage control is facilitated by means of indirect ICE engine speed control, wherein the DC bus voltage feedback is used by a dedicated PID controller commanding the engine throttle drive (i.e., providing the throttle unit angle reference). Figure 10 shows the proposed model layout, including the linearized engine model from Figure 5c, coupled through a fixed transmission ratio i_g to the electricity generator characterized by its equivalent DC model, which is in turn coupled to the common DC bus characterized by the capacitance parameter C_dc. For the sake of simplicity of the control system design, the throttle unit delay and the torque production dynamics can be lumped into a single time constant T_ΣICE, which may also incorporate the sampling delay T_AD in the case of a discrete-time (digital) controller [21,44], as shown below:

T_ΣICE = T_M + T_d + T_AD. (13)

Figure 10. DC bus voltage control system process dynamics.
This simplification is used to form the following input-output transfer function model of the engine-generator set connected to the common DC bus:

G_p(s) = K_MT K_PV / [s (1 + T_ΣICE s)(1 + T_PV s + T_PV T_EQ s^2)], (14)

where K_MT, K_PV, T_PV and T_EQ are the lumped transfer function model parameters. The above process model is of the fourth order, which would result in a fifth-order closed-loop system when using a proportional-integral-derivative (PID) controller. However, if the process model could be simplified to a third-order system, it would be possible to derive explicit analytical expressions for the PID controller parameters, as shown in reference [44]. More precisely, if the second-order term

T_PV T_EQ s^2 + T_PV s + 1 = Ω^(−2) s^2 + 2ζΩ^(−1) s + 1 (16)

is characterized by T_PV >> T_EQ, then it can be approximated by a first-order lag dynamic term, valid if the damping ratio ζ is greater than 0.707:

T_PV T_EQ s^2 + T_PV s + 1 ≈ 1 + T_PV s. (17)

Figure 11 shows the block diagram representation of the DC bus voltage feedback control system with a PID feedback controller implemented in the so-called I + ID modified form (see [44]). By equating the coefficients of the characteristic polynomial of the closed-loop system with the corresponding coefficients of the "prototype" damping optimum characteristic polynomial of the fourth order, the explicit analytical expressions for the PID controller parameters are obtained, as shown in [44]. It is assumed that the torque gain and the time constant of the linearized internal combustion engine model are subject to additive errors ∆K_mt and ∆T_m with respect to their nominal values K_mt and T_m used in the engine speed control system design, so that the actual gain and time constant are K*_mt = K_mt + ∆K_mt and T*_m = T_m + ∆T_m. The total inertia J, the throttle lag T_θ, the DC bus capacitance C_dc and the generator resistance and inductance parameters R_eq and L_eq can be regarded as constant during engine operation, and the engine torque development lag (dead time) can be calculated online based on the speed measurement (estimation), as indicated above.
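The explicit controller expressions from [44] are not reproduced above, but the coefficient-matching step itself is mechanical and can be sketched symbolically. The snippet below forms the closed-loop characteristic polynomial for a simplified third-order process with lumped lags T_a and T_b (our own generic symbols, not the paper's parameters) and a PID controller, and equates it with the fourth-order damping optimum prototype; since moving the P and D actions to the feedback path leaves the characteristic polynomial unchanged, the same matching applies to the modified I + ID form.

```python
import sympy as sp

s, KP, KI, KD, Te = sp.symbols('s K_P K_I K_D T_e', positive=True)
K, Ta, Tb = sp.symbols('K T_a T_b', positive=True)   # lumped process gain and lags
D2 = D3 = D4 = sp.Rational(1, 2)                      # damping optimum: D_i = 0.5

# Process K/(s(1+Ta s)(1+Tb s)) with PID controller (KI/s + KP + KD s):
char = sp.expand(s**2 * (1 + Ta*s) * (1 + Tb*s) + K * (KD*s**2 + KP*s + KI))
char = sp.expand(char / (K * KI))       # normalise the constant term to 1

proto = (D2**3 * D3**2 * D4 * Te**4 * s**4 + D2**2 * D3 * Te**3 * s**3
         + D2 * Te**2 * s**2 + Te * s + 1)

eqs = [sp.Eq(char.coeff(s, k), proto.coeff(s, k)) for k in range(1, 5)]
print(sp.solve(eqs, [KP, KI, KD, Te], dict=True))
```

The solver returns closed-form expressions for K_P, K_I, K_D and the achievable T_e in terms of K, T_a and T_b, which is the same procedure that yields the analytical tuning rules referenced above.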
The closed-loop poles of the control system subject to torque gain variations according to the above definition are shown in Figure 12 for two characteristic scenarios, corresponding to relative errors of ±50% in the gain K_mt with respect to the nominal case (also shown in Figure 12), which is characterized by the nominal tuning of the DC bus PID controller with the nominal K_mt gain value. The results in Figure 12 show that the torque gain error shifts the dominant conjugate-complex pole pair towards the imaginary axis of the s-plane, but the closed-loop pole damping remains favorable, i.e., still ζ ≈ 0.5 or better, which indicates that the proposed PID controller tuning is characterized by favorable robustness to torque gain modeling errors. The effectiveness of such a PID controller tuning approach is illustrated by the simulation results shown in Figure 13, for the case of the simplified MVEM model of the ICE used within the DC bus closed-loop control system model in Figure 11. The simulation is carried out for a stepwise DC bus load change from 0 A to 10 A, with a steady-state voltage reference of 48 V used as the PID controller setpoint. The responses show that the DC bus voltage control system featuring the PID feedback controller is characterized by a rather fast response: the speed drop after the disturbance is about 45 rpm and the overall engine speed recovery lasts about 0.4 s. Similarly, the DC bus voltage response is characterized by a short transient, with a maximum voltage drop of about 0.6 V and an overall load transient lasting about 0.25 s. Such favorable closed-loop system performance is mainly due to the fast action of the derivative term within the PID feedback controller.
Experimental Verification of DC Bus Voltage Feedback Control System
This section presents the design and development of the proposed hybrid propulsion system experimental setup (schematically shown in Figure 14a) which was subsequently tested under realistic DC bus electrical load conditions to verify its functionality and validate the simulation model.
Experimental Hybrid Power Unit Realization
The experimental setup (a photograph is shown in Figure 14b) consists of the frame that holds all components together, the ICE-EG set connected to a common axle using a claw coupling and equipped with mechanical dampeners and springs, a LiPo battery, a graduated cylinder tank with a capacity of 0.5 L, and various electronic circuitry including two microcontroller boards. Separate microcontrollers are used for the ICE throttle control and the data acquisition/telemetry tasks in order to avoid possible data acquisition and code execution bottlenecks. Both microcontrollers are programmed and monitored through a host portable computer running the MATLAB/Simulink™ software environment. Table 3 lists the key parameters of the individual components used in the setup, along with brief descriptions of these components. The microcontroller running the PID control algorithm is equipped with a DC bus voltage sensor connected to the appropriate analog-to-digital converter input, thus providing the DC voltage measurement needed to establish the feedback loop for the PID controller (as elaborated in the previous section). Moreover, this microcontroller also provides the actuator reference for the ICE throttle valve actuator (in the form of a suitable PWM signal). Thus, when a DC bus voltage excursion due to an electrical load change is detected through the change of the feedback signal, the PID controller adjusts the ICE throttle PWM reference, consequently correcting the engine rotational speed and torque, i.e., the overall power output of the engine-generator set feeding the common DC bus.
Any load excess that cannot be compensated for by the DC bus voltage PID controller is taken over by the battery (see Figure 1), whose discharging is triggered by a DC bus voltage drop sufficient to forward-bias the blocking diode. In this way, straightforward passive energy management is implemented, which is desirable from the standpoint of overall control system robustness and redundancy.
The second microcontroller, used solely for data acquisition, is equipped with current sensors for the EG current, battery current and DC bus total current, an rpm Hall sensor, and a DC bus voltage sensor. Data acquisition is executed within the MATLAB/Simulink™ software environment, using the so-called simulation model "external mode" execution, thus facilitating real-time telemetry.
In order to have a safe and reliable connection between the ICE and EG, a claw coupling of sufficient torque rating is used. It is secured using an appropriate adapter with a conical hole on the engine side to connect it to the engine shaft (Figure 15a). On the EG side, the coupling must be fixed using a screw characterized by sufficient strength to withstand the load torque variations due to the stroke-based operation of the ICE (see locking pins in Figure 15b).
Engine Fuel Consumption
The fuel consumption was measured in the engine idle regime first, followed by five characteristic operating points corresponding to an EG power output range between 300 and 1700 W. The EG unit power was subsequently dissipated by the power resistor network and simultaneously measured by means of a suitable DC wattmeter (see Figure 17a).
Each test was conducted under steady-state load conditions, by keeping the engine in the particular operating regime for over 5 min. For each steady-state load case, the initial and final volumes of fuel were measured using the graduated cylinder tank (as shown in Figure 17a). Each test was repeated five times to obtain a reliable fuel consumption estimate. The final results of the averaged fuel consumption for each operating regime are shown in Figure 17b. The presented results indicate that the fuel consumption characteristic is practically linear with respect to the EG unit electrical load.
Hybrid Power Unit Measurements
In order to test the functionality of the proposed hybrid power system concept under realistic load conditions, the system was tested in several operating regimes corresponding to the no-load (idling), low load, medium load, and high load operating modes. In all experiments, the voltage reference is set to 50 V. The DC bus voltage PID controller was implemented in the C programming language, complying with the PID algorithm structure presented in Section 3.
A stepwise load is chosen for the testing of the hybrid power system transient and steady-state performance for the following reasons: (i) it provides the most abrupt load profile, usable for stringent testing of the control system transient performance, including testing for possible saturation effects and stability issues (closed-loop damping issues), and (ii) it can be easily related to a sudden vertical ascending maneuver, because in that case all propeller drives are suddenly loaded with an almost equal constant (stepwise) load which is maintained until the desired flight level is reached.
Each test was repeated five times and consisted of a sequence of DC bus loads emulated by the equivalent resistance of the power dissipation resistor network.
During tests, ICE is initially held in idle conditions for approximately 5 min (engine warm-up period), followed by the DC bus load being stepped up and down, with each load step lasting several seconds to record the corresponding load-on and load-off DC bus system voltage and current transients. Offsets in current measurements are due to non-ideal sensor characteristics, notably emphasized while measuring low (near-zero) currents, which would be removed by periodic recalibration of current sensors in actual applications. Since the tests conducted herein were rather short, these offsets were simply subtracted offline after the measurements were carried out.
Low load responses are presented in Figure 18a,b. Initially, during the engine run-up from idle speed to the chosen operating point around 10,000 rpm, the battery provides the DC bus load (139-143 s). As the engine speed reaches around 10,000 rpm, the EG takes over the load, and the battery current drops to zero. The DC bus voltage remains stable near the target value of 50 V throughout the experiment, without significant voltage drop excursions. Results for the medium-high and high load regimes are shown in Figure 18c,d. The engine ramps up to approximately 13,500-14,000 rpm. As was the case for the low load, the battery initially supplies the DC bus load until the EG takes over, with the average generator load being around 30-35 A, corresponding to 1500-1750 W of power consumption at the load dissipation resistor network. Again, the DC bus voltage is maintained near the target value of 50 V without significant voltage drop excursions.
Results for the case of peak load are shown in Figure 18e,f. Due to limited ICE-EG set power output, the battery needs to supply the additional current load (approx. 16 A) to the DC bus when the EG current limit of 30 A is reached. In this case, the overall hybrid power system is characterized by maximum power production, amounting to 2400-2500 W of total power output. The DC bus voltage again remains very stable near the reference voltage, without significant voltage excursions under abrupt load current changes.
Discussion
Based on the obtained results, it can be concluded that a functional hybrid power system has been developed, comprising an ICE-EG set and a LiPo battery, with the output DC voltage being fully supported by the power output of the ICE-EG set up to DC bus electrical loads of 1700 W. For larger load magnitudes, the battery pack is capable of supplying the additional peak load, so a total of 2200 W (up to 2.5 kW in the best cases) can be obtained from the proposed hybrid power system. The EG is capable of continuously delivering up to 1200 W without significant heating; increased heating of the EG (up to 85 °C) was noticed when the power demand exceeded 1500 W, primarily because the EG is fully enclosed, which may prevent adequate cooling at higher loads.
The hybrid power supply overall efficiency can be estimated in the following manner:

η_eff = P_el/P_mech ≈ 0.75, (22)

where P_mech is the mechanical power at the ICE output, and P_el is the electrical power transferred from the generator to the load (i.e., the power at the dissipation resistor grid). The engine can produce 1.8 Nm of torque at 10,500 rpm, which corresponds to a mechanical power of about 1980 W. Under these conditions, the electrical power dissipated at the load is 1500 W, so the overall efficiency of the hybrid power system is estimated at the aforementioned value of 0.75. The hybrid drive vs. conventional drive efficiency comparison can be carried out based on the energy obtained over a particular reference time interval. In order to equate the energy capacity of the battery and the hybrid drive energy delivery capacity in a straightforward manner, the equivalent hybrid drive energy production in watt-hours was calculated based on the recorded data and compared with the energy capacity values of typical batteries for UAV applications. In particular, the internal combustion engine fuel consumption is about 30 mL/min under sustained power production of 1500 W at the generator (corresponding to medium to high generator loads), thus resulting in an hourly fuel consumption of about 1.8 L of fuel. This corresponds to 1500 Wh of available energy at the generator, with the overall mass of the hybrid power unit (engine, generator, rectifier, fuel) amounting to 6 kg.
On the other hand, the UAV battery pack under examination is characterized by 400 Wh of energy capacity (see the results of the battery identification section), with the total mass of the battery and the supporting equipment of about 3 kg. Thus, in order to match the energy output of the hybrid power unit, a total of 4 battery packs are needed, resulting in an overall battery system mass of 12 kg. It is, therefore, easily concluded that the ICE-EG hybrid power unit has a gravimetric energy density of 300 Wh/kg, while the energy-equivalent battery pack has an energy density of about 140 Wh/kg. This clearly shows that the energy (i.e., flight) autonomy of a UAV equipped with the hybrid power unit is significantly improved in comparison to the battery-only "benchmark" case (see Figure 19).
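The efficiency figure and the battery-equivalence comparison above can be verified with a few lines of arithmetic, reusing only the numbers quoted in the text:

```python
import math

P_mech = 1.8 * 10500 * 2 * math.pi / 60   # 1.8 Nm at 10,500 rpm -> ~1979 W
eta = 1500.0 / P_mech                      # ~0.76, i.e. the quoted value of ~0.75

packs = math.ceil(1500.0 / 400.0)          # battery packs needed to match 1500 Wh
battery_mass = packs * 3.0                 # 4 packs x 3 kg = 12 kg vs. 6 kg hybrid
print(round(eta, 2), packs, battery_mass)
```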
It is important to point out that, when designing the experimental setup, the authors mounted the ICE-generator set on rubber dampeners (so-called rubber bobbins or bushings). An estimate of the vibration magnitude was obtained by using the vibration tests implemented in the UAV multirotor "Pixhawk 2" controller unit and the "Mission Planner" software. It turns out that the vibration modes that occur during operation of the ICE-based hybrid propulsion are outside the frequency range that would affect the sensors (inertial unit) and thus create navigation and stability problems for the aircraft in general. This fact inspires confidence that it should not be too complicated to decouple the hybrid drive from the frame of the aircraft and thereby reduce the impact of the vibrations generated by the ICE.
Figure 19. Energy density analysis of hybrid and conventional electric power unit.
Conclusions
Based on the results of the computer-based simulations and the experiments conducted using the hybrid power unit test bench, and on the comparative assessment of the conventional battery power unit against the proposed hybrid power unit, several clear benefits of the hybrid power supply are identified:
• Hybridization of the conventional ICE-EG set results in a stable power source characterized by a gravimetric energy density about two times higher than that of a purely battery-based power supply;
• For the aforementioned increase in gravimetric energy density using the hybridized ICE + EG power unit, the overall mass of the hybrid power system is about two times smaller when compared to the comparable battery-based system.
According to the obtained results, which indicate the maximum stable power obtainable from the hybrid propulsion, the plan for future work is to build a multirotor aircraft (presumably a quadcopter) with a takeoff mass of 10-12 kg using highly efficient 22-24 inch agricultural-UAV propellers. The typical hovering power requirement of approximately 1000-1200 W would be provided by the hybrid drive, with an additional power margin of 300-500 W, wherein any power demand above 1500 W would be covered by the battery.
In the future, it would be interesting to explore hybrid power supplies featuring energy recovery, where the battery would be recharged whenever the power generated by the hybrid drive allows it, by means of simultaneous DC bus voltage and battery current control; this may require additional power electronics in the form of a bidirectional DC/DC power converter.
Of particular interest would be the application of an active rectifier, where a stable DC bus voltage could be generated, independent of the ICE speed, which motivates further research of fuel consumption-optimal ICE control.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "5870ce27d64be3c00ce90fef08b7331031f53480",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/14/9/2669/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "677652ede9f9628efc38d13353dc753204c815fc",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
17043754 | pes2o/s2orc | v3-fos-license | Enhanced model for determining the number of graphene layers and their distribution from X-ray diffraction data
A model consisting of an equation that includes graphene thickness distribution is used to calculate theoretical 002 X-ray diffraction (XRD) peak intensities. An analysis was performed upon graphene samples produced by two different electrochemical procedures: electrolysis in aqueous electrolyte and electrolysis in molten salts, both using a nonstationary current regime. Herein, the model is enhanced by a partitioning of the corresponding 2θ interval, resulting in significantly improved accuracy of the results. The model curves obtained exhibit excellent fitting to the XRD intensities curves of the studied graphene samples. The employed equation parameters make it possible to calculate the j-layer graphene region coverage of the graphene samples, and hence the number of graphene layers. The results of the thorough analysis are in agreement with the calculated number of graphene layers from Raman spectra C-peak position values and indicate that the graphene samples studied are few-layered.
Introduction
Graphene, the atom-thick material that is the 2D building unit of all carbon allotropes [1], having unique and remarkable properties which are largely due to its structure, has attracted great interest in terms of fundamental studies as well as potential applications [2]. To date, several methods have been used to produce high-quality graphene sheets, such as mechanical exfoliation of graphite, chemical vapor deposition (CVD) of carbon-containing gases on the surface of copper films [3] and cutting open nanotubes [4]. The electrochemical approach is a proven low-cost method for high-yield production of carbon-based nanostructures such as graphene [5,6]. Depending on the production procedure, graphene can be produced as a mixture of monolayers, bilayers and multilayers (3-10 monolayers) in the form of flakes or flat sheets [7,8]. As the characterization protocol that usually follows graphene production is an important activity, the ultimate aim of this study was to define a reliable model and thus provide a method for determining the number of graphene layers and their distribution from XRD data. To date, XRD data have been used by some authors to determine the distribution of graphene layers or their average number [9][10][11][12]. This work includes the definition and application of an enhanced model for determining the thickness distribution of graphene layers and their number from XRD data. The enhanced model was applied to graphene samples produced by two electrochemical methods: high-temperature electrolysis in molten salt and electrolysis in aqueous solution, both using a nonstationary current regime. The enhancement of the model greatly increases the accuracy of the results, as it may be used with graphene XRD curves that are highly asymmetrical. The resulting jth-layer region occupancies and jth-layer coverages, for each j ≥ 1, allow for the calculation of the average number of layers of the graphene samples. The calculations are compared with the results obtained by other methods and are in accordance with them.
The obtained number of layers, together with the determined mean crystallite size L_a for the studied samples, provides a better overall picture with regard to their size and thickness. The nomenclature of the samples presented in this work is the following: graphene samples obtained by electrolysis in molten salts are denominated GMSE1, GMSE3 and GMSE4, and graphene samples produced by electrolysis in aqueous electrolyte are denominated GAE1 and GAE2.
Results and Discussion
The XRD pattern of each of the samples was analyzed around the 002 peak, with the attention strictly focused on the line shape. Therefore, the fitting was performed over the 2θ range around the 002 peak. It was done using the Laue functions model presented by Equation 1, which is further referred to in the text as the simple model [9], and by using the proposed improvement of the Laue functions model presented by Equation 3, which we name the enhanced model. The latter is recommended in general, as the 002 XRD peak line shape may be extremely asymmetrical, as explained later in the text.
Figure 1 shows the Raman spectra for two of the five presented graphene samples: GAE1 (Figure 1a), having the least symmetrical 002 XRD peak of all studied samples, and GMSE1, having an almost symmetrical 002 XRD peak; both are elaborated later within this work. The Raman spectra clearly show the structural order of the samples. For the GAE1 graphene sample, the ratio of the D and G Raman intensities shows low structural order, whereas for the GMSE1 graphene sample the same ratio shows that its structure is highly ordered. This undeniably affects the symmetry of their 002 XRD peaks.
The simple model includes the graphene thickness distribution and certain parameters, providing calculations of the XRD intensities of the theoretical curves:

I(θ) = |F(θ)|^2, F(θ) = |f(θ)| Σ_{j=0}^{N} β_j exp(i Σ_{m=1}^{j} ka_m), (1)

where F is the structure factor, N is the maximum number of layers, |f(θ)| is the atomic scattering factor, which varies from 6.00 to 6.15 eV/atom with incident radiation ranging from 2 to 433 keV, ka_j = (4πd_j sin θ)/λ, where d_j is the lattice spacing between the jth and the (j − 1)th layer, θ is the angle between the incident ray and the scattering planes, λ is the X-ray wavelength, and β_j, having a value between 0 and 1, is the occupancy of the jth graphene layer.
Hence, B_j = 100β_j is the occupancy of the jth layer in percent, and D_j = B_j − B_{j+1} is the jth-layer coverage in percent, for each j = 0, 1, …, N, assuming β_0 = 1 and β_{N+1} = 0, where N is the total number of layers in the studied sample, regardless of the distribution. Thus, the average number n of graphene layers may be calculated by the following formula:

n = (1/100) Σ_{j=1}^{N} j D_j. (2)

Enhanced model for determining the number of graphene layers and their distribution by XRD data

The simple model (Equation 1) is convenient to use when the peak being analyzed is more or less symmetrical, since this model simulates symmetrical peaks only. The approximation of the number of graphene layers therefore becomes less accurate when the 002 peak is highly asymmetrical. The simulation may be done with several theoretical multilayer curves, calculated from Equation 1, all being symmetrical and having an acceptable correlation coefficient with respect to the experimental curve. The improvement to the model is made in the following way:

I(θ) = Σ_{s=1}^{M} α_s I_s(θ), (3)

where α_s is the parameter that represents the share of each simulating curve, having a value between 0 and 1, and M is the number of simulating curves. Through Equation 3 we have defined the enhanced model. One may notice that for symmetrical peaks there is a single parameter α = 1 and thus one theoretical curve; in this case, the model in Equation 3 coincides with the model in Equation 1.
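Before turning to the estimation of α_s, a numerical sketch of the simple model and of the average-layer-number formula is given below. It assumes a uniform interlayer spacing d and an approximately constant atomic scattering factor, which is a simplification of the per-layer spacings d_j allowed by Equation 1; the identifiers are ours.

```python
import numpy as np

LAMBDA = 1.5406   # Cu K-alpha wavelength [Angstrom]
F_ATOM = 6.07     # atomic scattering factor, treated as constant here

def model_intensity(two_theta_deg, beta, d=3.35):
    """|F(theta)|^2 with F = f * sum_j beta_j exp(i j ka), uniform spacing d."""
    theta = np.deg2rad(np.asarray(two_theta_deg)) / 2.0
    ka = 4.0 * np.pi * d * np.sin(theta) / LAMBDA
    j = np.arange(len(beta))
    F = F_ATOM * (np.asarray(beta) * np.exp(1j * np.outer(ka, j))).sum(axis=1)
    return np.abs(F) ** 2

def average_layers(D):
    """n = (1/100) * sum_j j * D_j, with coverages D_j given in percent (Eq. 2)."""
    return np.dot(np.arange(1, len(D) + 1), D) / 100.0
```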
To estimate the parameters α s , we propose the partitioning of the 2θ interval [a,b) around the 002 peak.
The interval [a,b) is divided into M subintervals I_s = [a_s, a_{s+1}), s = 1, 2, …, M, with a_1 = a and a_{M+1} = b (see Figure 2). Given this, α_s may be calculated as the share of the experimental intensity belonging to the subinterval I_s,

α_s = ∫_{I_s} I_exp(2θ) d(2θ) / ∫_{[a,b)} I_exp(2θ) d(2θ),

for each s = 1, 2, …, M. Having defined the enhanced model, it is further used to analyze graphene samples produced by electrolysis in molten salts and by electrolysis in aqueous solutions, both using reverse potential, providing considerably increased accuracy in determining the graphene layer thickness distribution.
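Since the exact expression for α_s was lost in the excerpt above, the sketch below implements one natural reading of it, namely the share of the experimental 002 peak area falling into each subinterval I_s; this should be treated as our assumption rather than the authors' exact definition. A uniformly spaced 2θ grid is assumed.

```python
import numpy as np

def partition_shares(two_theta, intensity, edges):
    """alpha_s as area shares of the experimental curve over I_s = [e_s, e_{s+1})."""
    total = intensity.sum()
    return np.array([intensity[(two_theta >= lo) & (two_theta < hi)].sum() / total
                     for lo, hi in zip(edges[:-1], edges[1:])])

def enhanced_model(alphas, curves):
    """I(2theta) = sum_s alpha_s * I_s(2theta) (Eq. 3); curves: M x len(2theta)."""
    return np.tensordot(alphas, curves, axes=1)
```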
Graphene produced by electrolysis in molten salts and the model in Equation 1
Two graphene samples produced by electrolysis in molten salts are considered and discussed herein: graphene sample GMSE1, having a symmetrical peak, and graphene sample GMSE4, having an asymmetrical peak; the two differ in cell potential. Typical TEM micrographs of the graphene sample GMSE1 material are shown in Figure 3a and reveal that, as usual, they are planar. The characteristic diffraction pattern of the same section is shown in Figure 3b.
Using Equation 1, the XRD intensities of the curves in Figure 4 are calculated as discussed below. The three red lines are curves calculated from Equation 1 for β_j ≠ 1, which indicates that the number of graphene layers has a distribution. The two light red lines are shown for comparison with the dark red fitted multilayer curve.
The wide, red dotted line in Figure 4 is the calculated curve for ideally distributed, monolayer graphene. The light red line, which is narrower than the monolayer graphene line but broader than the green experimental curve, is the calculated curve for a nonuniform distribution of graphene layers in a 9-layered graphene sample. The dark red line is the calculated curve for a nonuniform distribution of graphene layers in multilayered graphene. This illustrates a good fit to the experimental curve of GMSE1, which is symmetrical, with a correlation coefficient of ρ = 0.986. According to the associated β_j parameters, the coverages of the j-layer graphene regions are calculated and given in Table 1.
Apparently, the dominant structure (80% or more) is few-layered, and the average number of graphene layers is calculated as N_GL = 2.87 for the dominant structure and N_GL = 5.16 for the overall structure. The calculated value of L_a for the sample GMSE1 was 1.82 nm.
The graphene sample GMSE4 has an asymmetrical experimental 002 XRD peak, as shown in Figure 5a. The curve calculated from Equation 1 is presented in red, and the experimental curve in green. Figure 5b shows the low-frequency part of the Raman spectrum for sample GMSE4, with its C-peak. Its position, Pos(C)_N, is directly related to the number of graphene layers N, and it varies with N as given by the following Equation 4 [13]:

Pos(C)_N = (1/(√2 π c)) √((α/μ)(1 + cos(π/N))), (4)

where α = 12.8 × 10^18 N·m^−3 is the interlayer coupling, μ = 7.6 × 10^−27 kg·Å^−2 is the graphene mass per unit area, and c is the speed of light.
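Equation 4 can be inverted numerically to obtain N from a measured C-peak position, as in the sketch below. With the constants quoted above it yields N ≈ 2.15 for Pos(C) = 32.46 cm⁻¹, close to the N = 2.13 quoted for GMSE4 below; the small difference comes from rounding of the constants.

```python
import math

ALPHA = 12.8e18        # interlayer coupling [N m^-3]
MU = 7.6e-27 * 1e20    # mass per unit area: kg/Angstrom^2 converted to kg/m^2
C_LIGHT = 2.998e10     # speed of light [cm/s] (peak positions in cm^-1)

def pos_c(N):
    """C-peak position of N-layer graphene, Eq. 4."""
    return math.sqrt(ALPHA / MU * (1 + math.cos(math.pi / N))) / (
        math.sqrt(2) * math.pi * C_LIGHT)

def layers_from_pos_c(pos):
    """Invert Eq. 4: N = pi / arccos((sqrt(2) pi c pos)^2 mu/alpha - 1)."""
    x = (math.sqrt(2) * math.pi * C_LIGHT * pos) ** 2 * MU / ALPHA - 1.0
    return math.pi / math.acos(x)

print(round(layers_from_pos_c(32.46), 2))   # ~2.15 layers
```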
The j-layer region coverages according to the analysis of the XRD 002 peak with the model in Equation 1 are given in Table 1. The average value for the number of graphene layers is calculated as N GL = 3.1 for the dominant structure and N GL = 9.57 for the overall structure. The calculated value of L a for the sample GMSE4 was 2.85 nm.
According to the C-peak position at 32.46 cm −1 , the number of graphene layers for sample GMSE4 is calculated as N = 2.13. Later within this article this graphene sample is analyzed using the enhanced model and that result is closer to the C-peak position method of calculation.
Graphene produced by electrolysis in aqueous electrolyte and the model in Equation 1

Graphene samples GAE1 and GAE2, produced by electrolysis in aqueous solution from two different raw graphite materials using a nonstationary current regime, are analyzed here. Typical TEM micrographs of the graphene sample GAE1 material are shown in Figure 6. Compared to the TEM micrographs of sample GMSE1, these indicate a higher presence of monolayer regions.
Figure 7a and Figure 7b review the theoretical (Equation 1) nonuniform thickness curves and the experimental curves obtained for graphene samples GAE1 and GAE2. Figure 7c and Figure 7d present the C-peak positions for GAE1 and GAE2, respectively.
The widest blue dotted line in Figure 7a and Figure 7b is the calculated curve for uniformly distributed monolayer graphene, and the light blue line is the calculated curve for a nonuniform thickness distribution of 3-layered graphene. The dark blue line is the calculated multilayer curve for a nonuniform graphene thickness distribution using the simple model of Equation 1.
There is a noticeable discrepancy with the experimental curves in both Figure 7a and Figure 7b, particularly in the GAE1 spectrum. With a correlation coefficient of ρ = 0.92 for sample GAE1 and ρ = 0.93 for sample GAE2, the results are acceptable and are presented in Table 2. However, as explained later in the text, analyzing sample GAE1 with the enhanced model results in a significant improvement in the accuracy of the results.
The preceding analyses show that the dominant structures of both graphene samples GAE1 and GAE2 are few-layered. The average value for the number of graphene layers of sample GAE1 is calculated as N GL = 2.57 for the dominant graphene structure and N GL = 4.25 for the overall graphene structure. According to the C-peak positions at 33.28 cm −1 (Figure 7c) and Equation 4, the number of graphene layers for sample GAE1 is N = 2.22, which is closer to the results obtained and presented later in this work by the enhanced model for the same sample. The calculated value of L a for the sample GAE1 was 4.16 nm.
The average number of graphene layers of sample GAE2 is calculated as N_GL = 3.53 for the dominant graphene structure and N_GL = 5.6 for the overall graphene structure. The C-peak position at 39.4 cm−1 (Figure 7d) and Equation 4 allow for the calculation of the number of graphene layers for sample GAE2, yielding N = 3.46. The calculated value of L_a for the sample GAE2 was 3.93 nm.
In the following section, the alterations to the simple model, which form the enhanced model and which improve the reliability of the provided insight, the precision of the results and the fitting, are presented.
Graphene produced by electrolysis in molten salts and the enhanced model in Equation 3
The graphene samples produced by electrolysis in molten salts that are considered and discussed using the enhanced model are samples GMSE3 and GMSE4. Considering GMSE3, the partitioning of the interval [17,36) is done in the following way: I_1 = [17,27) and I_2 = [27,36), as shown in Figure 8a. Figure 8b shows the part of the Raman spectrum of sample GMSE3 containing its C-peak.
The red curve in Figure 8a exhibits a good fit to the green experimental curve of GMSE3, as its correlation coefficient is ρ = 0.96. The summarized results obtained from the calculated nonuniform multilayer distribution curve, i.e., the jth-layer occupancies (B_j) and therefore the coverages of j-layer graphene regions (D_j) in percent for each part of the structure over the whole interval, provide the analysis and results shown in Table 3.
The average number of graphene layers according to the XRD 002 peak analysis with Equation 3 is calculated from these coverages. For sample GMSE4, the corresponding enhanced-model coverages are given in Table 4.
The average number of graphene layers according to the XRD 002 peak analysis is calculated as N_GL = 2.14 for the dominant graphene structure, which makes up about 80% of the whole graphene sample, and N_GL = 8.68 for the overall graphene structure. One may notice that the average number of graphene layers calculated using the enhanced model (Equation 3) is closer to the result obtained by the C-peak method than the number of layers obtained with the simple model (Equation 1) in the previous section.
Graphene produced by electrolysis in aqueous electrolyte and the enhanced model in Equation 3
Graphene sample GAE1, produced by electrolysis in aqueous solution, is intentionally considered for analysis again, so that the results obtained with the enhanced model can be compared to those obtained with the previous simple model. The fitting results obtained with the enhanced model (Equation 3) are shown in Figure 10.
The black curve in Figure 10 exhibits an excellent fit to the green experimental curve of GAE1, as its correlation coefficient is ρ = 0.99. The results obtained from the calculated nonuniform multilayer distribution curve, i.e., the jth-layer occupancies (B_j) and therefore the coverages of j-layer graphene regions (D_j) in percent for each part of the structure over the whole interval, are summarized in Table 5.
The graphene structure is few-layered, as nearly 95% of the overall structure consists of eight layers or fewer. The average number of graphene layers is calculated as N_GL = 2.07 for the dominant graphene structure and N_GL = 3.06 for the overall graphene structure.
Conclusion
There are several conclusions to be drawn from the preceding analysis. The model that is used provides additional insight into the j-layer occupancies and coverages of graphene samples. The enhanced model is suggested for general use, because the graphene sheets that are the subject of research may have highly asymmetrical 002 XRD peaks. Such peaks are inconvenient to analyze with the simple model (Equation 1), and therefore adequate alterations to the model were considered and embedded in the enhanced model (Equation 3). The enhancement of the model provides a significant increase in the accuracy of the results. The results for the analyzed graphene samples, whether obtained using the simple model or the enhanced model, show that these samples are few-layered. While Equation 1 already produces acceptable results when it comes to the number of layers, it is shown here that the enhanced model is in better accordance with the results of other methods and is therefore more accurate.
Experimental
Graphene samples were produced by electrolysis in molten salt and in aqueous solution using a nonstationary current regime (reverse potential, from 1 to 5 V in molten salt electrolysis and from 10 to 15 V in aqueous solution, controlled by a molybdenum quasi-reference electrode and a calomel reference electrode, respectively). The study was done at temperatures between 400 and 600 °C in molten salt and at 25 °C in aqueous solution. It should be underlined that, during the electrolysis, the cations reduced at the electrode intercalate at the graphite surface and generate a high mechanical stress that causes exfoliation of the cathode. This phenomenon enables the electrochemical synthesis of graphene to be performed. The electrochemical route also offers the possibility of accurate control of various parameters, such as the applied voltage, current density, temperature and morphology of the starting material. The morphological structures of the obtained graphene samples were investigated by TEM analysis using a FEI Tecnai G2 Spirit TWIN with a LaB6 cathode. XRD spectra were recorded using a PANalytical X'Pert Pro diffractometer (Cu Kα radiation). Raman spectra were recorded using a LABRAM ARAMIS-HORIBA JOBIN YVON system with 532 nm wavelength incident laser light, a hole of 250 μm and a slit of 250 μm.
"year": 2015,
"sha1": "e97d04787ad7d231bbba666f22160ce02a2271d8",
"oa_license": "CCBY",
"oa_url": "https://www.beilstein-journals.org/bjnano/content/pdf/2190-4286-6-216.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e97d04787ad7d231bbba666f22160ce02a2271d8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
259164417 | pes2o/s2orc | v3-fos-license | Learning by Analogy: Diverse Questions Generation in Math Word Problem
Solving math word problems (MWPs) with AI techniques has recently made great progress with the success of deep neural networks (DNNs), but the task is far from being solved. We argue that the ability of learning by analogy is essential for an MWP solver to better understand the same problem, which may typically be formulated in diverse ways. However, most existing works exploit shortcut learning to train MWP solvers simply based on samples with a single question. Lacking diverse questions, these methods merely learn shallow heuristics. In this paper, we make a first attempt to solve MWPs by generating diverse yet consistent questions/equations. Given a typical MWP including the scenario description, question, and equation (i.e., answer), we first generate multiple consistent equations via a group of heuristic rules. We then feed them to a question generator together with the scenario to obtain the corresponding diverse questions, forming a new MWP with a variety of questions and equations. Finally, we engage a data filter to remove unreasonable MWPs, keeping the high-quality augmented ones. To evaluate the ability of learning by analogy for an MWP solver, we generate a new MWP dataset (called DiverseMath23K) with diverse questions by extending the current benchmark Math23K. Extensive experimental results demonstrate that our proposed method can generate high-quality diverse questions with corresponding equations, further leading to performance improvement on DiverseMath23K. The code and dataset are available at: https://github.com/zhouzihao501/DiverseMWP
Introduction
Solving Math Word Problem (MWP) aims to infer a mathematical equation and final answer from the natural language description of a math problem. Table 1(a) shows one typical MWP example. In this
task, the machine needs to extract relevant information from natural language texts and perform mathematical reasoning, which is challenging. With the boom of deep neural networks (DNNs), research on solving MWPs has recently made great progress. For example, Seq2Seq models (Wang et al., 2017; Xie and Sun, 2019; Zhang et al., 2020a) as well as pre-trained language models (PLMs) (Tan et al., 2021; Li et al., 2022b; Liang et al., 2022) have been extensively exploited to deal with MWPs and have increased prediction accuracy significantly. However, such models usually lack the ability of learning by analogy due to the limited data size and problem diversity. Therefore, current approaches have unfortunately reached their performance bottleneck (Zhang et al., 2019; Patel et al., 2021; Liu et al., 2021a; Sundaram et al., 2022), showing that much remains to be done.
To alleviate this limitation, recent focus has been put on how to augment high-quality data for MWPs. Along this line, there have been some proposals (Jen et al., 2021; Kumar et al., 2021; Liu et al., 2021a; Li et al., 2022a; Kumar et al., 2022). Though demonstrating encouraging results, these current practices only consider word-level or sentence-level alternative expressions of the original problem, owing to the rigorous requirements on logic and numerical quantity. As illustrated in Table 1(b), the back translation augmentation method (Kumar et al., 2022) generates less diverse data sharing very limited semantic differences from the original counterpart. On the other hand, Yang et al. (2022) published a diverse MWP dataset (called UnbiasedMWP), which was collected by manual annotation at huge cost, but its size is limited.
In this paper, we make a first attempt to solve MWPs by automatically generating multiple diverse yet consistent questions (together with their corresponding equations), as illustrated in Table 1(c). There are two main reasons for this augmentation strategy. (1) Training on less diverse data would lead the solver to learn shallow heuristics only, whereas deep semantics are preferred in order to better understand the problems (Patel et al., 2021; Li et al., 2022b; Yang et al., 2022). Consequently, when the question is changed (i.e., Question1, 2, 3 in Table 1(c)), the learned solver may not be able to solve the MWP properly. (2) Our augmentation strategy can generate challenging and diverse MWPs. Training on such data improves the ability of learning by analogy, which is essential for an MWP solver to deeply understand the problem. It is also beneficial to reduce the unreasonable case (Patel et al., 2021) in which some current solvers can still predict the equation even without any question (e.g., after removing the question in the text of Table 1(a)).
Motivated by these findings, we propose a Diverse Questions Generation Framework (DQGF) to generate high-quality and diverse questions with their corresponding equations for a given MWP. Our DQGF consists of three components, as shown in Figure 1. (1) Diverse Equations Generator: It generates diverse and meaningful equations from the original MWP based on two generation strategies. Specifically, we propose a sub-equation based strategy that extracts sub-equations from the original equation, and a unit based strategy that generates equations according to the units (e.g., "dollars" in Table 1) in the scenario description.
(2) Equation-aware Question Generator: Given a scenario description and a generated equation, it generates a corresponding question. For example, given the Scenario description and Equation1 in Table 1(c), it can generate Question1. In detail, we utilize two encoders to extract the information of the scenario description and the equation respectively, and design an interaction mechanism which exploits numbers as a bridge to fuse the information of both encoders. (3) Data Filter: A large-scale MWP pre-trained language model (Liang et al., 2022) is leveraged to filter unreasonable data. As such, we can generate many high-quality and diverse MWP samples.
Extensive experiments on the existing dataset UnbiasedMWP (Yang et al., 2022) show that our proposed DQGF can generate high-quality diverse questions with corresponding equations, thus increasing the accuracy of the MWP solver. To further verify the effectiveness of DQGF, we produce a new dataset (called DiverseMath23K) with diverse questions from the current benchmark dataset Math23K (Wang et al., 2017). We also propose a new Group-Accuracy metric over all questions of a problem. Experimental results show that DQGF can effectively improve the overall performance of the solver on DiverseMath23K, demonstrating its ability of learning by analogy. In summary, our contributions are as follows: • We propose a novel diverse questions generation framework (DQGF) to automatically generate diverse questions with their corresponding equations for a given MWP. To the best of our knowledge, this is the first effort to generate such data for MWPs.
• We propose a Diverse Equations Generator, consisting of sub-equation based and unit based strategies, to generate diverse and meaningful equations from the original MWP.
• We propose an Equation-aware Question Generator to generate a question from a given scenario and equation. It consists of two encoders to encode the scenario and the equation respectively, where an interaction mechanism is developed to fuse the information.
• We produce a new MWP dataset (called DiverseMath23K) with diverse questions by extending the current benchmark Math23K.
• Experimental results demonstrate that DQGF can generate high-quality diverse questions and effectively improve the overall performance of the MWP solver on both UnbiasedMWP and DiverseMath23K.

MWP Solver: Recent proposals intend to solve the problem by using sequence or tree generation models. Wang et al. (2017) present a sequence-to-sequence (seq2seq) approach to generate the mathematical equation. Xie and Sun (2019) propose a goal-driven tree-structured (GTS) model to generate the equation tree. This sequence-to-tree approach significantly improves the performance over traditional seq2seq approaches. Zhang et al. (2020a) adopt a graph-to-tree approach to model the quantity relations using graph convolutional networks (GCNs). Applying pre-trained language models such as BERT (Devlin et al., 2019) was shown to benefit tree expression generation substantially. A prior study (Patel et al., 2021) indicates that existing MWP solvers rely on shallow heuristics to generate equations. As such, they cannot solve different questions of the same MWP well and may even ignore the question. Our DQGF effectively helps the solver overcome these issues.
MWP Generation: MWP generation approaches can be divided into three categories: template-based approaches, rewriting-based approaches, and neural network-based approaches.
Template-based approaches usually follow a similar two-stage process: they first generalize an existing problem into a template or a skeleton and then generate the MWP sentences from the templates (Williams, 2011; Polozov et al., 2015). Rewriting-based approaches target the MWP generation problem by editing existing human-written MWP sentences to change their theme while keeping the underlying story (Koncel-Kedziorski et al., 2016; Moon-Rembert and Gilbert, 2019).
Recent attempts have focused on exploiting neural network-based approaches that generate MWPs from equations and topics in an end-to-end manner (Liyanage and Ranathunga, 2020; Liu et al., 2021b; Wang et al., 2021). Unlike these generation methods, our equation-aware question generator focuses on generating questions that are in line with the given scenario and match the given equation. Recently, Shridhar et al. (2022) have also proposed a generation model to implement this function, but main differences exist: (1) Their work focuses on generating goal-driven sub-questions without equations, which are used for prompt learning rather than as a general data augmentation tool.
(2) Their generator directly concatenates the scenario and equation text sequences to encode and fuse their information, although the structure of the equation is much different from the scenario text. We instead propose two different encoders, where an interaction mechanism is designed to leverage numbers as a bridge to fuse the information.
MWP Dataset: Several datasets have been proposed to evaluate a model's numerical reasoning ability (Koncel-Kedziorski et al., 2016; Wang et al., 2017; Amini et al., 2019; Miao et al., 2020). They only provide a single question for each scenario. Therefore, training and evaluating in such a setting leads the solvers to rely on shallow heuristics to generate equations (Patel et al., 2021). To mitigate this learning bias, Yang et al. (2022) propose a diverse MWP dataset (called UnbiasedMWP). However, manually collecting high-quality datasets is usually labor-intensive and time-consuming in practice. In contrast, our DQGF can automatically generate such diverse data. In this paper, we use UnbiasedMWP to train the equation-aware question generator and evaluate the whole DQGF.
Besides, we also propose a diverse MWP dataset, DiverseMath23K, to evaluate the MWP solver.
Methodology
Figure 1 shows an overview of the proposed Diverse Questions Generation Framework (DQGF). We first put the original MWP into the Diverse Equations Generator to generate diverse equations; then each generated equation and the scenario description of the original MWP are fed into the trained equation-aware question generator to produce a corresponding question. In this way, we obtain diverse questions with their equations, forming new candidate MWPs. Finally, these candidate MWPs are further filtered by the data filter. In what follows, we introduce the Diverse Equations Generator, the Equation-aware Question Generator, and the Data Filter in Section 3.1, Section 3.2, and Section 3.3, respectively.
Diverse Equations Generator
The diverse equations generator aims to generate diverse equations from the original MWP. Our principle is to generate as many logical equations as possible. Motivated by this, we propose two equation generation strategies: a sub-equation based and a unit based strategy.
Sub-equation Based
The equation of the original MWP usually includes some sub-equations, which represent the necessary steps to solve the problem (Cobbe et al., 2021). For instance, in Table 1(c), "15+10" is a sub-equation of the original equation, describing a uniform's price. Therefore, we extract these sub-equations from the original equation; they are high-quality and diverse.
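As a rough illustration, the following sketch (in Python) enumerates the operator subtrees of an arithmetic expression as sub-equations. It assumes the original equation parses as ordinary arithmetic; the exact extraction rules shown here are illustrative and may differ from the authors' implementation.

import ast

def sub_equations(expr):
    """Return all proper sub-expressions (operator subtrees) of an
    arithmetic expression such as '40*(15+10)'. Requires Python 3.9+
    for ast.unparse."""
    tree = ast.parse(expr, mode="eval").body
    subs = []
    for node in ast.walk(tree):
        # Keep only operator subtrees, excluding the full expression itself.
        if isinstance(node, ast.BinOp) and node is not tree:
            subs.append(ast.unparse(node))
    return subs

print(sub_equations("40*(15+10)"))  # -> ['15 + 10']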
Unit Based
There are some physical relations between the numbers in an MWP. We can identify these relations and then combine numbers with operators to get a new equation. Motivated by this, we propose to search for relations between numbers based on their units. Every number in an MWP has a unit. For example, in Table 1, "40" has the unit "students" and "15" has the unit "dollars". We combine them in two situations (a sketch follows below). (1) Same unit: Two numbers with the same unit always represent the same object. We combine them with the operator "+" to generate equations representing totality questions like "what is the total of A and B". Besides, we combine them with "-" and "/", which represent comparison questions like "how much more A than B" and "how many times A than B", respectively.
(2) Different units: Two numbers with different units in an MWP always represent two objects that have subordinate relations. Therefore, we combine them with "*". This strategy generates diverse equations, though it probably introduces some unreasonable equations, which in turn produce noisy MWPs. Such noisy MWPs will be filtered by the final data filter.
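The sketch below implements the unit based pairing rules just described; representing scenario numbers as (value, unit) pairs is an assumed simplification of the actual unit extraction.

from itertools import combinations

def unit_based_equations(numbers):
    """numbers: list of (value, unit) pairs from the scenario.
    Same unit -> '+', '-', '/' (totality and comparison questions);
    different units -> '*' (subordinate relations)."""
    equations = []
    for (a, ua), (b, ub) in combinations(numbers, 2):
        if ua == ub:
            equations += ["%s+%s" % (a, b), "%s-%s" % (a, b), "%s/%s" % (a, b)]
        else:
            equations.append("%s*%s" % (a, b))
    return equations

print(unit_based_equations([(40, "students"), (15, "dollars"), (10, "dollars")]))
# -> ['40*15', '40*10', '15+10', '15-10', '15/10']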
Note that both the sub-equation based and unit based strategies rely on heuristic rules. Therefore, we do not need to train our diverse equations generator.
Equation-aware Question Generator
General question generation in the Question-Answering area aims to generate a question from a given passage and a specified answer (Sun et al., 2018; Kim et al., 2019; Li et al., 2019). By regarding the scenario description and the equation as the passage and answer respectively, we can formulate our task as a general question generation problem. Based on this, we propose an equation-aware question generator under a general encoder-decoder framework, as shown in Figure 2. Specifically, we utilize a scenario encoder and an equation encoder, together with an interaction mechanism to fuse their information.

Scenario Encoder We adopt the pre-trained language model BERT (Devlin et al., 2019) as our scenario encoder. The unsupervised pre-training on large corpora makes the model capture linguistic knowledge, which provides rich textual representations. We represent the scenario S as a sequence of T tokens, S = [s_1, s_2, ..., s_T], and formulate the encoding process as h_i^s = BERT(S)_i (1), where h_i^s represents the embedding of token s_i from the encoder. Finally, the representation of the scenario can be written as H_s = [h_1^s, h_2^s, ..., h_T^s] (2).
Equation Encoder
The sequence form cannot model the structure of the equation well (Xie and Sun, 2019). Hence we transform it into an equation tree, which is then encoded by a TreeLSTM (Tai et al., 2015). The equation is transformed into a binary tree representation as proposed in (Xie and Sun, 2019) and sequentialized as its pre-order traversal. Thus the equation can be represented as E = [e_1, e_2, ..., e_n], where n is the length of the pre-order equation and a node e_i represents a number or an operator (+, -, *, /). In detail, we first adopt a BERT to encode each node, h_i^e = BERT(e_i) (3). Then, we encode the equation tree by a TreeLSTM, h_i^e = TreeLSTM(h_i^e, {h_j^e | j ∈ C(i)}) (4), where C(i) represents the index set of the child nodes of e_i. Finally, the representation of the equation can be written as H_e = [h_1^e, h_2^e, ..., h_n^e] (5).

Interaction Mechanism In order to generate a question based on both the scenario and the equation, the interaction between them is crucial. Inspired by iterative deep learning (He and Schomaker, 2019; Schick and Schütze, 2021), we propose an interaction mechanism which uses numbers as a bridge to fuse the information of both scenario and equation. It consists of the following two processes.

Scenario to Equation: After BERT encodes the whole scenario text, each token's embedding carries the scenario's context information. For a number appearing in both the scenario and the equation, we replace its embedding in Equation (3) with its embedding in Equation (1). In this way, the scenario's context information is brought into the equation.
Equation to Scenario: After bringing the information of the scenario into the equation and encoding the equation tree, we put the embedding of the number in the equation back into the scenario representation. In detail, we replace its embedding in Equation (1) with its embedding in Equation (4).
Decoder We adopt the pre-trained language model BertGeneration (Rothe et al., 2020) as our decoder. Representing a question Q as a sequence of m tokens, Q = [q_1, q_2, ..., q_m], the token q_i is generated as q_i = Decoder(H, q_1, ..., q_{i-1}), where H is the final representation of the scenario and equation, obtained by concatenating H_s and H_e as H = [H_s; H_e]. Note that all of these pre-trained models in both encoders and the decoder are fine-tuned on the MWP dataset.
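To make the two interaction passes concrete, here is a minimal NumPy sketch. The identity stand-in for the TreeLSTM, the embedding shapes, and the aligned number-index lists are illustrative assumptions rather than the authors' implementation.

import numpy as np

def interact(h_s, scen_num_pos, h_e, eq_num_pos, tree_encode):
    """h_s: (T, d) scenario token embeddings; h_e: (n, d) equation node
    embeddings; scen_num_pos / eq_num_pos: aligned indices of the same
    numbers in the scenario and in the pre-order equation; tree_encode:
    any function playing the role of the TreeLSTM."""
    # Scenario -> equation: copy context-aware number embeddings in.
    for s_i, e_i in zip(scen_num_pos, eq_num_pos):
        h_e[e_i] = h_s[s_i]
    h_e = tree_encode(h_e)  # encode the equation tree
    # Equation -> scenario: put tree-aware number embeddings back.
    for s_i, e_i in zip(scen_num_pos, eq_num_pos):
        h_s[s_i] = h_e[e_i]
    return np.concatenate([h_s, h_e], axis=0)  # H = [H_s; H_e]

h_s, h_e = np.random.rand(6, 4), np.random.rand(3, 4)
H = interact(h_s, [1, 3], h_e, [0, 2], lambda x: x)  # identity "TreeLSTM"
print(H.shape)  # -> (9, 4)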
Data Filter
Filtering out detrimental augmented data can improve data quality as well as downstream performance (Le Bras et al., 2020). However, it would be very costly to do this by human filtering due to the large size of our augmented data. Therefore, we utilize an existing powerful MWP solver as an expert model to judge whether the predicted answer is the same as the ground truth (Axelrod et al., 2011; Xie et al., 2021). Inspired by Ou et al. (2022), we leverage a large-scale MWP pre-trained language model, MWP-BERT (Liang et al., 2022), as our expert model, utilizing its powerful generalization ability.
Considering that our generated MWPs contain many new diverse questions, it is difficult for an existing solver to predict the answer accurately, resulting in many false filtering cases. To increase the recall on the generated samples, we apply a beam-search strategy on the expert model to select the top k predicted equations (we set k = 5 in our experiments).
Since the final answer can come from different solutions (Yang et al., 2022), we compare the answers calculated by the equations instead of comparing the equations directly. An augmented MWP passes our final filter if its final answer is equal to one of the answers from the selected top k equations predicted by the expert model.
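The following is a minimal sketch of this answer-level filtering rule. Here expert_solver is a hypothetical stand-in for the beam-searched MWP-BERT expert model, and equations are assumed to be plain arithmetic strings that Python can evaluate.

def keep_sample(scenario, question, gold_answer, expert_solver, k=5, tol=1e-4):
    """Keep an augmented MWP if its gold answer matches the answer of
    any of the expert model's top-k predicted equations."""
    for eq in expert_solver(scenario + " " + question, beam_size=k):
        try:
            # Compare computed answers, not equation strings.
            if abs(eval(eq) - gold_answer) < tol:
                return True
        except (SyntaxError, ZeroDivisionError, NameError):
            continue
    return False

# Toy expert model returning two candidate equations
demo_solver = lambda text, beam_size: ["40*(15+10)", "40*15+10"]
print(keep_sample("A school buys uniforms.", "How much in total?", 1000, demo_solver))  # -> True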
Dataset and experimental setting
Dataset We conduct experiments on an existing diverse-questions dataset, UnbiasedMWP (Yang et al., 2022), which is split into 2,507, 200, and 200 MWP groups for training, validation, and testing, respectively. Each group contains one original MWP and an additional 1 to 8 diverse questions and equations sharing the same scenario. In total, it has 8,895, 684, and 685 MWPs for training, validation, and testing, respectively. In this paper, we train our Equation-aware Question Generator and evaluate the whole DQGF on it.
Evaluation Metrics For the whole DQGF, we use the accuracy of an MWP solver to evaluate the quality of the generated data. Without loss of generality, we choose GTS (Xie and Sun, 2019) with a BERT encoder (Devlin et al., 2019) as the MWP solver. Furthermore, we also propose a Group-Accuracy metric that considers the prediction accuracy over all diverse questions of an MWP. For example, in Table 1(c), the normal accuracy simply regards it as three samples by evaluating each question separately, while our Group-Accuracy considers it as only one sample, and the prediction is correct only if all three equations are predicted correctly.
Compared to the common accuracy, the proposed Group-Accuracy can evaluate whether a solver truly understands an MWP with the ability of learning by analogy. For the equation-aware question generator, we report BLEU (Papineni et al., 2002), ROUGE-1, ROUGE-2, and ROUGE-L (Lin, 2004), which are based on exact word overlap. The BERT F1 score (Zhang et al., 2020b), based on DeBERTa (He et al., 2021), is also used.
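A minimal sketch contrasting the two metrics on toy data; representing each MWP group as a list of per-question correctness flags is an assumed simplification.

def accuracies(groups):
    """groups: list of lists of booleans, one inner list per MWP group,
    marking whether each of its questions was solved correctly."""
    flat = [ok for g in groups for ok in g]
    acc = sum(flat) / len(flat)                            # per-question accuracy
    group_acc = sum(all(g) for g in groups) / len(groups)  # all-or-nothing per group
    return acc, group_acc

print(accuracies([[True, True, True], [True, False, True]]))  # -> (0.833..., 0.5)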
Experimental Results
We evaluate the quality of the generated data by the results of a common MWP solver on both accuracy and group-accuracy. In detail, we train the MWP solver on three different datasets: the original data of each group in UnbiasedMWP (called Unbiased-source), our generated MWP data from UnbiasedMWP (called Unbiased-DQGF), and the ground-truth MWPs in UnbiasedMWP (called Unbiased-GT). Notably, Unbiased-source only has MWPs with a single question, while the latter two have MWPs with diverse questions. Since Unbiased-GT directly uses the annotated diverse questions, its performance can be regarded as the upper bound of the generation method. The results are shown in Table 2.
As shown in Table 2, training on the data augmented by DQGF significantly improves the accuracy of the solver from 34.9% to 62.7%. This indicates that DQGF can generate high-quality MWP samples, which are useful for training a solver. In addition, the group-accuracy also increases substantially from 29.5% to 42%, even higher than the common accuracy (34.9%) of Unbiased-source, showing that our method can generate MWP samples with valid diverse questions that help the solver better understand the problem by capturing the ability of learning by analogy. Comparing Unbiased-DQGF and Unbiased-GT, we can see that there is still a gap between our method and the manually labeled data. Manual annotation can produce more diverse and completely correct data, which leads to better performance.
Fine-grained Analysis
In this section, we will show the performance of the three components in DQGF individually.
Diverse Equations Generator Table 3 shows the comparison among different equation generation strategies. As observed, each strategy can generate high-quality and meaningful diverse equations. Concretely, the same-unit based generation strategy brings the most benefit to DQGF because it can generate many meaningful but less noisy equations. The sub-equation based strategy and the different-units based strategy can also effectively generate meaningful equations, but with little improvement to the solver. There are two reasons: 1) the sub-equation based strategy cannot generate enough equations, since the sub-equations in the original equation are limited; and 2) the different-units based strategy generates meaningful equations while also bringing many noisy equations, which are hard to filter out completely.
Equation-aware Question Generator We compare with a baseline method that directly concatenates the scenario and equation text sequences (Shridhar et al., 2022), utilizing BERT (Devlin et al., 2019) as the encoder and BertGeneration (Rothe et al., 2020) as the decoder. Table 4 reports the comparison of the different question generator models. We can see that EQG(w/o)IM improves over the baseline method. This indicates that the scenario encoder and equation encoder can encode the structure of the scenario and equation better than directly encoding their concatenated sequence. By integrating the interaction mechanism (IM), we observe a further large improvement, achieving the best performance on every metric, which demonstrates that our interaction mechanism fuses the information of scenario and equation well.
Specifically, the BLEU score is 60.5%, which is not high; this is, however, explainable, as it is a metric of text overlap. As observed, though semantically identical, some of our generated data has less word overlap with the ground truth. This is also reflected by its higher BERT F1 score, which measures semantic similarity.
Data Filter We examine the effect of the beam size k of the filter in DQGF, as shown in Figure 3. The experimental results show that DQGF obtains the best performance when k is 5. DQGF achieves good performance when k is between 4 and 6, since this appears to be a suitable interval in which many correct candidates can pass the filter. When k is between 1 and 3, filtering is still accurate but some correct data are filtered out; therefore this interval achieves competitive but not the best performance. When k is between 7 and 8, the filtering is inaccurate, causing some noisy data to pass the filter and degrading the final data quality.
New MWP dataset DiverseMath23K
We construct DiverseMath23K by applying DQGF to extend the benchmark Math23K with diverse questions and corresponding equations for each problem scenario.
Results
We compare the performance of the solver trained on the original Math23K and on DiverseMath23K. In addition to accuracy and Group-Accuracy, we report Deq-Accuracy (Patel et al., 2021), a metric measuring question sensitivity. The lower the Deq-Accuracy, the better the question sensitivity. Concretely, it measures the accuracy with which the solver predicts the answer of an MWP after deleting the question (i.e., with only the scenario as input). A better solver should have higher question sensitivity, and thus a lower Deq-Accuracy is expected.
The results are shown in Table 5. We can see that the accuracy improves from 63.6% to 68.4%, and Group-Accuracy is boosted from 56.9% to 60.2%. These results indicate that DiverseMath23K can enable the model to better understand MWPs and improve its ability to solve different questions in the same scenario. In the future, we will focus on optimizing the model in the solver to improve its ability of learning by analogy and increase the group accuracy on MWPs with diverse questions.
Limitations
Our DQGF still has some limitations. While our generated data improves performance in the diverse-questions setting, there is still some noise in the generated data that affects performance on the original single questions. In the following, we describe the limitations of our DQGF with respect to its three components.
The diversity of the questions depends on the diversity of the equations. Our equation generator is based on heuristic rules, so the generated equations are relatively simple. In the future, we will try a model-based equation generator to produce more diverse equations. The question generator can only recognize equations with the operators "+", "-", "*", "/" due to the limited operator set in our training dataset UnbiasedMWP. In the future we will expand the operator set so that the generation model can recognize more operators and be more universal. The filtering strategy is also important. Using the answers of the expert model as the evaluation criterion still introduces bias and leads to noisy data. In fact, we have tried to generate more diverse equations, but all were filtered out by the current data filter. We will look for better filtering strategies in the future.
Figure 1: An overview of DQGF. Each generated equation from the Diverse Equations Generator and the scenario description of the original MWP are fed into the trained Equation-aware Question Generator to generate corresponding questions. In this way, we obtain diverse questions with their equations and form a new MWP. Finally, the candidate MWPs are further filtered using the Data Filter.
Figure 2: Equation-aware Question Generator.
Figure 3: Different beam sizes k of the expert model in the Filter.
Table 1: Examples of math word problem (MWP) generation by different methods. (a) original MWP, (b) MWP generated by the back translation method (Kumar et al., 2022), (c) MWP with diverse questions generated by our method. The questions are highlighted in red in the texts of (a) and (b).
Table 3: Comparison of different equation generation strategies.
Table 4: Comparison of the different question generator models. The baseline directly concatenates the scenario and equation text sequences. EQG denotes the Equation-aware Question Generator, while EQG(w/o)IM denotes EQG with the Interaction Mechanism removed.
Table 5: Performance of solvers trained on different data. Ori and DQGF denote the original Math23K and DiverseMath23K, respectively.
Table 6
Prediction Results Analysis Table 7 reports the prediction results of solvers trained on different data. The solver trained on the original Math23K can correctly solve Question1, which has a similar MWP in training. However, it cannot solve Question2, which is simpler than Question1. Moreover, it cannot solve other questions like Question3. This indicates that the solver merely learns shallow heuristics and fails to understand the MWP. When trained on DiverseMath23K, the solver gains the ability of learning by analogy, i.e., the solver can solve different questions even if the question is changed (see Question2 and Question3).

5 Conclusion and Future Work

In this paper, we explore the ability of learning by analogy for MWP solvers. To do this, we propose a diverse questions generation framework (DQGF) to automatically generate diverse questions with their corresponding equations for a given MWP, which consists of a Diverse Equations Generator, an Equation-aware Question Generator, and a Data Filter. Based on the trained DQGF, we further produce a new MWP dataset (DiverseMath23K) with diverse questions. Experimental results demonstrate that DQGF can generate high-quality diverse questions and effectively improve the overall performance of the MWP solver. | 2023-06-16T01:15:38.174Z | 2023-06-15T00:00:00.000 | {
"year": 2023,
"sha1": "0ba49945649b40f205503dba3443e2bf550c7115",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2023.findings-acl.705.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "da1c9a3f9142d4a54bbd3c3a9de3e940dfa6666a",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
234247857 | pes2o/s2orc | v3-fos-license | Review of Mechanical Bar Couplers for Splicing Precast Concrete Members
One of the methods for connecting precast concrete members is the application of grouted sleeve couplers. The use of this type of connection has become more common over the years and in many cases provides a time-effective, labor-friendly, and economical alternative to other connection types. However, their application has been limited to non- or low-seismic regions due to the reduction in ductility capacity attributed to these connections. Recent studies have shown that they can be implemented in moderate and high seismic zones provided that they are placed away from damage-protected members or arranged to allow yielding of the connected bars. This paper aims at reviewing the performance of grouted sleeve couplers under tensile and bending loads as well as gathering in one place recent approaches to improving their ductility behavior. The results of this study are expected to pave the way for researchers to apply this type of connection in seismic areas and motivate new ideas for improving their performance.
Introduction
The application of precast members provides for a rapid, durable, and labor-friendly method of construction. Prefabricated elements are often joined by splicing or coupling steel reinforcement.
Joining prefabricated elements can therefore be performed using mechanical splicers/couplers. Mechanical bar couplers have been in use since 1973 [1]. Relevant prior studies of grouted splices include dynamic and impact tests by Noureddine (1995) and Rowell and Hager (2010) [7], and multiple tests of splice sleeves in Japan and Malaysia (NMB Splice Sleeve).
Due to the reduction in member ductility attributed to mechanical connections, however, the application of grouted sleeve splices/couplers in seismic regions has been limited or allowed only with specific considerations/guidance, as per ACI 318-14 and the NCHRP report [8,9]. The reduction in ductility is explained by the higher rigidity of the coupler region compared to the bars alone, which limits yielding in the connection zone. Recent investigations have devised means to improve this condition and raise the ductility behavior in order to implement these connections in seismic regions.
The goal of this paper is to review the types of mechanical bar couplers and their performance under gravity as well as lateral loads. It will help researchers understand the tensile and ductile behavior of grouted sleeve couplers as well as their failure modes. This paper also covers the design criteria for such couplers.
As such, it paves the way for wider implementation of this type of connection for precast concrete elements, especially in seismic areas.
Mechanical Bar Couplers
Mechanical bar couplers offer an attractive time- and cost-effective means for connecting precast elements. In general, they are divided into five categories (Figure 1): shear screw splices, headed bar couplers, grouted sleeves, threaded couplers, and swaged couplers. A shear screw splice is made of lock shear screws, shear rails, and a coupling sleeve. Equal lengths of bar are placed into the sleeve from each side, then the screws are tightened until the screw heads shear off. In headed bar couplers, the plated ends of the bars are encased in sleeves that are threaded into each other. In a grouted sleeve, two bars are connected to each other by placing them into a steel sleeve followed by grout filling of the sleeve. In one version of the grouted sleeve splice, the length required for the coupler is reduced by adding threads inside the sleeve. In threaded couplers, the threaded bars are turned into a coupler with matching internal threads. Both straight and tapered threads have been used for the bar. In a swaged coupler, straight bars are connected by a pressed steel sleeve [10,11].
The most common type of mechanical coupler is the grouted sleeve coupler, in which the load is transferred between the bars by the coupler. They have been used to connect prefabricated elements.
One prefabricated element has the role of the host. There are holes in the face of the host element to receive the protruding bars from the element to be connected. The connection between the host and the joining element is established when the bars are inserted into the sleeves and the sleeves are filled with grout (Figure 2). Pouring the grout into the sleeve can be performed before positioning the bars, referred to as pre-grouting, or it can be performed after placing the bars in the sleeve using a grout pump [9]. Figure 3 shows several types of couplers that are on the market.
Grouted Sleeve Couplers Under Axial Tensile Loads
The behavior of grouted couplers under monotonic axial load has been investigated by several researchers.
Alias et al. tested grouted sleeve connectors under axial tensile loads. Based on the results, they concluded that the development length of the reinforcing bar is the most effective parameter in the performance of the connectors. Additionally, the required development length can be reduced to 10 times the bar diameter, whereas in the design standards it is taken as 40 times the bar diameter [14].
The development length reduction can be attributed to the confinement provided by the sleeve.
Einea et al. proposed an expression for the ultimate bond strength of grouted sleeve splices (Equation 1), in which K is a constant whose value varies from 25 to 30.
They also stated that the confining pressure on grouted sleeve couplers, f_n, can be calculated from Equation 2:

f_n = 2 ε_s t E / d_i, (2)

where ε_s, t, E, and d_i are the tangential strain in the sleeve, the thickness of the sleeve wall, the modulus of elasticity of the sleeve material, and the inside diameter of the sleeve, respectively [3].
From their work, it can be seen that as the inside diameter of the sleeve increases without changing the wall thickness, the confining pressure decreases; hence, the ultimate bond strength decreases and, with the reduced bond, the capacity of the splice decreases. Conversely, as the wall thickness of the sleeve, t, increases, the confining pressure increases; thus, the ultimate bond stress increases and, accordingly, the splice capacity increases.
Seo et al. (2016) investigated a type of mechanical grouted coupler called the headed-splice sleeve (HSS) (Figure 7). This splice has a large opening on one end and a small opening on the other. Through the large opening, the rebar with a mechanically connected head is inserted into the sleeve.
The smaller opening is threaded and receives the other rebar by threading it into the sleeve. The grout is poured into the sleeve to bond and anchor the headed rebar and to provide the splice with its capacity. In their investigation, the bond behavior of the splice system was studied. The variables of their study were the development length of the rebar and the presence and size of the head.
They found that the presence of the head prevented brittle failure of the splice. In other words, it caused failure to occur by rebar failure rather than grout failure. Also, they noticed that the diameter of the head depends on the geometry of the sleeve, and calculated the proper ratio of the diameter of the sleeve to the diameter of the head to be 1.3 [15].
In their work, the bond strength of the HSS was calculated by Equation 3, where A_m is the mortar bearing surface area (the area inside the sleeve) (mm² or in²) and A_h is the bearing area of the head (mm² or in²) [15].
Another type of grouted sleeve coupler has been developed for precast members. The proposed sleeve coupler is a grout-filled round pipe in which the diameter, length, and thickness of the sleeve are determined based on the bars to be spliced. The design of the sleeve is based upon shear-friction theory for transferring the force from one bar to the grout, from the grout to the steel pipe, from the steel pipe to the grout, and finally from the grout to the other bar.
In their investigation, two types of grouted sleeve coupler, Types P and T, as shown in Figure 9, were designed and tested. Type T had threads along the interior surface of the entire sleeve, while Type P had threads at the upper part only and was welded to a steel plate at the bottom. Their study included testing of 18 specimens, 9 of each type with different sleeve lengths. According to the test results, the embedment length of the bar (l_b) was calculated as

l_b = P / (π d_b F_b), (6)

in which d_b is the diameter of the bar, P is the force, and F_b is the bond strength between the bar and the surrounding grout. The bond strength between the grout and the bar (F_b) was found based on shear-friction theory and estimated as the radial confinement stress (σ_n) multiplied by the coefficient of friction between grout and bar (μ):

F_b = μ σ_n. (7)

The coefficient of friction between grout and bar (μ) was taken as 1 for the deformed bar surface. For calculating the radial confinement stress (σ_n), the equilibrium of forces in a unit-long section of the half sleeve was used:

σ_n = 2 t f_s / d, (8)
where t, f_s, and d are the wall thickness, yield stress, and inside diameter of the sleeve, respectively. They also found that, to avoid crushing failure of the grout, the radial confinement stress should be less than 0.2f'_c, where f'_c is the compressive strength of the grout. Finally, for the proposed sleeve coupler, the required embedment length of the splices was calculated based on Equation 9.
The required sleeve length (l_s) was also estimated to be twice the embedment length, rounded up to the nearest inch [16].
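To illustrate how the design equations chain together, here is a minimal sketch (in Python) assuming the reconstructed forms of Equations 6-8 above (σ_n = 2·t·f_s/d, F_b = μ·σ_n, l_b = P/(π·d_b·F_b)); all numerical inputs are hypothetical and not values from the cited tests.

import math

def embedment_length(P, d_b, t, f_s, d, mu=1.0):
    """P: bar force (N); d_b: bar diameter (mm); t: sleeve wall thickness (mm);
    f_s: sleeve yield stress (MPa); d: inside sleeve diameter (mm);
    mu: grout-bar friction coefficient (1.0 for deformed bars)."""
    sigma_n = 2.0 * t * f_s / d          # radial confinement stress (Eq. 8)
    F_b = mu * sigma_n                   # bond strength (Eq. 7)
    l_b = P / (math.pi * d_b * F_b)      # embedment length (Eq. 6)
    return l_b, 2.0 * l_b                # sleeve length taken as twice l_b

l_b, l_s = embedment_length(P=200e3, d_b=25, t=3, f_s=235, d=100)
print("l_b = %.0f mm, sleeve length = %.0f mm" % (l_b, l_s))  # -> l_b = 181 mm, ...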
Lateral Loads
Although strength has been the most important criterion for the application of grouted sleeve couplers for many years, achieving sufficient ductility has recently emerged as a concern [17].
A significant amount of research has been conducted to investigate the effect of grouted sleeve couplers on the ductility of precast members in seismic regions. It was found, in many cases, that the connection made with grouted couplers is too rigid to allow yielding within its length. Thus, the entire elongation occurs in the rebar close to the end of the sleeve, and the ductility of the member is reduced. Many approaches, including debonding the bars and moving the grouted sleeve couplers farther inside the elements, have been implemented to improve the ductility performance of these connectors in precast elements.
Haber et al. [4] and Pantelides et al. [18] investigated the displacement ductility of several precast column connections using grouted and headed sleeve couplers. The results are summarized in Table 1. As shown in Table 1, for precast column connections using grout-filled couplers (GC), the displacement ductility ratio is as low as 60% of that of cast-in-place (CIP) connections.
In their research, Tazarv and Saiidi (2014) developed an approach to quantify the reduction in displacement ductility. Based on pull-out tests and numerical simulations, they found that the reduced analytical plastic hinge length, L_p^sp, due to the presence of any type of mechanical coupler can be calculated based on Equation 10 [5,10,19]. Several solution schemes have been attempted to improve the ductility behavior of grouted sleeve couplers in precast members. Debonding of connected bars outside the coupler (Figure 11) was found to be an effective approach since it postpones rebar fracture and reduces spalling of adjacent capacity-protected elements [20][21][22]. Accordingly, the ductility of the system, and as a result the energy dissipation capacity of the structure, is increased with the debonding solution. Pantelides et al. (2014) experimentally studied the effect of debonding of bars on the ductility of the member. They found that debonding increased the displacement ductility capacity from 5.4 to 6.8. More research is needed to investigate the effect of the debonding material, debonding length, and location of debonding on the ductility behavior of a member in seismic regions [18].
Specifications
ACI 318 classifies mechanical splices as Type 1, which must develop at least 125% of the specified yield strength of the spliced bars, and Type 2, which must also develop the specified tensile strength of the bars [8]. This requirement ensures that premature failure, bar pullout, or coupler fracture will not impact the system's ductility. Therefore, Type 1 mechanical couplers are not allowed in plastic hinge regions for any Seismic Design Category (SDC). However, Type 2 is permitted anywhere except within half a member depth from the face of the connecting element in special moment frames (SMF) with ductile connections if the connections are not "strong" [8].
AASHTO LRFD does not use Type 1 and 2 terminologies.
It requires that in SDCs C and D (Seismic Zones 3 and 4), any mechanical coupler be used outside the plastic hinge region [23]. It also specifies that in piles or shafts, where such placement is virtually impossible, mechanical couplers can be used in plastic regions if they develop the tensile strength of the bars.
Summary and Conclusion
In this study, the performance of grouted sleeve couplers under tensile and lateral loads was investigated. The limitations on their application were discussed, and recent approaches to overcome these constraints were reviewed and explained. Following is a summary of the results:

2. The most effective parameter for the performance of grouted sleeve couplers under tension load is the development length. The required development length was reported to be at least 10 times the bar diameter.

3. The capacity of grouted couplers is related to their bond strength. As the inside sleeve diameter increases, the confining pressure decreases; hence, the bond strength decreases. Accordingly, the splice capacity decreases.

4. The presence of the head in the headed-splice sleeve prevents premature failure of the splice, as it results in bar rupture rather than grout failure. Therefore, it helps to avoid brittle failure.

5. In headed sleeve splices, the head diameter depends on the sleeve geometry, and the ratio of the diameter of the sleeve to the diameter of the head can be estimated to be 1.3.

6. Two methods are used to increase the ductility performance of grouted sleeve couplers under lateral loads: (1) shifting the location of the grout-filled coupler away from the end of the connection (and away from damage-protected members), and (2) debonding of the bars outside the grouted sleeve coupler.

7. The debonding approach can increase the ductility capacity of the member by up to 25 percent.
In a parallel research study, the authors have provided a general review of means and methods for splicing precast piles, complementing the study reported in this paper [24].
Acknowledgments
The study reported in this paper is supported by the Accelerated
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2021-05-11T00:04:09.920Z | 2021-01-12T00:00:00.000 | {
"year": 2021,
"sha1": "5156abea0f322ffb1feb8f836437df8d6ae51f00",
"oa_license": "CCBYNC",
"oa_url": "https://irispublishers.com/sjrr/pdf/SJRR.MS.ID.000551.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "7d75faffcf5e2c5bc27279b0a165e05e90b13b10",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
221346951 | pes2o/s2orc | v3-fos-license | Characterizing street-connected children and youths’ social and health inequities in Kenya: a qualitative study
Background Street-connected children and youth (SCY) in Kenya disproportionately experience preventable morbidities and premature mortality. We theorize these health inequities are socially produced and result from systemic discrimination and a lack of human rights attainment. Therefore, we sought to identify and understand how SCY’s social and health inequities in Kenya are produced, maintained, and shaped by structural and social determinants of health using the WHO conceptual framework on social determinants of health (SDH) and the Convention on the Rights of the Child (CRC) General Comment No. 21. Methods This qualitative study was conducted from May 2017 to September 2018 using multiple methods including focus group discussions, in-depth interviews, archival review of newspaper articles, and analysis of a government policy document. We purposively sampled 100 participants including community leaders, government officials, vendors, police officers, general community residents, parents of SCY, and stakeholders in 5 counties across Kenya to participate in focus group discussions and in-depth interviews. We conducted a thematic analysis situated in the conceptual framework on SDH and the CRC. Results Our findings indicate that SCY’s social and health disparities arise as a result of structural and social determinants stemming from a socioeconomic and political environment that produces systemic discrimination, breaches human rights, and influences their unequal socioeconomic position in society. These social determinants influence SCY’s intermediary determinants of health, resulting in a lack of basic material needs, being precariously housed or homeless, engaging in substance use and misuse, and experiencing several psychosocial stressors, all of which shape health outcomes and equity for this population. Conclusions SCY in Kenya experience social and health inequities that are avoidable and unjust. These social and health disparities arise as a result of structural and social determinants of health inequities stemming from the socioeconomic and political context in Kenya that produces systemic discrimination and influences SCYs’ unequal socioeconomic position in society. Remedial action to reverse human rights contraventions and to advance health equity through action on SDH for SCY in Kenya is urgently needed.
Background
Children (persons ≤18 years of age) and youth (persons between the ages of 15 and 24) living and working on the streets have been known by various terms and definitions, which have been used to convey their circumstances and connections to the streets and public spaces. Terminology has evolved to reduce stigmatization and negative connotations associated with the label 'street child', and the terms 'street-connected' or 'children and youth in street situations' have been adopted to identify children and youth for whom the streets play a significant role in their everyday lives and social identities [1,2]. Street-connected children and youth (SCY) in low- and middle-income countries (LMICs) are a socially and economically distinct group of young people who experience numerous health inequities that are avoidable [2][3][4]. SCY in LMICs report that structural and social inequities, namely abject poverty, family conflict, and abuse, precipitate their migration to the streets [5]. Subsequently, the context in which children and youth find themselves living and working on the streets exacerbates the social, economic, and health inequities experienced by this population [3,[6][7][8]].
SCY are prevalent in Kenyan cities [9][10][11]; however, no accurate national estimate of the number of children and youth connected to the streets has been published. Known by the public as chokoraa (garbage pickers), these children and youth are subject to human rights violations and experience tremendous stigmatization, social exclusion and discrimination, all of which have an impact on their health and wellbeing [6,10,[12][13][14]]. This marginalized group disproportionately experiences preventable morbidities including but not limited to: a heightened prevalence of human immunodeficiency virus (HIV) and sexually transmitted infections, posttraumatic stress disorder, substance use and misuse, and negative sexual and reproductive health outcomes [9,[15][16][17][18][19][20][21][22][23]]. Moreover, SCY in Kenya succumb prematurely to preventable causes of mortality [24,25]. SCY also experience social and economic marginalization and participate in a street-based or informal labor economy, where they earn on average between 50 and 100 Kenyan shillings (Ksh) per day (~US$0.50-US$1.00) [6,26]. SCY are frequently involved in the criminal-justice system and report experiencing conflict with the police, arrest and incarceration, and harassment, violence, and beatings from authorities [6,13,14,22,27,28]. Despite the robust evidence of social and health inequities experienced by SCY, to our knowledge no studies have been conducted to date that explore how the social and health inequities experienced by SCY in Kenya are produced, maintained, and shaped by structural and social determinants of health (SDH) in this context. We theorize that the extensive social and health inequities experienced by SCY in Kenya are socially produced and result from systemic discrimination and a lack of human rights attainment for this marginalized population.
The WHO conceptual framework on SDH can be used to explore and identify how the social, economic, and political context in a specific country influences socioeconomic positions in society, whereby populations are stratified by social class, gender and sexual identity, ethnicity (racism), income, education, and occupation [29]. Contextual factors and structural mechanisms that give rise to social stratification (e.g. social class, income, education, etc.) and an individual's socioeconomic position are the structural and social determinants of health inequities. An individual's socioeconomic position, in turn, shapes specific determinants of health status known as 'intermediary determinants of health'. These intermediary determinants include material and psychosocial circumstances, such as housing, food availability, stressful living circumstances, and social support, and behavioral and biological factors, such as nutrition and drug and alcohol consumption. As a result of SDH and intermediary determinants of health, individuals experience differences in exposure and vulnerability to health-compromising conditions, which ultimately impact health equity [29].
The WHO describes health inequities as "health differences that are socially produced, systematic in their distribution across the population, and unfair" [29]. When individuals in a society have unequal rights and access to key determinants of health, including but not limited to freedom from discrimination, food, clothing, housing, education, and medical care, health inequities arise. Moreover, an individual's right to enjoy the highest attainable standard of physical and mental health is influenced by the socioeconomic, political, and environmental conditions in a particular context [29]. Given the strong link between achieving health equity and attainment of human rights, a human rights framework can be used with the conceptual framework on SDH to explore and analyze the underlying processes of systemic discrimination and the social production of health inequities in a specific socioeconomic-political context [29].
The Convention on the Rights of the Child (CRC), stemming from the Universal Declaration of Human Rights, recognized that children are in need of special protection and assistance, and came into force in 1990 [30]. Kenya is a signatory of the CRC [31], and all children under the age of 18 years should be safeguarded as per the CRC under the Kenya Children's Act, which legally outlines children's rights and welfare in the country [32]. In 2017, a General Comment on Children and Street Situations was released by the Committee on the Rights of the Child to provide authoritative guidance to States to respond to injustices experienced by SCY and improve their circumstances using a child rights approach building on the CRC [2]. The General Comment no. 21 recognizes SCY's rights to several structural, social, and intermediary determinants of health, such as non-discrimination (Article 2), the right to education (Article 28), and the right to an adequate standard of living (Article 27) [2].
Given the social and health inequities experienced by SCY in Kenya and a lack of research conducted to understand how these inequities arise in this context, we sought to explore and understand how health inequities experienced by SCY in Kenya are produced, maintained, and shaped using the WHO conceptual framework on SDH and the CRC General Comment No. 21 (2017) on Children in Street Situations [2,29].
Study setting
The study was conducted across five counties in Kenya: Trans-Nzoia, Bungoma, Kisumu, Uasin Gishu, and Nakuru. We interviewed participants in the respective capital of each county: Kitale, Bungoma, Kisumu, Eldoret, and Nakuru. These study sites were purposively selected given the large numbers of SCY known to live and work in these towns in western Kenya [9,10]. The counties' respective populations and poverty demographics are shown in Table 1. Eldoret, the capital of Uasin Gishu, was the primary study site. It is home to Moi University, Moi Teaching and Referral Hospital (MTRH), and the Academic Model Providing Access to Healthcare (AMPATH), a long-standing partnership between Moi University, MTRH, and a consortium of universities from North America [33].
Epistemological principles and study design
The study rests on critical theoretically engaged qualitative research principles [34]. Given our interest in underlying power structures and how they impact SCY's social and health inequities, the study privileged qualitative methodology to explore and describe the public perceptions of, and proposed and existing responses to, the phenomenon of SCY in Kenya. We opted for an explanatory and descriptive qualitative design using multiple data generation methods to illuminate how the social and health inequities experienced by SCY in Kenya are produced, shaped, and maintained. Qualitative research is particularly well suited to explore and produce knowledge on how power structures in this context contribute to and produce inequities on this issue [34,35].
Sampling considerations and study participants
This study sought to purposively sample a diverse range of participants who had knowledge of and experience interacting with SCY. The overall aim in using purposive rather than probability sampling was to include information-rich cases for in-depth study [36].
The research team has a long-standing relationship with the street community in Eldoret, Kenya where they have conducted participatory research with this population for over 15 years. Our established relationship with the local community in Uasin Gishu County enabled us to reach a diverse group of participants. In Uasin Gishu County we included community leaders (Chiefs and Elders), County Children's Coordinator, Children's Officers, police officers, vendors, general community members, stakeholders, parents of street children, former and current SCY, peer navigators, and healthcare providers at MTRH and AMPATH. Across the other counties we engaged Children's Officers, police officers, and SCY. SCY were eligible to participate if they were aged 15 to 24, and other social actors had to be aged at least 18.
Recruitment and enrolment
We purposively recruited community members, including community leaders, government officials, vendors, police officers, general community residents, parents of SCY, and stakeholders, and contacted them by phone or in person to explain the purpose of this study and to invite them to voluntarily participate. We contacted government officials initially with a formal letter informing them of the purpose of the study and followed up in person. Healthcare providers and peer navigators working at MTRH and AMPATH were recruited through our established networks and contacts. We invited social workers, clinical officers, nurses, and HIV testing and counselling practitioners in order to gather a broad range of opinions from individuals providing healthcare. Government officials in Uasin Gishu were consented and interviewed in their offices, while all other participants in Uasin Gishu were invited to the referral hospital or Moi University offices for consent and interviews. Participants in other counties were consented and interviewed in their offices and places of employment.
SCY aged 15-24 in all counties were purposively sampled from street venues called "bases/barracks" (primary locations in which street children reside). Street outreach and study sensitization occurred at these sites to establish rapport and trust with SCY. In these street venues, the purpose of the study was explained, and SCY were invited to participate voluntarily in the investigation. In Uasin Gishu, SCY were invited to the Rafiki Centre for Excellence in Adolescent Health at MTRH to undergo enrolment, give consent, and participate in interviews. In other counties, SCY were enrolled, gave consent, and were interviewed in street venues. The number and breakdown of participants, the type of interview, and their locations are presented in Table 2. In total, the study recruited 100 participants: 48 women and 52 men. The median age of the community members interviewed was 42 years, and that of SCY was 16 years. The number of participants selected for this study was adequate to permit deep, case-oriented analysis and to produce credible and analytically significant findings, resulting in a new and richly textured understanding of how the social and health inequities experienced by SCY in Kenya are produced, shaped, and maintained [37].
Ethical considerations
This study received ethics approval from the Moi University-MTRH Institutional Research Ethics Committee and the University of Toronto Research Ethics Board. The study received a waiver of parental consent for minors. SCY participants were asked to provide documented verbal consent (those aged 18 to 24) or assent (those aged 15 to 17) at each encounter. Research on cognition and capacity suggests adolescents and younger children show significant ability to provide informed consent [38]. We have established a process for conducting ethical research with SCY, including a process for informed consent/assent [39]. Qualified team members were present at all interviews with SCY to make a qualitative determination as to whether the youth understood what they were assenting to. Written informed consent was obtained from all other participants. Participants were made aware that their interviews would be audio-recorded; nine participants (police officers and children's officers) declined to be audio-recorded but agreed to be interviewed and gave the interviewer permission to take notes. Community participants and SCY were compensated for their time with 200 Ksh (~US$2.00) and government officials with 1000 Ksh (~US$10.00).
Data generation
From May 2017 to September 2018 we used multiple data generation methods, including focus group discussions, in-depth interviews, archival review of newspaper articles, and analysis of a government policy document to generate a data corpus for this qualitative analysis.
The study team, trained experts in qualitative research methodology, conducted focus group discussions and in-depth interviews in either English or Swahili. We conducted 41 in-depth interviews and seven FGDs (Table 2). In total, 22 interviews were conducted in English and 26 were conducted in either Swahili or a mix of Swahili and English. Focus group discussions took on average one and a half hours, and in-depth interviews 40 min. Focus group discussions and in-depth interviews used an interview guide that asked participants about their general perceptions of the population and their experiences interacting with SCY.
In addition to focus group discussions and in-depth interviews, we included 11 newspaper articles published from 2015 to the present and a government policy document [40-50]. Kenya lacks a national policy on SCY [51]. The Street Families Rehabilitation Trust Fund (SFRTF), established in 2003, sought to address the needs of SCY and street families and to safeguard their rights; however, no official policy documents exist with respect to this program [51]. The research team's strong, long-standing relationship with stakeholders and government officials in Uasin Gishu County made the team aware of a local Uasin Gishu county policy document, which was included in the present analysis. The government policy document was provided to the research team through the Uasin Gishu Children's Forum. To the best of our knowledge, there are no other policy documents available in the other counties; however, it is possible they exist and are not publicly available.
Given that media influence public perceptions, exploring news articles on SCY can shed light on how they are portrayed, the policies that have been implemented, and the interactions SCY have with the community at large in this context [52]. Newspaper articles provided additional information on the sociocultural and political context in Kenya that influences the social and health inequities experienced by SCY. These newspaper articles were randomly selected from a set of national newspaper publications covering the portrayal, living conditions, and treatment of SCY in Kenya, with the exception of one international article, which was purposively included. The use of a simple random sampling strategy provided a representative selection of articles on SCY. Due to limited time and resources, it was not possible to review and analyze all newspaper articles related to SCY.
Research team and reflexivity
The research team is committed to improving the health and well-being of SCY in this region and in other LMICs. The multi-disciplinary team that conducted this research consists of four women and two men from Kenya and Canada, who have expertise in public and population health, child and adult psychology, social and behavioral sciences, epidemiology, and human rights. We understood that our positionality may have influenced the interviews and that participants' responses may have been mediated by our presence. To empower SCY, we conducted focus groups and interviews in familiar environments or in one of their own choosing. As one-on-one interviews can be intimidating, especially for young people, we conducted focus group discussions with SCY to offset the power imbalance that may exist between the research team conducting the interviews and our participants [52]. Constructive rapport was established through the relationship built over several years of participatory action research with SCY in this setting, creating a bond of trust. This encouraged participants to disclose and to tell personal, detailed stories. Participants were assured that their responses would be anonymous and that, if at any time during the interview they were uncomfortable and no longer wished to participate, they could leave without any repercussions.
Qualitative data analysis
Transcription and translation are deeply interpretative processes [53,54]. Interviews and focus group discussions were transcribed verbatim. All audio files and transcripts were reviewed by the authors to ensure quality. RK transcribed the interviews and focus group discussions completed in English. RK (Kenyan) also translated and transcribed those completed in Swahili. Quality checks were performed by LE (intermediate Swahili language skills) and PS (Kenyan). Several years of conducting participatory research with this population enabled the authors to contextualize and interpret the data. Iterative processes and continuous questioning of our understanding of the data, together with ongoing review of the findings, provided opportunities for minimizing descriptive and interpretive biases.
A deductive qualitative data analysis approach was taken. In analytic meetings, using the CRC General Comment No. 21 on Children in Street Situations (henceforth referred to simply as the CRC), we mapped specific human rights articles onto the WHO conceptual framework for SDH to guide our analysis and to understand the processes of systemic discrimination and the production of health inequities.
Interview transcripts and newspaper articles were read multiple times by the research team to achieve immersion prior to code development. The deductive approach to identify the coding scheme allowed for the development of codes corresponding to specific concepts which were then used to generate themes [55]. We developed a series of codes using specific articles outlined in the CRC to capture upholding or contravening SCY's human rights. To capture SDH, we coded broadly for context to include all social-cultural, economic, and political mechanisms that maintain social order with the following context sub-codes: criminal-legal, cultural, economic, political, and religious. We developed the final codebook by repeatedly testing its validity and comprehensiveness through test-coding transcripts. Transcripts, newspaper articles and the policy document were coded by four of the authors (LE, AG, RK, PS) and compared for consistency. Analytic notes and annotations made during coding by each author were used in a series of interpretive meetings to define and refine themes. To enhance the reliability of the data, we triangulated data from multiple sources (e.g. interviews with different types of participants) as a data validation strategy [36]. We also triangulated data from multiple data collection methods. By combining focus groups with individual interviews we maximized the strengths of each while overcoming their unique deficiencies [56]. To increase validity and reliability of the analysis, inter-rater reliability was also employed, a type of researcher triangulation by which multiple researchers are involved in the analytical process [57].
Findings
Summarized in the SDH framework (Fig. 1), our analysis is divided into three major themes: 1) socioeconomic and political context; 2) socioeconomic position; and 3) intermediary determinants of health, to characterize how the structural and social determinants of health inequities and the intermediary determinants of health impact SCY's equity in health and well-being in this context. In the first theme, exploring the structural determinants of health inequities, we characterized the influence and role of the socioeconomic and political context in Kenya in producing and maintaining structural determinants of health inequities through four sub-themes: governance; macroeconomic policies; social policies, societal and cultural values; and public policy. Our analysis uncovered the exertion of power over SCY, as evidenced by human rights contraventions. In the major theme of 'socioeconomic position' we identified multiple and intersecting forms of discrimination and how they shape and influence SCY's position in society through two sub-themes: social class, gender, and ethnicity; and education, occupation, and income. Table 3 summarizes our findings and shows how structural and social determinants and corresponding human rights contraventions impact SCY's social and health inequities. Lastly, we explored the intermediary determinants of health through two sub-themes of material and psychosocial circumstances. Table 4 summarizes our findings with respect to the intermediary determinants of health, human rights contraventions, and impacts on SCY's social and health inequities.
Socioeconomic and political context
Governance
Governance refers to "[the] system of values, policies, and institutions by which society manages economic, political, and social affairs through interactions within and among the state, civil society and private sector. It is the way a society organizes itself to make and implement decisions" [58]. CRC General Comment No. 21 (2017) Article 4 on appropriate measures outlines the responsibility of the State to provide appropriate legislative, administrative and other policies to ensure children have essential levels of each of the social, economic, and cultural rights. Participants interviewed recognized the responsibility of the State to intervene to protect SCY, as stated by one healthcare provider: The government has to be involved because the moment the street kids are on the street we are creating a generation for criminals, those who won't get proper care, social and economic support and they end up becoming worse [off] than they were, if no proper interventions are made. (Clinical Officer) Participants identified several limitations within the system of governance that act as barriers to realizing SCY's rights. A government Children's Officer recognized the State's position of power to intervene. However, he also recognized the lack of political resolve for anyone to take responsibility for SCY: It's a horrible situation because they live on the streets without the support of county or national government and they are a group of persons that have been rejected by the society... We in positions of authority and more so as a department having the powers of actually removing them from the streets provided there is political goodwill and the [will of] members of the society and the community at large. Because there seems to be no goodwill, the principal of every man for himself and God for us all, to the extent that nobody wants to take responsibility that these children are either in their docket or generally that they have the power to assist them. (Children's Officer) It was suggested that the issue of government inaction was not one of resource constraints, but one of power, where the State exercises power to shape the political agenda and power over decision-making regarding SCY. Only when the problem of child and youth street-involvement impacts officials' own positions of power will they act, as explained by one Children's Officer: It's not even about the economics, it's our leaders' concern about these people and you will realize that they don't even talk about them. They only talk about them when they [officials] have been affected. (Children's Officer) Children's Officers across counties agreed that the devolved system of governance has resulted in the issue of responsibility for SCY being contested, thereby leaving a gap in policy and services for this population. Officials concurred that cooperation between arms of government on this issue is required: I think first of all as much as they say issues of street children are not devolved, that's why you don't see anyone speaking about it, at the department of children's services, there is no place where you will hear them speak about these street children. They will tell you we don't have that capacity, it's not our work... I would propose it becomes a joint effort. They involve the national government and the county governments. (Children's Officer) As a result of political inaction and deficiencies in governance, SCY lack essential elements of social, economic and cultural rights.
Macroeconomic policies
The system of governance shapes and manages macroeconomic policies in Kenya and controls the economy. Participants pointed to Kenya's economy and development level as a structural determinant of children and youths' street-involvement: It's a problem because of our economy as a third world country; we are not able to get them out of the streets to better places. (Nurses) Kenya has seen inconsistent economic growth over the past decade with a fluctuating gross domestic product, in conjunction with a rising cost of living, particularly with respect to commodity prices [59,60]. The unstable economy likely leaves families unable to meet their needs, as a Clinical Officer suggests: "I can also say the cost of living is high; the parents are poor so they will go to the streets." Moreover, participants recognized that it was the role of the State to improve the economy to reduce the burden on impoverished households, as communicated by a community member: I think they should be assisted, if the economy was better off I don't think children would be on the streets, the economy should be good so that people can afford. They go to the streets because life became hard at home so the government should improve the economy. (General Community member)
Social policies and societal and cultural values
Kenya's social welfare policies and programs, such as the cash transfer to orphaned and vulnerable children, may be missing many vulnerable households with children and youth at risk of migrating to the streets as a result of poverty [61]. As one Children's Officer suggests, the social welfare program needs to be expanded to meet the growing number of impoverished households requiring support: Even if the parents have died, we still have extended families. The national government has programs like cash transfer programs whereby the government gives 2,000 shillings every month that is paid after every 2 months, I think it needs to enable us to target more families because the poverty level is getting high so we need to target more deserving families into the program. We also need programs centered on the street families. (Children's Officer) While the Children's Officer also pointed to the extended family to care for orphaned children, traditional cultural and kinship values that previously acted as a social safety net for vulnerable children have eroded. Increasing economic pressures and individualistic values have shifted sociocultural norms, resulting in children and youth turning to the streets, as explained by a Clinical Officer: The community as a whole has also failed because in the event that a child becomes an orphan or the family is not in a good position, they don't come to solve the problem before it goes out of the boundary. Everybody lives for himself and God for us all, so what happens to the minors? They go to the streets and live their lives there. (Clinical Officers)

Table 3. Structural and social determinants of health inequities, corresponding human rights contraventions, and impacts on SCY's social and health inequities

Governance
"The government has no interest; they are being looked at as a problem. The government handles these issues with backwardness, they want to tackle them on the streets and push them home instead of solving things that are attracting them on the streets and creating more systems to prevent them from coming to the streets." (Stakeholder 2)
• Political inaction and poor public policies carried out by the State impact SCY's life circumstances, socioeconomic position, and therefore social and health inequities.
• SCY and their health are not a priority in the government's agenda, with limited resources allocated to their issues.
• Lack of political will and a disregard of the State's responsibility for the phenomenon of SCY, despite its legal obligation as a CRC signatory.

Social Policy
Article 18 on parental responsibility: states are obliged to provide assistance to parents and guardians to prevent children ending up in street situations.
"At the national level we have what we call The Street Family Trust Fund, which is based in Nairobi. It's supposed to come to the major cities and work together with us so that we can have such programs." (County Children's Officer)
• Inadequate and unimplemented social welfare programs for SCY and their families leave them without a social safety net.
• On-going structural forces place pressure on households, leaving them unable to adequately provide and care for children's needs.

Public Policy
Article 7 on birth registration and 8 on identity: states should ensure free, accessible, simple and expeditious birth registration is available to all children at all ages, and street-connected children and youth should be supported to obtain legal identity documents.
"Some have reached the age of getting IDs, for one to get an ID one has to have a birth certificate and the parent's IDs so most of these children can't get them. Also, when they are sick, they don't get treatment easily so the government should work on that." (Community Leader 3)
• SCY lacking identity documents have difficulties accessing education, health and other social services, justice, and family reunification, all of which have a long-term impact on their socioeconomic position and health and well-being.

Public Policy
Article 15 on the right to freedom of association and peaceful assembly: states should ensure that street-connected children and youths' access to public space in which to associate is not denied in a discriminatory way.
"So, like, the county government, what it did, it was, I'll use that term 'making it unfriendly for them in town' so that once you see them even police officers, enforcement officers, they are put in strategic places. So, these children, totally they will not step into the central business district (CBD)." (Children's Officer)
• Limiting SCY's access to public spaces, and the use of police and other officers to enforce this restriction, is discriminatory and contravenes their right to associate in public places.
• Practices that limit SCY's access impact their social, psychological, and physical health if they are unable to associate freely in their social networks or access particular services within restricted public spaces.

Public Policy
Article 20 on the right to special protection and assistance for children deprived of a family environment (44.) types of care: states are the de facto caregiver and are obliged to ensure alternative care to a child temporarily or permanently deprived of his or her family environment. Deprivation of liberty, in detention cells or closed centres, is never a form of protection.
"Ideally, we have programs and activities that we can do, the only challenge we have is resources. The biggest challenge here is if you have to rescue them, you have to take them to a safe place, and you have to find time to engage them and find out why they are on the streets. As we speak now, we don't have a holding facility. The rescue center is not in a position to hold all street children. The rescue center is not just for street children but for any child that requires to be rescued because children are also abused in their families." (County Children's Officer)
• Inadequate shelters, rescue centers, and alternative care environments leave SCY without protection, shelter, and other basic needs, thereby impacting their health and well-being.
• The use of prisons, remand homes/juvenile detention, or cells as alternative care environments is inappropriate and impacts SCY's social, psychological and physical health and well-being.

Public Policy
Article 20 on the right to special protection and assistance for children deprived of a family environment (45.) applying a child rights approach: states should ensure that children are not forced to depend on the street for survival and that they are not forced to accept placements against their will. States should ensure that State and civil society-run shelters and facilities are safe and of good quality.
"There is a case I have witnessed, he came to the streets, but the family is well off, he had no valid reason but just said he liked the streets more than his home. We took him home twice but still went back to the streets. I don't pity him because he has parents and a home, he claimed it was due to hostility by the parents, but they denied that." (Clinical Officers)
• Unsafe, inappropriate and poor-quality shelters and facilities for SCY leave them vulnerable and susceptible to an array of health-compromising conditions.
• Forceful placements are psychologically and potentially physically harmful for SCY.

Public Policy
Article 37 and 40 on juvenile justice: states should ensure the use of restorative rather than punitive juvenile justice, and should support protection rather than punishment of street-connected children and youth.
"We don't get along well with the police because when they go down there, they just want to beat up someone... They go there and beat up people; there are even those who used to rape girls in town." (Street-connected young woman 2)
• SCY are targeted with repressive street sweeps and are subject to police misconduct, which exposes them to physical violence and leaves them with social and health inequities with lifelong consequences.
• Physical, psychological and sexual violence perpetrated by law enforcement has a lasting impact on the physical, sexual, and psychological health of SCY.
• Criminal records may impact SCY's life chances and have long-term consequences on their socioeconomic position, thereby affecting their health.

Public Policy / Socioeconomic Position
Article 2 on non-discrimination (25.) non-discrimination on the grounds of social origin, property, birth or other status: states must respect and ensure the rights of street-connected children and youth are upheld without discrimination of any kind.
"We might be seated here and when a police officer comes, he will see us as bad people and starts chasing us and beating us, yet we have not done any wrong. You go to prison for like 6 months, won't you leave there as a bad person." (FGD, street-connected young man)
• Discriminatory practices have life-long consequences on SCY's socioeconomic position.
• Discrimination leaves SCY without adequate access to social and health services, which has a direct impact on their health and well-being.

Public Policy / Socioeconomic Position
Article 2 on non-discrimination (26.) systemic discrimination: states are required to protect street-connected children and youth from direct and indirect forms of discrimination, including disproportionate policy approaches involving repressive efforts, including criminalization, street sweeps, and targeted violence.
"Two weeks ago, we rounded up street children. The community was concerned with the insecurity created by the street children. While street children do not commit all crimes, the situation overtime is problematic because of the numbers of children on the street. My job is to address issues of security. Community stakeholders told me that I should not arrest the street children as the police lack appropriate facilities, my priority is to protect the community." (Police Officer)
• Repressive strategies to tackle homelessness may have a direct impact on SCY's health when they are exposed to or experience violence as a result of round-ups and targeted violence by enforcement officers.
• Criminalization of street-involvement may have life-long consequences on SCY's socioeconomic position.

Socioeconomic Position
Article 28 on education: states should make adequate provision, including support to parents, caregivers and families, to ensure that street-connected children and youth can stay in school and that their right to quality education is fully protected.
"Poverty at home, maybe they don't have food, money to access education nor materials so the child will decide to go to the streets because he will feel better in the streets by begging from people. Also, maybe the parent did not give the child right to education and the child feels he has nothing to do at home, so they go to the streets to find something to do and earn a living." (Religious Leader, Stakeholder)
• A lack of education impacts SCY's long-term life circumstances and influences their ability to attain employment and their socioeconomic position, thereby impacting their health and ability to access resources for health.
• SCY who lack knowledge and skills attained through education may have reduced health knowledge and be ill-equipped to navigate health services or communicate with health providers.

Table 4. Intermediary determinants of health, corresponding human rights contraventions, and impacts on SCY's social and health inequities

Material Circumstances
Article 27 on the right to an adequate standard of living.
"The basic needs, they don't have food because most of the times you will find them eating from the bins. For clothes they have rags and they don't have shoes. They also sleep outside. They don't get loved due to separation so some of them are lonely; they don't mingle with other people freely." (Police Officer)
• A lack of essential basic needs, such as food, clothing, and shoes, leaves children and youth vulnerable to malnutrition, exposed to health-compromising conditions, and at risk of acquiring infectious and non-infectious diseases.
• SCY are also at risk of psychological consequences associated with street-involvement and a lack of an adequate standard of living.

Material Circumstances
Article 27 on the right to an adequate standard of living (50.) adequate housing: states should ensure that children and youth connected to the street have a right to live somewhere in security, peace and dignity.
"I live near them. I meet them in the morning while going to town; they can come and sleep in the vibandas (stalls) then go to town in the morning. When we walk at night, we warn them about sleeping there because someone being chased can also hide there." (Community Leader 1)
• SCY lacking adequate housing and who sleep in precarious or makeshift structures are at risk for numerous morbidities due to exposure to the elements and inadequate sanitation.
• A lack of secure housing leaves SCY vulnerable to experiencing physical and sexual violence.

Social-environmental or psychosocial circumstances
Article 6 on the right to life, survival and development (29.) on the right to life: states should ensure street-connected children and youth are free from acts and omissions intended or expected to cause their unnatural or premature deaths.
"Some of them are offenders. They did a mistake and ran away, so you have to sit with the family for several sessions, prepare them and tell them we have found your child and maybe tell us the history. 'Ah that one is a thief, that one use to steal chicken, that one stole maize, even if he comes back'. Like they are some we took back to Baringo, and we didn't know fully the felony they had committed. You know they were lynched!... Yeah the villagers in the community just tied them and lynched them." (Children's Officer)
• SCY experience unnecessary psychosocial stressors associated with infringements on civil and political rights, vigilante justice, and extrajudicial killings.
• SCY disproportionately experience violence, which impacts their physical and mental health and often results in preventable and premature mortality.

Social-environmental or psychosocial circumstances
Article 6 on the right to life, survival and development (32.) ensuring a life with dignity: states have an obligation to respect the dignity of street-connected children and youth, including in relation to procedural and practical funeral arrangements, to ensure dignity and respect for children who die on the streets.
"My issue is with the morgue, when a street child dies, they are thrown inside a container and when we go to collect the body it becomes an issue because postmortem has to be paid for and maybe what we have collected isn't enough. We want it to be buried in a proper way. So, they will refuse to give us the body and eventually end up throwing it away. When we go to the HOD they tell us to look for its family and maybe they came, and they can't afford to pay the charges. If they see some are smartly dressed, they say that we have to pay. Maybe the body has stayed for like 40 days and the charges have increased, they will even tell you that they are going to throw away that body. Sometimes we have to protest so that the body is released." (Peer Navigator)
• SCY experience social inequities even in death due to income inequality and an inability to pay mortuary fees.
• As a result, their peers experience unfair psychosocial stress to support their burial and a right to a dignified end of life.

Social-environmental or psychosocial circumstances
Article 9 on separation from parents: states should not separate children from their families on the basis of their street-involvement, nor should states separate babies or children born to children themselves in street situations.
"For a mother with a child it also depends why that mother is there and from the experiences of interviewing them, these mothers use their babies to get sympathy from the public so whatever action we take will be a stern one like getting a court order to rescue those children and have the mother face the full face of the law. For the babies I take immediate actions because the environment is basically hostile to them." (Children's Officer)
• Separating street-connected babies and children from their parents or families is stressful for both the parent and the child and can have long-term psychological consequences for both.

Social-environmental or psychosocial circumstances
Article 31 on rest, play and leisure: street-connected children and youth have a right to utilize informal settings for play, and states should ensure they are not excluded in a discriminatory way from parks, and should adopt measures to assist them in developing their creativity and practising sport.
"Like last month we had a tournament, and some sponsored us and gave us playing kits and food. Our problem is not food alone; it should be something that makes sense and not just bread daily. You can give us food but also something to help us. Like supporting some of us who play football." (FGD, Street-connected young men)
• Access to resources and the ability to engage in rest, play, and leisure can reduce stressful life circumstances, thereby ameliorating SCY's health and well-being.

Social-environmental or psychosocial circumstances
Article 19 and 39 on freedom from all forms of violence: states have the responsibility to protect street-connected children and youth from all forms of violence, including corporal punishment and familial violence, providing mechanisms for reporting violence, and holding perpetrators of violence accountable.
"Some are orphans, some come from dysfunctional families, maybe families where there is a lot of issues of abuse... Others you find they will tell you that there is a lot of violence at home. So, a child opts to run away and then eventually they end up in the streets." (Children's Officer)
• SCY may experience physical violence prior to street-involvement as well as once on the street due to their vulnerability and socioeconomic position.
• Physical violence is linked to long-term physical and psychological health consequences, including post-traumatic stress disorder.
• Experiencing physical and sexual violence is linked to long-term physical and mental health consequences.
• SCY may experience sexual abuse and exploitation prior to street-involvement as well as once on the street due to their vulnerability and socioeconomic position.

Social-environmental or psychosocial circumstances
Article 32 on child labour: states have responsibility to protect children and youth from economic exploitation and child labour.
"Others will move to other towns like if you go to Molo, Maunarok, mostly there is the issue of child labor. So, they will prefer to go to somewhere like Maunarok, where they know there are a lot of farms. And most of these people, they tend to use these children as casual laborers; you know they get something small.... if you go to Njoro, Molo where we have the flower farms, you will find many children, and the majority will tell you that we used to live on the streets." (Children's Officer)
• Child labour exposes SCY to stressful life circumstances, including the possibility of violence or threats of violence.
• Child labour may result in exposure to potentially harmful chemical, environmental and ergonomic factors and working conditions that are hazardous to their physical and psychological health.

Behavioural and biological factors
Article 33 on drugs and substance abuse: street-connected children and youth should have access to free healthcare services, and states should increase the availability of prevention, treatment, and rehabilitation services for substance use.
"The things we use are very strong especially gum. It is stronger than alcohol and people who sniff gum are hard to talk to. They just do what they want. We use a lot of things, not just glue. Not all of them will understand things, you may tell me this and that but after sniffing gum I forget everything." (FGD, street-connected young men)
• The health-damaging effects of alcohol and substance use are well established.
• SCY report that they use substances as coping and survival behaviour in response to the harsh environment on the streets.
• These detrimental substance use practices are associated with their street-involvement and thereby socioeconomic position.
Once on the street, few social welfare programs exist to directly assist SCY. The lack of social welfare programs focused on SCY and impoverished families, combined with the dissolution of the traditional socialcultural safety net, leaves many vulnerable children and youth without a support system due to insufficient public policies.
Public policies
Numerous public policies result in systemic discrimination and human rights violations, and impact SCY's social and health inequities (Table 3). In the text, we explore in depth how public policies in relation to Article 20 on the right to special protection and assistance to children deprived of a family environment and Article 2 on non-discrimination contribute to inequities.
Article 20 on the right to special protection and assistance for children deprived of a family environment
When SCY are without parent(s)/guardian(s), the State is the de facto guardian and is obliged to ensure safe alternative care to a child temporarily or permanently deprived of his or her family environment; this does not include detention cells or closed centers where children and youth are deprived of liberty. Children's Officers cite resource constraints in counties and a lack of appropriate shelters preventing them from ensuring safe alternative care for SCY: You see now in [county redacted] we don't have a rescue center, we don't have a rehabilitation facility, so there is nothing much we can do, yet you are an officer in that capacity who is supposed to be protecting these children. (Children's Officer) When rescue centers do exist, in many towns they aren't State-run and may not have the capacity to care for all SCY, as reported by a Children's Officer: The nearest is actually Machakos if you are talking about the national government, even Eldoret doesn't have a rescue center nor Nakuru and Kitale. What in those towns some people call rescue centers are owned by NGOs and sometimes private, but remember they are private entities and can only work to a certain level. So, what we want, and we are looking forward to, are street policies for street persons and establishment of rescue centers like you have asked me which are very necessary. (Children's Officer) In cases where no rescue center exists, SCY may end up in temporary holding cells, as a Police Officer describes: We don't have a children's office in police stations in this country. Children need their own separate room, even with some beds. Sometimes they take three days to get help, so they need somewhere to sleep. If they are too small, we can use the hospital. We have a children's cell, but that's not the child protection unit. (Police Officer) Insufficient or non-existent alternative care, or the use of children's detention cells, contravenes the right to special protection and assistance for children deprived of a family environment. Contrary to this obligation, the response to the issue of SCY is often characterized by criminalization and by repressive and discriminatory policies and practices, which we will now explore.
Article 2 on non-discrimination
States are required to respect and ensure that the rights of the child outlined in the CRC are upheld without discrimination. Yet, children and youth in Kenya are discriminated against on the basis of their street-involvement, and thereby 'other status' (Article 2.25). Figure 2 shows the repressive public policy in one county concerning SCY, which contravenes Article 2 (25) and (26) on non-discrimination. Moreover, this policy is contrary to the right to an adequate standard of living (Article 27), which includes the provision of food, and to Article 6 (31) on the right to survival and development, and, through the criminalization of begging, potentially infringes on Article 32 on child labor.
Direct discrimination in the form of street sweeps, criminalization of street-involvement, and targeted violence by police is common across counties in Kenya, as documented in the numerous newspaper articles analyzed [41-43, 45-48, 50]. Fears over public insecurity and a town's image are the reasons typically cited for these repressive actions, as explained in a popular daily newspaper in Kenya: The reason given by the Mombasa county law enforcers for the arrest of more than 150 street families, most of them children, was "to fight insecurity, especially during this festive season". (Daily Nation, December 31, 2015) Discrimination against SCY in Kenya is far-reaching, and not limited to repressive strategies implemented by county governments. As a result of their street-involvement and social identities, SCY may continue to experience discrimination when they return to school and integrate into society, as a stakeholder explains: If you go to school, children who saw you on the streets will always call you chokoraa. These are professionally trained teachers, but if you do something nasty, they will remind you that you were a street child. (Stakeholder) The significance of being connected to the streets has long-term consequences for a young person's life chances and social and health inequities. Once a child or youth's social identity is defined as chokoraa, they are positioned in the social hierarchy as a social underclass.
Socioeconomic position
Social class, gender, and ethnicity
SCY in Kenya face multiple and intersecting forms of discrimination based on their 'other status' as chokoraa, and on the basis of social class, gender, and ethnicity. In general, SCY migrate to the streets as a result of poverty in impoverished households of low social class [5,6]. Once on the street, they are further relegated to an even lower social class with extremely limited power, control, and prestige. SCY are generally perceived and characterized as juvenile delinquents and shunned by the public, as described by one street-connected young woman: You know being a street child doesn't mean that you are dirty; it depends on how you keep yourself. There are those who love water and others don't. So, if we board a car, they won't want us to sit at the front, they don't see us as normal passengers. They see you as a thief and you smell so you can sit at the back. So, because I am with them, I will also sit at the back. If you go to the shop, they won't attend to you fast because they think you don't have money. The market women chase these children away when they go to beg. It's like they are no one's children. (FGD, Street-connected young women) Their identity as chokoraa intersects with their gender. Discrimination on the basis of gender is prominent on the streets. Girls and young women connected to the streets generally elicited feelings of 'pity' and 'sympathy' from participants because they are seen as 'weak' and as 'victims' who are subject to violence and rape and who have infants and children in their care. In contrast, boys and young men were characterized as 'strong', 'dangerous', and able to take care of themselves, as explained by a Children's Officer: For a girl you will really sympathize, and you would even want to get immediate assistance for them because of even that cultural perception that is not the environment especially for a girl child. For the boys we kind of assume that they are tough, and they can sort of handle it for some time. (Children's Officer) These gender divisions result in differential treatment and disparate social, economic, and health outcomes for SCY, all rooted in socially constructed gender norms and roles. Discrimination on the streets also extends to ethnicity. Kenya has a complex and long-standing history of ethnic conflict and division on the basis of tribe, rooted in historical colonialism [62]. Tribalism is used as a tool to abdicate county-level responsibility for SCY, as described by a stakeholder: There is that perception that these children are not from this community, that they have come from other tribes, they are not ours so we cannot allow them to live here... The government says that they should go back to their people; they have come to make the town dirty, yet their children are safe at home. (Stakeholder) This compounded discrimination on the basis of SCY's 'other' status, social class, gender, and ethnicity is the result of socially produced differences constructed by the socioeconomic and political context.
Education, occupation and income
Children and youth have a right to accessible, free, safe, relevant and quality education (Article 28). Attaining a formal education is an important component of an individual's socioeconomic position, influences life circumstances, and has long-term impacts on a person's health.
As a stakeholder explains, fees and the need to purchase school uniforms prohibit many parents from being able to send their children to school: School opportunities, the parents are not able to afford a new pair of shorts, no shoes, no fees and all those factors will make the boy not hang in the right places. They don't go to school. (Stakeholder) In turn, with little to no education, SCY have few opportunities for employment, and when they do work, it is usually informal, exploitative labor that may expose them to harmful social-environmental circumstances, as described by a community leader: For the youths it's the unemployment on their side. We have been having the children on the streets for so long and some have been used by other people to do good or bad things, some work for them and are underpaid. (Community Leader) The precarious informal work undertaken by SCY results in extremely low levels of income, as one former street-connected young man explains: They depend on collecting plastics where a kilo goes for 10 shillings (~0.10 USD); a sack full of plastics can't even get to 10 kilos, so per day they can make even 5 shillings (~0.05 USD). (Former street-connected young man) SCY's poor socioeconomic position, shaped and maintained by the socioeconomic-political context in Kenya, determines their differential exposure and susceptibility to intermediary health-compromising factors.
Intermediary determinants of health
The intermediary determinants of health include material circumstances, social-environmental or psychosocial factors, and behavioural and biological factors that affect health (Table 4). In the text, we explore in depth Article 27 on the right to an adequate standard of living in association with material circumstances, and Article 6 (29) on the right to life, survival, and development in association with psychosocial factors.
Material circumstances
Article 27 on the right to an adequate standard of living.
SCY have a right to an adequate standard of living for their physical, mental, spiritual and moral development. This includes State support to parents, others, or directly to the child to ensure they have adequate nutrition, clothing, housing, free and accessible medical care, and education. Participants across counties unanimously expressed that SCY lack basic essential needs, which leave them exposed to health compromising conditions as explained by a Police Officer: I feel for them, it is not right for them to be on the street. They are vulnerable children; they are supposed to be in school. They have many risk factors for disease; they have no food or shelter. They eat dirty food and so are exposed to diseases. (Police Officer) At the foundation of material circumstances is the right to adequate housing. Housing has substantial impacts on health, and SCY have the right to live somewhere in security, peace and dignity. Typically, SCY are homeless or precariously housed in makeshift structures.
Like for me I come from California in Huruma where they dump wastes and many people sleep there, the place is dirty with many flies and they eat from there with pigs and dogs, so it is easy to get sick.... They survive without shelter; they have nowhere to put their belongings. (Former street-connected young man) SCY's right to an adequate standard of living through support to parents, caregivers, and children, and their right to adequate housing, are unmet. These inadequate material circumstances create stressful living conditions and contribute to psychosocial stressors and poor physical condition.
Social, environmental and psychosocial factors
Article 6 on the right to life, survival and development.
SCY have the right to be free from acts and omissions intended or expected to cause their unnatural or premature death, and to enjoy a life with dignity. Participants reported that SCY succumb to mob and vigilante justice. A County official stated that he is unable to protect SCY from public revenge when they commit crimes: You know in some situations you cannot help, if one of them kills or steals from the member of the public, the public will retaliate. (County Children's Coordinator) Yet, SCY may be blamed for crimes they did not commit due to the public's perception of them as thieves and juvenile delinquents as explained: When bad things happen, they get blamed for it and they get mistreated. It is very easy for them to get killed. Some people steal in town, so the community assumes it's the street children who do that, so many get killed for something they haven't done.
(Former street-connected young woman) Regardless of responsibility for criminal activities, the use of mob and vigilante justice is present. SCY frequently die as a result of assault [24]. Moreover, the State may be complicit in some SCY's deaths, as reports of extrajudicial killings of SCY have been documented in the news [45]. SCY require the State's protection from acts that cause their unnatural or premature death, and from circumstances that infringe on their civil and political rights. The passivity of the State in the face of vigilante justice, extrajudicial killings, or the murder of SCY by adults or peers contravenes Article 6 on the right to life, survival and development.
Discussion
Our findings demonstrate the numerous structural, social, and intermediary determinants of health impacting SCY in Kenya. Utilizing the combined frameworks, the WHO conceptual framework on SDH and the CRC General Comment No. 21 (2017) on Children in Street Situations [2,29], we have shed light on how the numerous social and health inequities experienced by SCY in Kenya are produced, maintained, and shaped by structural and social determinants of health and by violations of their human rights. Principally, our findings suggest that the vast social and health disparities experienced by SCY in Kenya are a consequence of failures of governance and of the State to take all appropriate legislative, administrative, and other measures to implement the rights recognized in the CRC, to which it is a signatory, and in the Kenyan Children's Act [2,30,31,63]. Our findings suggest an abdication of responsibility, dysfunction in the system of devolved governance, and a lack of political will to exercise power and invest resources to intervene more appropriately. Repressive public policies instituted by the State include street sweeps, forced migration, criminalization of street-involvement, and targeted violence; our findings suggest these powers are exercised to uphold social order, political prestige, and resources, resulting in the oppression of SCY. The use of these and other repressive strategies, and the lack of special protection and assistance for children deprived of a family environment, require an immediate response and remedial action from the State. Our findings are supported by a Save the Children report [64], which found that SCY lived in "sub-human circumstances and their very basic right to life is at risk every day" and that the government's response has been to criminalize this marginalized group. The report also noted that the Kenyan government "failed to provide concrete and appropriate policies and strategies to uphold the rights of SCY", and that initiatives to address the needs of SCY were "uncoordinated, human and financial resources were inadequate, and the rehabilitation of children was slow". In other low- and middle-income countries, Human Rights Watch has documented several violations against SCY that contravene their human rights and ultimately impact their health and social well-being [13,65-68]. Moreover, a report prepared by the Consortium for Street Children outlines several legislative and policy gaps and a failure to implement and uphold the CRC for SCY in low- and middle-income countries, which is in alignment with our findings [69].
Governments are responsible for protecting and enhancing health equity, and as a signatory of the CRC, Kenya has a legal obligation to implement policies to protect and provide assistance to SCY [2,29-31,63]. It is crucial that stakeholders and civil society, with engagement and input from SCY, take collective action to advocate for a fundamental shift from repressive, harmful responses to child-rights approaches as outlined in the CRC [2]. Upholding children's rights and implementing child-rights policies and interventions to reduce social stratification, exposures, vulnerabilities, and the unequal consequences of ill health will likely improve health equity for SCY [29].
Our findings show that the issues impacting SCY's health equity are far-reaching and extend beyond the health sector. Intersectoral government and civil society action on the structural and intermediary determinants is essential [29]. Commitment and collaboration between county- and national-level government, with input from civil society and SCY, to create contextually relevant and streamlined policies and programs using a child-rights approach is crucial. Structural determinants can be influenced and modified. Our findings suggest that action on key issues, including poverty reduction and social welfare programs, education, halting discriminatory practices, and ensuring special protection and assistance for children deprived of a family environment, should be immediately enhanced and strengthened. Expanding existing social policies, such as Kenya's cash transfer to orphaned and vulnerable children program, to cover SCY and more vulnerable families should be a priority [61]. No less critical is action on the intermediary determinants. Material circumstances may have the most significant impact on marginalized populations, and policy to uphold the right to an adequate standard of living is a fundamental starting point [29]. Housing First initiatives have been successfully implemented with other homeless populations in high-income countries, and similar policies and interventions may be adaptable to the context of SCY in Kenya [70]. Programs that aim to reduce or eliminate the individual and systemic discrimination attached to being street-involved should be developed to create an environment in which SCY are less stigmatized and public policy can legitimately recognize and uphold their rights.
Civil society may play a role in holding government and political leaders accountable for action on the SDH inequities [29]. The participation of civil society, SCY, and their families is a vital component of advancing policy to promote health equity through stimulating political action. For example, the Children Act of Kenya encourages child participation in any procedure affecting a child [32]; SCY's engagement is critical so that their views about their needs, access to resources, and human rights may be heard and taken into account. One possibility for stimulating political action may begin with researchers and stakeholders disseminating relevant and practical evidence to civil society organizations and policymakers, which can help put the issue of SCY on the political agenda and inform policy and decision-making. Moreover, civil society organizations engaged in legal and human rights work can petition policymakers to take action, while monitoring and evaluating their progress with respect to the legal obligations of the State as a CRC signatory [31,71]. The process of empowerment and participation in shifting the political process likely presents challenges within the political context in Kenya and requires careful judgement and thoughtful engagement by social actors working to influence policy.
Avenues for intervening to advance health equity require context-specific action from the micro- to the macro-level, tackling both structural and intermediary determinants [29]. While this analysis sought to identify and understand how SCY's social and health inequities are produced and maintained by the SDH, it did not identify specific avenues and strategies for action. Additional research to identify strategies for tackling the SDH inequities experienced by SCY in Kenya is required and will be fundamental in influencing policy and in designing and implementing programs to ameliorate the health and well-being of this vulnerable population.
This research has both strengths and limitations. This investigation included a wide range of social actors across western Kenya, which makes our findings contextually relevant to these counties. Our analysis was situated in the widely used and well-regarded WHO conceptual framework on SDH in conjunction with the CRC, making it appropriate and applicable to addressing health equity through legal and political changes. The use of newspaper media has both strengths and limitations. Newspapers included in this analysis provided supporting evidence of current events with respect to SCY and the sociopolitical policies targeting this population; however, it is important to recognize that news media may be biased in their reporting, and therefore this evidence should be interpreted with caution. Finally, our findings may not be representative of or generalizable to all counties in Kenya, as we only interviewed participants from five county capitals. The age group of SCY included in the study was also a limitation. SCY younger than 15 years of age may offer other perspectives, as evidence suggests that younger SCY were seen as "vulnerable" and may be deemed worthy of assistance. Research that includes evidence from and about younger age groups of SCY should be conducted to provide a comprehensive understanding of health inequities and how/if they are addressed by the State.
Conclusion
SCY in Kenya experience numerous social and health inequities that are socially produced, avoidable, and unjust. These social and health disparities arise as a result of structural and social determinants stemming from the socioeconomic and political context in Kenya that produces systemic discrimination and influences SCYs' unequal socioeconomic position in society. In turn, SCY lack access to material resources such as housing and basic material needs, and experience numerous psychosocial stressors, such as violence, which directly impact health equity. Remedial action to reverse human rights contraventions and to advance health equity through action on SDH for SCY in Kenya is crucial. This article contributes to understanding how social and health inequities experienced by SCY are produced and maintained and highlights how critical it is to uphold SCY's human rights to improve their health equity.
"year": 2020,
"sha1": "4d180e93c593b4258783a27ca792298f906b293f",
"oa_license": "CCBY",
"oa_url": "https://equityhealthj.biomedcentral.com/track/pdf/10.1186/s12939-020-01255-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d180e93c593b4258783a27ca792298f906b293f",
"s2fieldsofstudy": [
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Sociology",
"Medicine"
]
} |
The cardio-oncology continuum: Bridging the gap between cancer and cardiovascular care
Cancer and cardiovascular disease are two of the leading causes of death worldwide. Although cancer has historically been viewed as a condition characterized by abnormal cell growth and proliferation, it is now recognized that cancer can lead to a variety of cardiovascular diseases. This is due to the direct impact of cancer on the heart and blood vessels, which can cause myocarditis, pericarditis, and vasculitis. Additionally, cancer patients frequently experience systemic effects such as oxidative stress, inflammation, and metabolic dysregulation, which can contribute to the development of cardiovascular risk factors such as hypertension, dyslipidemia, and insulin resistance. It is important to closely monitor patients with cancer, especially those undergoing chemotherapy or radiation therapy, for cardiovascular risk factors and promptly address them. This article aims to explore the clinical implications of the underlying mechanisms connecting cancer and cardiovascular diseases. Our analysis highlights the need for improved cooperation between oncologists and cardiologists, and specialized treatment for cancer survivors.
INTRODUCTION
Cardiovascular disease (CVD) and cancer are the two most prevalent and potentially fatal medical conditions worldwide [1]. In the United States, cancer ranks as the second-leading cause of death [2]; globally, cancer accounts for 8.8 million fatalities per year, compared to 17.7 million caused by CVD [3]. Given the enormous burden these diseases impose on humanity, cancer and CVD research have become top priorities for medical professionals, researchers, and data analysts worldwide. Over the past 25 years, the incidence rates of all malignancies combined have increased by 13%, and this figure is expected to rise by an additional 2% in the next 20 years [4].
The presence of multiple comorbidities, such as hypertension (HTN) and diabetes, in cancer patients can significantly affect their clinical outcomes and care [5]. Although improvements in cancer care have increased the chances of survival, treatment-related adverse effects have raised morbidity and mortality rates [6], especially as the 5-year survival rate has improved significantly over the last 30 years [7].
Despite the apparent differences between cancer and CVDs, mounting evidence suggests that they are biologically linked, owing to the negative effects of anticancer treatment on cardiovascular health and to shared risk factors prevalent in the aging population [8]. Therefore, it is critical to recognize and treat cardiovascular (CV) complications in cancer patients [9,10].
Chemotherapy and radiation therapy increase the risk of stroke, coronary artery disease, and other CVDs such as heart failure [11]. Cancer therapy can cause cardiotoxicity, the most common adverse effect directly affecting heart structure and function. In addition, accelerated development of CVD can occur, particularly when conventional CV risk factors are present [12]. Cancer can also indirectly increase a person's susceptibility to CVDs by causing inflammation, raising the risk of blood clots, and reducing immunological function [13]. Despite the significant impact of CVDs and cancer on global health, little research has been conducted on the connection between these two diseases. Clinical trials of anticancer therapies linked to CV damage usually lack identification of relevant cardiac outcomes [14]. Heart failure and cancer co-occur frequently, and the relationship between these two diseases is becoming increasingly evident [15,16]. Therefore, cardiologists' involvement in managing cancer treatment-related CV side effects and assisting in overall cancer care, from diagnosis to survivorship and beyond, has become increasingly recommended. This approach is called cardio-oncology [17,18]. The objective of this paper is to examine the latest studies on the relationship between cancer and CVD, scrutinize the various mechanisms connecting these two diseases, and explore potential strategies for reducing the risk of CV complications in individuals with cancer. The ultimate goal is to enhance the treatment and outcomes of patients with both cancer and CVD, offering them optimism for a brighter future despite these serious health conditions.
Adverse effects of cancer therapy
Cancer and its treatment modalities have long been recognized to induce CV functional decline in individuals with cancer, encompassing conditions such as left ventricular diastolic dysfunction, thromboembolic diseases, and HTN (Figure 1). Given the escalating prevalence of CV risk factors within our population, the additional harm caused by cancer therapies can have a detrimental impact on the CV health of those receiving therapy. Therefore, it is crucial to understand the diverse mechanisms underlying cardiac toxicities associated with prevailing cancer treatment modalities, as well as to explore various approaches to prevent or manage these risks. Anthracyclines, such as doxorubicin, daunorubicin, epirubicin, and idarubicin, are among the most widely used chemotherapeutic agents and have been shown to be effective against various cancers such as breast cancer, lymphoma, and other solid organ tumors [19]. They exert their chemotherapeutic effect by inhibiting the activity of topoisomerase-2, causing irreversible double-stranded breaks in genomic regions of active DNA synthesis, which eventually leads to cell apoptosis, preferentially in cancer cells that are actively proliferating as opposed to normal mitotic cells [20,21]. However, the emergence of cardiotoxicity, which adversely affects patient outcomes and severely restricts oncological treatment options, may compromise the clinical efficacy of chemotherapeutic agents. Anthracyclines have been proven to be highly cardiotoxic by generating free radicals and disrupting antioxidant defence mechanisms and cell repair [22]. Additionally, they disrupt the mitochondrial membrane structure, generate cardiotoxic cytokines, cause intracellular calcium overload, and uncouple the electron transport chain [22]. The detrimental effect of anthracyclines on cardiac cells is dose-dependent, as they cause cell apoptosis at lower doses and necrosis as the dose increases, with more cases showing a decline in left ventricular ejection fraction within the first year after the end of treatment [23]. Outcomes, however, depend significantly on the patient's pretreatment left ventricular ejection fraction (LVEF): early detection of a decline in LVEF through periodic monitoring of cardiac function over time, together with prompt initiation of heart failure treatment, has been shown to improve the patient's cardiac function [24]. The risk is higher in those who have undergone chest radiation, those who have received high anthracycline doses, and those with pre-existing CV risk factors such as HTN, obesity, diabetes mellitus, smoking, and dyslipidemia. Additionally, genetic changes in the ATP-binding cassette transporter (ABC) genes also influence anthracycline cardiotoxicity. The active cellular efflux of drugs, such as anthracyclines, is promoted by ABC genes, which contribute to multidrug resistance. If the activity of these genes is reduced, anthracycline accumulation within cells increases, leading to cardiotoxicity (Table 1) [25].
Another class of chemotherapeutic drugs that have proven to be cardiotoxic are HER2/neu protein inhibitors, which include taxanes and monoclonal antibodies like trastuzumab. They are widely used in the treatment of HER2/neu receptor-positive breast cancers, which account for nearly 15-20% of invasive breast tumors. Overexpression of the HER2/neu receptor, which regulates cell survival, leads to increased cell proliferation and growth [26]. Trastuzumab, a monoclonal antibody that targets the extracellular domain of the HER2/neu protein, is effective in treating HER2/neu receptor-positive breast cancer [27]. Trastuzumab therapy was previously considered generally safe and well tolerated [28]; however, advances in research and patient survival rates later illustrated the cardiotoxic effects of trastuzumab, which, unlike anthracycline cardiotoxicity, is not dose-dependent and is reversible [23]. It causes a decline in the contractile activity of cardiomyocytes rather than necrosis and, alone, shows minimal signs of reduced LVEF and heart failure.
To ensure the safety of patients receiving trastuzumab, discontinuation of the drug is recommended if there is a decline in LVEF of >15% or if the LVEF falls to 50% or lower at any time during treatment. It is important to note that patients who receive trastuzumab alone have a lower incidence of heart failure and symptomatic LVEF decline compared to those who receive it in combination with anthracyclines, and hence require less frequent cardiac monitoring (Table 1) [27,29,30].
Lapatinib, a HER2 receptor inhibitor that blocks the intracellular tyrosine kinase domain of the HER2 receptor and can act on both ligand-induced and ligand-independent pathways of HER2 signaling, can be given to patients with advanced and aggressive HER2/neu-positive breast cancer who have undergone previous anthracycline and trastuzumab treatment.
According to recent research, lapatinib is less likely to cause cardiotoxic adverse effects than trastuzumab [31]. Other HER2 receptor inhibitors, such as neratinib, afatinib, and pertuzumab, have also demonstrated lower cardiotoxicity when administered alone or in combination with trastuzumab [32]. Radiation therapy has significantly reduced the risk of cancer recurrence; however, research indicates that it increases the risk of developing various CVDs [33]. The underlying pathologies in the pericardium and myocardium that contribute to these CV events include free radical damage, microvascular and macrovascular injury, and valve stenosis, which typically manifest 10-15 years post-radiation exposure [34].
Studies have also shown that women who received radiation therapy for breast cancer have a significantly higher risk of developing pericarditis, valvular heart defects, and coronary stenosis [35]. Women undergoing radiation therapy for left breast cancer exhibit a higher occurrence of stenosis in the mid and distal portions of the left anterior descending coronary artery than those undergoing radiation for right breast cancer [33], and this correlation cannot be accounted for by factors such as pre-existing heart disease, underlying cardiac risk factors, or radiation exposure [36] (Table 1).
HTN is also documented as the most common severe adverse event in patients with cancer receiving chemotherapy [37] and is one of the major risk factors contributing to CVDs such as ischemic heart disease, stroke, and heart failure, as well as kidney disease [38]. Cancer therapies cause HTN through a variety of mechanisms, including direct effects on vascular endothelial cells or indirect renal effects; HTN is most notably seen with vascular endothelial growth factor (VEGF) inhibitors such as tyrosine kinase inhibitors, with proteasome inhibitors and calcineurin inhibitors, and with adjunctive therapies like corticosteroids, exogenous erythropoietin, and non-steroidal anti-inflammatory drugs [39].
VEGF signaling pathway inhibitors (VSPIs) such as bevacizumab, sorafenib, and sunitinib are widely used against tumors such as renal, hepatocellular, thyroid, and gastrointestinal stromal tumors, and are associated with cardiovascular comorbidities such as HTN, with a reported incidence of 20-90% [40]. They act by inhibiting the VEGF receptors on endothelial cells, thus inhibiting angiogenesis and depriving the tumorous growth of oxygen and nutrient supply [41]. VEGF inhibitors have been documented to decrease left ventricular diastolic function, implicating an increase in systemic vascular resistance (SVR) as the cause of elevated blood pressure, which is the product of cardiac output and SVR [42]. Antiangiogenic VSPIs can increase SVR through a decrease in the density of microvessels, downregulation of nitric oxide synthesis [43], and increasing concentrations of vasoconstrictors such as endothelin-1, which contributes to an increase in vascular resistance and hence causes HTN. Through their indirect effect on renal endothelial cells and podocyte VEGF expression, VSPIs also lead to thrombotic microangiopathy and thereby HTN [44]. They are also associated with increased synthesis of reactive oxygen species, including H2O2 and O2−, causing vascular oxidative stress [45] and contributing to the cardiotoxic profile of VSPIs (Table 2).
A meta-analysis of five randomized controlled studies found that the use of rapidly accelerated fibrosarcoma B-type (BRAF)/mitogen-activated protein kinase kinase (MEK) inhibitors, such as vemurafenib-dabrafenib and encorafenib-trametinib, in the treatment of BRAF-mutant melanoma and BRAF-mutant colorectal cancer was also linked to hypertension, with a reported incidence of 19.5% [46]. The proposed mechanism of action is linked to a decrease in nitric oxide generation and an increase in CD47 in melanoma cells cultured in vitro, which blocks nitric oxide/cyclic guanosine monophosphate (NO/cGMP) signaling via thrombospondin-1 [46].
DISCUSSION
The probability of death due to CVD in individuals with cancer is influenced by multiple factors, including age at diagnosis, sex, race, primary cancer stage, year of diagnosis, and use of surgery [47]. With the declining cancer-specific mortality rate and the aging of the surviving population, there is an increasing overlap between patients with heart disease and cancer [48]. Recent statistics show that the number of cancer survivors in the United States has exceeded 16.7 million [49,50]. The American Heart Association (AHA) has systematically evaluated and classified scientific data into clinical practice guidelines to improve CV health [51].
In recognition of the emerging field of cardio-oncology, the National Comprehensive Cancer Network (NCCN) also offers guidelines on the stages, treatment, and surveillance of cardiomyopathy in cancer patients [52]. A study analyzed 7,529,481 cancer patients to investigate fatal cardiac disease in cancer and found that 5.24% (394,849) of them died of heart disease (Table 3) [47]. The mortality rate due to heart disease in all cancer patients was 10.61 per 10,000 person-years, and the standardized mortality ratio (SMR) for fatal heart disease was 2.24 (95% CI: 2.23-2.25; p = 0.0001), as shown in Table 3 [47]. Among the various types of cancer, lung, breast, prostate, and colorectal cancer patients are at a higher risk of dying from fatal heart disease than patients with other types of cancer [47]. In the case of patients with gastroenteropancreatic neuroendocrine neoplasms, the risk of CVD was 1.2 times greater than that of the general population in the United States [53]. A study found that among gastrointestinal stromal tumor (GIST) patients identified between 2000 and 2019, 477 (4.0%) died from CVD, and their risk of CV mortality was 3.23 times higher than that of the US population, with a 95% CI of 2.97-3.52 (Table 3) [54]. The conditions with the highest SMRs were various diseases of the arteries, arterioles, and capillaries (SMR 10.35, 95% CI: 2.82-26.49), aortic aneurysm and dissection (SMR 3.58, 95% CI: 1.54-7.04), and cerebrovascular disorders (SMR 3.28, 95% CI: 2.62-4.04) (Table 4) [54].
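For readers unfamiliar with the statistic, an SMR is the ratio of observed to expected deaths, with confidence intervals conventionally derived from the Poisson distribution of the observed count. The following Python sketch shows one common way to compute it; the expected count is a made-up value chosen only so that the output roughly matches the GIST figure quoted above, not a number taken from the cited study.

```python
# Sketch: standardized mortality ratio (SMR) with an exact Poisson 95% CI.
from scipy.stats import chi2

def smr_with_ci(observed: int, expected: float, alpha: float = 0.05):
    """SMR = observed / expected deaths; CI via the exact Poisson method."""
    smr = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, (lower, upper)

# Example: 477 observed CVD deaths against a hypothetical 148 expected
smr, (lo, hi) = smr_with_ci(observed=477, expected=148.0)
print(f"SMR = {smr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```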
Smoking, HTN, obesity, and genetic predispositions are risk factors associated with an increased likelihood of both cancer and CVD (Figure 2) [55]. Aberrant Wnt signaling is a crucial factor in the pathophysiology of atherosclerosis and several malignancies [56,57]. Various genetic risk factors for cancer and CVD have been identified, and reports suggest that mutations in LRP6, a Wnt binding protein, may cause CVD and other cancers [58]. In addition, acquired mutations in DYRK1B, which is overexpressed in many types of tumors and has a functional mutation associated with CV disease, support the genetic connection between heart failure and cancer [59,60]. Nicotine, one of the harmful agents delivered by smoking, has been implicated in the pathogenesis of both cancer and CVD [61,62]. Patients with pre-existing CVD are at risk of developing smoking-related cancers [63].
Obesity, a risk factor for one in five cancer types, has been linked to both CVD and cancer [64]. Adipose tissue is a significant source of estrogen, making women more susceptible to hormone-driven malignancies such as breast and ovarian cancers [65]. Obese individuals are at an increased risk of developing heart failure but may have a survival advantage, a phenomenon currently under investigation in cancer patients [66,67].
HTN was found to have the highest incidence among stomach and ovarian cancer patients, with over 30% of the 25,000 cancer patients in a retrospective cohort diagnosed with HTN (Table 5) [68]. Compared to periods when chemotherapy was not administered, treatment with cytotoxic, targeted, or combination chemotherapy was associated with a 2- to 3.5-fold increase in the hazard of developing any degree of HTN [59]. Chemotherapy- and radiation-induced cardiotoxicity is becoming more prevalent, as these modes of treatment are the primary options in most cancer treatment regimens (Table 6). Chemotherapy can cause two types of cardiac toxicity: type 1 and type 2 chemotherapy-induced cardiotoxicity, as stated in a study [69]. Type 1 chemotherapy-induced cardiotoxicity is characterized by cardiomyocyte damage, with anthracyclines being the prototypical drug class. In contrast, type 2 chemotherapy-induced cardiotoxicity, characterized by reversible dysfunction without structural abnormalities that resolves upon therapy termination, is exemplified by trastuzumab (Herceptin), according to the same study [69]. However, recent research using cardiac magnetic resonance imaging (MRI) has shown that this classification may not be as rigid as previously believed, as scar formation has been detected in patients presumed to have type 2 cardiotoxicity, and appropriate heart failure therapy improved presumed type 1 cardiotoxicity [70,71]. VEGF plays a significant role in angiogenesis, endothelial cell survival, vasodilation, and cardiac contraction [72]. HTN is the most common CV adverse effect of VEGF-targeted angiogenesis inhibitor therapy, indicating the importance of VEGF and related signaling pathways in blood pressure regulation [73]. The hypertensive effect of VEGF-targeted therapy appears to be dose-dependent, since it is more frequently observed in patients receiving higher doses of anti-VEGF cancer therapy, as reported in previous studies [74,75].
In the treatment of males with advanced testicular cancer, vascular damage is one of the most significant long-term side effects of cisplatin-based chemotherapy [76]. The main CVDs that have been studied are Raynaud's phenomenon, early atherosclerosis, dyslipidemia, HTN, coronary artery disease, and thromboembolic events [77]. Patients who survive testicular cancer often develop CVD risk factors such as obesity, HTN, hyperlipidemia, and diabetes mellitus [78].
Owing to the increasing incidence of thoracic tumors and prolonged patient survival, radiation-induced heart disease has become more prevalent. Radiation can affect every structure of the heart, causing anything from subtle histopathological alterations to overt clinical illness [79]. The most typical type of involvement affects the pericardium and comprises constrictive pericarditis and asymptomatic pericardial effusion [79]. Compared to women without breast cancer, women with breast cancer had a higher incidence of CVD events, a higher mortality rate linked to CVDs, and a higher overall mortality rate [80]. When the heart was incidentally exposed during breast cancer radiation therapy, the rate of major coronary events increased by 7.4% per gray of mean heart dose [81]. This increase begins a few years after exposure and lasts for at least 20 years [81].
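The "7.4% per gray" figure corresponds to a linear excess-relative-risk model of the form rate ratio = 1 + 0.074 x mean heart dose. A minimal illustration follows; only the coefficient comes from the text, and the doses are invented for the example.

```python
# Sketch of the linear excess-relative-risk model implied above.
ERR_PER_GY = 0.074  # 7.4% increase in major coronary events per gray

def coronary_rate_ratio(mean_heart_dose_gy: float) -> float:
    """Rate ratio relative to an unexposed heart."""
    return 1.0 + ERR_PER_GY * mean_heart_dose_gy

for dose in (1.0, 3.0, 5.0):  # hypothetical mean heart doses in Gy
    print(f"{dose:.0f} Gy -> rate ratio {coronary_rate_ratio(dose):.2f}")
```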
In view of the aforementioned factors, there is an increasing need to provide recommendations, with suitable scoring systems, to identify patients with cancer who are at high risk of CVD-related morbidity and mortality. The American Heart Association has made a strong case for the value of cardiac rehabilitation programs for cancer patients to mitigate CV risk and has recommended the use of the CORE (cardio-oncology rehabilitation) algorithm to recognize high-risk patients [48]. According to the AHA/ACC, referring patients with acute coronary syndromes to cardiac rehabilitation programs has long been a Class I recommendation because it improves patients' functional capacity through supervised exercise programs, facilitates CV risk reduction through appropriate counseling, lowers the rate of recurrent hospitalizations and, as a result, lowers CV morbidity and mortality [48]. The same strategy, when implemented properly with multimodal collaboration between oncologists, cardiologists, and primary care physicians, can help reduce CVD risk in cancer patients as well.
Multimodal collaboration, with treating providers working together to evaluate patients' CV risk in a timely fashion and take appropriate action, can undoubtedly help address this problem. Patients' baseline CV risk should be evaluated before the onset of cancer therapy using a detailed history, thorough physical examination, and supplementary diagnostic modalities like ECG or echocardiography [69]. Once treatment has started, it must be determined which patients need CV follow-up. Following cancer treatment, follow-up recommendations should be tailored to the patient's individual CV risk and comorbidity profile, the specific anticancer therapy used, the overall survival prognosis of the underlying malignancy, and any adverse cardiac effects during treatment.
Long-term cardiac surveillance programs are highly recommended for patients with breast cancer or lymphoma and for individuals who have had mediastinal radiation therapy [69]. Anthracycline-induced cardiotoxicity (AIC) is frequently irreversible if not detected early and can result in progressive end-stage heart failure despite the use of medical therapy based on heart failure guidelines (Table 6) [73].
It is predicted that more cancer survivors will develop chemotherapy-induced cardiomyopathy and require orthotopic heart transplantation, because newer chemotherapy drugs have demonstrated potential cardiotoxicity and cancer mortality is declining [82]. Trastuzumab withdrawal or discontinuation (4-8 weeks) can reverse systolic dysfunction in the early phases of trastuzumab-induced cardiotoxicity, allowing the initiation of standard heart failure treatment (ACE inhibitors and beta-blockers) (Table 6) [83]. Although there is disagreement regarding the precise extent of recovery from trastuzumab-induced cardiotoxicity, most patients report improved health following trastuzumab withdrawal along with the management of cardiac symptoms [84].
Despite emerging data showing that CV treatment can improve both cardiac-specific and cancer-specific outcomes, cardio-oncology patients continue to receive inadequate CV care, although cardio-oncology services are becoming more prevalent in academic centers and local communities [85,86]. Another requirement for cardio-oncology programs is prompt access to testing and consultation, as both delayed treatment of cardiac issues and non-adherence to recommended cancer treatments are linked to unfavorable results [87]. Additional research is urgently required to determine when invasive treatments such as revascularization are beneficial, given the extreme prevalence and coexistence of the two disease processes [88]. The utilization of a cardio-oncology service line, which integrates crucial infrastructure components based on a standardized system of care, is a practical and efficient care model for enhancing cardio-oncology care quality, patient access, and health equity in sizable, multi-hospital health systems [89].
CONCLUSION
To effectively prevent and treat cancer and CVD, it is crucial to understand the relationship between these two diseases. Cancer patients are more likely to develop CVDs such as heart failure, coronary artery disease, and cardiomyopathy. This risk is particularly high for patients undergoing chemotherapy, radiation therapy, and targeted therapies for cancer treatment. The association between CVD and cancer is complex and involves intricate underlying mechanisms. Although the exact mechanisms remain unclear, vascular damage, oxidative stress, and inflammation are believed to play significant roles. Moreover, certain cancer treatments can cause cardiotoxicity, leading to impaired cardiac function and an increased risk of CV events. These outcomes have important implications for therapy, as patients with cancer are more likely to experience CV problems even years after their cancer treatment has ended. This underscores the importance of careful monitoring and follow-up care for cancer survivors, with a focus on early identification and treatment of CV issues.
More collaboration is needed between oncologists and cardiologists, and cardio-oncology should be developed further to address the unique needs of cancer patients with CVD. This collaboration should include the development of risk assessment tools and treatment guidelines tailored to this population, as well as the investigation of innovative interventions and treatments for preventing and managing CV complications in cancer patients. The significant impact of cancer on CV health highlighted in this review underscores the importance of increasing awareness, working collaboratively, and providing specialized care for those at risk. Through continued research and collaboration, we can improve the outcomes and quality of life of cancer survivors, ultimately reducing the burden of CVD associated with cancer.
Figure 2. Common risk factors associated with CVD and cancer.
Table 1. Adverse effects of chemotherapy and radiation therapy. [The table lists the agents (anthracyclines, HER2/neu protein inhibitors, lapatinib, other HER2 receptor inhibitors, and radiation therapy) together with their mechanisms of cardiotoxicity.] Notes: HER2, human epidermal growth factor receptor 2; CVD, cardiovascular diseases; N/A, not applicable; LVEF, left ventricular ejection fraction.
Table 3. Cardiovascular mortality rates in cancer patients.
Table 4. Standardized mortality ratios (SMR) for various cardiovascular diseases found in cancer patients.
Table 5. Incidence of hypertension in patients with stomach and ovarian cancer.
Table 6. Various cardiovascular diseases linked with cancer therapies.
"year": 2024,
"sha1": "384be5a4c1cab969b4a376e47943d6082391305f",
"oa_license": "CCBY",
"oa_url": "https://globalcardiologyscienceandpractice.com/index.php/gcsp/article/download/641/566",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0f535277303032462c68ce640a8613ac590a21f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
IL-1β Induced Cytokine Expression by Spinal Astrocytes Can Play a Role in the Maintenance of Chronic Inflammatory Pain
It is now widely accepted that the glial cells of the central nervous system (CNS) are key players in many processes, especially when they are activated via neuron-glia or glia-glia interactions. In turn, many of the glia-derived pro-inflammatory cytokines contribute to central sensitization during inflammation- or nerve injury-evoked pathological pain conditions. The prototype of pro-inflammatory cytokines is interleukin-1beta (IL-1β), which has widespread functions in inflammatory processes. Our earlier findings showed that in the spinal cord (besides neurons) astrocytes express the ligand-binding interleukin-1 receptor type 1 (IL-1R1) subunit of the IL-1 receptor in the spinal dorsal horn in the chronic phase of inflammatory pain. Interestingly, spinal astrocytes are also the main source of IL-1β itself, which in turn acts on its neuronal and astrocytic IL-1R1, leading to cell-type specific responses. In the initial experiments we measured the IL-1β concentration in the spinal cord of C57BL/6 mice during the course of complete Freund's adjuvant (CFA)-induced inflammatory pain and observed a peak of the IL-1β level at the time of highest mechanical sensitivity. In order to further study astrocytic activation, primary astrocyte cultures from spinal cords of C57BL/6 wild type and IL-1R1-deficient mice were exposed to IL-1β in concentrations corresponding to the spinal levels in the CFA-induced pain model. Using a cytokine array method we observed a significant increase in the expression levels of three cytokines: interleukin-6 (IL-6), granulocyte-macrophage colony stimulating factor (GM-CSF) and chemokine (C-C motif) ligand 5 (CCL5 or RANTES). We also observed that the secretion of the three cytokines is mediated by the NF-κB signaling pathway. Our data complete the picture of the IL-1β-triggered cytokine cascade in spinal astrocytes, which may enhance the activation of local cells (neurons and glia alike) and contribute to the prolonged maintenance of chronic pain. All these cytokines and the NF-κB pathway are possible targets of pain therapy.
INTRODUCTION
Astrocytes are the most abundant glial cells in the central nervous system (CNS) and, as supporting cells, are responsible for many functions, e.g., maintenance of the ionic milieu, induction of the blood-brain barrier, removal of excess neurotransmitters, etc. (Verkhratsky and Nedergaard, 2018). However, it is now clear that astrocytes are not merely supporting cells in the CNS, but are capable of modulating neuronal excitability (Suter et al., 2007; Milligan and Watkins, 2009; Kuner, 2010; Ji et al., 2019) and in this way contribute to the onset and maintenance of numerous CNS pathologies, including chronic pain (Gao and Ji, 2010; Chiang et al., 2012; Liddelow and Barres, 2015). One possible way of modulating neuronal activity is the astroglial production of cytokines and chemokines, which can act on their neuronal receptors (Zhang and An, 2007) and thus contribute to neuron-glia interactions. Such enhanced cytokine expression has been observed in neuropathic and inflammatory pain alike (Calvo et al., 2012; Jayaraj et al., 2019). Some of these cytokines and chemokines are also means of glia-glia communication, as their receptors are expressed by glial cells (Conti et al., 2008; Trettel et al., 2019).
In this study we focused on the role of interleukin-1β (IL-1β), the prototype of pro-inflammatory cytokines and a regulator of many immunological functions (Sims and Smith, 2010). It has been shown to be involved in the pathomechanism of several inflammatory disorders that are also associated with pain (Dinarello et al., 2012). In the CNS its receptor (IL-1R1) was found to be expressed by neurons (Niederberger et al., 2007; Cao and Zhang, 2008; Zhu et al., 2008) and glial cells (Wang et al., 2006; Zhu et al., 2008; Liao et al., 2011; Gruber-Schoffnegger et al., 2013). It has been reported that IL-1β can induce astrocytic activation and astrogliosis (Herx and Yong, 2001). Our earlier findings (Holló et al., 2017) also suggest that during inflammatory pain IL-1β can act on spinal neurons and astrocytes. It has been revealed that IL-1β induces cell type-specific responses in the cells of the CNS due to a neuron-specific isoform of the IL-1 receptor accessory protein (IL-1RAcP), which is a required receptor partner in IL-1 signaling (Huang et al., 2011). In nerve cells IL-1β modulates neuronal excitability by, e.g., potentiation of NMDA-mediated intracellular calcium signaling (Viviani et al., 2003; Cao and Zhang, 2008), while IL-1β-activated astrocytes produce a cascade of inflammatory mediators which can further enhance and possibly prolong neuroinflammation-induced chronic pain (Zhang and An, 2007).
In this study we intended to identify the cytokines and chemokines that are secreted by IL-1β-stimulated spinal astrocytes. We also intended to investigate the activation of the NF-κB signaling pathway, which is associated with astrocyte-specific IL-1β signaling.
Animals
The study protocol was reviewed and approved by the Animal Care Committee of the University of Debrecen, Hungary, in accordance with national laws and European Union regulations [European Communities Council Directive of 24 November 1986 (86/609/EEC)], and the experiments were properly conducted under the supervision of the University's Guidelines for Animal Experimentation. All animals were kept under standard conditions with chow and water ad libitum. The experiments were performed on male C57BL/6 mice (Gödöllő, Hungary). The animals were divided into experimental groups.
Experimental group 1 comprised 6 control mice and experimental group 2 comprised 21 CFA-treated animals. In the treated group, chronic inflammation was induced by intraplantar injection of 50 µl of a 1:1 mixture of physiological saline solution and complete Freund's adjuvant (CFA) (Sigma, St Louis, United States) into the right hindpaw, according to the method described earlier (Hylden et al., 1989).
Nociceptive Behavioral Test
Control and CFA-treated animals were tested for paw withdrawal responses to noxious mechanical stimuli. Mechanical sensitivity of the animals was assessed by a modified von Frey test (Dynamic Plantar Aesthesiometer, Ugo Basile, Gemonio, Italy). Animals were placed into a cage with acrylic sidewalls and a mesh floor. After 15 min of habituation, a flexible, von Frey-type filament (diameter = 0.5 mm) exerted increasing force on the plantar surface of the hindpaw until the animal withdrew it. The mechanical withdrawal threshold (MWT) for both hind paws was measured before CFA injection and the tests were repeated daily after CFA injection. The MWT was detected automatically. The test was repeated five times for each paw with 2 min intervals, alternating between the right and the left paw. Mean values and the standard error of the mean (SEM) were calculated from the experimental data. Statistical differences among the data were calculated with a one-way ANOVA test.
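As a minimal sketch of this analysis step, the snippet below computes the per-day mean MWT and SEM from five repeated filament tests and compares days with a one-way ANOVA; the threshold values are invented for illustration and are not the measured data.

```python
# Sketch: summarizing repeated von Frey measurements and comparing days.
import numpy as np
from scipy.stats import f_oneway, sem

mwt = {  # grams; five repeats per paw per day (hypothetical values)
    "baseline": [4.9, 5.0, 4.8, 4.9, 4.95],
    "day1":     [3.0, 2.9, 3.1, 2.9, 2.95],
    "day4":     [2.0, 1.9, 2.1, 2.0, 1.95],
}
for day, values in mwt.items():
    print(f"{day}: {np.mean(values):.2f} +/- {sem(values):.3f} g")

f_stat, p_value = f_oneway(*mwt.values())  # one-way ANOVA across days
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2e}")
```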
IL-1β Quantitative ELISA
A Mouse IL-1β/IL-1F2 Quantikine ELISA kit (RnD Systems, Minneapolis, United States, cat. no. MLB00C) was used for the measurement of the total IL-1β amount in spinal cord tissue homogenates. Briefly, control animals (n = 3) and CFA-treated animals (n = 3/day) on experimental days 1-5 were sacrificed (after measuring their mechanical pain sensitivity levels), the spinal cord was dissected and the dorsal horn of the L4-L5 segments was removed; the treated (right) and non-treated (left) sides of the tissue were handled separately. After measuring their weight, the tissue samples were mechanically homogenized in ice-cold RIPA buffer supplemented with protease inhibitors (Pierce Protease Inhibitor Mini tablet, Thermo Scientific, Rockford, United States). After 20 min of gentle rocking on ice the samples were centrifuged (10 min, 15,000 rpm, at 4 °C) to remove insoluble tissue debris. 50 µl of supernatant was used in triplicate to determine the IL-1β content of the tissue homogenates. The assay was then performed according to the manufacturer's instructions. Finally, the IL-1β content was calculated per 1 mg of spinal cord tissue.
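The final normalization step amounts to simple arithmetic: the mean of the triplicate ELISA concentrations (pg/ml) multiplied by the homogenate volume gives the total IL-1β in the sample, which is then divided by the tissue weight. A small sketch follows, with a hypothetical homogenization volume and sample weight (neither is stated above).

```python
# Sketch: converting an ELISA readout to pg of IL-1beta per mg of tissue.
import numpy as np

def il1b_per_mg(triplicate_pg_per_ml, homogenate_volume_ml, tissue_weight_mg):
    """Mean triplicate concentration x homogenate volume / tissue weight."""
    mean_conc = np.mean(triplicate_pg_per_ml)      # pg/ml
    total_pg = mean_conc * homogenate_volume_ml    # total IL-1beta in sample
    return total_pg / tissue_weight_mg             # pg per mg tissue

# e.g., 12 mg of dorsal horn homogenized in 0.3 ml of RIPA buffer
print(il1b_per_mg([160.0, 172.0, 168.0],
                  homogenate_volume_ml=0.3,
                  tissue_weight_mg=12.0))  # ~4.17 pg/mg
```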
Primary Astrocyte Cultures
For the production of spinal cord astrocyte cultures we essentially followed the procedure described previously (Hegyi et al., 2018), with slight modifications. Briefly, the whole spinal cord was removed after decapitation from 2-4-day-old C57BL/6 and IL-1 receptor type 1 (IL-1R1)-deficient (B6.129S7-Il1r1tm1Imx/J, stock #003245; created by Labow et al. (1997) and purchased from Jackson Laboratories, Bar Harbor, ME, United States) pups and placed into ice-cold dissecting buffer (136 mM NaCl, 5.2 mM KCl, 0.64 mM Na2HPO4, 0.22 mM KH2PO4, 16.6 mM glucose, 22 mM sucrose, 10 mM HEPES, supplemented with 0.06 U/ml penicillin and 0.06 U/ml streptomycin). The isolated spinal cords were carefully cleaned to remove the meninges, then placed into fresh dissecting solution containing 0.025 g/ml bovine trypsin (Sigma, St Louis, United States) and incubated at 37 °C for 30 min. The solution was then replaced by Minimum Essential Medium (MEM, Gibco, Life Technologies Ltd., Paisley, United Kingdom) supplemented with 10% Fetal Bovine Serum (FBS, Hyclone, GE Healthcare Bio-Sciences, Pittsburgh, United States). After 5 min of incubation at room temperature the tissue pieces were gently triturated with a Pasteur pipette and the cell suspension was filtered through a nylon mesh (Cell Strainer, Sigma, pore size: 100 µm); the cell number was determined, the suspension was diluted to a cell density of 1 × 10^6/ml, and the cells were placed into 24-well tissue culture plates (0.5 ml/well). The cell cultures were kept at 37 °C in a 5% CO2 atmosphere; the medium was replaced the following day and every second day thereafter.
Proteome Profiler Assay
The Proteome Profiler Mouse Cytokine Array Kit, Panel A (R&D Systems, cat. no. ARY006) was utilized for the parallel determination of the relative levels of selected mouse cytokines and chemokines produced by the primary astrocyte cultures. Prior to the assay, cells were stimulated with 10 pg/ml recombinant murine IL-1β protein (PeproTech) for 24 h. The supernatants were then pooled from 12 wells of non-treated control and 12 wells of IL-1β-stimulated cultures. After centrifugation (10 min, 800 rpm at 4 °C), 1 ml of the supernatants was mixed with Array Buffer and the Antibody Detection Cocktail and added to the nitrocellulose membranes pre-coated with the capture antibodies. We then followed the manufacturer's instructions. The signal was developed with the DuoLux Chemiluminescent Substrate Kit (Vector Laboratories, Burlingame, United States, cat. no. SK-6604) and the image was captured with a FluorChem E imager (Protein Simple, San Jose, United States).
The developed membranes were analyzed with the ImageJ software. First, pixel densities were determined for each spot on the membranes, then the background signal was subtracted from each value. Finally, we compared the corresponding signals on different arrays to determine the relative change (fold change) in cytokine levels between samples. Statistical differences among the data were calculated with an ANOVA test; the difference between groups was considered significant if p ≤ 0.05.
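A compact sketch of this quantification is given below, assuming duplicate spots per analyte, as is typical for these arrays; the pixel densities and background value are invented and chosen only to yield a fold change of the order reported later for CCL5.

```python
# Sketch: background-subtracted fold change between two array membranes.
import numpy as np

def fold_change(control_spots, treated_spots, background):
    """Average duplicate spot densities after background subtraction,
    then return the treated/control ratio."""
    ctrl = np.clip(np.asarray(control_spots, float) - background, 1e-9, None)
    trt = np.clip(np.asarray(treated_spots, float) - background, 1e-9, None)
    return trt.mean() / ctrl.mean()

# e.g., hypothetical CCL5 duplicate spots on the two membranes
print(f"CCL5 fold change: "
      f"{fold_change([1200, 1250], [8300, 8500], background=150):.1f}x")
```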
Immunohistochemistry
Immunohistochemistry was performed on astrocyte cultures kept on coverslips placed into 24-well culture dishes. After 7-10 days of culturing the coverslips were removed and the cells were fixed with 4% paraformaldehyde (15 min). The cells were then washed in PBS containing 100 mM glycine, followed by 10 min of incubation in PBS. Nonspecific labeling was blocked with PBS containing 10% normal serum for 50 min. The cells were then incubated with the primary antibodies (anti-IL-6, anti-GM-CSF, anti-CCL5 (PeproTech, produced in rabbit); anti-NF-κB p65 (Thermo Fisher-Invitrogen, Waltham, United States, produced in rabbit); anti-GFAP (Synaptic Systems, Göttingen, Germany, produced in mouse)) overnight at 4 °C.
The following day the cells were incubated with the appropriate secondary antibodies (120 min, RT). Finally, the cell nuclei were stained with DAPI. Immunofluorescent images were acquired with an Olympus FV3000 confocal microscope with a 60× oil-immersion lens (NA: 1.4). Single 1-µm-thick optical sections were scanned from the cell cultures; the confocal settings (laser power, confocal aperture and gain) were identical for all samples. The scanned images were processed with Adobe Photoshop CS5 software.
Fluorescent Double Immunostaining of Spinal Cord Sections
Non-treated C57BL/6 mice (n = 3) and CFA-treated mice on post-injection day 1 (n = 3) and on post-injection day 4 (n = 3) were deeply anesthetized with sodium pentobarbital intraperitoneally (50 mg/kg) and transcardially perfused with Tyrode's solution (oxygenated with a mixture of 95% O2, 5% CO2), followed by a fixative (4% paraformaldehyde dissolved in 0.1 M phosphate buffer (PB), pH 7.4). After transcardial fixation the lumbar segments of the spinal cord were removed, placed into the same fixative for an additional 4-5 h, and immersed in 10 and 20% sucrose dissolved in 0.1 M PB until they sank. In order to aid reagent penetration the spinal cord was freeze-thawed in liquid nitrogen. After cryoprotection the tissue pieces were embedded into agarose, and the L4-L5 segments of the spinal cords were sectioned at 50 µm on a vibratome, followed by extensive washing in 0.1 M PB.
In order to study the co-localization of GFAP with the cytokine markers, double immunolabelings were performed. Before staining with the primary antibodies, tissue sections were kept in 10% normal donkey serum (Vector Labs) for 50 min. Free-floating sections were then incubated with a mixture of antibodies that contained (a) anti-GFAP (diluted 1:2000, Synaptic Systems, produced in mouse) and one of the following antibodies: (b) anti-IL-6 (1:500, PeproTech, produced in rabbit), (c) anti-GM-CSF (1:500, PeproTech, produced in rabbit), or (d) anti-CCL5 (1:500, PeproTech, produced in rabbit). The sections were kept in the primary antibody mixtures at 4 °C for 2 days and then placed into the solution of secondary antibodies for 2 h (goat anti-mouse IgG conjugated with Alexa Fluor 488 (diluted 1:2000, Invitrogen) and goat anti-rabbit IgG conjugated with Alexa Fluor 555 (diluted 1:2000, Invitrogen)). Finally, sections were mounted on glass slides and covered with Vectashield mounting medium (Vector Labs).
Single 1-µm-thick optical sections were scanned with an Olympus FV3000 confocal microscope. Scanning was performed with a 10× objective lens (NA: 0.4). All settings (laser power, confocal aperture) were the same for all scans. Images obtained from three non-treated and six CFA-injected animals (3 on post-injection day 1 and 4, respectively) were processed with Adobe Photoshop CS5 software. By filtering the background staining, basal threshold values were set for both GFAP and the other markers.
Quantitative Analysis of the Spinal Cord Sections
The confocal fluorescent z-stack sections of spinal dorsal horn specimens were captured with the 60× oil-immersion lens (NA: 1.4) of an Olympus FV3000 confocal laser microscope. All confocal settings (confocal aperture, laser aperture and intensity) were kept constant. The obtained image stacks were further analyzed with the IMARIS software (Bitplane), which evaluated the co-localization between the immunoreactive spots of the IL-6, GM-CSF and CCL5 cytokines and the GFAP-labeled astrocyte profiles rendered by the algorithm from the staining in a standardized way. Co-localization was accepted if the detected spots and astrocyte surfaces were within a distance of 0.3 µm from each other. The analysis was carried out on five randomly selected images per marker, taken from control and treated superficial spinal dorsal horn sections on days 1 and 4 following CFA injection, respectively. The co-localization numbers for each cytokine and time point were averaged and standard errors of the mean were calculated. Statistical differences between groups were analyzed with the Kruskal-Wallis test followed by Mann-Whitney pairwise comparison.
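The 0.3 µm co-localization rule can be emulated with a nearest-neighbor query between spot centers and surface points, followed by the same nonparametric tests. The sketch below uses randomly generated coordinates and dummy per-image ratios; only the 0.3 µm criterion and the choice of tests come from the text, and real data would come from the exported IMARIS spot/surface coordinates.

```python
# Sketch: distance-based co-localization and nonparametric group tests.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)
astro_surface = rng.uniform(0, 50, size=(5000, 3))  # um, GFAP surface points
cytokine_spots = rng.uniform(0, 50, size=(400, 3))  # um, spot centers

tree = cKDTree(astro_surface)
dist, _ = tree.query(cytokine_spots)                # nearest-surface distance
coloc_ratio = np.mean(dist <= 0.3) * 100            # % spots within 0.3 um
print(f"co-localized spots: {coloc_ratio:.1f}%")

# Group comparison across control / day 1 / day 4 images (dummy ratios):
control, day1, day4 = [6.3, 5.9, 6.6], [11.2, 12.0, 11.8], [20.4, 21.9, 21.0]
print(kruskal(control, day1, day4))
print(mannwhitneyu(control, day4))
```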
Cell Treatment and Western Blotting
10-12-day-old astrocyte cultures were stimulated with different concentrations of IL-1β (ranging between 0.1 and 100 pg/ml) for variable durations (between 2 and 24 h). For NF-κB inhibition the cultures were treated with BAY-11-7082 for 4 h together with 10 ng/ml IL-1β in PBS. The SN50 peptide was used as a pretreatment for 1 h, which was followed by 4, 8 or 16 h of IL-1β application. After the treatments, the culture supernatants were removed and the proteins of 200 µl of supernatant were precipitated with a two-fold volume of ice-cold acetone. The precipitated proteins were collected by centrifugation (10 min, 3,000 rpm, 4 °C). Cells attached to the culture plates were lysed, and cytosolic and nuclear fractions were separated as described previously. Supernatants and cytosolic/nuclear fractions were kept in aliquots at −70 °C until their use for western blotting.
Antibody Controls
The specificity of the cytokine antibodies was tested on control spinal cord sections by antibody depletion, as described earlier (Holló et al., 2017). Briefly, the diluted anti-IL-6, anti-GM-CSF and anti-CCL5 antibodies were mixed with recombinant murine IL-6, GM-CSF and CCL5 peptides (PeproTech), respectively. The mixtures, and also the diluted primary antibodies alone, were incubated at 4 °C for 18 h and then centrifuged (4 °C, 30,000 g, 30 min). Thereafter the spinal cord sections were treated with one of the cytokine antibodies or with one of the preincubated mixtures for 48 h at 4 °C, then placed into biotinylated goat anti-rabbit IgG dissolved in TPBS (diluted 1:200, Vector Labs, Burlingame, United States) for 4 h at room temperature. The color reactions were developed with 3,3′-diaminobenzidine. Photomicrographs were taken with an Olympus CX-31 epifluorescence microscope with fixed settings. Pre-adsorption of the cytokine antibodies with the appropriate cytokine proteins abolished the specific immunostaining (Supplementary Material).
Mechanical Pain Sensitivity of C57BL/6 Mice During the Course of CFA-Induced Inflammatory Pain
The mechanical withdrawal threshold (MWT) was very similar in all animals before CFA injection (4.91 ± 0.039 g).
On the treated, right (ipsilateral) hind paw, the CFA treatment induced a highly significant increase in pain sensitivity on the first experimental day: the MWT dropped to 2.96 ± 0.052 g (p = 0.000101). The reduction of the MWT values continued until days 3 and 4, when MWT values decreased to 2.03 ± 0.117 and 1.99 ± 0.065 g, respectively. There was no statistically significant difference between MWT values on day 3 and day 4. We continued the measurements for an additional day and found that on day 5 the nociceptive sensitivity significantly attenuated (p = 0.000381), with MWT values reaching 2.43 ± 0.118 g. No such changes were observed on the left (contralateral) side, where MWT levels were stable throughout the course of the experiment (Figure 1A).
According to other authors (Hylden et al., 1989; Pitzer et al., 2016), the drop in MWT values peaks on the third day after CFA administration, which is in accord with our observations.
Peripheral Inflammation Evoked by CFA Injection Induced Elevation of Spinal IL-1β Expression
Although the expression of IL-1β has already been demonstrated in the spinal cord (Raghavendra et al., 2004; Liu et al., 2008), there have been no data in the literature following the time-dependent changes in the expression of this cytokine in inflammatory pain. Thus, we intended to explore how the expression of IL-1β changes in CFA-induced inflammatory pain at the protein level (Figure 1B) in the dorsal horn tissue extract of the L4-L5 spinal segments, which are known to receive primary afferent inputs from the plantar surface of the hind paw (Molander and Grant, 1985).
Measuring the quantity of IL-1β protein with the quantitative ELISA method, we found that CFA-evoked plantar inflammation induced a significant elevation in the expression of IL-1β. The basal level of the cytokine was 4.15 ± 0.43 pg/mg in the dorsal horn tissue extract of the L4-L5 spinal segments, which significantly (p = 0.049) increased to 9.43 ± 0.73 pg/mg on experimental day 1, correlating with the significant drop of MWT measured on the ipsilateral hindpaw of the CFA-injected animals on the same day. The highest cytokine level was measured on day 4 (15.08 ± 3.66 pg/mg), corresponding to the highest mechanical pain sensitivity. We followed the cytokine level for an additional day and observed that on day 5 of the experiment the IL-1β concentration dropped to 8.69 ± 0.12 pg/mg. Our earlier experiments showed that the mechanical pain sensitivity gradually attenuates after day 5 and finally returns to the basal level on post-injection day 11 (Holló et al., 2017).
We found only one paper in the literature in which the IL-1β content of the spinal cord is given relative to the weight of the tissue. Wang et al. (1997) found a lower level of the cytokine in the control rat spinal cord. This difference can be due to several reasons, e.g., species differences between rats and mice, or the different sample collection, as they used the entire spinal cord for the measurement while we extracted only the dorsal horn tissue.
Significant changes in the nociceptive behavior and the robust elevation of IL-1β production in the spinal cord suggest that the first 24 h is a very critical period during the course of the CFA-evoked pain. Thus, in the further experiments we examined the activation and cytokine/chemokine secretion of spinal astrocyte cultures in this period.
Cultured Spinal Astrocytes Express the Ligand Binding Unit of the IL-1 Receptor (IL-1R1) and They Are Activated by IL-1β in a Concentration-Dependent Way
Astrocytic activation is increasingly accepted as a factor contributing to chronic pain states through its role in central sensitization. Thus, we wanted to investigate astrocytic activation using IL-1β as a stimulating agent, which is upregulated in the spinal cord during inflammatory pain and whose upregulation correlates with the nociceptive sensitivity during the course of CFA-evoked pain.
Although we and others (Wang et al., 2006; Holló et al., 2017) already showed that spinal astrocytes express IL-1R1, we confirmed its expression on the cultured astrocytes (Figure 2A).
As a next step, a cell activity (MTT) assay was performed to identify the lowest IL-1β concentration that significantly increases astrocytic activity. We used serial dilutions (1-100 ng/ml) of the recombinant IL-1β protein and found that 24 h of stimulation with 10 ng/ml of the protein significantly increased cellular activity, to 167 ± 12.23% (p = 0.02815) of the control level (Figure 2B). Thus, we used this concentration of the cytokine in the further stimulation experiments. As a negative control, we also tested the IL-1β responsiveness of spinal astrocyte cultures isolated from IL-1R1 knock-out mice and found no significant upregulation of mitochondrial activity (data not shown).
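In terms of the readout, the MTT result above is the treated absorbance expressed as a percentage of the untreated control. A minimal sketch with invented optical densities, chosen only to land near the reported ~167%:

```python
# Sketch: MTT activity as percent of control, with a two-sample t-test.
import numpy as np
from scipy.stats import ttest_ind

control_od = np.array([0.42, 0.40, 0.44, 0.41])  # hypothetical absorbances
il1b_od = np.array([0.70, 0.66, 0.72, 0.69])     # 10 ng/ml IL-1beta, 24 h

percent = il1b_od / control_od.mean() * 100
print(f"activity: {percent.mean():.0f}% of control")
print(ttest_ind(il1b_od, control_od))
```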
Spinal Astrocytic Secretome Profile Reveals IL-1β-Induced Overexpression of 3 Cytokines and Chemokines Out of the 24 Produced by Non-treated Astrocyte Cultures
We detected the secretion of 24 cytokines/chemokines in the supernatant of spinal astrocyte cultures out of the 40 molecules included in the assay (Figure 3A). When comparing the cytokine levels in culture supernatants obtained from control and IL-1β-treated spinal astrocytes, we observed the production of the same cytokines/chemokines; we did not detect any newly synthesized molecules. We did, however, find significant upregulation of three molecules: CCL5 (RANTES), GM-CSF and IL-6. The highest, approximately sevenfold change was measured for the production of CCL5 (RANTES), which was also highly significant (p = 0.0002) compared with the control level of the chemokine. GM-CSF levels exceeded the control levels approximately fourfold (p = 0.0025) and IL-6 levels approximately twofold (p = 0.0022) (Figure 3B).
We followed the time-dependent changes of the three chemokine/cytokine levels during the critical 24 h, during which we had observed the significant enhancement of IL-1β secretion in the spinal cord. The time-course experiments showed slightly different secretion patterns for the three molecules. Early activation of CCL5 secretion was observed: even after 2 h of IL-1β treatment its level was considerably elevated (179.0 ± 2.59%), and it reached its peak after 8 h of stimulation (253.4 ± 5.6%, p = 0.006272). The activation of IL-6 and GM-CSF was slower, increasing gradually until the end-point of the experiment. At this time the two cytokine levels were significantly elevated compared with the control level (IL-6: 189.5 ± 1.74%, p = 0.002369; GM-CSF: 219.3 ± 2.35%, p = 0.000209), in accordance with the Proteome Profiler data (Figure 4A).
We also carried out western blot analysis of the supernatants of the control and IL-1β-stimulated cultures for the detection of the three significantly upregulated cytokines. We observed immunoreactive bands at the expected molecular weight of each cytokine/chemokine. The changes in cytokine levels observed on the western blots were comparable to the data obtained in the ELISA experiments. In the cell lysates of the cultures, IL-1β induced a gradual increase of GFAP expression, similar to what has been described previously by other authors (Liddelow et al., 2017; Yang et al., 2019) (Figure 4B).
To further demonstrate the astrocytic expression of the cytokines, we also performed double immunostainings on control astrocytes and on cultures that received 24 h of IL-1β treatment. The 1-µm-thick optical images showed co-localization of the three cytokines/chemokines with the GFAP marker in the control cultures and enhanced expression of the cytokines and GFAP in the IL-1β-stimulated astrocytes (Figure 5).
Spinal Expression of IL-6, GM-CSF and CCL5 Is Increased During the Course of CFA-Evoked Inflammatory Pain
To determine whether the three selected cytokines were also overexpressed in the spinal dorsal horn upon peripheral CFA injection, spinal cord sections were prepared from the L4-L5 segments, which receive primary afferent fibers from the hindpaws. In control sections low levels of cytokines were detected, mostly restricted to the superficial laminae (corresponding to Rexed laminae I and II; Figures 6.1-6.3, 6.10-6.12, 6.19-6.21). To follow the time-dependent changes of the selected cytokines during the course of the pain model, we obtained confocal images from the initial phase of the model (post-injection day 1) and at the time of highest mechanical sensitivity (post-injection day 4). Interestingly, in the case of IL-6 and GM-CSF we observed a considerable elevation of cytokine expression on both sides of the spinal cord on the first post-injection day. The intensity of the immunoreactivity increased further on post-injection day 4 and was more pronounced on the ipsilateral (right) side (Figures 6.4-6.9, 6.13-6.18). The CCL5 chemokine showed a different time course: on the first post-injection day only a moderate enhancement of the CCL5+ signal was visible (Figures 6.22-6.24), and it was only on post-injection day 4 (Figures 6.25-6.27) that a pronounced elevation of the chemokine was detected in the superficial layers of the ipsilateral (right side) spinal dorsal horn.
Quantitative Analysis of Spinal IL-6, GM-CSF and CCL5 Expression and Co-localization With the GFAP Marker During the Course of the CFA-Induced Pain Model

The IMARIS software analysis shows that peripheral CFA injection significantly enhanced the number of IL-6, GM-CSF and CCL5 immunoreactive spots in the spinal dorsal horn on experimental days 1 and 4. The changes in the absolute numbers of immunoreactive puncta (Figures 7A-C) were in all cases highly significant (p < 0.001). Similarly, the number of co-localized cytokine spots on astrocytes was found to be significantly higher in the CFA-treated animals compared to control animals (Figures 7D-F). The co-localization between GFAP and IL-6, GM-CSF and CCL5 was quite similar under control conditions (6.25 ± 1.03%, 5.04 ± 0.64%, and 6.79 ± 0.48%, respectively). After the CFA treatment, however, the ratio of co-localization was highest between the IL-6 cytokine and the GFAP marker both on day 1 (11.65 ± 1.33%, p = 0.00085) and on day 4 (21.10 ± 1.21%, p = 0.00019). The enhancement in co-localization values for the other two cytokines was more moderate, but still significant. On experimental day 1 the co-localization with GFAP profiles increased to 7.33 ± 0.69% (p = 0.002) in the case of GM-CSF and to 8.93 ± 0.77% (p = 0.012) in the case of CCL5. The co-localization values for the latter two cytokines increased significantly further on experimental day 4 (GM-CSF: 14.57 ± 0.064%, p = 0.00019; CCL5: 17.53 ± 1.26%, p = 0.00012).
We also demonstrated the co-localization between the three studied cytokines and the GFAP+ astrocytic profiles on high magnification confocal images (Figure 7G.1-9).
IL-1β Activates the NF-κB Pathway in Spinal Astrocytes
IL-1β signaling is associated with the NF-κB and MAPK pathways (Medzhitov, 2001). As astrocyte-specific activation of the NF-κB pathway upon IL-1β stimulation has already been reported by Srinivasan et al. (2004) in the hippocampus, we intended to explore the role of the pathway in the spinal astrocyte cultures.
To reveal the effect of IL-1β on NF-κB signaling we studied several members of the pathway in the astrocyte cell lysates (Oeckinghaus et al., 2011). We detected the cytosolic and nuclear levels of p65 (RelA) and the inhibitory IκB. When following the time course of this IL-1β-induced NF-κB activation we found elevation of the NF-κB p65 protein in the cytosol of astrocytes within 2 h. The translocation of the p65 protein to the nucleus occurred after 4 h of stimulation and was paralleled by a decrease of the inhibitory IκB unit in the cytosol (Figure 8A). The nuclear translocation of the p65 protein was also demonstrated by immunohistological staining of astrocyte cultures, which showed the presence of the protein in the nucleus after 2 h of IL-1β stimulation (Figures 8B e-h); the signal increased further after 2 additional hours of treatment (Figures 8B i-l). These data demonstrate the IL-1β-induced activation of the NF-κB pathway in spinal astrocytes.
FIGURE 5 | Localization of cytokines in the cytoplasmic compartment of GFAP+ spinal astrocyte cultures: enhanced cytokine expression was observed in IL-1β-stimulated spinal astrocytes. Micrographs of single 1 µm thick laser scanning confocal optical sections illustrate co-localization between the GFAP astrocytic marker (green) and the IL-6, GM-CSF, and CCL5 cytokines (red). Panels (a-c,g,h,l-n) represent control cultures, while (d-f,i-k,o-q) show cultures which received 24 h of IL-1β treatment. Mixed colors (yellow) on the superimposed images (c,f,h,k,n,r) indicate double-labeled structures. On all images DAPI was used to label cell nuclei (blue). Scale bar: 10 µm.
FIGURE 6 | The spinal expression of IL-6, GM-CSF, and CCL5 cytokines during the course of CFA-evoked pain. Representative 1 µm thick confocal images illustrating the co-localization between immunolabeling for IL-6 (red; 6.2, 6.5, 6.8), GM-CSF (red; 6.11, 6.14, 6.17), CCL5 (red; 6.20, 6.23, 6.26) and the immunoreactivity of astrocytes (GFAP, green; first column of the figures) in the superficial spinal dorsal horn. Mixed colors (yellow) on the superimposed images (last column of the figures) indicate double-labeled structures. For each cytokine the first row of images is taken from control samples, whereas the second row represents the first day after CFA injection and the third row shows the spinal expression of the cytokines on post-injection day 4. Increased labeling of the cytokines was apparent on the ipsilateral (right) side of the spinal cords on post-injection day 4 (panels 6.8, 6.17, and 6.26). In each case scale bars: 500 µm.
FIGURE 7 | (A-C) Bar charts represent the number of IL-6, GM-CSF, and CCL5 immunoreactive puncta in the spinal dorsal horn on post-injection days 1 and 4. Data are shown as mean ± SEM of five randomly selected spinal cord specimens (ANOVA, followed by Tukey's pairwise comparison; ***p < 0.001 versus control group). (D-F) Bar charts represent the ratio of co-localization between the IL-6 (D), GM-CSF (E), and CCL5 (F) cytokines and GFAP in control and treated (post-injection days 1 and 4) spinal cord specimens. Data are shown as mean ± SEM of five randomly selected spinal cord specimens (Kruskal-Wallis statistical probe, followed by Mann-Whitney pairwise comparison; *p < 0.05, **p < 0.01, ***p < 0.001 versus control group). (G) Representative high magnification superimposed confocal images of GFAP+ astrocyte profiles (green) and IL-6 (red, G.1-3), GM-CSF (red, G.4-6), and CCL5 (red, G.7-9) immunoreactive puncta. The first row of images (G.1, G.4, G.7) represents control specimens. The second (G.2, G.5, G.8) and third (G.3, G.6, G.9) rows show specimens on post-injection days 1 and 4, respectively. Mixed color (yellow) indicates co-localization of the markers. Scale bar: 10 µm.
IL-1β-Induced IL-6 Secretion Is Significantly Suppressed by the NF-κB Inhibitor BAY 11-7082
To confirm the role of NF-κB activation in the IL-1β-induced cytokine/chemokine secretion we utilized the NF-κB inhibitor BAY 11-7082, an irreversible inhibitor of IκB-α phosphorylation that results in the inactivation of the NF-κB pathway (Pierce et al., 1997; Phulwani et al., 2008). As in the previous experiments we detected the nuclear translocation of the p65 protein after 4 h, the cultures were treated for 4 h with IL-1β and different concentrations (ranging between 1 and 10 µM) of the NF-κB inhibitor. To validate the experimental system we confirmed that the inhibitor blocks the activation of NF-κB: we observed a decrease in the cytosolic p50 and nuclear p65 proteins upon treatment with the inhibitor (Figure 9A). When measuring the cytokine levels in the supernatant of the cultures we observed decreased concentrations in the BAY 11-7082-treated culture supernatants, but the reduction reached a significant level only in the case of IL-6 production (67.3 ± 8.6%, p = 0.009) (Figure 9B).
IL-1β-Induced IL-6, GM-CSF, and CCL5 Secretion Is Significantly Suppressed by the NF-κB Inhibitor SN50 at Different Time Points
As a second approach to influence NF-κB signaling we used the SN50 peptide, a cell-permeable inhibitor of NF-κB nuclear translocation (Lin et al., 1995). Its specificity for the NF-κB pathway is confirmed at lower concentrations (Boothby, 2001); thus we used the SN50 peptide for 1 h pre-treatment of the astrocyte cultures at 5 and 10 µM concentrations, followed by 4, 8 or 16 h of IL-1β stimulation. We detected significant attenuation of GM-CSF expression in the SN50 pre-treated cultures after 8 h of IL-1β treatment compared to the IL-1β-treated cultures. The relative GM-CSF expression value of the IL-1β-treated cultures was 141.5 ± 16.21%, which dropped to 94.14 ± 1.22% (p = 0.02) upon 5 µM SN50 pre-treatment (Figure 10A). For IL-6 and CCL5 a significant reduction of the cytokine levels was observed after 16 h of stimulation (Figure 10B). The IL-1β-induced relative expression of IL-6 and CCL5 was 112.78 ± 3.77% and 120.24 ± 2.38%, respectively. These levels were significantly reduced by 1 h pre-treatment with 10 µM SN50 (IL-6: 83.22 ± 3.57%, p = 0.002; CCL5: 106.16 ± 6.03%, p = 0.043).
DISCUSSION
In summary, we found a correlation of spinal dorsal horn IL-1β expression with nociceptive behavior during the course of CFA-evoked inflammatory pain. We show that stimulation of spinal astrocyte cultures by IL-1β results in significantly enhanced secretion of three inflammatory cytokines/chemokines: IL-6, GM-CSF and CCL5 (RANTES). The overexpression of the three selected cytokines was also confirmed in the spinal dorsal horn during the CFA-induced pain model. We studied the activation of the NF-κB signaling pathway, which is associated with astrocyte-specific IL-1β signaling, and found its time-dependent activation during the course of IL-1β treatment. Finally, inhibition of the NF-κB pathway resulted in a significant attenuation of cytokine production.
IL-1β exerts its action through its receptor, which has a ligand-binding unit (IL-1R1) and an accessory protein responsible for the signal transduction (IL-1RAcP) (Ren and Torres, 2009). IL-1R1 was shown to be expressed by neurons and glial cells in the CNS. However, ligand binding induces cell type-specific responses, which is aided by the neuron-specific isoform IL-1RAcPb (Huang et al., 2011).
FIGURE 10 | Astrocytic cytokine expression was significantly reduced by SN50, an inhibitor of NF-κB nuclear translocation. (A) Histogram shows relative GM-CSF levels determined by ELISA after 1 h of pre-treatment with SN50 and 8 h of IL-1β stimulation. The GM-CSF secretion of the astrocyte cultures is significantly downregulated by 5 µM SN50 compared with the IL-1β-treated cultures. (B) Histogram shows relative IL-6 and CCL5 levels in the supernatant of the astrocyte cultures measured by ELISA after 1 h of SN50 pre-treatment and 16 h of IL-1β stimulation. Both cytokine levels show significant reduction due to 10 µM SN50 treatment compared with the IL-1β-treated cultures. Data are shown as mean ± SEM of two independent experiments in duplicate. One-way ANOVA, followed by Student-Newman-Keuls pairwise comparison; *p < 0.05; **p < 0.01.
Gruber-Schoffnegger et al. (2013) suggested two possible ways cytokines can act in the CNS: directly on neurons, and indirectly by activating local glial cells to secrete further neuroactive substances which in turn modulate nerve cell functions. In this study our aim was to explore the latter mode of cytokine action by detecting the IL-1β-induced cytokine/chemokine production in spinal astrocytes.
Astrocytic activation can be induced by numerous agents including lipopolysaccharide (LPS) (Tarassishin et al., 2014), adenosine triphosphate (ATP) (Adzic et al., 2017), glutamate (Aronica et al., 2005), cytokines, etc. Upon different activation signals (or their combinations) astrocytes can turn into the neurotoxic A1 or the neuroprotective A2 phenotype. While in different pain states A1 astrocytes can release neurotoxins which can trigger the death of neurons and glial cells, A2 astrocytes induce cell survival and tissue regeneration (Li et al., 2019). Recently, however, it was suggested that astrocytic activation is possibly not an all-or-nothing phenomenon, but rather "gradated" (Matias et al., 2019). Liddelow et al. (2017) showed that in many cases the reactive astrocyte phenotype is not so polarized. Among others, they showed that IL-1β stimulation induced upregulation of both A1 and A2 marker genes. In another study, neuroprotective aspects of IL-1β-stimulated astrocytes were also revealed, as genes encoding neurotrophic factors were upregulated in the cultures (Teh et al., 2017).
During CNS insults or injury both astrocytes and microglia are activated, and their bidirectional communication can be important for their further fates and their possible role in CNS disorders. Microglia usually react faster to pathological stimuli and, through their secreted molecules, can contribute to the consecutive activation of astrocytes (Jha et al., 2019). It was shown that IL-1β can be one of those microglia-derived factors which can lead to astrocytic activation. On the other hand, although astrocytic activation has a "lag phase," it is more prolonged; in this way, astrocytes can have a role in the maintenance of enhanced pain states (Gao and Ji, 2010).
Interestingly, we found IL-1β-induced significant upregulation of pro-inflammatory cytokines/chemokines, and we did not detect significant downregulation of any anti-inflammatory cytokines. All three identified molecules were previously shown to be expressed in different areas of the CNS. IL-6 is classically described as a pro-inflammatory cytokine which induces T lymphocyte population expansion and B lymphocyte differentiation, but it is now well described that, in a context-dependent way, IL-6 may have anti-inflammatory functions as well (Hunter and Jones, 2015). Its ligand-binding receptor (mIL-6R) is expressed on very limited cell types (e.g., monocytes/macrophages, microglia). But by an alternative "trans-signaling," using the soluble IL-6 receptor (sIL-6R), it can exert its effect on many cell types including neurons (Zhou et al., 2016). The role of IL-6 in nervous tissue is "case-sensitive," as was reported for the immune system: the classical signaling through mIL-6R initiates anti-inflammatory signals whereas the trans-signaling pathway is more pro-inflammatory (Rothaug et al., 2016).
GM-CSF is better known for its hematopoietic function as a colony-stimulating factor of the granulocyte-macrophage lineage, but it is also a pro-inflammatory cytokine with numerous functions in innate and adaptive immunity (Donatien et al., 2018). GM-CSF was already shown to play a role in tumor-nerve interaction in bone cancer pain (Schweizerhof et al., 2009). It was also found to initiate central sensitization by inducing microglial BDNF secretion in chronic post-ischemic pain (Tang et al., 2018). Besides microglia, its receptor (GM-CSFR) is expressed on cortical and hippocampal neurons (Krieger et al., 2012).
The chemokine CCL5 (RANTES), like other members of the chemokine family, regulates leukocyte trafficking to sites of inflammation and is a dominantly pro-inflammatory agent. CCL5 can bind to three receptors, chemokine (C-C motif) receptor 1 (CCR1), CCR3 and CCR5, and is involved in several pain states (Yin et al., 2015). Its main receptor, CCR5, is expressed by microglial cells and induces differentiation toward the alternatively activated M2 phenotype (Laudati et al., 2017), which is known to inhibit pro-inflammatory signals. In this way, CCL5 may have an anti-inflammatory, anti-nociceptive role when it activates the CCR5 receptor.
The NF-κB pathway plays a crucial role in the astrocyte-specific response to IL-1β stimulation. Besides its important function during inflammation in peripheral tissues, it is also a mediator of cytokine and chemokine secretion by astrocytes. The genes encoding the three selected cytokines were shown earlier to be associated with the NF-κB signaling pathway in other cell types (Schreck and Bauerle, 1990; Wickremasinghe et al., 2004; Son et al., 2008).
In summary, the three selected cytokines/chemokines and the NF-κB pathway can all be important target molecules in inflammatory pain conditions. Specific receptors for the three molecules were shown on microglial cells; GM-CSFR is also expressed on neurons, and for IL-6, trans-signaling provides a route for its neuronal effect. The astrocytic production of these molecules supports the theory of an indirect action of IL-1β: by inducing astrocytic secretion of mediators, it can feed the complex networks of glia-neuron and glia-glia interactions, contribute to the enhanced activity of the local cellular networks, and thus be part of the process of central sensitization. Another interesting feature of this system is that astrocytes themselves produce IL-1β. It would be important to understand the consequences of this "autocrine loop" and possibly influence its functioning to shift astrocytic activation toward the A2 "protective" type, attenuating the activity of the local cells, which may lead to inhibition of the pronociceptive processes.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The animal study was reviewed and approved by Animal Care Committee of the University of Debrecen, Hungary according to national laws and European Union regulations [European Communities Council Directive of 24 November 1986 (86/609/EEC)].
AUTHOR CONTRIBUTIONS
KHo designed the experiments, analyzed data and prepared the manuscript. AG and KHe carried out the immunocytochemical stainings on the spinal cord sections. AG, KHe, and EB conducted the behavioral experiments and prepared the astrocyte cultures. LD performed the IMARIS analysis of the sections and reviewed the manuscript. AG photographed the immunostained sections. KHo and EB carried out the Proteome Profiler, ELISA and western blot experiments. All authors read and approved the manuscript.
FUNDING
This work was supported by the Hungarian National Brain Initiative, KTIA_NAP_13-1-2003-0001. This study was also supported by the Neuroscience Ph.D. program of the University of Debrecen, Hungary.
"year": 2020,
"sha1": "5b8aeb59d2a49abd0b4d35f927bb346e3ec184f8",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2020.543331/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b8aeb59d2a49abd0b4d35f927bb346e3ec184f8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
In vitro evaluation of microbial adhesion on the different surface roughness of acrylic resin specific for ocular prosthesis
ABSTRACT Objective: The purpose of this study was to evaluate the influence of surface roughness on biofilm formation by four microorganisms (Staphylococcus epidermidis, Staphylococcus aureus, Enterococcus faecalis, and Candida albicans) on the acrylic resin surface of ocular prostheses. Materials and Methods: Acrylic resin samples were divided into six groups according to polishing: Group 1200S (1200 grit + silica solution); Group 1200; Group 800; Group 400; Group 120; and an unpolished group. Surface roughness was measured using a profilometer, and surface images were obtained with atomic force microscopy. Microbial growth was evaluated after 4, 24, and 48 hours of incubation by counting colony-forming units. Statistical Analysis Used: For roughness, one-way ANOVA and the parametric Tukey test were performed at α = 5% (P ≤ 0.05). For the CFU data, the Kruskal-Wallis and Mann-Whitney tests were applied. Results: Groups 120 and 400 presented the highest roughness values. For S. epidermidis and S. aureus, Group 1200S presented the lowest values of microbial growth. For E. faecalis at 4 hours, microbial growth was not observed. C. albicans did not adhere to the acrylic resin. Except for Group 1200S, the different surface roughnesses did not statistically interfere with microbial adhesion and growth on the acrylic surfaces of ocular prostheses. Conclusions: Roughness did not interfere with the adhesion of the microorganisms evaluated. The use of silica significantly decreases microbial growth.
INTRODUCTION
An ocular prosthesis is an option for the rehabilitation of anophthalmic patients, being fabricated principally from acrylic resin. Although the ocular prosthesis can be adapted adequately in the anophthalmic cavity, in the majority of cases a "dead space" is observed between the posterior surface and the bottom of the cavity. Lacrimal secretion, mucus, and stagnant residue in this space constitute an excellent culture environment for the growth of bacteria. [1] One of the most important pathogens of prosthetic infections is Staphylococcus. [2] Staphylococcus epidermidis can adhere and proliferate on polymer surfaces, especially on lenses and intraocular prostheses. [3,4] Staphylococcus aureus lives principally on mucous surfaces and is considered one of the most versatile and dangerous human pathogens. [5] Enterococcus faecalis is a natural Gram-positive streptococcal species of the intestinal tract, able to cause serious infections such as endophthalmitis and corneal ulcers. [6,7] Another microorganism encountered in the ocular region is Candida albicans, which also occurs among the fungal infections of maxillofacial prostheses. [8] The hematogenic dissemination of C. albicans to the eye is associated with intravenous drug abuse, grave debilitation or immunodepression, recent surgery (especially gastrointestinal), the use of broad-spectrum antibiotics, diabetes, and alcohol abuse, among others. [9,10] Recently, a large number of reports on the impact of the physical properties of materials on microbial adhesion, [11-17] and in particular a strong relation between bacterial adhesion and surface roughness, have been highlighted. [14-18] The surface roughness of a material is considered a relevant property for the process of microbial adhesion. This is because polymer surface irregularities, such as grooves or fissures, promote an increase in the surface area and provide depressions that offer more favorable sites for colonization, protecting the microbes against shearing forces. [11,19] Since the behavior of microbial adhesion on polymers depends on the colonizing microorganism and the surface roughness of the material, it is important to evaluate the relation of the surface roughness of acrylic resins for ocular prostheses to the adhesion of different microorganisms.
Therefore, this study had the objectives of simulating different surface roughnesses of the acrylic resin used in the fabrication of the artificial sclera of ocular prostheses, and of evaluating the interference of roughness on the adhesion and biofilm formation of different microorganisms (S. epidermidis, S. aureus, E. faecalis, and C. albicans). The null hypotheses of this study were that the surface roughness of the acrylic resin does not differ among the different kinds of surface polishing, and that the adhesion of microorganisms is not influenced by the roughness of the acrylic resin.
MATERIALS AND METHODS
For the fabrication of the acrylic resin test specimens, a molded metallic matrix was used which contained ten circular compartments, each with dimensions of 10 mm diameter and 3 mm thickness. The matrix was adhered to a rectangular glass slide. The glass slide and matrix set was then placed in a special muffle for polymerization in a microwave oven (VIPI STG; VIPI Industria, Pirassununga, Sao Paulo, Brazil), filled with special Type IV plaster (Durone; Dentsply Ind and Com Ltd, Rio de Janeiro, Brazil). After the plaster crystallized, another glass slide was positioned over the already enclosed matrix, a counter-muffle was positioned, and special Type IV plaster was poured over the surface of the last glass slide. The muffle was opened after crystallization, and the N1 heat-activated acrylic resin (white color) for artificial sclera (Artigos Odontologicos Classico Ltda., Sao Paulo, Brazil) was proportioned, manipulated, and inserted into the matrix. After insertion, the counter-muffle was positioned and the set was carried to a hydraulic press (VH; Midas Dental Produtos Ltda., Araraquara, São Paulo, Brazil) and placed under a force of 1200 kgf for 2 min. Subsequently, a 30 min bench polymerization was performed, followed by a 10 min microwave polymerization. After the resin polymerized, the muffle was opened and the specimens were removed. The specimens were then submitted to polishing for 3 min using metallographic sandpapers of different grits (Buehler, Illinois, USA). A blue colloidal silica solution (Buehler, Illinois, USA) with a 1 micrometer (µm) grain size was used on one group. After polishing, ultrasonic cleaning was performed to remove possible debris.
A total of 432 test specimens were fabricated. Of these, 144 each were allocated to the 4, 24, and 48 h periods of microbial growth and adhesion. Within each period, thirty-six test specimens were inoculated with each of the four microorganisms evaluated. For each microorganism, the test specimens were distributed randomly into six groups (6 specimens per group, 3 for each experiment): Group 1200S, polished with 1200-grit sandpaper and the 1 µm colloidal silica solution; Group 1200, polished with 1200-grit sandpaper; Group 800, polished with 800-grit sandpaper; Group 400, polished with 400-grit sandpaper; Group 120, polished with 120-grit sandpaper; and an unpolished group. After surface polishing, the test specimens were sterilized with ethylene oxide.
Strains of S. epidermidis (ATCC 35984), S. aureus (ATCC 29213), E. faecalis (ATCC 29212), and C. albicans (ATCC 90028) were used. All strains were donated by the Oswaldo Cruz Foundation (FIOCRUZ), Rio de Janeiro, Brazil. The microorganisms were maintained at −70°C in a solution containing 25% glycerol, seeded on plates containing the culture medium adequate for each microorganism (e.g., Mannitol Salt Agar), and incubated aerobically at 37°C for 24 h. Initially, growth curves were obtained for each microorganism with the purpose of identifying the number of hours necessary for each to reach its greatest multiplication phase (log phase), as determined by optical density values. The cultures were then seeded in BHI broth (Difco), or Sabouraud Dextrose for C. albicans, and maintained at 37°C in an aerobic incubator for 24 h. The microorganisms were adjusted halfway through the logarithmic phase, being diluted ×10, ×100, or ×1000 in BHI broth at ×2 concentration, depending on each microorganism, to obtain 10^7 microorganisms/ml.
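The adjustment to 10^7 microorganisms/ml is simple dilution arithmetic; the Python sketch below illustrates it, with the OD-to-density calibration factor being an assumption for illustration only, not a value from the paper.

```python
def dilution_needed(current_density, target=1e7):
    """Fold-dilution required to bring a culture to the target density."""
    if current_density <= target:
        raise ValueError("culture is already at or below the target density")
    return current_density / target

# Assumed calibration for illustration: 1 OD600 unit ~ 1e9 cells/ml.
od600 = 0.12
cells_per_ml = od600 * 1e9
print(f"dilute ~x{dilution_needed(cells_per_ml):.0f}")   # -> x12
```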
After growth and attainment of 10^7 microorganisms/ml, 1 ml of medium containing the microorganism under study was placed in contact with the acrylic resin test specimens positioned in 24-well microtiter plates. Microbial suspensions were incubated in wells without a test specimen as the negative control. The positive control of this study was BHI broth, or Sabouraud Dextrose for the tests with C. albicans, without the microorganism. The wells were incubated at 37°C in an aerobic incubator for 4, 24, and 48 h. After each period, non-adhering microorganisms were removed from the test specimens by a wash in 1 ml of saline solution. Subsequently, each test specimen was inserted into a test tube containing 1 ml of saline solution. The tubes underwent an ultrasonic bath (USC 700; UNIQUE Ultrasonic Cleaner, Sao Paulo, Brazil) at 50 kHz, 150 W, for 20 min, and agitation (Vortex QL-901; Biomixer, Curitiba, Parana, Brazil) for 1 min, to promote detachment of the adhered microorganisms. Subsequently, a series of seven dilutions was performed, transferring 10 µl of solution into 90 µl of saline solution at each step. Dilutions 4 and 7 were plated on petri plates containing a medium adequate for each microorganism evaluated, and incubated at 37°C in an aerobic incubator for 24 h. After the incubation period, counting of the colony-forming units (CFU/ml) was done. The tests were performed in duplicate on two independent days.
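For reference, the back-calculation from plate counts to CFU/ml of the original suspension follows the usual serial-dilution formula; the sketch below assumes a 10-fold series and a 10 µl plated volume (the plated volume is not stated in the text).

```python
def cfu_per_ml(colonies, dilution_step, fold=10, plated_ml=0.01):
    """CFU/ml of the original suspension from a plate count at one step
    of a serial dilution: CFU/ml = colonies / (plated volume * dilution)."""
    dilution = fold ** (-dilution_step)
    return colonies / (plated_ml * dilution)

# e.g., 35 colonies on the 4th 10-fold dilution, 10 ul plated (assumed):
print(f"{cfu_per_ml(35, 4):.2e} CFU/ml")   # -> 3.50e+07
```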
The surface roughness of all test specimens was determined using a profilometer (Dektak D-150; Veeco, Plainview, New York, USA). Each test specimen was positioned individually in the center of the equipment, and the profilometer measuring tip was focused on its surface. The Ra (arithmetic mean surface roughness) values were measured using a cut-off of 500 µm with a 12 s time constant. Three readings were performed on each surface, and the mean was calculated. The original values, given in Angstroms (Å), were then converted to the µm scale.
A test specimen from each sandpaper polish group was analyzed using atomic force microscopy (AFM; Veeco Metrology Inc., Santa Barbara, CA, USA). The images obtained were transferred from the microscope to a computer. Subsequently, they were processed in the NanoScope Analysis program (2004; Veeco Instruments Inc., Santa Barbara, CA, USA) and submitted to "lowpass" and "medium" filters. All three-dimensional (3D) images were standardized to a height scale from −100 nm to 100 nm (z-axis) for later qualitative comparison among the groups.
The surface roughness values and test specimen microorganism counts for each group, as well as the time of incubation and type of microorganism, were submitted to a normality test to determine whether they followed a normal distribution. A normal distribution was found for the roughness values, so a one-way ANOVA and the parametric Tukey test were performed with a 5% (P ≤ 0.05) level of significance. A non-normal distribution was found for the CFU data, and hence appropriate nonparametric tests, the Kruskal-Wallis and Mann-Whitney tests, were applied to compare the measurements.
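A minimal sketch of this statistical workflow in Python/SciPy is shown below (tukey_hsd requires SciPy ≥ 1.11); all the Ra and CFU values in it are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical Ra readings (um) per polishing group:
ra = {"120":  [1.10, 1.05, 1.12, 1.08],
      "400":  [0.45, 0.48, 0.44, 0.47],
      "1200": [0.03, 0.02, 0.03, 0.03]}

for g, x in ra.items():                       # normality check per group
    print(g, "Shapiro p =", round(stats.shapiro(x).pvalue, 3))
f, p = stats.f_oneway(*ra.values())           # parametric route for Ra
print(f"one-way ANOVA: F = {f:.1f}, p = {p:.2e}")
print(stats.tukey_hsd(*ra.values()))          # pairwise comparisons

# Nonparametric route for the (non-normal) CFU counts, two example groups:
cfu_a = [2.1e5, 3.4e5, 1.8e5, 2.9e5]
cfu_b = [9.5e4, 7.2e4, 1.1e5, 8.8e4]
print("Kruskal-Wallis p =", stats.kruskal(cfu_a, cfu_b).pvalue)
print("Mann-Whitney  p =", stats.mannwhitneyu(cfu_a, cfu_b).pvalue)
```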
RESULTS
Using the one-way ANOVA, a significant statistical difference was observed when comparing the Ra roughness values obtained by the different polishings (F = 1361.651; P ≤ 0.001), with Groups 120 and 400 differing statistically from each other and from all the other groups; Group 120 presented the greatest Ra roughness value, followed by Group 400. The unpolished group did not differ statistically from Groups 800, 1200, or 1200S [Table 1].
The qualitative analysis of the surface smoothness in the 3D images reveals apparent differences in the formation of irregularities along the extensions evaluated. Groups 120 and 400 present images that reveal irregularities with more significant cracks and orifices, including the interior of a crack forming a deeper "valley" without definition of the highest points ("peaks"), in contrast to the groups that still maintain the characteristics of a polished surface (800, 1200, 1200S, and unpolished), although the unpolished group presented small irregularities on its surface [Figure 1, panels 1.1-1.6].
For E. faecalis, it was observed that the initial adhesion probably took more than 4 h to occur, since no microorganism growth was observed in this period. In addition, except for Group 400 at 48 h, all the materials differed from the control, but not among themselves [Figure 3].
Adhesion of C. albicans was not observed on the acrylic resin surfaces evaluated.
DISCUSSION
The first hypothesis of this study, that the different acrylic resin surface roughnesses do not interfere with the growth of microorganisms, was confirmed by the fact that there were no significant differences among the different polish groups. The second hypothesis, that the different sandpaper grits used for polishing do not promote significant differences in surface roughness values, was partially accepted, since there was a statistically significant difference for E. faecalis in microbial growth in the group polished with 400-grit sandpaper. The bacterial growth model used in this study simulated in vitro the conditions of static biofilm growth encountered on the contact surface of an ocular prosthesis with the conjunctival membrane tissue.
The initial adhesion of microorganisms to the surface of a material is the key requirement for its colonization. [19,20] When a bacterium adheres and proliferates on the surface of a material, it produces extracellular polymer substances and forms a biofilm, which covers and protects it against the immune system and antimicrobial agents. [18] During the process of adhesion, the bacteria adhere firmly to the surface of the material through physicochemical interactions, which include the hydrophobicity and charge of the cell surface, as well as the chemical composition and the surface roughness of the material. [19,21] Several published studies showed that in vivo bacterial adhesion is determined primarily by a Ra surface roughness >0.2 µm. [18,22-25] However, this study observed that, in general, even the groups with lower mean Ra values (0.02, 0.03, and 0.05 µm) demonstrated bacterial growth [Table 1 and Figures 2-4]. These data corroborate Yoda et al., [18] who evaluated the adhesion of microorganisms on surfaces of different biomaterials and showed that even surfaces with roughness levels below 30 nm (0.03 µm) could permit bacterial adhesion. In addition, Lee et al. [26] observed bacterial growth at roughness levels below 0.2 µm on composite resin (Ra = 0.179), titanium (Ra = 0.059), and zirconia (Ra = 0.064) surfaces.
These findings indicate that there is no consensus regarding the minimum roughness level for microbial adhesion, which may differ according to the material used and the capacity of the microorganism to adhere to different surfaces. Thus, bacterial adhesion is a multifactorial phenomenon, and surface roughness is not the only influential characteristic in the process. [18] Other factors are also connected to the potential of microbial adhesion to the acrylic resin surface, such as hydrophobicity, electrostatic interactions, and surface energy. [27] The chemical composition of the material, the level of adhesion capacity, and the size and mode of division of the microorganism also play a role. These factors could explain why E. faecalis took more time to adhere to the acrylic resin surface [Figure 3] compared with the staphylococci, since the latter present a characteristic of early adhesion. [19] Published studies report that microorganisms appear to have a preference for adhesion on rougher surfaces with scratches and grooves. [19] However, the current study observed that, except for Group 1200S, which received polishing with 1200-grit sandpaper and the silica-based polishing solution, the groups did not differ among themselves in bacterial growth values [Figures 2-4]. These data corroborate Taha et al., [23] who evaluated the adhesion of microorganisms, including S. aureus and C. albicans, on three types of orthodontic wire with different roughnesses and did not find a significant difference among the groups.
In agreement with the results of the current study, some authors [28-31] reported that a linear relation between bacterial adhesion and surface roughness is not always observed. A small increase in roughness could lead to a significant increase in bacterial adhesion, while a large increase in roughness might not have a significant effect on adhesion. [19] Previously published studies [24,25] reported that small variations of surface roughness do not have a significant effect on bacterial adhesion, which could justify the absence of a significant relationship between microbial adhesion and roughness observed in this study. [23] Although the AFM images show a certain irregularity on the surface of the unpolished group [Figure 1f], the mean Ra value of this group was low and similar to those of Groups 800, 1200, and 1200S [Table 1]. Moreover, the unpolished group did not differ statistically from the polished groups with respect to bacterial growth [Figures 2-4]. This could have resulted from the compression of the acrylic resin against a glass slide at the moment of fabrication of the test specimens, which could have produced a smooth surface similar to the groups that received refined polishing; thus, similar roughness and bacterial growth values were observed. [22,32] It was observed that the group with the most refined polishing (1200S) was statistically different from the control for all periods and bacteria evaluated, principally for S. epidermidis and S. aureus. In general, this group presented smaller values of bacterial growth, although it can be observed in the AFM images [33] and the Ra roughness values that there was no difference with respect to the 800, 1200, and unpolished groups [Table 1 and Figures 1-4]. This result could have occurred due to the silicon (Si) ion implantation from the polishing solution used in this group, which made the surface less susceptible, affecting the adhesion process. The Si ion can reduce the surface energy and the contact angle, influencing the wettability and hampering the adhesion of microorganisms. This result agrees with the studies of Zhao, in which a reduction in microorganism adhesion was observed in the presence of SiF3 in stainless steel. [32] In addition, Rashid et al. [34] reported that the use of a polishing solution favored the prevention of microorganism accumulation on the surface of the materials.
An interesting finding of the current study, which differs from the majority of studies encountered, [27,35,36] was that C. albicans did not adhere to the surfaces of the test specimens, independent of the polishing performed. [37] The low indices of C. albicans adhesion may be associated with the morphology of the fungus: C. albicans cells are large (4-6 µm) [35] and possess long filaments, limiting adhesion within the narrow recesses of surface cracks and making it less stable. [34] It is known that the presence of bacteria facilitates the adhesion of Candida to acrylic resin, principally through the production of extracellular polymers as well as through an increase in acidity, which creates favorable environmental conditions for the growth of the fungus. [36,38] In this study, however, isolated biofilms of each microorganism were evaluated, and thus the C. albicans strains could not count on the presence of other microorganisms to aid their growth.
Although the results of this study revealed a weak relation between surface roughness and microbial adhesion, the polishing stages during the fabrication of ocular prostheses should not be overlooked, since bacterial adhesion is not determined by this property alone. Roughness should never be considered in isolation with respect to microbial adhesion; other factors are also associated with this property, such as the comfort and satisfactory esthetics of the patient.
Several limitations should be considered when interpreting these results. The pathogenicity of prosthetic devices is a complex process involving interactions between the pathogen, the material, and the host. An in vitro study cannot count on the defenses of the host and other factors such as fluctuations in temperature and nutrition. [18]
CONCLUSIONS
Thus, considering the limitations described, and even though the low roughness values obtained through the different polishings could have been important in preventing the adhesion of C. albicans, it can be concluded that the small variations in acrylic resin surface roughness did not produce differences in the adhesion or biofilm formation of the bacteria evaluated. This suggests that roughness is not the only property that should be considered when evaluating microbial adhesion. Silica appears to interfere with microorganism adhesion, since microbial growth decreased significantly compared with the other groups and the control group when the silica solution was associated with polishing.
"year": 2018,
"sha1": "e00f73621ce928edc66d136a4935b795053c8096",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.4103/ejd.ejd_50_18.pdf",
"oa_status": "HYBRID",
"pdf_src": "WoltersKluwer",
"pdf_hash": "c2d48e9c208deb291bf8d141241a173ef79ab549",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
A New Stellar Outburst Associated with the Magnetic Activities of the K-type Dwarf in a White-dwarf Binary
1SWASP J162117.36+441254.2 was originally classified as an EW-type binary with a period of 0.20785 days. However, it was detected to have undergone a stellar outburst on June 3, 2016. Although the system was later classified as a cataclysmic variable (CV) and the event was attributed to a dwarf-nova outburst, the physical reason is still unknown. This binary has been monitored photometrically since April 19, 2016, and many light curves were obtained before, during, and after the outburst. The light and color curves observed before the outburst indicate that the system is a special CV: the white dwarf is not accreting material from the secondary and there is no accretion disk surrounding the white dwarf. By comparing the light curves obtained from April 19 to September 14, 2016, it was found that magnetic activity of the secondary is associated with the outburst. We show strong evidence that the L_1 region on the secondary was heavily spotted before and after the outburst, quenching the mass transfer, while the outburst was produced by a sudden mass accretion of the white dwarf. These results suggest that J162117 is a good astrophysical laboratory to study stellar magnetic activity and its influence on CV mass transfer and mass accretion.
Subject headings: Stars: binaries: close - Stars: binaries: eclipsing - Stars: magnetic activities - Stars: individuals (1SWASP J162117.36+441254.2) - Stars: outbursts
1. Introduction
1SWASP J162117.36+441254.2 (= CSS J162117.3+441254 = SDSS J162117.35+441254.1, hereafter J162117) was originally identified as an EW-type eclipsing binary by several authors (Palaversa et al. 2013; Lohr et al. 2013; Drake et al. 2014a). The derived orbital period (P = 0.207852 days) places the system near the short-period limit of contact binaries (Rucinski 1992, 2007; Qian et al. 2017). An outburst was reported by Drake et al. (2016) to have occurred on June 3, 2016 (UT = 10.8 h). The pre-discovery observations given by Maehara (2016) from May 28 to June 3, 2016 revealed that the object was already in outburst at V = 13.13 mag on June 1, 2016 (UT = 14.78 h). If the system were really a contact binary, the outburst could be explained as the beginning of a rare binary merger event similar to V1309 Sco (Tylenda et al. 2011; Zhu et al. 2016). However, this conclusion was ruled out by follow-up photometric monitoring and spectroscopic observations. Drake et al. (2016), using GALEX data as well, suspected that the outburst more likely came from an unusual cataclysmic variable (CV). Also, the spectroscopic emission lines observed by Scaringi et al. (2016) support that the outburst event is associated with accretion onto a compact object. Therefore, J162117 is possibly a long-period CV above the period gap. In addition to broad emission lines (Scaringi et al. 2016), the spectral observations obtained by Thorstensen (2016) show a strong contribution from a K-type secondary star. By analyzing the radial velocities, the masses of the compact object and donor star were estimated as 0.9 M⊙ and 0.4 M⊙, respectively.
After the report of the outburst from J162117 (Drake et al. 2016), the binary system was monitored continuously (Zejda & Pejcha 2016; Pavlenko et al. 2016; Zola et al. 2016; Kjurkchieva et al. 2017). The phased light curve obtained by Zejda & Pejcha (2016) during the outburst showed deep primary eclipses and shallower secondary eclipses. At the time, the depth of the primary minima was decreasing, while the depth of the secondary minima was increasing as the outburst faded. Finally, the system returned to its quiescent state during June 14-16, 2016, with its light curve back to being similar to EW-type variability (Zola et al. 2016). A 0.052-day variability with an amplitude of 0.1 mag, superimposed on the out-of-eclipse light curve, was reported by Pavlenko et al. (2016). These authors suspected that the variability could be related to the magnetic pole(s) of a white dwarf and that this could be a candidate intermediate polar system. The outburst, with an amplitude of about 2 magnitudes in the V band, and the EW-type light curve in the quiescent state make J162117 a very interesting target for further investigation. Although we know the outburst is associated with accretion onto a white dwarf and that the system may be an unusual CV, the physical reasons that produce this behaviour are still unknown. Some CVs, e.g., the nova-like VY Scl-type variables and the strongly magnetic CVs (polars), usually show sudden dips in their brightness at irregular intervals of weeks to months. Low-luminosity states have been explained as the coverage by dark spots near the L_1 point that then quench the mass transfer (Livio & Pringle 1994; King & Cannizzo 1998). Spots produced at preferred longitudes may well be due to tidal forces, as theoretically predicted by Holzwarth & Schüssler (2003). Doppler imaging of the detached white-dwarf binary V471 Tau was conducted by Hussain et al. (2006), who found that the side of the star facing the white dwarf (around the L_1 point) was heavily spotted. These properties, with spots preferentially located near the L_1 point, were also observed in a few CVs (e.g., Watson et al. 2006). However, we know relatively little about the influence of magnetic activity on the mass transfer and accretion in CVs. In this paper, we present photometric data of J162117 obtained by monitoring it from April 19 to September 14, 2016. We show that the mass accretion in J162117 is ceased by dark spots near the L_1 point before and after the outburst, while the outburst may be produced by an intermittent mass accretion onto the white dwarf caused by the local magnetic activity of the secondary. These properties indicate that it may be a new type of optical outburst associated with stellar activity.
Photometric monitoring and the stellar outburst of J162117
Due to its unusually short period and EW-type light variation (Palaversa et al. 2013; Lohr et al. 2013; Drake et al. 2014a), J162117 was included in our observational list of contact binary stars below or near the short-period limit (Qian et al. 2014a, 2015a,b; Jiang et al. 2015). We started to observe the binary on March 19, 2016, using the 84-cm telescope in Mexico. The observations were carried out with the 0.84-m f/15 Ritchey-Chrétien telescope at OAN-SPM Baja California, the Mexman filter wheel, and the Spectral Instruments CCD detector (a deep-depletion e2v CCD42-40 chip which has a 2048×2048 array of 13.5 µm square pixels, a gain of 1.32 e−/ADU, and a readout noise of 3.4 e−). 2×2 binning was used during all these observations, with exposure times set at 60 s for the B filter, 30 s for V, and 20 s for R. Flat-field and bias frames were also acquired during all the observing runs. The original CCD images were reduced by using PHOT (which measures magnitudes for a list of stars) of the aperture photometry package of IRAF.
As shown in Fig. 1, two stars near J162117 were chosen as the comparison and check stars, and the resulting light curves are displayed in Fig. 2. Also displayed in the figure are the B − V and B − R color curves. These pre-outburst light and color curves are very useful for understanding the properties of J162117. The data displayed in Fig. 2 are available online via the internet.
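For readers reproducing such plots, phase-folding the time series on the published period is the standard first step; a minimal sketch follows, where only the period comes from the text and the reference epoch T0 is a placeholder.

```python
import numpy as np

P = 0.207852       # orbital period in days (from the text)
T0 = 2457500.0     # hypothetical reference epoch (HJD)

def fold(hjd, mag):
    """Return orbital phases in [0, 1) and magnitudes, sorted by phase."""
    hjd, mag = np.asarray(hjd), np.asarray(mag)
    phase = ((hjd - T0) / P) % 1.0
    order = np.argsort(phase)
    return phase[order], mag[order]

# usage: ph, m = fold(times, r_mags); then plot m against ph
```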
Although the light curves resemble those of EW-type binaries, the details are quite different. The ingress and egress of the shallower eclipsing minima are visible in the B- and V-band light curves. The two shoulders around the minimum are seen more clearly in the B-band light curve, and its bottom is nearly flat. These properties indicate that this minimum is caused by the eclipse of a compact object by a normal cool star, revealing that it is a WD+MS binary system. This is consistent with the conclusion derived by previous investigators (Drake et al. 2016). Meanwhile, as shown in Fig. 2, the eclipse minimum in the B band is deeper than those in the V and R bands, indicating that the WD is hotter than its main-sequence companion. Therefore, this shallower minimum should be the primary one, corresponding to the eclipse of the primary component, i.e., the white dwarf. During the outburst, it becomes deeper, while the other one nearly disappears.
Because of the peculiar observational properties of J162117, after the outburst was reported on June 3, 2016 by Drake et al. (2016), we continued to monitor the target using the 84-cm and 90-cm telescopes in Mexico. The comparison of the light curves obtained from April 27 to June 14 is shown in Fig. 3. The data obtained on June 3 indicate that the secondary minimum became shallower and nearly disappeared. To get more photometric data, J162117 was then monitored continuously using several small telescopes in the Czech Republic and in China. The log of the photometric monitoring of the system is shown in Table 2, where the dates and the filters used are listed in the first and second columns. Listed in the third and fourth columns are the start time (in HJD-2457500) and the duration of the observations. As shown in Fig. 3, during the outburst the primary eclipses are very deep, while the secondary minima are shallow. As reported by Zejda & Pejcha (2016), the depth of the primary minimum decreased, while the depth of the secondary minimum increased as the outburst faded. The system returned to a quiescent state on June 14, 2016, and the light curves obtained before and after the outburst nearly overlap.
The total light curves in the R band observed from March 19 to September 14, 2016, are displayed in Fig. 4, where blue dots refer to the light curves obtained during the outburst while green ones refer to those observed outside the outburst. As shown in Fig. 4, the amplitude of the outburst in the R band is larger than 1.71 mag. It took about 11 days for the system to return to the quiescent brightness state. Before and after the outburst, the brightness levels are nearly the same. All of the R-band photometric data displayed in Fig. 4 are available online via the internet.
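The quoted amplitudes translate into flux ratios through the standard Pogson relation, Δm = 2.5 log10(F_outburst/F_quiescent); the snippet below simply evaluates it for the values given above.

```python
def flux_ratio(delta_mag):
    """Brightening factor corresponding to a magnitude amplitude."""
    return 10.0 ** (0.4 * delta_mag)

print(flux_ratio(1.71))  # R-band amplitude > 1.71 mag -> flux ratio > ~4.8
print(flux_ratio(2.0))   # ~2 mag V-band amplitude -> a factor of ~6.3
```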
Discussion and conclusions
During the outburst of J162117, the spectral observations showed some properties of CVs, i.e., the broad, two-peaked emission lines Hα (with a FWHM corresponding to 1500 km/s), Hβ, and HeII 4686. These are indications of accretion onto a white dwarf (Scaringi et al. 2016). Moreover, the strong rotational disturbance of the emission lines during the eclipse indicates the presence of a rapidly rotating disk (Thorstensen 2016). Therefore, after the merger of a contact binary was ruled out as an explanation of the outburst of J162117, this event was attributed to a dwarf-nova outburst (Scaringi et al. 2016; Kjurkchieva et al. 2017). However, when compared with known CVs, J162117 shows several peculiarities (Kjurkchieva et al. 2017), namely a deeper eclipse at outburst than at quiescence (by a factor of 2.8) and an outburst amplitude at the lowest limit of dwarf-nova eruptions. These properties reveal that J162117 is not a normal CV.
The binary system was monitored photometrically from April 19 to September 14, 2016, using several small telescopes around the world. We were fortunate to obtain several high-precision light curves before the outburst. As shown in Fig. 2, there are two shoulders around the primary minima in the B- and V-band light curves. The two minima show a clear U-type shape and their bottoms are nearly flat. The color curves plotted in the figure also show a U-type shape around the primary minima, while they are flat outside the eclipses. All of these properties suggest that there is no accretion disk around the white dwarf and thus the white dwarf is not accreting material from the cool secondary in the pre-outburst quiescent state.
Apart from the eclipse of the white-dwarf component, the BVR light curves shown in Fig. 2 are dominated by the ellipsoidal variability of the binary as well as the magnetic activity of the K-type secondary. As displayed in Fig. 2, some small flare-like events are visible in the B-band light curves, while these events are not seen in the V- and R-band light curves. These are general properties of optical flares (e.g., Qian et al. 2012). Moreover, as shown in Table 1, the errors of the B-band magnitudes for J162117 and the comparison star are about ±0.006, indicating that those B-band flares are real. The light curves in the figure are asymmetric, showing the O'Connell effect. Meanwhile, the depth of the secondary minimum (at phase 0.5) is deeper than the primary one (at phase 0.0). These observed properties can be explained by the secondary being covered by dark spots near the inner Lagrangian point L_1, where mass is expected to flow onto the primary. The fact that regions near the L_1 point can be heavily spotted has been observed in the pre-CV V471 Tau and a few CVs (e.g., Hussain et al. 2006; Watson et al. 2006). This type of dark spot, which may be caused by tidal forces and produced at preferred longitudes, was theoretically predicted by Holzwarth & Schüssler (2003). The coverage of dark spots around the L_1 region could cease the mass transfer and thus quench the mass accretion of the white dwarf (e.g., Livio & Pringle 1994; Qian et al. 2014b, 2015c). This is the reason why there is no accretion disk around the white dwarf before the outburst. The comparison of some R-band light curves, observed outside the stellar outburst from March 18 to July 9, 2016, is shown in Fig. 5. The light curves nearly overlap and no changes in their shapes are noticeable. All of them are asymmetric (the O'Connell effect) and show deeper secondary minima. Meanwhile, the variation of the O'Connell effect (the magnitude difference between the two maxima) is displayed in Fig. 6, where red solid dots refer to the data observed during the outburst, while green ones refer to those obtained outside the outburst. The magnitudes at the two maxima were determined by averaging the data around phases 0.25 and 0.75, respectively. As shown in the figure, the O'Connell effect varies rapidly during the outburst, while it is stable outside the outburst within the errors. All of the observations of J162117 reveal that there are dark spots near the L_1 point in those quiescent states. The dark spots on the K-type secondary cease the mass accretion onto the white dwarf and cause the deeper secondary minima and the O'Connell effect. During the outburst, the dark spots at the L_1 point may disappear, and the sudden mass accretion onto the white dwarf causes the stellar outburst. The observed broad, two-peaked emission lines (e.g., Hα) and the strong rotational disturbance of the emission lines during the outburst can be explained by the accretion onto the white-dwarf component.
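As a concrete illustration of the O'Connell-effect measurement described above (averaging the data around phases 0.25 and 0.75), a minimal sketch follows; the phase window half-width is our assumption, not a value from the text.

```python
import numpy as np

def oconnell_effect(phase, mag, half_width=0.02):
    """Mean magnitude near phase 0.25 minus mean near 0.75 (Max I - Max II)."""
    phase, mag = np.asarray(phase), np.asarray(mag)
    near_q1 = np.abs(phase - 0.25) < half_width
    near_q3 = np.abs(phase - 0.75) < half_width
    # assumes the folded curve samples both quadratures
    return mag[near_q1].mean() - mag[near_q3].mean()
```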
Here we speculate that the K-type secondary is only marginally filling its critical Roche lobe. The coverage of dark spots on the secondary could cause it to expand (e.g., Chabrier et al. 2007) and cause material of the cool secondary to overflow the Roche lobe. However, J162117 was monitored for many years by the SuperWASP project and had a stable light curve showing the O'Connell effect, as observed in the pre-outburst light curves in Fig. 2. This implies magnetic spot coverage for a long time prior to the outburst, indicating that the expansion may be a long-term property of the cool secondary. After it expands to fill the Roche lobe completely, the white dwarf suddenly accretes mass from the secondary, producing the outburst. During the outburst, the dark spots around the L_1 region disappear because of the overflow of material onto the hot white dwarf. Finally, the shrinking of the secondary due to the disappearance of the dark spots causes the accretion rate to decrease, and the system returns to its quiescent state. Meanwhile, dark spots near the L_1 region are produced again due to tidal forces, as predicted by Holzwarth & Schüssler (2003).
The present investigation indicates that J162117 is not a normal CV and that the stellar eruption detected on June 3, 2016 was not a dwarf-nova outburst, because no lasting accretion disk surrounds the white dwarf. It is shown that the magnetic activity of the secondary may be associated with the outburst. During the quiescent states, the local dark spots near the L_1 point cease the accretion onto the white dwarf, while the optical outburst is produced by an intermittent mass accretion. The mass accretion during the outbursts may be caused by the expansion of the secondary due to the presence of the dark spots. In this way, both the greater eclipse depth at outburst than at quiescence and the low outburst amplitude (only about 2 mag) can be explained. The eruption event in J162117 may be a new type of optical outburst associated with stellar activity. These results make J162117 a very interesting target for future investigations of stellar magnetic activity and its influence on CV mass transfer and mass accretion.
The work is partly supported by the Chinese Natural Science Foundation (Nos. 11325315, 11573063, and 11133007). MZ was supported by the project GAČR 16-01116S. New observations were obtained with the 1.0-m and the 70-cm telescopes in YNOs and the 85-cm telescope at the Xinglong station of NAOs in China, the 84-cm and 90-cm telescopes in Mexico, and the 60-cm Newtonian telescope of Masaryk University in the Czech Republic.
"year": 2017,
"sha1": "54b932befbab57ac0b6377a3fffd58007b8dea2b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1709.02596",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "071f520cd27c5ec8adf882176f5a7d96d3ba768b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Generalized Framework for Hybrid Analog/Digital Signal Processing in Massive and Ultra-Massive-MIMO Systems
The conventional fully-digital implementation of massive-MIMO systems is not efficient due to the large required number of radio-frequency (RF) chains. To address this issue, hybrid analog/digital (A/D) beamforming was proposed and to date remains a topic of ongoing research. In this paper, we explore the hybrid A/D structure as a general framework for signal processing in massive and ultra-massive-MIMO systems. To exploit the full potential of the analog domain, we first focus on the analog signal processing (ASP) network. We investigate a mathematical representation suitable for any arbitrarily connected feed-forward ASP network comprised of the common RF hardware elements in the context of hybrid A/D systems, i.e., phase-shifter and power-divider/combiner. A novel ASP structure is then proposed which is not bound to the unit modulus constraint, thereby facilitating the hybrid A/D systems design. We then study MIMO transmitter and receiver designs to exploit the full potential of digital processing as well. It is shown that replacing the linear transformation in the digital domain with a generic mapping can improve the system performance. In some cases, the performance of optimal fully-digital MIMO systems can be achieved without extra calculations compared to sub-optimal hybrid A/D techniques. An optimization model based on the proposed structure is presented that can be used for hybrid A/D system design. Specifically, precoding and combining designs under different conditions are discussed as examples. Finally, simulation results are presented which illustrate the superiority of the proposed architecture to the conventional hybrid designs for massive-MIMO systems.
I. INTRODUCTION
Massive multiple-input multiple-output (MIMO) and ultra-massive (UM)-MIMO systems operating in millimeter wave (mmW)/Terahertz (THz) bands are the prime candidates for fifth generation (5G) and beyond-5G cellular networks [1]-[4]. In fact, base stations (BS) with 64 antennas have recently been deployed for commercial use in some countries [5]. Moreover, an extensive theory for massive MIMO has been developed in recent years, including capacity and spectral efficiency analysis, system design for high energy efficiency, pilot contamination, etc. However, the implementation of such systems faces many technical difficulties and to this day remains very challenging and costly [6], [7]. In conventional fully-digital (FD) MIMO systems, each antenna element requires a dedicated radio frequency (RF) chain. The direct FD implementation of massive-MIMO/UM-MIMO systems, however, is neither practical nor efficient due to the ensuing high production costs and, more importantly, huge power consumption.
Hybrid analog/digital (A/D) signal processing (HSP) is an effective approach to overcome this problem by cascading an analog signal processing (ASP) network to the baseband digital signal processor [8], [9]. While in conventional FD MIMO transmitters [10]- [12], each antenna element is directly controlled by the digital processor, in an HSP-based transmitter, the digital processor generates a low-dimensional RF signal vector, whose size is then increased by analog circuitry for driving the large-scale antenna array. Similarly, in an HSP-based receiver, the size of the large-dimensional vector of antenna signals is reduced by an ASP network, whose outputs are then converted to the digital domain for baseband processing by means of RF chains.
There are practical constraints in the implementation and design of ASP networks and only a few types of RF components are commonly used in practice. Specifically, the power-divider (splitter), power-combiner (adder), and phase-shifter are the key analog components of the ASP design [13]- [22]. In the existing hybrid beamforming structures, due to the particular configuration of the aforementioned analog components, a constant modulus constraint is imposed on the analog beamformer weights which turns the beamforming design into an intractable non-convex optimization problem [13], [14].
A. RELATED WORKS
In one of the earliest works in this field [8], it is shown that for a single data stream, two RF chains are required to achieve the performance of an FD combiner. This technique was extended to multiple-stream beamforming (i.e., precoding/combining), where the required number of RF chains must be twice the number of data streams [14], [15]. In [23], [24], we proposed a single RF chain FD precoding realization. Many researchers, however, focused on developing the hybrid beamformers directly by solving non-convex design optimization problems [13]- [21].
In [13], the beamformer design was formulated as the minimization of the Euclidean distance between the hybrid beamformer and the FD one. Then, by taking into account the sparse characteristics of the mmWave channels, compressed sensing (CS) techniques were presented to solve the underlying optimization problems. The same authors extended their results to wide-band systems in [25]. This approach was later used in [21] and [16], where in the latter, manifold optimization algorithms as well as other low-complexity algorithms were used for hybrid beamformer design. Directly tackling the non-convex design optimization problems was attempted in [14], where the authors took advantage of orthogonalization techniques and exploited the sparsity of the channel for designing the hybrid beamformers. These results were then extended to wide-band systems in [26]. In [27], the Gram-Schmidt method was used specifically in the uplink multi-user (MU) scenario for designing robust low-complexity beamformers. Robust beamformers for the single-user (SU) case were studied in [18] by minimizing the sum-power of interfering signals. In [17], a simple non-iterative algorithm was proposed for hybrid regularized channel diagonalization, and in [27] the mean square error (MSE) was chosen as the cost function for designing the hybrid beamformers.
The majority of the above works consider a fully-connected architecture, i.e., each RF chain is connected to all of the antenna elements. Alternatively, in a sub-connected architecture, each RF chain is connected to only a subset of the antenna elements [16], [28], [29]. Recently, a dynamic sub-connected hybrid architecture has been proposed in [29] for multi-user equalization in wideband millimeter-wave massive MIMO systems, based on the minimization of the sum of MSE over multiple subcarriers. Although sub-connected designs require fewer RF components, fully-connected ones can achieve a superior performance in theory. Hence, in this study, we investigate properties of fully-connected ASP networks.
B. CONTRIBUTIONS AND PAPER ORGANIZATION
In this paper, our goal is to investigate and exploit the full potential of HSP in massive-MIMO systems. Aiming at this challenge, we can summarize our contributions as follows:
• We first explore the degrees of freedom in the analog domain by developing a compact mathematical representation for any given feed-forward ASP network with arbitrary connections of any number of RF components, i.e., phase-shifters, power dividers and power combiners.
• Based on the above generalization, a simple and novel ASP architecture is conceived out of the above RF components, which is not bound to the constant modulus constraint. Removing this constraint facilitates system design as non-convex optimizations are difficult to solve and global optimality of the solutions cannot usually be guaranteed.
• The transmitter and receiver sides are then studied separately by exploiting the newly proposed ASP architecture and generalizing the digital processing. Specifically, the optimization problem for the HSP beamformer is reformulated within the new representation framework, which facilitates its solution under a variety of constraints and requirements for the massive MIMO system.
• The realization of optimal FD by HSP and the problem of RF chain minimization are presented as guideline examples to illustrate potential applications of the proposed theoretical framework.
• Simulation results of optimal beamformer designs with the proposed architecture are finally presented. The results demonstrate that the new designs can achieve the same performance as the corresponding optimal FD system and hence, outperform recently published hybrid beamformer designs.
The paper is organized as follows. In Section II, the system model is explained. We then study ASP networks in Section III, followed by transmitter and receiver design in Section IV. Simulation results are presented in Section V. We then conclude the paper in Section VI. Notations: Throughout this paper we use bold capital and lowercase letters to represent matrices and vectors, respectively. Superscripts (·)^H, (·)^t, and (·)^* indicate Hermitian transpose, transpose, and complex conjugation, respectively. I_n denotes an identity matrix of size n × n, 0_{n×m} denotes an all-zero matrix of size n × m, and 1_n is an all-one column vector of size n. The element on the p-th row and the q-th column of matrix A is denoted by A_{p,q}, while the p-th element of vector x is denoted by x_p. Tr(A) and ||A||_F denote the trace and Frobenius norm of matrix A, respectively. A = bd(A_1, A_2, . . . , A_n) represents a block-diagonal matrix, in which A_1, A_2, . . . , A_n are the diagonal blocks of A. The Kronecker product is denoted by ⊗. By x_1 =_π x_2, it is meant that there exists a permutation matrix P_π such that x_1 = P_π x_2. The greatest (least) integer less (greater) than or equal to x is denoted by ⌊x⌋ (⌈x⌉). Moreover, x = a mod n denotes the remainder of the division of a by n. The absolute value and phase of a complex number z = |z| exp(j∠z) are denoted by |z| and ∠z. C stands for the complex field. A complex circular Gaussian random vector x ∈ C^n with mean vector m = E{x} and covariance matrix R = E{xx^H} is denoted by CN(m, R), where E{·} stands for expectation.
II. SYSTEM MODEL
We consider a generic point-to-point massive-MIMO system where the transmitter and receiver are equipped with N_T and N_R antennas as well as M_T and M_R RF chains, respectively. In the context of HSP, due to practical constraints, it is further assumed that M_T ≪ N_T and M_R ≪ N_R.
A. CONVENTIONAL HYBRID BEAMFORMING
Fig. 1 illustrates a point-to-point massive-MIMO system with conventional hybrid beamforming implemented at both ends. The transmitted signal over one symbol duration T_s can be formulated as x = √ρ P_A P_D s, (1) where s = [s_1, s_2, . . . , s_K]^t is the symbol vector with zero-mean random information symbols s_k taken from a discrete constellation A (such as M-QAM or M-PSK), normalized such that E{ss^H} = I_K, and ρ is the average transmit power. Matrices P_D ∈ C^{M_T×K} and P_A ∈ U^{N_T×M_T} are the digital and analog precoders, respectively, where U = {z ∈ C : |z| = 1} and, for normalization purposes, it is further assumed that ||P_A P_D||_F^2 = 1. The received signal can then be written as y = Hx + n, (2) where H ∈ C^{N_R×N_T} is the MIMO flat-fading channel matrix such that E{||H||_F^2} = N_T N_R and n ~ CN(0, σ² I_{N_R}) is an additive white Gaussian noise (AWGN) vector. The decoded symbols after hybrid processing can be expressed as ŝ = D_D D_A y, (3) where D_D ∈ C^{K×M_R} and D_A ∈ U^{M_R×N_R} are the digital and analog combiners, respectively.
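As a quick illustration, the whole conventional chain in (1)-(3) can be traced in a few lines of NumPy. This is a minimal sketch, not the paper's simulation code: the dimensions, QPSK constellation and i.i.d. Rayleigh channel draw below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_T, N_R, M_T, M_R, K = 64, 16, 4, 4, 2          # illustrative sizes
rho, sigma2 = 1.0, 0.1

# Symbol vector s with E{ss^H} = I_K (QPSK here, as an example constellation).
s = (rng.choice([-1.0, 1.0], K) + 1j * rng.choice([-1.0, 1.0], K)) / np.sqrt(2)

# Analog precoder with unit-modulus entries; digital precoder normalized
# so that ||P_A P_D||_F = 1, as assumed in the text.
P_A = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_T, M_T)))
P_D = rng.standard_normal((M_T, K)) + 1j * rng.standard_normal((M_T, K))
P_D /= np.linalg.norm(P_A @ P_D, "fro")

x = np.sqrt(rho) * P_A @ P_D @ s                  # (1): transmitted signal
H = (rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N_R) + 1j * rng.standard_normal(N_R))
y = H @ x + n                                     # (2): received signal

D_A = np.exp(1j * rng.uniform(0, 2 * np.pi, (M_R, N_R)))   # analog combiner
D_D = rng.standard_normal((K, M_R)) + 1j * rng.standard_normal((K, M_R))
s_hat = D_D @ D_A @ y                             # (3): decoded symbols
```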
B. GENERALIZED HSP SYSTEM FORMULATION
In this work, we consider a more general formulation for HSP that extends the cascaded structure of analog and digital linear transformations presented in Subsection II-A.
We will see that this formulation can in fact bring simplifications to the conventional linear MIMO precoding/combining techniques.
In the generalized HSP-based massive-MIMO transmitter, as shown in Fig. 2, the symbol vector s is first applied as input to the digital signal processor, whose output is a baseband signal vector expressed as x_BB_T = F_T(s), (4) where F_T : A^K → C^{M_T} is the corresponding mapping from A^K to C^{M_T}. Then, M_T parallel RF chains convert the baseband signal vector x_BB_T into a bandpass modulated RF signal vector x_RF_T. The latter is next input to the ASP network whose output is the transmit signal vector, which can be expressed as x_T = G_T(x_RF_T), (5) where G_T : C^{M_T} → C^{N_T} is the corresponding mapping. As shown in Fig. 3, the received RF signal y following from the noisy MIMO transmission as in (2) is first applied as input to the ASP network, yielding y_RF_R = G_R(y), (6) where G_R : C^{N_R} → C^{M_R} is the corresponding mapping. The M_R RF chains then convert y_RF_R to the baseband vector y_BB_R, from which the detected symbols are obtained as ŝ = F_R(y_BB_R), (7) where F_R : C^{M_R} → C^K. While only a power constraint is imposed on the baseband mappings F_R and F_T, the RF mappings G_R and G_T must be implemented by RF analog components, which constrain these transformations as discussed in the following section.
III. ANALOG SIGNAL PROCESSING NETWORK
In this section, aiming at exploiting the full potential of the analog domain, we develop a mathematical formulation for the ASP network represented by the RF mappings G T and G R in the previous section. Specifically, instead of focusing on the conventional analog beamformer structure used in the recent literature [13]- [21], we consider an arbitrarily connected network of phase-shifters, power dividers and power combiners. In our developments, signal-flow graph concepts are used which provide valuable insights for analysis of linear networks [30], [31].
Let us start by formally introducing the individual RF components comprising the ASP networks. The input-output (I/O) relationship of a phase-shifter is given by b = e^{jθ} a, where a, b ∈ C are the input and output, respectively, and θ ∈ [0, 2π] controls the phase difference between them. In this work, in order to explore the performance limits of ASP networks and find a compact representation for any arbitrarily connected ASP with common RF components, we consider infinite-resolution phase shifters. The passive power combiner and power divider are implemented by the same RF multi-port network, but their port configuration is different. For instance, the ideal µ-way Wilkinson power divider is a (µ+1)-port RF network which acts as an equi-power divider if the input signal is applied to its port 1 and the outputs are taken from ports 2 to µ+1 [32]. Conversely, it acts as a combiner if the inputs are applied to ports 2 to µ+1 and the output is taken from port 1.
To obtain a unified model for any possible ASP network with M input ports and N output ports using primary modules (i.e., phase-shifter, power divider and power combiner), we first present a convenient multi-port matrix representation of each component. We also include a permutation operation, which does not require additional hardware and is used mainly for the sake of mathematical simplification. The I/O relationship of each component is defined below in terms of its input and output, represented by the vectors a and b, respectively.
• Single phase-shifter: As illustrated in Fig. 4a, for vectors a, b ∈ C^η, the corresponding η × η matrix Λ(γ, φ, η) only changes the phase of the γ-th element of the RF input signal a, i.e., b = Λ(γ, φ, η)a, (8) where Λ(γ, φ, η) equals the identity matrix I_η except for its (γ, γ) entry, which is e^{jφ}.
• Single power divider: For input vector a ∈ C^η and output vector b ∈ C^{η'}, the corresponding η' × η matrix divides the γ-th element of the input RF signal into µ equi-power signals while the remaining RF branches are not altered; hence, η' = η + µ − 1. As illustrated in Fig. 4b, this operation can be described by the block-diagonal matrix in b = Q(γ, µ, η)a, (9) with Q(γ, µ, η) = bd(I_{γ−1}, (1/√µ)1_µ, I_{η−γ}).
• Single power combiner: This transformation can be represented by the transpose of the single power divider matrix, Q^t(γ, µ, η). Consequently, for input vector a ∈ C^{η'} and output vector b ∈ C^η, the corresponding matrix combines µ adjacent RF signals into the γ-th output signal while the rest of the RF branches are not altered. As seen from Fig. 4c, we can write b = Q^t(γ, µ, η)a. (10)
• Permutation matrix: This transformation, shown in Fig. 4d, corresponds to the rearrangement of the elements of vector a ∈ C^η into vector b ∈ C^η according to a permutation π : {1, . . . , η} → {1, . . . , η}. This can be expressed as b = P_π a, (11) where P_π = [e_{π_1}, . . . , e_{π_η}]^t and e_i denotes a column vector of zeros except for its i-th element, which is one.
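These building blocks translate directly into code. In the sketch below the function names are ours, and `phase_shifter` implements the matrix we denote Λ(γ, φ, η) above; everything follows the block structures just given.

```python
import numpy as np

def phase_shifter(gamma, phi, eta):
    """Lambda(gamma, phi, eta): identity on eta branches, except branch gamma
    (1-indexed), which is multiplied by exp(j*phi)."""
    L = np.eye(eta, dtype=complex)
    L[gamma - 1, gamma - 1] = np.exp(1j * phi)
    return L

def power_divider(gamma, mu, eta):
    """Q(gamma, mu, eta) = bd(I_{gamma-1}, (1/sqrt(mu)) 1_mu, I_{eta-gamma}):
    splits branch gamma of eta inputs into mu equi-power branches, so the
    output has eta + mu - 1 entries."""
    blocks = [np.eye(gamma - 1), np.ones((mu, 1)) / np.sqrt(mu), np.eye(eta - gamma)]
    Q = np.zeros((eta + mu - 1, eta), dtype=complex)
    r = c = 0
    for b in blocks:                      # assemble the block-diagonal matrix
        Q[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]; c += b.shape[1]
    return Q

def power_combiner(gamma, mu, eta):
    """Q^t(gamma, mu, eta): merges mu adjacent branches into output gamma."""
    return power_divider(gamma, mu, eta).T

def permutation(pi):
    """P_pi for a permutation pi given as a 0-indexed tuple or list."""
    return np.eye(len(pi))[list(pi)]
```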
Having introduced a matrix representation of the RF components, we can now seek the mathematical formulation for any given ASP in terms of these matrices. Proposition 1: Any given RF network, with N input and M output ports, implemented by arbitrary feed-forward connections of T RF components (i.e., phase-shifters, power combiners and power dividers), can be modeled as b = U_T(θ_T)P_{π_T} · · · U_1(θ_1)P_{π_1} a, (12) where a ∈ C^N and b ∈ C^M are the input and output RF signals, respectively, U_i(θ_i) ∈ {Λ, Q, Q^t} is the matrix representation of the i-th RF component, and θ_i is a 3-tuple containing the parameters of the i-th RF component.
Proof: See Appendix A. To illustrate the application of this result, consider the ASP network example in Fig. 5. By using the indexing scheme introduced in the Proof of Proposition 1, this network can be reorganized as a product of basic RF transformations as shown in Fig. 6. Note that in the latter figure, permutation matrices only appear before the 7 th and 15 th RF components; for the remaining components the permutation is an identity matrix (not shown for simplicity). It is worth mentioning that the indexing is not unique and parallel components can be swapped, for instance, the order of u 2 , u 3 and u 4 does not affect the I/O relationship of the ASP network.
Theorem 1: For each one of the following products of two basic RF component matrices on the left, there exists an equivalent matrix factorization as given on the right, as stated in identities (13)-(17). The definitions of the parameters appearing on the right-hand side of these identities are given in the proof.
Proof: See Appendix B. Next, we introduce three ASP sub-networks and their compact equivalent representations; these will play a key role in establishing our main results in Theorems 2 and 3.
• Phase-shifter network: This sub-network is obtained by cascading J basic phase-shifter matrices (with accompanying permutations) of common size N_p, i.e., ∏_{j=1}^{J} Λ(γ_j, φ_j, N_p)P_{π_j} = E_v P_π, (18) where, as illustrated in Fig. 7a, E_v = diag(v) with v = [e^{jφ̃_1}, e^{jφ̃_2}, . . . , e^{jφ̃_{N_p}}]^t ∈ U^{N_p}. (19)
• Power divider network: By cascading J power divider matrices of compatible sizes, we obtain ∏_{j=1}^{J} Q(γ_j, µ_j, η_j)P_{π_j} = P_π D_d P̂_π, (20) where, as illustrated in Fig. 7b, D_d is an M_d × N_d block-diagonal matrix whose blocks are of the form (1/√µ̃_i)1_{µ̃_i} or I, (21) which is equivalent to an RF network that divides N_d RF signals into a total of M_d signals. The presence of the identity matrix in (21) accounts for branches that are not divided.
• Power combiner network: By cascading J power combiner matrices, we obtain ∏_{j=1}^{J} Q^t(γ_j, µ_j, η_j)P_{π_j} = P_π C_d P̂_π, (22) where, as illustrated in Fig. 7c, C_d is an M_c × N_c block-diagonal matrix whose blocks are of the form (1/√µ̃_i)1^t_{µ̃_i} or I, (23) which is equivalent to an RF network that combines N_c RF signals into M_c signals. The validity of the identities in (18), (20) and (22) is demonstrated in Appendix C. We can now derive a mathematical expression for the representation of any given ASP network.
Theorem 2: Any arbitrarily connected feed-forward ASP network with M inputs and N outputs, implemented by a total number of T phase-shifters, power dividers, and power combiners, can be modeled as b = Aa, (24) where a ∈ C^M and b ∈ C^N are the input and output signals, respectively, A ∈ Ȗ^{N×M}, and Ȗ = {z ∈ C : |z| ≤ 1}. That is, all the entries of matrix A have magnitude less than or equal to 1.
Proof: See Appendix D. Going back to our previous example in Fig. 5 and Fig. 6, the ASP network in the latter figure can be transformed into that of Fig. 8, for which the 2 × 2 transformation matrix A satisfies the condition of the theorem. Now, we investigate whether any matrix in the convex set Ȗ^{N×M} can be realized by an ASP.
Theorem 3: Any given matrix A ∈Ȗ N ×M can be realized by an ASP network with a total number of T = 2MN + M + N RF components, i.e., N dividers, M combiners, and 2NM (unit-modulus) phase shifters, as shown in Fig. 9.
Proof: The output of the ASP in Fig. 9, corresponding to the input vector a, can be expressed entrywise as in (25). In (25a), since b_i is the output of a 2M-way combiner, the normalization factor 1/√(2M) appears from (9). Similarly, the k-th input, i.e., a_k, is divided into 2N branches, which according to (9) introduces a normalization factor of 1/√(2N). Each entry of the overall transformation can thus be written as a sum of L unit-modulus terms, where the minimum possible value of L is two, i.e., A_{k,i} = (1/2)(e^{jφ¹_{k,i}} + e^{jφ²_{k,i}}). Therefore, up to the fixed scaling introduced by the dividers and combiners, we have b = Aa, (26) where A ∈ Ȗ^{N×M}. Moreover, 2M phase-shifters are required for each element of b and consequently, a minimum of 2MN phase-shifters are needed. Remark 1: The significant result of Theorem 3 is that any A ∈ Ȗ^{N×M} can be implemented with an ASP structure using conventional RF components, i.e., combiners, dividers and phase shifters, whose input-output relationship is not bound to the unit modulus constraint. That is, while the individual phase-shifter components satisfy this constraint, the overall transformation matrix implemented by the proposed structure in Fig. 9 is no longer restricted to the unit modulus constraint. Thus, the troubling non-convexity constraint found in the hybrid beamforming literature can be lifted from the design optimization problems.
Remark 2: According to the above proof, non-unique solutions for the phase-shifters may exist. This additional degree of freedom can be considered when designing the ASP network based on the requirements and constraints of the analog system. By writing A_{p,q} = |A_{p,q}| exp(j∠A_{p,q}), one possible solution for φ¹_{k,i} and φ²_{k,i} is given by φ¹_{k,i} = ∠A_{k,i} + arccos(|A_{k,i}|) and φ²_{k,i} = ∠A_{k,i} − arccos(|A_{k,i}|), (27) so that (1/2)(e^{jφ¹_{k,i}} + e^{jφ²_{k,i}}) = A_{k,i}. It is worth noting that in the conventional hybrid structure T = MN + M + N RF components are required [13]-[21]. In contrast, the proposed ASP structure requires MN additional phase-shifters, for a total of T = 2MN + M + N RF components. These additional components, when employed as in Fig. 9, allow the constant modulus constraint to be lifted for the overall transformation.
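The phase solution in (27) is easy to check numerically. The sketch below assumes exactly the arccos form given above.

```python
import numpy as np

def two_shifter_phases(A):
    """Per-entry solution of (27): A = 0.5*(exp(j*phi1) + exp(j*phi2))
    for any matrix with |A_pq| <= 1."""
    mag = np.clip(np.abs(A), 0.0, 1.0)
    delta = np.arccos(mag)                 # cos(delta) = |A|
    ang = np.angle(A)
    return ang + delta, ang - delta

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
A /= np.max(np.abs(A))                     # force A into the unit-disk set
phi1, phi2 = two_shifter_phases(A)
assert np.allclose(0.5 * (np.exp(1j * phi1) + np.exp(1j * phi2)), A)
```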
Remark 3: It is worth mentioning that, since for wide-band systems it is desirable to have a common ASP network for the entire band [25]-[27], the proposed structure can be used for MIMO-OFDM systems. In particular, since the proposed ASP structure is not bound to the constant modulus constraint, it simplifies the design of hybrid MIMO-OFDM beamformers.
IV. TRANSMITTER AND RECEIVER DESIGN WITH GENERALIZED HSP
While the previous section focused on the realization of the RF mappings G_R and G_T, as defined in (5) and (6), using basic RF components, in this section we turn our attention to the baseband mappings F_R and F_T as defined in (7) and (4), respectively. To this end, we consider the ASP network in Fig. 9 for G_T and G_R and, consequently, (5) and (6) are replaced by x_T = A_T x_RF_T, (28a) y_RF_R = A_R y, (28b) where A_T ∈ Ȗ^{N_T×M_T} and A_R ∈ Ȗ^{M_R×N_R}. We first focus on the transmitter and then on the receiver design.
A. HSP DESIGN AT THE TRANSMITTER
Considering (4), (5), and (28a), the transmitted signal of the generalized HSP can be written as x_T = √ρ A_T F_T(s). (29) In the literature on hybrid beamforming, F_T is usually a linear transformation, i.e., x_T = √ρ A_T Ps, where P ∈ C^{M_T×K} is the precoding matrix. We first explore the properties and implementation of F_T, and then discuss the design of F_T and A_T at the HSP-based transmitter.
Let D_T(s) denote the transformation that generates the desired transmitted signal from the given symbol vector s. In effect, this function can represent a generic communication technique at the transmitter side. For instance, the optimal eigen-mode precoder is obtained by maximizing the achievable rate subject to the transmit power constraint. The solution is given by P = Vϒ, (31) where the diagonal weight matrix ϒ is calculated via water filling [33] and V is a unitary matrix obtained from the singular value decomposition of the channel matrix, i.e., H = UΣV^H. (32) Consequently, for this particular precoding scheme we have D_T(s) = Vϒs. (33) Note that nonlinear beamforming, channel estimation, space-time coding and many other techniques can also be represented by D_T(s).
From (29), in order to generate the same transmit signal as a given D_T(s) via an HSP-based transmitter, we need to find A_T and F_T(·) such that √ρ A_T F_T(s) = D_T(s) (34) holds for all symbol vectors s. Hence, since D_T(s) is given, F_T(s) can be defined as the following set, or multi-valued function: F_T(s) = {x ∈ C^{M_T} : √ρ A_T x = D_T(s)}. (35) Note that while it might be very difficult to explicitly construct the mapping F_T(·), obtaining its output, i.e., F_T(s), is simple because the value of D_T(s) is available. In other words, since the desired output of the HSP-based transmitter, i.e., D_T(s), is given, it is sufficient to calculate the desired output of F_T(·) rather than implementing the mapping itself.
From (4) and (35), we can rewrite (34) as √ρ A_T x_BB_T = D_T(s), (36) which means that in general the HSP objective is to find A_T and x_BB_T such that (36) is satisfied for the given D_T(s). This objective guarantees that the HSP-based system achieves the same performance as the FD one, i.e., D_T(s). However, many variations can be derived according to the conditions and constraints of the system, which opens new avenues for investigation in this area.
In practice, depending on the system constraints, one may wish to design A_T, x_BB_T and possibly some other system parameters, represented by a vector p, on the basis of some optimization criterion. For instance, the following generic feasibility problem can be used for obtaining the HSP parameters: find {A_T, x_BB_T, p} subject to (36) and C(A_T, x_BB_T, p), (37) where C(A_T, x_BB_T, p) represents the system constraints. Alternatively, this could be formulated as min f(A_T, x_BB_T, p) subject to (36) and C(A_T, x_BB_T, p), (38) where f(·) is the chosen cost function based on the objectives of the system. Note that the power constraint is not necessary as it can be taken into account when designing D_T(s). One obvious choice is f(A_T, x_BB_T, p) = 1, in which case A_T must be designed such that, for some set S ⊂ A^K, we have D_T(s) ∈ span(A_T), ∀s ∈ S, where span(A_T) denotes the span of A_T. Consequently, the baseband signal is obtained from the solution of √ρ A_T x_BB_T = D_T(s). (39) In what follows, we present different cost functions for designing precoding matrices with HSP.
1) UNCONSTRAINED FD PRECODING FOR M T ≥ K
For M T ≥ K it is possible to realize any given FD precoder.
As an example, we explore optimal eigen-mode precoding, although any other precoding matrix can be obtained in the same fashion. We first consider the case M_T = K and subsequently discuss the modifications needed for M_T > K.
From (33) and (36), both A_T and x_BB_T must be designed such that √ρ A_T x_BB_T = Vϒs. Since A_T is of size N_T × M_T, this problem for M_T = K has the following simple solution: A_T = Vϒ/p_0, (40a) x_BB_T = (p_0/√ρ) s, (40b) where p_0 = ||vec(Vϒ)||_∞.
In the case M T > K , one possible solution that achieves the same performance as the FD precoding is to append M T − K zeros to the solution x BB T in (40b) and set the corresponding columns of A T in (40a) to zero.
Note that no constraint is enforced on the system and similar to existing hybrid solutions in the literature, A T must be updated according to the channel coherence time, denoted as T c in the sequel. Since s changes after every symbol duration T s , x BB T is also updated every T s .
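A compact sketch of the construction in (40); equal power allocation stands in for the water-filling weights ϒ, which is an assumption made here for brevity.

```python
import numpy as np

def realize_fd_precoder(H, K, rho, s):
    """M_T = K case of (40): A_T = V@Upsilon/p0 and x_BB = (p0/sqrt(rho))*s,
    so that sqrt(rho)*A_T@x_BB reproduces the eigen-mode signal V@Upsilon@s."""
    _, _, Vh = np.linalg.svd(H)
    V_K = Vh.conj().T[:, :K]                 # K dominant right singular vectors
    Upsilon = np.eye(K) / np.sqrt(K)         # placeholder for the water-filling weights
    D = V_K @ Upsilon
    p0 = np.max(np.abs(D))                   # p0 = ||vec(V Upsilon)||_inf
    A_T = D / p0                             # entries now lie in the unit disk
    x_BB = (p0 / np.sqrt(rho)) * s
    return A_T, x_BB
```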
2) UNCONSTRAINED FD PRECODING FOR M T < K
In this case, using either (37) or (38), it is possible to obtain various hybrid beamformer designs depending on the system requirements. Here, we aim at minimizing the Euclidean distance between the eigen-mode FD precoder in (31) and the hybrid beamforming matrix A_T. However, since the former has size N_T × K while the latter has size N_T × M_T, we first find a beamforming matrix T of size N_T × K subject to a rank-M_T constraint, i.e., we solve min_{rank(T)=M_T} ||Vϒ − T||_F. By the Eckart–Young theorem, the solution T̂ is the best rank-M_T approximation of Vϒ, obtained from its truncated singular value decomposition. Now, by defining A_T from a rank-M_T factorization of T̂ (scaled so that its entries lie in Ȗ), we can obtain x_BB_T by solving min_x ||√ρ A_T x − Vϒs||, which yields the corresponding least-squares solution.
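The rank-constrained step and the least-squares baseband solution can be prototyped as below; the unit-disk rescaling of A_T is our simplification, and the least-squares step automatically absorbs the corresponding scale into x_BB.

```python
import numpy as np

def low_rank_hsp_precoder(D_fd, M_T, s):
    """Best rank-M_T approximation of an N_T x K FD precoder D_fd (truncated SVD),
    followed by the least-squares baseband vector of the final step."""
    U, S, _ = np.linalg.svd(D_fd, full_matrices=False)
    A_T = U[:, :M_T] * S[:M_T]               # rank-M_T factor, N_T x M_T
    A_T = A_T / np.max(np.abs(A_T))          # keep entries inside the unit disk
    x_BB, *_ = np.linalg.lstsq(A_T, D_fd @ s, rcond=None)
    return A_T, x_BB
```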
3) MINIMUM NUMBER OF RF CHAINS WITH FAST PHASE-SHIFTERS
If we do not have a constraint on the update rate of the analog components, we can reduce the number of RF chains by minimizing M_T subject to (36) holding for every symbol vector s. This problem is shown in [23] to have a non-unique solution for M_T = 1 with D_T(s) = Vϒs, but essentially the same solution is valid for any other transmit function D_T(s). Note that in this case the ASP must be updated after every symbol duration T_s.
B. HSP DESIGN AT THE RECEIVER
Similar to the previous subsection, let us assume that the ideal FD decoder that maps the received RF signal y into the detected symbols ŝ, represented by the mapping D_R(y), is known. Since beamforming and multiplexing are key techniques in massive-MIMO systems, linear detection is of great interest due to its simplicity. In this case, which is considered in our discussion, D_R(y) = Zy, where Z ∈ C^{K×N_R} is the FD combiner matrix. However, at the price of increased computational complexity, D_R(y) can be extended to more sophisticated detectors such as maximum likelihood or sphere decoding. By substituting (6) and (28b) in (7), the estimated signal at the receiver is written as ŝ = F_R(A_R y). (48) Clearly, the same approach used in Subsection IV-A for realizing the transformation F_T(·) cannot be applied here, because the desired output of F_R(·) is unknown, i.e., we need this mapping itself to implement the decoding function. Ideally, we want to find a mapping F_R(·) and A_R such that F_R(A_R y) = D_R(y) for all y. Similar to the HSP literature [13]-[21], we consider a linear transformation for the baseband processing, i.e., F_R(y_BB_R) = W y_BB_R, where W ∈ C^{K×M_R} is the corresponding transformation matrix; however, the extension to other types of transformations is straightforward by using (48). Consequently, the following generic feasibility problem can be considered for obtaining the HSP parameters: find {A_R, W, p} subject to W A_R y = D_R(y), ∀y, and C(A_R, W, p), where C(A_R, W, p) represents the system constraints. Alternatively, this could be formulated as the minimization of a cost function f(·) designed to satisfy the requirements of the system, under the same constraints. In what follows, FD combining for point-to-point MIMO is presented as an example.
1) UNCONSTRAINED FD COMBINING FOR M R ≥ K
We first consider the case where M_R = K and subsequently discuss the case M_R > K. The optimal FD combiner for a point-to-point MIMO link can be obtained from the corresponding FD receive-beamforming problem (51). From (32), the solution is given by Z = U_a^H, where U = [U_a, U_b] and U_a contains the first K columns of U, corresponding to the K dominant singular values of the channel matrix H. Thus, A_R and W must be jointly designed such that W A_R = Z, where A_R ∈ Ȗ^{M_R×N_R}. Note that if M_R = K, for any FD combiner Z ∈ C^{K×N_R}, this problem has the following solution: A_R = Z/p_1 and W = p_1 I_K, (54) where p_1 = ||vec(Z)||_∞. The above design can be extended to the case M_R > K, although here including more RF chains adds to the cost and complexity of the system while no improvement is gained. One trivial solution that guarantees the same performance as the FD solution is to set the additional M_R − K columns of W to 0, i.e., using W = p_1 [I_K, 0_{K×(M_R−K)}]. The FD realization for the multi-user case can be obtained similarly. First, (51) must be replaced by the desired optimization problem for finding the FD combiner. The analog and digital combiners are then calculated by (54).
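The M_R = K construction in (54) is direct, since W A_R = (p_1 I_K)(Z/p_1) = Z by design.

```python
import numpy as np

def realize_fd_combiner(Z):
    """Realize any FD combiner Z (K x N_R) as W @ A_R with A_R in the unit-disk
    set, following the p1 = ||vec(Z)||_inf scaling of (54)."""
    p1 = np.max(np.abs(Z))
    A_R = Z / p1                  # implementable by the ASP structure of Fig. 9
    W = p1 * np.eye(Z.shape[0])
    return W, A_R
```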
2) FD COMBINING FOR M_R < K
In the case of linear decoding, there must be at least K independent equations to recover K transmitted symbols. Hence, the minimum number of required RF chains is M R = K . Consequently, combiner design for M R < K is not practical in this case.
3) MINIMUM NUMBER OF RF CHAINS WITH FAST PHASE-SHIFTERS
Even with the same assumption as in Subsection IV-A.3, i.e., that the phase-shifters can be updated every T_s, at least K RF chains are required. Since only the channel matrix, which changes every T_c, is known at the receiver, a faster update rate of the phase-shifters does not provide any extra degrees of freedom and hence does not help in reducing the number of RF chains at the receiver. Consequently, the minimum possible number of RF chains for digital linear combining is M_R = K.
V. SIMULATION RESULTS
In this section, we present simulation results for different scenarios and compare the FD system with our proposed hybrid architecture as well as existing hybrid designs in the literature.
The following channel model is used for all the simulations: H = √(N_T N_R / (N_c N_ray)) Σ_{i=1}^{N_c} Σ_{j=1}^{N_ray} α_{ij} a_r(θ^r_{ij}) a_t(θ^t_{ij})^H, where N_c = 5 is the number of clusters and N_ray = 10 is the number of rays in each cluster. Similar to [14], [27], the path gains are independently generated as α_{ij} ~ CN(0, 1). The receive and transmit antenna array responses are denoted by a_r(θ^r_{ij}) and a_t(θ^t_{ij}), respectively. Simulation results are presented for the optimal FD precoder and combiner, our proposed hybrid precoder and combiner realization of FD in Subsections IV-A.1 and IV-B.1, as well as selected hybrid designs from [14], [27]. For M RF chains and N antennas, the proposed and the conventional structures require T = 2MN + M + N and T = MN + M + N RF components, respectively.
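A minimal generator for the clustered channel above. The half-wavelength uniform linear arrays and the uniformly distributed angles are illustrative assumptions on our part.

```python
import numpy as np

def cluster_channel(N_T, N_R, N_c=5, N_ray=10, rng=None):
    """Clustered narrowband channel with CN(0,1) path gains, scaled so that
    E{||H||_F^2} = N_T * N_R as required by the system model."""
    rng = rng if rng is not None else np.random.default_rng()
    ula = lambda N, th: np.exp(1j * np.pi * np.arange(N) * np.sin(th)) / np.sqrt(N)
    H = np.zeros((N_R, N_T), dtype=complex)
    for _ in range(N_c * N_ray):
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        th_r, th_t = rng.uniform(-np.pi / 2, np.pi / 2, 2)
        H += alpha * np.outer(ula(N_R, th_r), ula(N_T, th_t).conj())
    return H * np.sqrt(N_T * N_R / (N_c * N_ray))
```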
A. BIT ERROR RATE (BER) PERFORMANCE
BER performance versus SNR (SNR = ρ/σ²) for three different setups is shown in Figs. 10 to 12. Fig. 10 presents the results for a massive-MIMO system with N_T = N_R = 64 antennas (and M_T = M_R = 2 RF chains). The downlink BER performance of a massive-MIMO BS with N_T = 64 antennas transmitting to a single user with N_R = 2 antennas is shown in Fig. 11, while the uplink BER performance for the same system is shown in Fig. 12. It can be seen that in all the simulated scenarios the proposed hybrid realization matches the performance of the FD systems while outperforming the existing hybrid designs. The FD systems require M_T = M_R = 64 RF chains, whereas the proposed design achieves the same performance with only 2 RF chains. Consequently, the proposed design generates the same signals as the FD system with a limited number of RF chains by employing the proposed ASP network. In particular, since the RF output of the proposed structure is identical to that of the desired FD system, the same performance as the optimal FD beamforming can be achieved.
B. SPECTRAL EFFICIENCY
The spectral efficiency (in bits/s/Hz) of optimal FD beamforming, the proposed hybrid realizations of FD, as well as the hybrid designs from [14], [22], [27] for a massive-MIMO system with N_T = N_R = 64 antennas is shown in Fig. 13. The spectral efficiency of an uplink connection for a single user with N_T = 16 antennas and a massive-MIMO BS with N_R = 64 antennas is presented in Fig. 14. Furthermore, Fig. 15 shows the spectral efficiency of a downlink connection for a massive-MIMO BS with N_T = 64 antennas and a single user with N_R = 4 antennas. As expected, the proposed ASP-based realizations achieve the same rate as their FD counterparts and therefore outperform the existing hybrid designs. In order to evaluate the performance of the proposed ASP structure when the number of antennas grows large, simulations are performed for ultra-massive MIMO system configurations. The spectral efficiency versus the number of transmitter antennas N_T is plotted in Fig. 16 for different numbers of receive antennas. For the FD system the number of RF chains is equal to the number of transmitter antennas, i.e., M_T = N_T, whereas for the proposed hybrid structure the number of RF chains is kept equal to the number of transmitted symbols, i.e., M_T = K. It can be seen that in all cases, the hybrid design with the proposed ASP architecture achieves the same performance as the corresponding FD system. For instance, for an ultra-massive MIMO transmitter with N_T = 1024 antennas and a receiver with N_R = 2 antennas, the FD structure requires M_T = 1024 RF chains, while the proposed structure guarantees the same performance with M_T = 2 RF chains.
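For reference, a common way to evaluate such spectral-efficiency curves is the Gaussian-signalling rate for an effective precoder F and combiner W; whether the paper uses exactly this expression is an assumption on our part, but it is the standard choice in the hybrid-beamforming literature.

```python
import numpy as np

def spectral_efficiency(H, F, W, rho, sigma2, K):
    """Rate (bits/s/Hz) of y = W(H F s + n) under Gaussian signalling, with
    per-stream power rho/K; assumes W has full row rank so the combined-noise
    covariance sigma2*W W^H is invertible."""
    Heff = W @ H @ F
    Rn = sigma2 * (W @ W.conj().T)
    M = np.eye(K) + (rho / K) * np.linalg.solve(Rn, Heff @ Heff.conj().T)
    return float(np.real(np.log2(np.linalg.det(M))))
```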
C. COMPUTATIONAL COMPLEXITY
The proposed ASP architecture is implemented with the same RF components as the conventional hybrid structures [13]-[21]. Moreover, since the constant unit modulus is not imposed on the entries of the resulting analog transformation matrix with our approach, the computational complexity of designing the analog and digital beamformers can be reduced. Compared to the FD system design, the additional computations required for the proposed ASP approach lie in the calculation of the phase-shifter parameters as given in (27). In the case of an eigen-mode FD beamformer, for instance, the calculations in (27) are, in terms of complexity order, dominated by the SVD and water-filling algorithm needed for the FD design, as represented by (31). Moreover, existing hybrid designs use sophisticated optimization or reconstruction techniques to handle the constant modulus constraint. For instance, the iterative algorithms in [14] and [27] require a matrix inversion in each iteration. Consequently, the computational complexity of the proposed FD realizations with ASP is less than that of each iteration in these hybrid designs.
VI. CONCLUSION
In this paper, we investigated the hybrid A/D structure as a general framework for signal processing in massive and ultra-massive-MIMO systems. We first explored the ASP network in detail by developing a mathematical representation for any arbitrarily connected feed-forward ASP network comprised of phase-shifters, power dividers and power combiners. Then, a novel ASP structure was proposed which is not bound to the unit modulus constraint. Subsequently, we focused on the transmitter and receiver sides by exploiting the newly proposed ASP architecture and generalizing the digital processing. Specifically, the optimization problem for the HSP beamformer was reformulated within the new representation framework, which facilitates its solution under a variety of constraints and requirements for the massive MIMO system. Finally, simulation results were presented illustrating the superiority of the proposed architecture over the conventional hybrid designs for massive-MIMO systems.
APPENDIX A
Proof of Proposition 1:
The matrix representations of the RF components in (8)-(11) are introduced such that the input and output signals can be of any size and can thus include RF branches that are not affected by the RF component. Consequently, we can sort the RF components such that the input of each RF component is the output of another RF component, except for the first component. Let us denote the input and output of the i-th RF component as a_i and b_i, respectively. Consequently, we have b_{i−1} = a_i, a_1 = a and b = b_T. To be more precise, the following algorithm is used to assign the index i, for i = 1, 2, . . . , T, to each RF element:
for i ← 1 to T do
1. Find an RF component whose input is a_i;
2. Assign index i to that RF component;
3. Denote the output of that component as b_i.
Note that step 1 always has an answer because of how a_i and b_i are defined. Moreover, it is possible that more than one RF component satisfies the condition in step 1. In these cases, the components are parallel, i.e., the signals enter them simultaneously, and any ordering of these components is acceptable. Now, for i = 1, 2, . . . , T, we can write b_i in terms of a_i. If the i-th RF component is a phase-shifter, a power divider, or a power combiner, then we have b_i = Λ(γ, φ, η)P_π a_i, b_i = Q(γ, µ, η)P_π a_i, or b_i = Q^t(γ, µ, η)P_π a_i, respectively. Note that, if the order of the signals is not changed before the i-th component, we have P_π = I. Hence, the given ASP can be expressed as in (12).
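The indexing scheme amounts to a readiness loop over signal names, as in the sketch below; the `(inputs, outputs)` dictionary encoding of the network is a hypothetical format of ours, not one defined in the paper.

```python
def order_components(components, primary_inputs):
    """Assign indices 1..T so every component fires only after all of its
    input signals exist. components: {name: (input_names, output_names)}."""
    ready = set(primary_inputs)                 # signals available at the input ports
    pending, order = dict(components), []
    while pending:
        for name, (ins, outs) in list(pending.items()):
            if all(sig in ready for sig in ins):    # step 1: inputs already produced
                order.append(name)                  # step 2: assign the next index
                ready.update(outs)                  # step 3: outputs become available
                del pending[name]
                break
        else:
            raise ValueError("not a feed-forward network")
    return order

# Parallel components (like u2, u3 and u4 in Fig. 6) may appear in any valid
# order, mirroring the non-uniqueness of the indexing noted above.
net = {"u1": (["a1"], ["b1", "b2"]), "u2": (["b1"], ["c1"]), "u3": (["b2"], ["c2"])}
print(order_components(net, ["a1"]))            # ['u1', 'u2', 'u3']
```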
APPENDIX C
4) PROOF OF (18)
To show that E_v is a diagonal matrix, we can use induction and the fact that for a diagonal matrix D and permutation matrix P_π, the matrix D̂ = P_π D P_π^t is also a diagonal matrix. For J = 1 the statement is true, and we must prove that it also holds for J = K + 1. By assuming that for J = K the matrix E_v is diagonal and P_π is a permutation matrix, we can rewrite the J = K + 1 product with Λ̂ = P_π Λ(γ_{K+1}, φ_{K+1}, N_p) P_π^t, which, using the aforementioned property of permutation matrices, is a diagonal matrix. We further know that P̂_π = P_π P_{π_{K+1}} is a permutation matrix; thus we can write Ê_v P̂_π = E_v Λ̂ P̂_π. (81) Since E_v and Λ̂ are both diagonal, so is Ê_v. Furthermore, since all the diagonal entries are unit-modulus complex numbers, their products also lie on the unit circle and thus v = [e^{jφ̃_1}, e^{jφ̃_2}, . . . , e^{jφ̃_{N_p}}]^t ∈ U^{N_p}.
5) PROOF OF (20)
Induction can be used to prove this statement. For J = 1, one can easily find P_{π_1} such that the divider matrix is brought to block-diagonal form;
accordingly, there exists P_{π_2} such that, using the fact that permutation matrices are orthogonal, we can write Q(γ, µ, η) = P_{π_1} bd((1/√µ)1_µ, I_{η−1})P_{π_2}. (83) Now, let us assume that the factorization of (20) holds for J = K. We can thus write the corresponding product for J = K + 1, and according to the J = 1 case, there exist P_{π_3} and P_{π_4} such that the new divider factor takes the same form (85); thus, let us first define P_{π_5} = P̂_π P_{π_3}. Then, considering D_d P_{π_5}, the permutation matrix P_{π_5} rearranges the columns of D_d. Therefore, there exists a permutation matrix P_{π_6} that rearranges the rows of D_d P_{π_5} to form a block-diagonal matrix (88). It is possible that δ_i = 1 for an individual i or for some consecutive indices, which results in diagonal blocks of identity matrices I. From (87) and (88), we arrive at P_π D_d P̂_π Q(γ_{K+1}, µ_{K+1}, η_{K+1}) = P_π P_{π_6} D_d bd(·). (89) The above equation can be further simplified; therefore, by defining P_{π_7} = P_π P_{π_6}, and from (85) and (89), we obtain the required factorization, which proves the statement.
6) PROOF OF (22)
For J = 1, we have to show that C_d has the block-diagonal structure of (23) in P_π C_d P̂_π = Q^t(γ, µ, η)P_{π_1}. According to (83), we can write Q^t(γ, µ, η)P_{π_1} = P_π bd((1/√µ)1^t_µ, I)P̂_π P_{π_1}. Since the product of two permutation matrices is also a permutation matrix, this yields a factorization of the required form. To continue the proof by induction, we assume that for J = K there exist P_π, P̂_π and C_d such that P_π C_d P̂_π = ∏_{j=1}^{J} Q^t(γ_j, µ_j, η_j)P_{π_j}. Now, for J = K + 1, we can write ∏_{j=1}^{K+1} Q^t(γ_j, µ_j, η_j)P_{π_j} = Q^t(γ_1, µ_1, η_1)P_{π_1} P_π C_d P̂_π. (91) According to the J = 1 case, there exist P̂_{π_1} and P̂_{π_2} such that the leading combiner factors in the same way. By defining a new permutation matrix P_{π_3} = P̂_{π_2} P_{π_1} P_π, we can rewrite the left-hand side of (91) accordingly (93). Considering P_{π_3} C_d, the permutation matrix P_{π_3} rearranges the rows of C_d. Therefore, there exists a permutation matrix P_{π_4} that rearranges the columns of P_{π_3} C_d to form a block-diagonal matrix (94). Note that it is possible that δ_i = 1 for an individual i or for some consecutive indices, which results in diagonal blocks of identity matrices I. From (93), (94) and the fact that P_{π_4} P^t_{π_4} = I, we can write the corresponding factorization (95), which can be further simplified by expanding the block-diagonal factor. To take the last step, note that there exist permutation matrices P_{π_5} and P_{π_6} such that the middle factor equals C_d, where for some L the blocks take the form of (97). From the above equation and (95) we have P_{π_1} bd((1/√µ_1)1^t_{µ_1}, I)P_{π_3} C_d P̂_π = P_{π_1} P_{π_5} C_d P_{π_6} P^t_{π_4} P̂_π. (98) Now, by defining the permutation matrices P_{π_7} = P_{π_1} P_{π_5} and P_{π_8} = P_{π_6} P^t_{π_4} P̂_π, and from (91) to (98), we arrive at ∏_{j=1}^{K+1} Q^t(γ_j, µ_j, η_j)P_{π_j} = P_{π_7} C_d P_{π_8}, which proves the statement.
APPENDIX D
Proof of Theorem 2: Without loss of generality, let us assume there are a total of T RF components, i.e., P combiners, R dividers, and Q phase-shifters, so that T = P + Q + R. According to properties (14), (15) and (17) in Theorem 1, we can rewrite (12) by commuting the combiner matrices to the left-hand side (99), where T′ = T − P. Similarly, the divider matrices can be moved to the right-hand side using properties (13), (15) and (17) (100), where T″ = T − P − R. In (101), only the permutation and single phase-shifter matrices remain in the middle of the expression. Therefore, without loss of generality, and due to the fact that permutation and single phase-shifter matrices can be identity matrices, we can write (102). Now, using (18), (20), (22), and the fact that the product of permutation matrices is another permutation matrix, we have (103), from which it follows that b = P_{π_1} C_d P_{π_2} E_v P_{π_3} D_d P_{π_4} a. | 2020-05-28T09:16:45.784Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "6b2695c9d4d4d753bc01bd3ee4a87f635d58336f",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09102249.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "8ce268347d04f1ce2fb7005d0773156ec7276aac",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
54931001 | pes2o/s2orc | v3-fos-license | A Study on the Effect of Pin Density on Stationary Flats and its Setting on Carding Quality
Carding is the most vital process in spinning, which influences the sliver quality and the resulting yarn characteristics. The effect of carding on single-fibre and bundle-fibre properties was studied by employing the advanced fibre information system (AFIS), and the differences in properties are reported. For a good quality yarn, the process parameters and settings in carding need to be selected properly. The paper presents the results of the effect of points per square inch (PPSI) in stationary flats on the licker-in side (SFL). The effect of the setting between the flats and the cylinder was also studied. The samples were prepared on an LMW spinning line using an LC-333 carding machine. After changing the PPSI in the SFL, the neps in the carded sliver were reduced. The experimentation led to reductions in the total imperfections in the yarn. Keywords: carding, pin density on stationary flats, PPSI, neps, yarn unevenness
Introduction
The process of converting fibre tufts into sliver involves processing over several zones in a carding machine. The first zone is a feeding zone, which involves the feeding section of the carding machine. In the feeding zone, fibre tufts are fed into the carding machine by means of two methods: one via a lap-sheet feeding and the other via a chute feed [1]. The method used in this research paper deals with the chute feed system. After getting through the feeding zone, fibre tufts come under the action of the licker-in, and from there, they enter the carding section. The activity in the carding section is observed and analysed for the purpose of the study. Carding action can be described as the combing of fibres between two wired surfaces (card clothing), oriented in opposing directions, with their relative speed greater than zero. This action individualises the fibres and gives parallelism to the fibre mass flow [2]. The carding machine performs the following tasks: opening of individual fibres, elimination of impurities, elimination of dust, disentangling of neps, elimination of short fibres, fibre blending, fibre orientation and sliver formation. The number of neps increases from machine to machine in the blow-room; the card reduces the remaining number to a small fraction. Improvement in the disentangling of neps is obtained by closer spacing between clothing, sharper clothing, an optimal licker-in speed, lower doffer speeds and lower throughput. To achieve a good quality of carded sliver, the pre-carding, carding and post-carding operations are important [3]. "The card is the heart of the spinning mill" and "Well carded is half spun" are two well-known proverbs of the experts. These proverbs convey the immense significance of carding in the spinning process [4,5]. The carding quality is primarily determined in the cylinder region, where the revolving flat is of high importance [6]. With an optimal number of flat bars, it is responsible for cleaning as well as extracting neps and short fibres. The purpose of high-speed carding is to increase the card productivity without reducing the carding quality, or even improving it to some extent. However, it had been assumed that increasing the carding speed would also increase fibre breakage [7]. A high-production carding machine economises the spinning process but can lead to a reduction in yarn quality. Hence, the higher the rate of production, the more sensitive becomes the carding operation and the greater is the danger of a negative influence on quality. Since 1965, the production rate has been increased from about 5 kg/h to about 100 kg/h, a rate of increase not matched by any other machine except the draw frame. The technological changes that have taken place in the process of carding are remarkable compared to any other technology in textile processing [8,9]. The opening or individualisation of fibres achieved by the carding action between the cylinder and flats is expressed by the number of wire points [10]. With an increase in the production rate in carding, the points per fibre decrease, leading to a reduction of the carding action. In the carding zone, if any fibre bundle is not opened up in the first few flats, it is difficult to open it in the last flats. The rolling action after the 5th flat or so, when no more carding is possible, leads to nep formation. This deficiency can be rectified by the following: an increase in wire point density, an increase in cylinder speed, or an increase in carding surface. The increase in the density of wire points on the cylinder and the increase in
the speed of the cylinder have certain limitations, both technological and mechanical. The only remaining possibility is to increase the carding surface or to change the carding position. Prior research in this field found that an increase in the carding surface could be considered the best alternative. The solution therefore appears to be the creation of additional carding positions, i.e., the introduction of stationary flats on the licker-in side (SFL) and stationary flats on the doffer side (SFD). By introducing SFL, the pre-carding action has been substantially improved. Modern carding machines are equipped with several stationary flats on the licker-in side (Fig. 1). The SFL encounters the tufts first; the setting should be close enough that the fibre tufts are pre-carded thoroughly before reaching the flats. This leads to the liberation of dust and short fibres, which are immediately sucked away.
Figure 1: Side view of LC-333 (left: zoomed view of stationary flats of licker-in side)
The setting in the pre-carding zone has a strong influence on nep level, cleanliness and short fibre level. If the pre-carding action is better, it will reduce the cylinder load, resulting in a better carding action on the fibre. The carding action also depends upon the setting between the cylinder and flats. The setting between the cylinder and flats is optimised for the quality of the fibre (fibre fineness, dust level, tenacity); over the entire flat zone, the setting is gradually reduced in the material flow direction in order to gradually increase the intensity of opening [11,12]. If the setting is too close, it would lead to a thorough opening of fibres with the liberation of dust and trash, but neps and short fibres may increase due to higher stress on the fibres. The yarn neppiness is influenced by the number of neps in the raw material [13]. As the PPSI (points per square inch) on the cylinder is increased, the nep removal efficiency also increases [15]. The reduction in card sliver neps with variable-density cylinder wires is fully reflected in the quality of the corresponding yarns. The nep removal efficiency of the card increases with the sharpness of the flat and cylinder wires. New (sharper) wires were found to produce slivers with fewer neps [14,15]. The present paper studies the effect of changing the points per square inch in stationary flats on the licker-in side, and the observations are analysed. The mechanical properties of the yarns before and after changing the SFL and gauge setting are an area for future work; this study has been limited to and focused on physical properties.
Material
The properties of the fibres used to produce the yarn are given in Tab. 1. For the preparation of the yarn samples, the following process parameters were used (Tab. 2).
The carding setting parameter PPSI of SFL was changed in the carding machine (Tab. 3).
Testing methods
The yarns were conditioned at the standard tropical atmospheric condition of 65±2% RH and 27±2 °C for 24 hours. The number of tests for each parameter was chosen to ensure that the results attain the 95% confidence limit.
Neps
The neps study was carried out on the advanced fibre information system (AFIS). For neps testing, a 100 g fibre sample was taken from the material fed to the card; the same amount of sliver sample was taken after the carding operation.
Nep removal efficiency
The parameter characterising the effectiveness of the carding machine in the aspect of nep reduction is the Nep Removal Efficiency (NRE), which is expressed by the equation: NRE [%] = ((Nep Cnt/g feed − Nep Cnt/g delivery) / (Nep Cnt/g feed)) × 100, where the terms denote the nep counts per gram in the fibre stream fed to and delivered by the machine, respectively.
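In code, the NRE is a one-line helper; the inputs are neps per gram as measured by AFIS on the feed material and on the delivered sliver.

```python
def nep_removal_efficiency(neps_per_g_feed, neps_per_g_delivery):
    """NRE in percent: relative reduction in neps/g from card feed to sliver."""
    return 100.0 * (neps_per_g_feed - neps_per_g_delivery) / neps_per_g_feed

# Example: 250 neps/g in the feed and 60 neps/g in the sliver give NRE = 76 %.
print(nep_removal_efficiency(250, 60))
```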
Unevenness and imperfection
The evenness was measured on the Uster Evenness Tester 5, which simultaneously measures hairiness and imperfections. Yarn imperfections refer to the total number of thin places (−50%), thick places (+50%) and neps (+200%) present per 1000 metres of yarn.
Results and discussion
The number of neps per gram in the sliver and the unevenness and imperfection values of the yarns are given in Tab. 4. The yarns were produced before and after changing the SFL and gauge setting.
Effect of SFL PPSI and gauge setting on sliver properties
Stationary flats on the licker-in side and doffer side are recent introductions in the spinning industry for improving card sliver quality. The graphical representation of the effect of carding delivery speed and SFL PPSI on the neps, short fibre content (SFC) and NRE in the sliver is shown in Fig. 2.
It is inferred from the graph (Fig. 2a) that the neps per gram in the sliver are reduced after reducing the points per square inch on the SFL. The chances of fibre damage increase when the number of wire points on the SFL increases; hence, more neps are formed. The points per square inch on the 1st set of plates on the SFL side were reduced from 140, 140 to 90, 140. With this particular experimental setup, the fibre is initially in contact with fewer contact points, so the opening of tufts starts, and then from the 2nd plate the points per fibre increase. Similarly, the PPSI in the 2nd set of SFL was changed from 240, 240, 240 to 90, 140, 140, and in the 3rd set of SFL from 340, 340, 340 to 240, 240, 240. The feeding of fibres to the carding zone increases with an increase in the card delivery rate. This, in turn, affects the carding action as, with an increase in input fibres, the number of fibres per wire point is increased. Hence, the resulting neps per gram increase with increasing delivery speed. The short fibre content (SFC) is defined as the percentage of fibres less than 12.7 mm in length. It is further inferred from Fig. 2b that the SFC per gram has reduced with the change in the setting in the pre-carding zone. Before the change in PPSI on the SFL plates, the seed-fibre-coat content is higher due to the similar points per square inch acting on the tufts. When a fibre enters the pre-carding zone, the opening-up operation may suffer due to a similar trend of wire density. As a result of the change, the points per square inch are reduced, resulting in better opening-up. It is evident from Fig. 2 that with an increase in the delivery speed, the short fibre content tends to increase in the card sliver. This is due to fibre breakage at higher tensions. It is likely that when the flats' speed is kept constant, the fibres are stretched by the higher doffer speed, which results in maximum fibre breakage. It is also inferred from Fig. 2c that the nep removal efficiency (NRE) increased after the change in the pre-carding setting and PPSI on SFL with respect to carding delivery. Neps are created by the mechanical handling and cleaning of the cotton fibre. Neps increase throughout the ginning, opening and cleaning processes. The nep removal efficiency depends on the number of neps present in the feed fibre and the settings in carding. Changing the SFL and its setting directly influences the NRE. The NRE decreases with an increase in the delivery speed of carding. When the delivery speed is increased, the time available for the carding operation is reduced, and hence the neps cannot be opened up properly. The formation of neps may also occur due to the high speed.
Effect of SFL PPSI and gauge setting on yarn properties
The raw materials and the technological process influence the final yarn quality. The graphical representation of the effect of carding delivery speed and SFL PPSI on the unevenness, total IPI and hairiness of the yarn is shown in Fig. 3. The values of unevenness (U) and total imperfections are also given in Tab. 4. Fig. 3a depicts the effect of the PPSI of SFL and the card delivery rate on yarn unevenness. It is clear from Fig. 3a that yarn unevenness decreases after changing the PPSI of SFL. As the points per square inch of the stationary flats are reduced, the opening up of neps is better in the pre-carding zone, which is directly reflected in the final yarn properties. It is also evident from Fig. 3a that yarn unevenness increases with an increase in the delivery rate of the card. As the card delivery rate is increased from 160 to 170 m/min, there is a steady increase in the yarn unevenness (U) from an average value of 9.07 at a 160 m/min delivery rate to 9.6 at a 170 m/min delivery rate. The trend indicates that the yarn unevenness is directly proportional to the card delivery rate. This can be explained as follows: a higher delivery rate results in poorer carding, higher cylinder loading and more leading fibre-hooks in the carded sliver. Ultimately, roving with more leading fibre-hooks is forwarded to the ring frame, contributing to an increase in yarn unevenness. Fig. 3b depicts the effect of the PPSI of SFL and the card delivery rate on total imperfections. There is a tremendous reduction (from 139 to 67 at 160 m/min) in total imperfections after the change in the points per square inch of SFL. The decrease in total imperfections with the decrease in the points per square inch of SFL can be explained by good pre-carding and nep removal at the carding stage. A yarn with a higher number of imperfections will ultimately result in a poorer fabric appearance. The points per square inch of SFL plays a vital role in the pre-carding zone. As discussed earlier, a similar trend of wire density does not improve the sliver quality; by avoiding this similar trend, the nep removal efficiency is increased, which ultimately reduces the total imperfections. Mean plots depicting the effect of the card delivery rate and the PPSI of SFL on yarn hairiness are given in Fig. 3c. It is inferred from Fig. 3c that yarn hairiness increases with an increase in card delivery and is also reduced after reducing the wire density on SFL. The average hairiness before the change in wire point density was 4.94 at 160 m/min, and after the change, the hairiness was reduced to 4.92 at the same delivery speed. The difference between the hairiness before and after the change is not large at 160 m/min, but the difference is greater at 180 m/min. Yarns with high hairiness may result in a greater amount of fabric pilling and surface fuzziness compared to yarns with lower hairiness.
Conclusion
The analysis of the study presents an overview of the effect of changing the points per square inch on the stationary flats of the licker-in side (SFL) on sliver and yarn properties. It has been found that by reducing the points per square inch on the SFL, the number of neps per gram in the sliver is reduced. Also, the SFC/gram decreased with the change in the setting in the pre-carding zone. The yarn unevenness was found to be directly proportional to the card delivery rate. An increase in the card delivery rate results in poor carding, since it leads to higher cylinder loading and more leading fibre-hooks in the carded sliver. A considerable reduction in total imperfections was found with the decrease in the points per square inch of the SFL. Further, a reduction in yarn hairiness was found with the reduction in pin density on the SFL at different card delivery speeds. The results obtained on the basis of this experimentation may be used for customised yarn production for different applications. Various customisations may even be achieved automatically by programming the carding mechanisation. The programmable design of such mechanisation can be obtained by suitable hardware and software interaction, which has not been dealt with in this paper; the prospect of such a design can be investigated further in future work. The mechanical properties of the yarns before and after changing the SFL and gauge setting are an area for future work, as this study has been limited to and focused on physical properties.
Table 1: Specification of fibre used
Table 3: Carding setting parameters. a) Before changing the PPSI on SFL and gauge setting; b) after changing the PPSI on SFL and gauge setting
Table 2: Process parameters for yarn production. NRE is the nep removal efficiency, Nep Cnt/g feed is the nep number per gram in the fibre stream feeding the machine, and Nep Cnt/g delivery is the nep number per gram in the fibre stream delivered by the machine.
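From these definitions, NRE is conventionally computed as the fractional reduction in nep count across the machine, expressed as a percentage. A minimal sketch of the computation is given below; the sample counts are illustrative placeholders, not measurements from this study.

```python
def nep_removal_efficiency(nep_feed, nep_delivery):
    """Nep removal efficiency (%) from nep counts per gram.

    nep_feed: neps per gram in the fibre stream feeding the machine.
    nep_delivery: neps per gram in the fibre stream delivered by the machine.
    """
    if nep_feed <= 0:
        raise ValueError("feed nep count must be positive")
    return 100.0 * (nep_feed - nep_delivery) / nep_feed

# Illustrative values only (not data from this study):
print(nep_removal_efficiency(nep_feed=250.0, nep_delivery=40.0))  # 84.0
```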
Table 4: Effect of changing the PPSI on SFL, gauge setting and delivery speed on yarn and sliver properties. a) Before changing the PPSI on SFL and gauge setting; b) after changing the PPSI on SFL and gauge setting
"year": 2017,
"sha1": "e1305c4caf355ec35709f5189f0fc0a4e68e5c0f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14502/tekstilec2017.60.58-64",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e1305c4caf355ec35709f5189f0fc0a4e68e5c0f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Total assignment of the 1H and 13C NMR spectra of piperovatine
Total and unambiguous assignment of the 1H NMR spectrum of piperovatine [6-(4-methoxyphenyl)-N-(2-methylpropyl)-2,4-hexadienamide] was carried out using conventional 1D NMR methods and spectral spin–spin simulation. Based on these data, the complete assignment of the 13C NMR chemical shift values was made by means of a 13C/1H chemical shift correlation diagram and conventional considerations for the quaternary carbon atoms.
Introduction
Piperovatine (1) [6-(4-methoxyphenyl)-N-(2-methylpropyl)-2,4-hexadienamide] is a naturally occurring alkaloid which exhibits insecticidal [1] and local anesthetic activity [2,3]. It has been isolated mainly from several Ottonia [3-5] and Piper species [2,6]. Although syntheses of the alkaloid have been carried out [4,7-9], detailed NMR investigations of this interesting natural product have, to our knowledge, not been performed. The 1H NMR spectrum of 1 has previously been reported [4,8], but the assignment of the spectral data remained ambiguous; moreover, a 13C NMR study has not been described. In this work we report the unambiguous and complete assignment of the 1H and 13C NMR spectra of 1, isolated during a phytochemical study of Piper darienence, which is used in Bolivian folk medicine against toothache.
Plant material
Piper darienence D.C. was collected in the neighborhood of the Blanco river, between Remancito and Cafetal, in the Beni department, Itenez province, Bolivia, in February 1996. It was classified by Dr. Stephan Beck from the National Herbarium of Bolivia, where a sample (voucher no. 4012) is deposited.
Extraction and isolation
Air-dried roots (532 g) of Piper darienence D.C. were extracted with ethanol after removing the grease with petroleum ether. The ethanol was evaporated under vacuum and the residue was washed with CHCl3. The chloroform was evaporated and the residue chromatographed by column chromatography (CC) on silica gel 60 Å (Aldrich, 70-230 mesh). A fraction of 500 mL was collected using chloroform, and then 8 fractions of 300 mL were collected with ethanol. The second ethanolic fraction was purified by TLC on silica gel (Fluka A.G., Buchs SG, Kieselgel GF254) using chloroform-ethyl acetate (10:2 v/v). Compound 1 was obtained as a solid, which was crystallized from ethyl acetate (300 mg). It was further purified by column chromatography using silica gel 60 Å (Merck, 230-400 mesh) and chloroform-ethyl acetate (10:2 v/v). Crystallization from ethanol gave colorless needles, m.p. 121-122 °C. The identity of 1 was confirmed by comparing its physical and NMR spectral data with those reported in the literature [4].
Nuclear magnetic resonance instrumental conditions
The 1H and 13C NMR spectra were recorded at 300 and 75.4 MHz, respectively, on a Varian XL-300GS spectrometer using CDCl3 with TMS as the internal reference. Measurements were performed at ambient probe temperature using 5 mm o.d. sample tubes. For the 13C/1H chemical shift correlation experiment, the standard pulse sequence was used [10,11]. The spectra were acquired with 1024 data points and 128 time increments with 128 transients per increment. The f1 and f2 spectral widths were 12048.2 and 2334.8 Hz, respectively. The relaxation delay was 1 s and an average 1J(C,H) was set to 140 Hz.
Results and discussion
In previous 1H NMR spectral reports of 1 [4,8], H-2′, H-3, H-4 and H-5 are mentioned only as multiplets, and the signals for the aromatic H-2″, H-3″, H-5″ and H-6″ are not assigned at all. Using conventional 1D NMR methods and spectral spin–spin simulation, we describe herein the precise chemical shift and coupling constant values for each of these protons, as shown in Table 1.
Inspection of the 1H NMR spectrum of 1 showed at first glance the H-2′ signal as a septet. However, such a multiplicity is discarded because the intensity ratio between the most intense signals of a septet is 20/15 = 1.33, in contrast with the digitally read signal intensities on the spectrometer, 135.485/107.925 = 1.255, which clearly corresponds to a nonet (70/56 = 1.25). An amplitude-increased plot shows that the H-2′ signal appears as a nonet (J = 6.7 Hz) centered at 1.79 ppm with intensity ratio 1:8:28:56:70:56:28:8:1, as expected for an isobutyl methine proton. Selective irradiation of this signal simplifies, from a double doublet to a doublet, the signal at 3.16 ppm assigned to the protons of the methylene group (H-1′) and, from a doublet to a singlet, the signal at 0.92 ppm assigned to the protons of the methyl groups (H-3′ and H-4′). The H-3 signal appears at 7.21 ppm as a double doublet (J = 14.9, 10.0 Hz) through couplings with H-2 and H-4; these couplings were confirmed by selective irradiation. The frequencies for H-4 and H-5 appear superimposed in the range 6.08-6.25 ppm. The assignment of their individual resonance frequencies and multiplicities was achieved through selective homonuclear irradiations combined with spectral spin–spin simulation. In agreement with an inspection of 1, and considering only vicinal couplings, the multiplicities for H-4 and H-5 must correspond to a double doublet and a double triplet, respectively. Thus, irradiation of the signal at 7.21 ppm (H-3) simplified the double doublet signal at 6.12 ppm for H-4 to a doublet. On the other hand, irradiation of the signal at 3.42 ppm (H-6) simplified the double triplet signal at 6.20 ppm for H-5 to a doublet. In order to perform the precise assignment of the H-4 and H-5 signals, a spectral spin–spin simulation [12] was made: a six-spin case for the H-2, H-3, H-4, H-5 and 2 H-6 nuclei was solved iteratively. The results fitted the experimental data satisfactorily (RMS frequency error 0.13), as shown in Fig. 1.
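The septet-versus-nonet argument rests on the binomial intensity pattern of first-order multiplets: an n-line multiplet has relative line intensities given by the binomial coefficients C(n-1, k). A short sketch reproducing the ratio test used above:

```python
from math import comb

def multiplet_intensities(n_lines):
    """Relative intensities of a first-order multiplet (a row of Pascal's triangle)."""
    return [comb(n_lines - 1, k) for k in range(n_lines)]

septet = multiplet_intensities(7)   # [1, 6, 15, 20, 15, 6, 1]
nonet = multiplet_intensities(9)    # [1, 8, 28, 56, 70, 56, 28, 8, 1]

# The ratio of the two most intense lines distinguishes the patterns:
print(septet[3] / septet[2])        # 20/15 = 1.333...
print(nonet[4] / nonet[3])          # 70/56 = 1.25
print(135.485 / 107.925)            # measured ratio 1.255 -> matches the nonet
```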
The assignment of the aromatic H-2″, H-3″, H-5″ and H-6″ signals (AA′BB′ pattern) was made on the basis of their chemical shifts [13]. Thus the signal at 7.08 ppm was identified as belonging to H-2″ and H-6″, while the signal at 6.84 ppm was assigned to H-3″ and H-5″.
The remaining proton resonance assignments for H-1′, H-3′, H-4′ and the NH group were confirmed by selective proton irradiations and by exchange with deuterium oxide.
The six-proton doublet signal (J = 6.7 Hz) at 0.92 ppm assigned to H-3′ and H-4′ was simplified to a singlet by irradiation of the signal at 1.79 ppm (H-2′). The two-proton double doublet signal (J = 6.7, 6.1 Hz) at 3.16 ppm assigned to H-1′ was simplified to a doublet by irradiation of the signal at 1.79 ppm (H-2′) and, on addition of deuterium oxide, it also collapsed to a doublet. The broad singlet at 5.49 ppm assigned to the NH group exchanged with deuterium oxide. The complete 1H NMR data of 1 are given in Table 1.
Once the complete 1H NMR spectrum of 1 had been assigned, a 13C/1H chemical shift correlation experiment allowed the assignment of all protonated carbons (Fig. 2). The non-protonated C-1 carbonyl signal appears at 166.2 ppm, and the aromatic carbons C-1″ and C-4″ were assigned at 131.1 and 158.1 ppm, respectively, by application of additivity relationships [13]. The 13C NMR chemical shift values are summarized in Table 1.
"year": 1998,
"sha1": "7f2cfb32fa9995220b6a2b6a705ac22aa2945cdc",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jspec/1998/453785.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7f2cfb32fa9995220b6a2b6a705ac22aa2945cdc",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Broad-persistent Advice for Interactive Reinforcement Learning Scenarios
The use of interactive advice in reinforcement learning scenarios allows for speeding up the learning process for autonomous agents. Current interactive reinforcement learning research has been limited to real-time interactions that offer advice relevant only to the current state. Moreover, the information provided by each interaction is not retained, but is instead discarded by the agent after a single use. In this paper, we present a method for retaining and reusing provided knowledge, allowing trainers to give general advice relevant to more than just the current state. The results obtained show that the use of broad-persistent advice substantially improves the performance of the agent while reducing the number of interactions required of the trainer.
I. INTRODUCTION
Reinforcement learning (RL) is a method used for robot control in order to learn an optimal policy through interaction with the environment, by trial and error [1]. Previous research shows that there is great potential for using RL in robotic scenarios [2], [3]. In particular, deep RL (DRL) has achieved promising results in manipulation skills [4], [5], grasping, and legged locomotion [6]. However, there is an open performance issue in both RL and DRL algorithms: the excessive time and resources required by the agent to achieve acceptable outcomes [7], [8]. The larger and more complex the state space is, the more computational cost is spent finding the optimal policy.
In this regard, interactive RL (IntRL) allows for speeding up the learning process by including a trainer who guides or evaluates a learning agent's behavior [9], [10]. The assistance provided by the trainer reinforces the behavior the agent is learning and shapes the exploration policy, resulting in a reduced search space [11]. Figure 1 depicts the IntRL approach. Current IntRL techniques discard the advice sourced from the human shortly after it has been used [12], [13], increasing the dependency on the advisor to repeatedly provide the same advice in order to maximize the agent's use of it.
Moreover, current IntRL approaches allow trainers to evaluate or recommend actions based only on the current state of the environment [14], [15]. This constraint restricts the trainer to providing advice relevant to the current state and no other, even when such advice may be applicable to multiple states [16]. Restricting the time and utility of advice affects the interactive approach negatively: it creates an increasing demand on the user's time while withholding potentially useful information from the agent [17].

Fig. 1: Interactive reinforcement learning framework. In traditional RL an agent performs an action and observes a new state and reward. In the figure, the environment is represented by the simulated self-driving car scenario, and the RL agent may control the direction and speed of the car. IntRL adds advice from a user acting as an external expert in certain situations. Our proposal includes the use of broad-persistent advice in order to minimize the interaction with the trainer.
This work presents a broad-persistent advising (BPA) approach for IntRL that provides the agent with a method for retaining and reusing previous advice from a trainer. The approach includes two components: generalization and persistence. Agents using the BPA approach exhibit better results than their non-BPA counterparts, with a substantially reduced interaction count.
II. BROAD-PERSISTENT ADVICE
Recent studies [18], [19] suggest persistent agents that record each interaction and the circumstances around particular states. The recorded actions are taken again when the same conditions are met in the future. As a consequence, the recommendations from the advisor are used more effectively and the agent's performance improves. Furthermore, as there is no need to provide advice for each repeated state, less interaction with the advisor is required.
However, as inaccurate advice is also possible, a mechanism for discarding or ignoring advice after a certain amount of time is needed. Probabilistic policy reuse (PPR) is a strategy for improving RL agents that use advice [20]. Where various exploration policies are available, PPR uses a probabilistic bias to decide which one to choose, with the intention of balancing between random exploration, the use of a guideline policy, and the use of the existing policy. The PPR approach is shown in Figure 2.

Fig. 2: Probabilistic policy reuse (PPR) for an IntRL agent using informative advice. If the user recommends an action on the current time step, then the agent's advice model updates and the action is performed. If the user does not provide advice on the current time step, then the agent will follow previously obtained advice 80% of the time (*decays over time) and its default exploration policy the remaining time.
To use PPR, we need a system to store the state-action pairs already advised. When the agent arrives at a certain state at a given time step, an agent using PPR checks with this system whether the state was the subject of trainer advice in the past. If there is advice in the model's memory, the agent has the option of reusing that action. However, a problem arises when using PPR in large and continuous domains: it is not efficient to build a system that stores an unlimited number of state-action pairs. In addition, when the state space becomes very large, the probability that the agent revisits exactly the same state is very small, so building such a model becomes cumbersome and inefficient. BPA therefore includes a model for clustering states, and then builds a system of cluster-action pairs instead of the traditional state-action pairs. When the agent receives the current state information from the environment and no advice from the trainer, it uses PPR by injecting the state into the generalization model and determining its cluster, and then checks whether any advice pertains to the current cluster. If an action was recommended in the past, the agent can reuse it with the PPR selection probability, or otherwise use its default action selection policy.
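A minimal sketch of this selection logic is given below. It is an illustration of the mechanism as described, not the authors' implementation; the clustering model, the decay schedule, and the helper names (cluster_of, default_policy) are assumptions.

```python
import random

class BPAAgent:
    """Sketch of broad-persistent advice with probabilistic policy reuse (PPR)."""

    def __init__(self, cluster_of, default_policy, ppr_prob=0.8, decay=0.999):
        self.cluster_of = cluster_of          # state -> cluster id (generalization model)
        self.default_policy = default_policy  # state -> action (e.g., epsilon-greedy)
        self.advice = {}                      # cluster id -> advised action
        self.ppr_prob = ppr_prob              # probability of following stored advice
        self.decay = decay                    # PPR probability decays over time

    def select_action(self, state, user_advice=None):
        cluster = self.cluster_of(state)
        if user_advice is not None:
            # Direct advice: store it for the whole cluster and take it now.
            self.advice[cluster] = user_advice
            return user_advice
        self.ppr_prob *= self.decay
        if cluster in self.advice and random.random() < self.ppr_prob:
            return self.advice[cluster]       # reuse previously advised action
        return self.default_policy(state)     # fall back to the agent's own policy
```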
A. Mountain car
The mountain car is a control problem in which a car is located on a unidimensional track between two steep hills. This environment is a well-known benchmark in the RL community and is therefore a good candidate for an initial test of our proposed approach.
The car starts at a random position at the bottom of the valley (-0.6 < x < -0.4) with no velocity (v = 0). The aim is to reach the top of the right hill. However, the car engine does not have enough power to climb to the top directly and therefore needs to build momentum by moving toward the left hill first.
An RL agent controlling the car movements observes two state variables, namely the position x and the velocity v. The position x varies between -1.2 and 0.6 along the x-axis, and the velocity v between -0.07 and 0.07. The agent can take three different actions: accelerate the car to the left, accelerate the car to the right, or do nothing.
The agent receives a negative reward of r = -1 at each time step, while no penalty is given once a hill is reached (r = 0). The learning episode finishes when the top of the right hill is climbed (x = 0.6) or after 1,000 iterations, in which case the episode is forcibly terminated.
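A compact sketch of the episode logic exactly as specified above is given below; the car dynamics update itself (step_physics) is left as a placeholder for the standard mountain-car equations, and the function names are illustrative.

```python
import random

GOAL_X, MAX_STEPS = 0.6, 1000

def run_episode(policy, step_physics):
    """Episode loop per the specification above: r = -1 per step, r = 0 at the goal.

    policy: (x, v) -> action in {left, none, right}
    step_physics: ((x, v), action) -> (x, v)   # standard mountain-car dynamics
    """
    x, v = random.uniform(-0.6, -0.4), 0.0     # random start at the valley bottom
    total_reward = 0.0
    for _ in range(MAX_STEPS):
        x, v = step_physics((x, v), policy(x, v))
        if x >= GOAL_X:                        # goal reached: no penalty this step
            return total_reward
        total_reward -= 1.0                    # -1 at every other time step
    return total_reward                        # forcibly terminated
```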
B. Self-driving car
The simulated self-driving car environment is a control problem in which a simulated car, controlled by the agent, must navigate an environment while avoiding collisions and maximizing speed. The car has collision sensors positioned around it which can detect if an obstacle is in that position, but not the distance to that position. Additionally, the car can observe its current velocity. All observations made by the agent come from its reference point, this includes the obstacles (e.g., there is an obstacle on my left) and the car's current speed. The agent cannot observe its position in the environment. Figure 3 shows a representation of the environment.
In each step, the environment provides the agent reward equal to its current velocity. A penalty of -100 is awarded each time that the agent collides with an obstacle. Along with the penalty reward, the agent's position resets to a safe position within the map, velocity resets to the lower limit, and the direction of travel is set to face the direction with the longest distance to an obstacle.
There are five possible actions for the agent to take within the self-driving car environment: accelerate, decelerate, turn left, turn right, and do nothing. The self-driving car environment has nine state features: one for each of the eight collision sensors on the car, and the current velocity of the car. The collision sensor state features are Boolean, representing whether they detect an obstacle at their position. The velocity of the agent has nine possible values: the upper and lower limits, plus every 0.5-unit increment in between. With the inclusion of the five possible actions, this environment has 11,520 state-action pairs.
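The state-action count follows directly from these figures, as a quick consistency check shows:

```python
boolean_sensors = 8      # eight Boolean collision sensors -> 2**8 combinations
velocity_levels = 9      # lower limit, upper limit, and 0.5-unit increments between
actions = 5              # accelerate, decelerate, turn left, turn right, do nothing

state_action_pairs = (2 ** boolean_sensors) * velocity_levels * actions
print(state_action_pairs)  # 11520
```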
C. Simulated robot environment
Additionally, we built an environment for domestic robots using Webots, given the overall good performance shown previously [21]. In this environment, the goal is to train the robot to go from the initial position to the target position. Figure 4 shows our experimental environment in Webots.
The robot is equipped with distance sensors at its left and right eyes and is completely unaware of its current position in the environment. The robot can only choose one of three actions: go straight at 3 m/s, turn left, or turn right. At each step, 0.1 points are deducted if the robot uses the action of turning left or right, while no points are deducted if it chooses to go straight; this is to favor straight movement and to prevent the robot from running in circles by turning left or right continuously. The robot is equipped with a few touch sensors on its sides to detect collisions with the environment. The robot is returned to its initial position and receives a 100-point penalty every time it collides on the way. The robot does not know where the touch sensors are located relative to itself; the only information it receives is whether or not it has collided with an obstacle. When the robot reaches the target position, located in the lower right corner of the environment, the task is considered complete and the robot is rewarded with 1000 points.
The state is represented as a 64 × 64 RGB image taken from the top of the environment. Three actions are allowed: go straight at 3 m/s, turn left, or turn right. The reward function is defined as: turn left or right: -0.1; go straight: 0; collision: -100; reaching the target: +1000.
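A direct encoding of this reward specification is shown below; the values are taken from the description above, while the action names are illustrative.

```python
def robot_reward(action, collided, reached_target):
    """Per-step reward for the simulated robot environment described above."""
    if reached_target:
        return 1000.0                 # task completed
    if collided:
        return -100.0                 # collision penalty (the robot is also reset)
    return -0.1 if action in ("turn_left", "turn_right") else 0.0
```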
A. Simulated users
To compare agent performance, information about agent steps, rewards and interactions is recorded. To identify the efficiency of BPA, we test three cases: no interaction, interaction without BPA, and interaction with BPA.
To provide feedback, we employ simulated users [22]. Each simulated user has a different advice accuracy and frequency. Frequency is the probability that the advisor interacts at a given time step: the higher the frequency, the more often the advisor gives advice to the agent. Accuracy is a measure of the precision of the advice provided: when the advisor's accuracy is high, the proposed action matches the advisor's knowledge; otherwise, the proposed action differs from it. The frequency and accuracy values were simulated using data from a previous user study [11] and are described in Table I.
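A sketch of such a simulated user follows; the exact sampling scheme is an assumption, since only the two parameters (frequency and accuracy) are specified here.

```python
import random

def simulated_user(optimal_action, available_actions, frequency, accuracy):
    """Return advice for the current step, or None when the user stays silent.

    frequency: probability of giving advice at this time step.
    accuracy: probability the advice matches the (assumed known) optimal action.
    """
    if random.random() >= frequency:
        return None                              # no interaction this step
    if random.random() < accuracy:
        return optimal_action                    # accurate advice
    wrong = [a for a in available_actions if a != optimal_action]
    return random.choice(wrong)                  # inaccurate advice

# Example: a user with 20% availability and 95% accuracy.
advice = simulated_user("right", ["left", "none", "right"], 0.2, 0.95)
```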
B. Results
For the mountain car environment, Figures 5a and 5b show the performance of IntRL agents, both non-persistent and persistent, using advice from three users with different levels of advice accuracy and availability. The advice that the agents receive is an action recommendation, informing them of which action to take in the current state. When either agent, persistent or non-persistent, receives an action recommendation directly from the user on the current time step, that action is taken by the agent. The persistent agent additionally remembers that action for the state it was received in and uses the PPR algorithm to continue to take that action in the future. Once the persistent agent has received an action recommendation from the user for a particular state, the user does not interact with the agent for that state again. The persistent agents require substantially fewer interactions than the non-persistent agents: all persistent agents measured less than 1% of steps with direct user interaction. Assuming a direct correlation between the number of interactions and the time required to perform those interactions, the use of persistence offers a large time reduction for assisting users. The results show that the number of interactions is much lower for broad-advice agents compared to the state-based agents, allowing similar performance with much less effort from the trainer.

Fig. 6: Steps and reward for state-based and rule-based (broad) IntRL agents for the self-driving car domain. Both agents obtained comparable performance; however, the advice required from the trainer is considerably less in the rule-based approach.
In the self-driving car environment, the aim of the agent is to avoid collisions and maximize speed. In the experiments, we created two simulated users to provide state-based and rule-based (broad) advice. Both assisted agents outperformed the unassisted Q-learning agent, achieving a higher step count and reward. The obtained steps and reward are shown in Figures 6a and 6b, respectively. Although an episode is forcibly terminated when the agent reaches 3000 steps, Figure 6a shows that the agents never reached this limit. This is because the agents are given a random starting position and velocity at the beginning of each episode, some of which result in scenarios where a crash cannot be avoided. Between the state-based and rule-based (broad) methods there is no considerable performance difference, since both run a similar number of steps and collect a similar reward. However, the number of interactions when using the state-based advice approach is 232, whereas with broad advice it is only 2. Finally, for the simulated robot environment, the results obtained are shown in Figure 7. The non-persistent IntRL agent is shown by a green dashed line, the persistent IntRL agent by a solid green line, and the baseline RL used for benchmarking by a yellow line. Both agents supported by the trainer obtain better results than baseline RL; however, the persistent agent converges slightly earlier than its non-persistent counterpart.
V. CONCLUSIONS
In this work, we have introduced the concept of persistence in interactive reinforcement learning. Current methods do not allow the agent to retain the advice provided by assisting users, possibly because of the effect that incorrect advice has on an agent's performance. To mitigate the risk that inaccurate information poses to agent learning, probabilistic policy reuse was employed to manage the trade-off between following the advice policy, the learned policy, and an exploration policy; in this way, probabilistic policy reuse can reduce the impact that inaccurate advice has on agent learning. Moreover, we presented BPA, a broad-persistent advising approach that implements PPR with generalized advice in continuous-state environments.
A more in-depth review of the generalization model is needed to get the best results when using PPR, since the accuracy of the generalization model greatly affects the speed and convergence of the IntRL model. Additionally, we suggest reducing the number of interactions with the trainer by reusing the actions in the persistent model more often: when the agent reaches a new state that is already covered in memory, the agent reuses the recommended action immediately without interacting with the trainer. However, this should only be done when a good enough generalization model is available.
"year": 2022,
"sha1": "5d436ff5b477a41d6c80388fb50b135fb3680a15",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5d436ff5b477a41d6c80388fb50b135fb3680a15",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Serum Periplakin as a Potential Biomarker for Urothelial Carcinoma of the Urinary Bladder
Introduction
Urothelial carcinoma of the urinary bladder (UCB) is the second most common malignancy in the genitourinary tract. Approximately 75% of UCB cases are diagnosed as nonmuscle-invasive bladder cancer (NMIBC) at the first diagnosis (Shelley et al., 2010). NMIBC has the tendency to recur and may progress to muscle-invasive bladder cancer (MIBC), which is a life-threatening neoplasm (Ikeda et al., 2014). Cystoscopy and urine cytology are typical modalities for the diagnosis and surveillance of UCB. Cystoscopy can identify most papillary and solid lesions, but it is physically uncomfortable for patients. The use of urine cytology is limited because of its low sensitivity. For these reasons, some tumor markers have been investigated (e.g., BTA stat, NMP22), but their sensitivity and specificity are limited and not superior to urine cytology (Toma et al., 2004). To overcome these limitations, preoperative molecular markers are expected to be used as a minimally invasive method for assisting in and predicting a precise diagnosis in patients with UCB (Ghafouri-Fard et al., 2014).
The plakin family mediates the attachment of cytoskeletal filaments at cadherin-mediated cell-to-cell junctions; plakins are able to withstand mechanical stimulation and provide tissue integrity (Jefferson et al., 2004; Sonnenberg et al., 2007). Dysfunctional plakin proteins contribute to diverse diseases, and autoantibodies and mutations perturb their activities with profound consequences. Seven plakin proteins are found in mammalian cells; envoplakin, desmoplakin, and periplakin are associated with desmosomes in various solid tissues. A proteomics technique combining two-dimensional gel electrophoresis (2-DE) with immunoblot analysis has been shown to identify tumor-associated antigenic proteins for UCB (Minami et al., 2014). Periplakin is a candidate tumor marker in patients with UCB: this 195-kDa membrane-associated protein is involved in cellular movement and attachment (Nagata et al., 2001).
We previously found, using immunohistochemical staining, that loss of periplakin expression was associated with biological aggressiveness of UCB. In addition, the majority of UCB cases showed loss or decreased expression compared with normal or benign lesions on pathological slides. We therefore sought to determine whether the dynamics of serum periplakin could detect UCB and predict the prognosis in patients with UCB.
The primary objective of this study was to investigate circulating periplakin levels for use as a potential detection marker for UCB. The secondary objective was to determine whether periplakin levels were associated with clinicopathological features and prognosis in patients with UCB.
Patients
This retrospective study included 52 patients with UCB who were treated at Kitasato University Hospital between August 2004 and July 2009. Serum samples from two patients (a man and a woman) had been used in other studies (Tsumura et al., 2014). Among the remaining 50 patients, there were 43 men (86%) and 7 women (14%) with a median age of 70 years (mean = 68.5; range = 39-82 years). Twenty-two of these patients were treated with radical cystectomy and bilateral pelvic lymphadenectomy, and the other 28 were treated with transurethral resection (TUR). Preoperative serum levels of periplakin were measured. Laboratory studies, chest X-ray, and pelvic computed tomography or magnetic resonance imaging were routinely performed, and there was no evidence of clinical distant or lymph node metastasis in any of the patients. The 2002 Tumor-Node-Metastasis (TNM) classification was used for pathological staging, and the World Health Organization classification was used for pathological grading. Lymphovascular invasion (LVI) was defined as the presence of cancer cells within the endothelial space; cancer cells that merely invaded a vascular lumen were considered negative.
The median follow-up time was 63.3 months (mean = 60.9; range = 6.4-125.9 months) for those patients who were still alive at the last follow-up. A postoperative follow-up examination was scheduled every 3 and 4 months after TUR and cystectomy, respectively, during the first year. Semi-annual examinations were performed during the second year, with annual examinations thereafter; more frequent examinations were scheduled if clinically indicated. None of the patients had received radiation or systemic chemotherapy before surgical treatment, and none had a history of pulmonary or skin diseases.
We also measured serum periplakin levels in 30 normal controls (healthy volunteers). Approval was granted by the ethics committee of Kitasato University School of Medicine and Hospital, and all patients signed written informed consent.
Measurement of serum periplakin
All serum samples were kept at -80°C until use. Serum periplakin levels were detected using an automated micro-dot blot array with a 256 solid-pin system (Kakengeneqs Co., Ltd., Chiba, Japan). In brief, albumin and IgG were removed from sera using a ProteoExtract Albumin/IgG Removal Kit (Merck, Darmstadt, Germany) according to the manufacturer's instructions; 1 μl each of 20-fold diluted albumin- and IgG-depleted sera was spotted onto polyvinylidene difluoride membranes (Millipore Corp., Bedford, MA, USA). The membranes were then blocked with 20% N101 (NOF Corp., Tokyo, Japan)/TBS for 1 h at room temperature. After washing in TBS, the membranes were reacted with 100-fold diluted primary polyclonal antibody against periplakin (Santa Cruz Biotech, Dallas, TX, USA) in 1% N101/TBS for 30 min at room temperature. After washing with TBS containing 0.1% Tween-20, the membranes were incubated with 1000-fold diluted horseradish peroxidase-conjugated anti-mouse IgG polyclonal antibody (Dako, Glostrup, Denmark) for 30 min at room temperature. Finally, signals were developed with Immobilon Western reagent (Millipore Corp.). The data were analyzed using DotBlotChip System software version 4.0 (Dynacom Co., Ltd., Chiba, Japan). Normalized signals are presented as the positive intensity minus the background intensity around the spot.
Statistical analyses
For the purposes of our analysis, gender, age (younger than 65 versus 65 or older), pathological stage (Ta or T1 as NMIBC versus T2 or greater as MIBC), pathological grade (grades 1 and 2 versus grade 3), LVI (positive versus negative), and lymph node status (N0 versus N1 and N2) were evaluated as dichotomized variables. The Mann-Whitney U test was used to evaluate the association of periplakin with gender, age, pathological stage and grade, lymph node status, and LVI, and also to compare serum periplakin levels between UCB patients and normal controls. The Kaplan-Meier method was used to calculate survival functions, and differences were assessed with the log-rank test. The area under the curve (AUC) and the best cut-off point were calculated using receiver-operating characteristic (ROC) analysis. Multivariate survival analyses were performed with the Cox proportional hazards regression model, controlling for serum periplakin, pathological stage and grade, presence of LVI, and lymph node metastases. Statistical significance was set at p < 0.05. All analyses were performed with StatView (version 5.0; SAS Institute, Cary, NC, USA).
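As a hedged illustration of this kind of ROC analysis (not the study's code or data), the AUC and a best cut-off can be obtained from labelled serum measurements as follows; the synthetic numbers below are placeholders, and the cut-off is chosen here by Youden's J statistic, which is one common criterion.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Synthetic placeholder data: 1 = UCB patient, 0 = normal control.
labels = np.array([1] * 50 + [0] * 30)
signal = np.concatenate([np.random.normal(3.0, 2.0, 50),
                         np.random.normal(6.0, 2.0, 30)])

# Lower periplakin indicates disease here, so score by the negated signal.
fpr, tpr, thresholds = roc_curve(labels, -signal)
print("AUC:", auc(fpr, tpr))

# Best cut-off by Youden's J statistic (sensitivity + specificity - 1).
j = tpr - fpr
best = np.argmax(j)
print("cut-off:", -thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```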
ROC curve analysis of the serum periplakin level was performed for the comparison between the UCB group and the control group. The AUC-ROC for UCB was 0.845 (95% CI = 0.752-0.937) (Figure 2). The sensitivity and specificity for UCB, using a cut-off point of 4.045, were 83.7% (95% CI = 70.3%-92.7%) and 73.3% (95% CI = 54.1%-87.7%), respectively.

Association of preoperative serum periplakin with clinicopathological characteristics

The association of serum periplakin with clinicopathological features is shown in Table 1. Median serum periplakin levels in patients with NMIBC and MIBC were 0.00 and 1.48, respectively. Preoperative serum periplakin levels were significantly higher in patients with MIBC than in patients with NMIBC (p = 0.03). There were no significant differences in serum periplakin levels in terms of gender, age, pathological grade, LVI, or lymph node status (p > 0.05).
The Kaplan-Meier method with the log-rank test indicated that patients with normalized serum periplakin signals above the median level of 0.31 showed no significant differences in progression or cancer-specific survival.
In multivariate Cox proportional hazards regression analysis controlling for preoperative serum periplakin level, pathological stage and grade, LVI, and lymph node status as dichotomous variables, none of the factors was associated with an increased risk of progression or cancer-specific death.
Discussion
UCB ranks among the most commonly newly diagnosed cancers, and high-risk NMIBC shows high rates (up to 90%) of recurrence (Shelley et al., 2010). It is important to diagnose UCB accurately and quickly with a simple and cost-effective method. Although TUR and histological examination remain the gold standard, urine cytology is helpful as a noninvasive method for early diagnosis of UCB. With the currently available modalities, there is no reliable biochemical or molecular examination that could be used as a universal screening tool for UCB.
Although autoantibodies to periplakin have been investigated in several reports (for example, in pulmonary and skin diseases) (Park et al., 2006; Taille et al., 2011), this is the first study to evaluate serum periplakin for cancer detection, particularly in UCB. Serum periplakin was significantly lower in patients with UCB than in normal controls. In addition, using the best cut-off point determined by the ROC curve, preoperative serum periplakin potentially acts as a diagnostic biomarker. With these encouraging results from the dot blot system for serum periplakin, the diagnosis of UCB could become simpler and noninvasive.
Recent studies have reported on the biological role of periplakin in cancerous lesions. Downregulation of periplakin was correlated with the progression of esophageal squamous cell carcinoma (Hatakeyama et al., 2006; Nishimori et al., 2006). Cyclin A2-induced upregulation of periplakin was associated with poor prognosis as well as cisplatin resistance in endometrial cancer cells (Suzuki et al., 2010). Periplakin silencing reduced migration and attachment of pharyngeal squamous cancer cells (Tonoike et al., 2011), and periplakin silencing in triple-negative breast cancer cells increased cell growth and reduced cell motility (Choi et al., 2013). We previously reported, using immunohistochemical staining, that loss of periplakin expression was associated with pathological stage and cancer-specific survival in patients with UCB. Periplakin is imperative for maintaining epithelial cell barriers, cellular movement, and attachment in normal physiology (Nagata et al., 2001; Jefferson et al., 2004; Sonnenberg et al., 2007). Normal expression of periplakin would suppress tumor progression; when altered, however, it would allow cancer cells to grow, detach, invade, and gain access to the vascular and lymphatic systems.
Overexpression of cyclin A2 has been reported in various malignant tumors, including UCB (Chao et al., 1998; Dobashi et al., 1998; Mrena et al., 2006; Sun et al., 2011). Cyclin A2-induced cisplatin resistance is reflected by the suppression of drug-induced apoptosis. The cyclin A2-mediated reduction in apoptosis is attributable to activation of the phosphatidylinositol 3-kinase (PI3K)/Akt pathway, a major constituent of the mitochondrial anti-apoptotic pathway. Activation of the p-Akt pathway is reportedly one of the mechanisms by which carcinoma cells avoid the effect of chemotherapeutic drugs (Clark et al., 2002; Yang et al., 2006), and p-Akt levels were elevated in cisplatin-resistant cells harboring a PTEN gene mutation (Gagnon et al., 2004). Because periplakin is known to bind to intermediate filaments and Akt (van den Heuvel et al., 2002), Suzuki et al. (2010) demonstrated a direct association of periplakin with Akt: silencing of periplakin using siRNA significantly reduced basal p-Akt expression, suggesting that the binding of periplakin positively regulates Akt activity. Interestingly, high expression of p-Akt was associated with high grade and stage of UCB (Sun et al., 2011), and locally advanced or metastatic UCB showed resistance to cisplatin-based chemotherapy (Ikeda et al., 2011). We found that serum periplakin expression was higher in patients with MIBC than in those with NMIBC, but serum levels in both groups were significantly decreased compared with normal controls. Although previous reports showing loss of periplakin expression associated with pathological stage seem discordant with the present study, both the histological and the serum expression of periplakin indicated no or very slight expression in patients with UCB compared with normal controls. Although the precise role of the serum periplakin protein in patients with UCB remains unknown, it is possible that biologically aggressive UCB slightly increases the periplakin protein in serum, in keeping with the p-Akt pathway, but not to the high level found under normal conditions, and can then progress locally or distantly and survive in anti-cancerous circumstances, for example, circulating chemotherapeutic agents in the blood stream.
This study has several limitations. First, the relatively low number of patients limits the statistical power. Second, this study was performed exclusively in Japan, so more patients of different ethnicities and from countries with different genetic, epigenetic, and environmental risk factors are needed to confirm our results, because the proteins of the plakin family are detected as autoantibodies in other lesions (Park et al., 2006; Taille et al., 2011). Third, periplakin is located not only in the urothelium but also in other cells; the role of serum periplakin expression needs to be validated in other types of diseases, including cancerous and inflammatory lesions. Finally, the detailed mechanisms and dynamics relating the immunohistochemical and serum findings of periplakin need to be determined.
In conclusion, patients with UCB demonstrated significantly decreased expression of serum periplakin protein compared with normal controls. In addition, we found that serum periplakin expression was higher in patients with MIBC than in those with NMIBC, although serum levels in both groups were significantly decreased compared with those in normal controls. Further multi-institutional evaluations of serum periplakin in a large patient population are warranted before it can be included in routine clinical use for early detection of UCB. It may be suitable as an adjunct to urine cytology and cystoscopy as a noninvasive diagnostic modality.
"year": 2014,
"sha1": "555ea06bfe457ed254be38afa8c94588fda8c33b",
"oa_license": "CCBY",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201405458144912&method=download",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "78f13b21c41668a15b00744d0ad3a50a99d94c8c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
When Experiments Travel: Clinical Trials and the Global Search for Human Subjects. Adriana Petryna, Princeton University Press, 2009
The author sets the scene for the examination of global drug trial development by looking at the ways in which clinical trial evidence is constituted and experimental subjects are identified. 'Treatment saturation', characterised by a shrinking pool of human subjects available to the pharmaceutical industry in the West, and an increasing demand both for new markets and for human research subjects in general are identified as factors in the offshoring of clinical trials. Petryna shows how in the last 20 years the sites of research have shifted from academic institutions to contract research organisations (CROs), which are typically subcontracted by pharmaceutical sponsors to collect the evidence required for drug approval by the FDA (the US Food and Drug Administration) or the EMEA (the European Agency for the Evaluation of Medical Products). Such CROs are competitive trans-national businesses that run trials not only for the pharmaceutical industry but also for bio-technological products and medical devices. Their projected growth is faster than that of the pharma industry.
Responsible for locating research sites, patients, local experts and ethical review boards, and on occasion for drawing up study designs and performing analyses, the CROs may work with primary healthcare facilities such as hospitals and with consortia of medical specialists. Some even have centralised institutional review boards to review protocols and subject recruitment to ensure the safety of trial volunteers. Alongside more permissive national legislative environments, factors such as population disease profiles, mortality rates, patient trial costs and the potential for future marketing of the drug to be approved are essential in identifying new clinical trial sites. Petryna's analysis illustrates how regulatory requirements, both in terms of patient safety and standardised scientific protocols, are made sense of by local actors in different ways.
The offshoring of clinical trials equally needs to produce convincing scientific evidence for each drug's adoption, and various strategies are presented by clinical trial experts in relation to the inclusion/exclusion criteria for the selection of patients and the ways in which these impact upon the collected evidence. Issues of competition, economic profit, risk, transparency and ethics are weighed individually and collectively through a constant comparison between different countries and locations to illuminate further aspects of the complex dynamics created by various interests, actors and agencies.
Petryna develops the concept of "experimentality" defined as a "distinct modus operandi", "decentralised and diffused" which supports the global drug market. Her analysis suggests that experiments are more than simple hypothesis-testing instruments, "they are operative environments that redistribute public health resources and occasion new and often tense medical and social fields". In this process "the line between what counts as experimentation and what counts as medical care is in flux". Furthermore, the ethics of offshore clinical trials remain, despite strict scientific protocols (and occasional scandals), a malleable "workable document". One instance of such malleability is the "expedient experimentality" presented by public health crises in African countries, where international ethics codes can easily be overlooked in favour of immediate access to valuable, yet vulnerable subjects. In order to present experiments as ethically standardised, the complex ethics surrounding the clinical trials placebo is addressed by providing equivalent medication or the best local treatment available. As the research aims to keep costs low, clinical trial locations are chosen so that the best local treatment is hardly better than placebo, for instance, vitamin C. Other examples on the offshoring of clinical trials to Eastern Europe and Brazil depict poignantly ethical conundrums regarding what happens in the aftermath of the trials both to patients and to health care systems.
Yet the issue of whether such trials are a public good or an exploitative mechanism remains complex. Petryna shows how, in different ailing healthcare systems, the opportunity of clinical trials seems attractive both to clinicians and administrators, who have few resources but great entrepreneurial spirit. Moreover, these trials may offer a glimpse of hope to patients with little access to medication. At the same time, as the trials unfold, they become powerful marketing tools for specific drugs, instrumental in the reorganisation of healthcare priorities and in the production of a pharmaceuticalization of health care in specific regions of the world (i.e. an increasing dependency on particular, expensive, but not necessarily cost-effective drugs, often introduced through clinical trials). In Brazil, for instance, patient groups mobilise public opinion to lobby the state for access to drugs previously on trial. Such developments raise further questions from regulators, governments, researchers, other medical practitioners and patients themselves regarding the continuity of treatment and control over the patients' trial data, matters that are neither part of the initial clinical trial contract nor of the consent form, but emerge in the aftermath of the trials. Furthermore, such clinical trials can also provide opportunities for clinicians not only to distribute trial drug resources to patients, but also to carry out further research for the benefit of such patients and in the interests of their own national healthcare systems. It is here, argues Petryna, that a critical assessment of the localised impact of the drug trial political economy is undertaken.
Combining scientific rigour with the need to inform public opinion and support government healthcare decisions, researchers continue to monitor the use of clinical trial drugs in order to expose the shortcomings of organisational cultures in securing the integrity of scientific research and patient safety.
To conclude, Petryna's argument provides a rich, detailed and compelling account of the many economic and political interests, complex moral issues and cultural settings of offshore clinical trials. The book will be of particular relevance to scholars in sociology, anthropology and health studies interested in the dynamics of pharmaceutical markets, their impact on less developed countries, and the production of scientific and clinical evidence.
"year": 2009,
"sha1": "6a95e5943da1f262a3530db37bbb0657f4d9ec32",
"oa_license": "CCBYNC",
"oa_url": "https://lsspjournal.biomedcentral.com/track/pdf/10.1186/1746-5354-5-2-84",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6a95e5943da1f262a3530db37bbb0657f4d9ec32",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Isothermal Plasma Waves in Gravitomagnetic Planar Analogue
We investigate the wave properties of the Kerr black hole with isothermal plasma using the 3+1 ADM formalism. The corresponding Fourier-analyzed perturbed GRMHD equations are used to obtain the dispersion relations. These relations lead to real values of the components of the wave vector $\textbf{k}$, which are used to evaluate quantities such as the phase and group velocities. These are discussed graphically in the neighborhood of the pair production region. The results obtained verify the conclusion of Mackay et al., according to which the rotation of a black hole is required for negative phase velocity propagation.
Introduction
The study of gravitomagnetic waves around a rotating black hole is important because the existence of black holes can ultimately be verified with the help of infalling plasma radiation and the super-radiance of gravitomagnetic waves. Information from the magnetosphere can be transmitted from one region to another only by the modes of propagation allowed by the plasma state. The gravitational field and the wind bring perturbations to the external fluid dynamics.
The response of black holes to external perturbations can be explored with the help of the wave scattering method.
General Relativity is a theory of four-dimensional spacetime, but we experience a three-dimensional space evolving in time. It is easier to split spacetime into three-dimensional space and one-dimensional time to develop a better understanding of physical phenomena. The split usually used in understanding the general relativistic physics of black holes and plasmas is the 3+1 ADM split introduced by Arnowitt et al. [1]. This split in the formulation of general relativity is particularly appropriate for applications to black hole theory, as described by Thorne et al. [2]-[4]. Using this formalism, wave propagation theory in the Friedmann universe was investigated by Holcomb and Tajima [5], Holcomb [6] and Dettmann et al. [7]. Komissarov [8] discussed the famous Blandford-Znajek solution.
Blandford and Znajek [9] found a process which describes the extraction of rotational energy in the form of Poynting flux. A black hole with a force-free magnetosphere behaves as a battery with internal resistivity in a circuit formed by the poloidal current. This current system is considered to be equivalent to incoming and outgoing waves; the incoming waves transport energy in a direction opposite to the Poynting flux. Penrose [10] pioneered the idea of the extraction of energy from a rotating black hole by a specific process called the Penrose process. In the wave analogue of the Penrose process [11], an incoming wave with positive energy splits into a transmitted wave with negative energy and a refracted wave with enhanced positive energy. The negative energy wave propagating into the black hole is equivalent to a positive Poynting flux coming out of the horizon [12]-[13].
The key features of the Kerr black hole were summarized by Müller [14], who investigated accretion physics in the plasma regime of general relativistic magnetohydrodynamics (GRMHD). Punsley et al. [15] considered black hole magnetohydrodynamics in a broader sense. Musil and Karas [16] observed the evolution of disturbances originating in the outer parts of the accretion disk and developed a numerical scheme to show the transmission and reflection of waves. Koide et al. [17] modeled the GRMHD behavior of plasma flowing into a rotating black hole in a magnetic field and showed, through numerical simulations, that the energy of a spinning black hole can be extracted magnetically. Zhang [18]-[19] formulated the black hole theory for stationary symmetric GRMHD with its applications in Kerr geometry, and discussed wave modes for cold plasma with specific interface conditions. Buzzi et al. [20]-[21] provided a linearized treatment of transverse and longitudinal plasma waves in a general relativistic two-component plasma (3+1 ADM formalism) propagating in the radial direction close to the Schwarzschild horizon.
Mackay et al. [22] put forward the idea that negative phase velocity plane waves propagate in the ergosphere of a rotating black hole. They verified that the rotation of a black hole is required for negative phase velocity propagation, which is a characteristic of a Veselago medium. This medium was hypothesized by Veselago [23] and later realized experimentally [24] as a material (called a metamaterial or left-handed material). Much work has been carried out to investigate the characteristics of this medium [25]. Woodley and Mojahedi [26] showed, using full-wave simulations and analytical techniques, that in left-handed materials the group velocity can be either positive (backward wave propagation) or negative. Sharif and Umber [27]-[28] investigated some properties of plasma waves by analyzing real wave numbers. The analysis was done for cold as well as isothermal plasmas living in the neighborhood of the event horizon, using the Rindler approximation of the Schwarzschild spacetime. In a recent paper, the same authors [29] found some interesting properties of cold plasma waves by applying a perturbative wave analysis to the GRMHD equations in the vicinity of the Kerr black hole. They also discussed the existence of a Veselago medium near the pair production region. The present paper extends that work to investigate the wave properties of an isothermal plasma. This work focuses on the following three main objectives: 1. The behavior of gravitomagnetic waves under the influence of gravity and the magnetospheric wind is analysed. This helps us to detect the response of the black hole magnetospheric plasma oscillations to gravitomagnetic perturbations near the pair production region. The pair production region lies near the event horizon of the black hole.
2. The existence of a Veselago medium in the black hole regime is checked.
3. The regions of negative phase velocity propagation are investigated and the results are compared with those obtained by Mackay et al. [22].
To this end, we derive the GRMHD equations in the 3+1 formalism using the isothermal equation of state. The component form of the equations for specific background assumptions is obtained by introducing perturbations. We consider the perturbed quantities as plane harmonic waves produced by gravity and by the wind due to the black hole's rotation. The Fourier analysis method for waves is applied and the dispersion relations are derived. These relations yield the x-component of the wave vector, from which the relevant quantities are obtained to analyze the wave properties near the pair production region. The paper is organized as follows. The next section describes the Kerr analogue spacetime and the mathematical laws in the 3+1 formalism for this model. Section 3 is devoted to the assumptions on the background flow. In Section 4, the GRMHD equations, along with their Fourier-analyzed perturbed form for the isothermal equation of state of the plasma, are given. Section 5 provides the solutions of the dispersion relations. In the last section, we discuss the results.
Mathematical Framework
This section contains the line element for a general spacetime model. The electrodynamics corresponding to the Kerr planar analogue in the 3+1 formalism is also considered.
Description of Model Spacetime
The line element of a spacetime in the 3+1 formalism can be written as
$$ds^2 = -\alpha^2 dt^2 + \gamma_{ij}(dx^i + \beta^i dt)(dx^j + \beta^j dt), \qquad (2.1)$$
where the lapse function (α), the shift vector (β) and the spatial metric (γ) are functions of the time and space coordinates. We consider the planar analogue of the Kerr spacetime [19], given by Eq. (2.2) below. Here z, x, y and t correspond to Kerr's radial r, axial φ, poloidal θ and time t coordinates. The Kerr metric depends non-trivially on both r and θ, whereas this model metric depends on z only. The lapse function α is taken to be unity to avoid the effects of the horizon and redshift. The value of the shift function β (the analogue of the Kerr-type gravitomagnetic potential) decreases monotonically from 0 (z → ∞) to some constant value (z → −∞). We assume the direction of β to be along the x-axis. This shift function drives an MHD wind which extracts translational energy, analogous to the rotational energy in the Kerr case. The shift vector in three dimensions will be denoted by the Greek letter β. The Kerr-type horizon has been pushed off to z = −∞. The pair production region lies at z = 0, where the plasma is created. The newly created particles are then driven up to relativistic velocities by the magnetic-gravitomagnetic coupling as they flow off to infinity and down towards the horizon. Geometrized units will be used throughout the paper.
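As a reconstruction of Eq. (2.2), which was lost in extraction: assuming a flat spatial metric $\gamma_{ij} = \delta_{ij}$ and $\beta = (\beta(z), 0, 0)$ as described above (the overall sign convention for β is our assumption), the planar Kerr analogue reads
$$ds^2 = -\alpha^2 dt^2 + (dx + \beta\,dt)^2 + dy^2 + dz^2, \qquad (2.2)$$
with α set to unity for this model.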
Electrodynamics in Kerr Planar Analogue
We consider the magnetosphere to be filled with an MHD fluid and take the perfect MHD condition in the fluid's rest frame, where V, B and E are the fiducial observer (FIDO) measured fluid velocity, magnetic field and electric field, respectively. For perfect MHD flow in (2.1) with α = 1, the differential form of Faraday's law in the 3+1 formalism [18] takes the form of Eq. (2.4), where d/dτ ≡ ∂/∂t − β·∇ is the FIDO-measured rate of change of any three-dimensional vector in absolute space. Gauss's law of magnetism according to the FIDO can be written as [18] ∇·B = 0. (2.5) For (2.1) with α = 1, the local conservation law of rest mass [18] according to the FIDO is Eq. (2.6), where ρ₀ is the rest-mass density, γ is the Lorentz factor and D/Dτ ≡ d/dτ + V·∇ is the time derivative moving along the fluid. The FIDO-measured law of force balance [18] for the spacetime given by Eq. (2.1) with α = 1 takes the form of Eq. (2.7), where μ is the specific enthalpy and p is the pressure of the fluid. The FIDO-measured local energy conservation law (Eq. (2.4) of [28]), for Eq. (2.1) with α = 1, is given by Eq. (2.8).
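For reference, a minimal sketch of the closure relations named above; the component forms of Eqs. (2.4) and (2.6)-(2.8) are not reproduced in this extraction, and only the standard perfect-MHD condition and the two derivative operators are quoted:
$$\mathbf{E} + \mathbf{V} \times \mathbf{B} = 0, \qquad \frac{d}{d\tau} \equiv \frac{\partial}{\partial t} - \beta\cdot\nabla, \qquad \frac{D}{D\tau} \equiv \frac{d}{d\tau} + \mathbf{V}\cdot\nabla, \qquad \nabla\cdot\mathbf{B} = 0.$$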
Specialization of Background Flow for Model Spacetime
In this section, we give the background flow and the related assumptions which will be used to simplify the problem.
Description of Flow Quantities
The FIDO-measured velocity of the fluid can be described by a spatial vector field lying in the xz-plane [19], with the Lorentz factor taking the form γ = 1/√(1 − u² − V²). The magnetic field measured by the FIDO is also assumed to lie in the xz-plane, where B is constant, and the corresponding Poynting vector follows from these fields. We consider an example of the stationary flow of an isothermal MHD fluid in our model spacetime (2.2). Such flows are used as stationary model magnetospheres whose dynamical perturbations are to be studied. The plasma moves in the xz-plane. The perturbed flow is along the z-direction due to the black hole's gravity and along the x-direction due to the rotation of the black hole (the direction of the shift vector of our analogue spacetime). This flow will be analyzed to determine the properties of the plasma waves.
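As an illustration of the background fields just described: the component decomposition below, including the λ parameter that reappears in Section 5.1, is our assumption about the equations lost in extraction; only the Lorentz factor is quoted from the text, and the Poynting vector follows from the perfect-MHD condition $\mathbf{E} = -\mathbf{V}\times\mathbf{B}$:
$$\mathbf{V} = V\,\hat{e}_x + u\,\hat{e}_z, \qquad \mathbf{B} = B\,(\lambda\,\hat{e}_x + \hat{e}_z), \qquad \gamma = \frac{1}{\sqrt{1 - u^2 - V^2}},$$
$$\mathbf{S} = \frac{1}{4\pi}\,\mathbf{E} \times \mathbf{B} = \frac{1}{4\pi}\left[B^2\,\mathbf{V} - (\mathbf{V}\cdot\mathbf{B})\,\mathbf{B}\right].$$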
Perturbations and Wave Propagation
The perturbed flow in the magnetosphere (which lies in the xz-plane) can be characterized by the velocity V, the magnetic field B, the fluid density ρ and the pressure p. We denote the unperturbed quantities by a superscript zero, and dimensionless notations are used for the perturbations (δV, δB, δρ, δp). The perturbed variables are then the unperturbed quantities plus these perturbations. It is also assumed that the perturbations have a sinusoidal dependence on t, x and z, i.e., they are taken as plane harmonic waves (see the sketch below). Using the values of the components of k, we can discuss quantities such as the phase velocity vector, the group velocity vector, the refractive index and its change with respect to the angular frequency. These quantities help us to investigate the wave behavior of the Kerr black hole magnetosphere and the properties of a Veselago medium.
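A sketch of the harmonic ansatz and the derived quantities referred to above (the sign convention in the exponent is our assumption; geometrized units with c = 1 are used, so the refractive index is the reciprocal of the phase speed):
$$\delta Q(t, x, z) \propto e^{-i(\omega t - k_x x - k_z z)}, \qquad \mathbf{k} = (k_x, 0, k_z),$$
$$v_p = \frac{\omega}{|\mathbf{k}|}, \qquad v_g = \frac{\partial \omega}{\partial \mathbf{k}} \;\;\big(\text{e.g. } v_{gx} = \partial\omega/\partial k_x\big), \qquad n = \frac{1}{v_p}.$$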
GRMHD Equations for Kerr Spacetime in Isothermal State of Plasma
An isothermal equation of state means that there is no exchange of energy between the plasma and the magnetic field. This state can be expressed by Eq. (3.3). When this equation of state is used, the set of GRMHD Eqs. (2.4)-(2.8) takes the component form of Eqs. (4.1)-(4.5) for the spacetime given in Eq. (2.2).
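Eq. (3.3) itself was lost in extraction. In the companion isothermal-plasma analyses [27]-[28], the isothermal condition is commonly written through a constant specific enthalpy; we quote that form here as an assumption about what Eq. (3.3) contains:
$$\mu = \frac{\rho + p}{\rho_0} = \text{constant}.$$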
These equations are handled in a way similar to [27]-[28]. Equations (3.1) and (3.2), together with the restrictions on the velocity and magnetic fields given in Section 3.1, lead to the perturbed form of Eqs. (4.1)-(4.5) given in Appendix A. When Eq. (3.3) is used, the Fourier-analyzed perturbed equations follow, the last of which, Eq. (4.8), gives the relation between the x- and z-components of the wave vector, i.e., k_z = −(c_4/c_5) k_x, which will be used in the next section.
Numerical Solutions
This section is devoted to the numerical solutions of the dispersion equations. The following subsection contains the simplifying assumptions which make the equations easier to deal with.
Simplifying Assumptions
In our stationary symmetric background, the x-component of the velocity vector can be written in the form given in [19]. We take the shift function [19] to be β = tanh(z) − 1, with V_F = 1. Further, B² = 8π and λ = 1 are taken so that the magnetic field becomes constant; the x-component of the velocity then takes a definite form. Substituting the value of V into the conservation law of rest mass on the three-dimensional hypersurface, with the assumption that the rest-mass density is constant, we obtain the quadratic equation in u
3u² + 2u tanh(z) + tanh²(z) − 1 = 0,
under the assumption that A/ρ_0 = 1. The solutions of this equation and the corresponding values of V are given by Eqs. (5.2) and (5.3), and we use these values to solve the dispersion relations; the Poynting vector is evaluated for the same values. These quantities are valid for the region outside the pair production region. We consider the region −5 ≤ z ≤ 5 and omit −1 ≤ z ≤ 1 because of the large variations in the background flow quantities there. In the rest of the region these quantities become effectively constant, so the Fourier-analysis procedure is valid. Further, we use the relation k_z = −k_x, which reduces the wave vector to (k_x, 0, −k_x).
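As a quick check, applying the quadratic formula to the equation above gives the two roots explicitly; the discriminant $4\tanh^2(z) - 12(\tanh^2(z) - 1) = 4(3 - 2\tanh^2(z))$ is positive for all z, so both roots are real:
$$u_{\pm} = \frac{-\tanh(z) \pm \sqrt{3 - 2\tanh^2(z)}}{3}.$$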
Computer algebra (Mathematica) is used to evaluate a root of the dispersion relation for the plasma moving towards the black hole with the velocity components given by Eq. (5.2). The code is provided as a separate file; the other roots can be obtained in a similar manner.
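The same numerical procedure can be sketched in Python. Since the actual dispersion relation from Section 4 is not reproduced in this extraction, the `dispersion` function below is a placeholder used only to illustrate the bracket-and-refine root search over the (z, ω) grid; only `u_plus` comes from the paper's quadratic.

```python
import numpy as np
from scipy.optimize import brentq

def u_plus(z):
    """One root of 3u^2 + 2u tanh(z) + tanh(z)^2 - 1 = 0 (Section 5.1)."""
    t = np.tanh(z)
    return (-t + np.sqrt(3.0 - 2.0 * t**2)) / 3.0

def dispersion(kx, z, omega):
    # PLACEHOLDER: stands in for the determinant of the perturbed GRMHD
    # system; replace with the actual expression from Section 4.
    u = u_plus(z)
    return kx**3 - (omega**2 + u) * kx - omega * u

def find_root(z, omega, kmin=-10.0, kmax=10.0, n=400):
    """Bracket one sign change of the dispersion relation and refine it."""
    ks = np.linspace(kmin, kmax, n)
    vals = np.array([dispersion(k, z, omega) for k in ks])
    for i in range(n - 1):
        if vals[i] == 0.0:
            return ks[i]
        if vals[i] * vals[i + 1] < 0.0:
            return brentq(dispersion, ks[i], ks[i + 1], args=(z, omega))
    return np.nan  # no real root found in the bracket

# Evaluate k_x on the grid used in the paper, omitting -1 <= z <= 1.
for z in (-5.0, -2.0, 2.0, 5.0):
    for omega in (0.5, 1.0, 5.0):
        print(f"z={z:+.1f}  omega={omega:4.1f}  kx={find_root(z, omega): .4f}")
```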
It is clear that Figures 1 and 2 represent the neighborhood of the pair production region towards the outer end (as the wave number is found for positive values of z), whereas Figures 3, 4, 5 and 6 show the neighborhood of the pair production region towards the event horizon (as the wave number is determined for negative values of z).
Results
First, we obtain k_x for the velocity components given by Eqs. (5.2) and (5.3) in the positive-z region. The results are shown in Figures 1 and 2, respectively.
In Figure 1, the x-component of the wave vector is negative near the pair production region and for waves with negligible angular frequency. For each angular frequency, the waves grow monotonically in number as they move away from the event horizon, with a sudden increase in the wave number. [Figure caption residue, Figures 1-2: for the velocity components of Eq. (5.2), the plasma admits the properties of a Veselago medium near the pair production region; as the waves move away from it they disperse normally, and negative phase and group velocity propagation regions are observed near the pair production region.] The change in the refractive index with respect to the angular frequency is positive for the regions (i) 1.6 ≤ z < 2, 0.4 ≤ ω ≤ 1, (ii) 2 ≤ z < 3, 0.42 ≤ ω ≤ 2.92, (iii) 3 ≤ z < 4, 0.225 ≤ ω ≤ 10 and (iv) 4 ≤ z ≤ 5, 0.14 ≤ ω ≤ 10, for which the dispersion is normal [26], [30]. In the rest of the region, most of the points admit anomalous dispersion.
Figure 2 shows that the x-component of the wave vector is negative for the region 1.0 ≤ z ≤ 1.89. It is large near the event horizon and decreases up to z = 1.75, after which it increases and fluctuations occur in the values. In the region 1.89 < z ≤ 10, it takes random values. The negative values of k_x in this region imply that the wave vector is opposite in direction to the Poynting vector, which indicates the properties of a Veselago medium. In the region 0 ≤ z ≤ 1.89, the x-components of the phase and group velocities take negative values, so this is a region of negative phase and group velocity propagation. Both these quantities admit random values in the region 1.89 ≤ z ≤ 10. For the region 1 ≤ z ≤ 1.6, 0 < ω ≤ 0.079, the change in the refractive index with respect to the angular frequency is positive, hence the dispersion is normal. In the region 1 ≤ z ≤ 1.6, 0.079 < ω ≤ 10, we have dn/dω < 0, which indicates anomalous dispersion [26]. The rest of the region shows scattered points of normal as well as anomalous dispersion.
For the negative-z region, i.e., the region towards the event horizon of the black hole in the neighborhood of the pair production region, we obtain one value of k_x for the velocity components given by Eq. (5.2) and three for the velocity components given by Eq. (5.3). These values are shown in Figures 3, 4, 5 and 6, respectively.
In Figure 3, the x-component of the wave vector is negative for the region −1.925 ≤ z ≤ −1.0, where the Poynting vector is parallel to the wave vector and hence the medium is the usual one. A refractive index greater than one and a positive change in the refractive index with respect to the angular frequency indicate normal dispersion. In the rest of the region, all three quantities admit random values, so there are both normal and anomalous points of dispersion. For the region −1.4 ≤ z ≤ −1.0, 0.5 ≤ ω ≤ 10, the x-components of the phase and group velocities are negative, with v_px > v_gx. These velocity components admit random values in the rest of the region.
Figure 4 indicates that the x-component of the wave vector is negative for the whole region; hence the wave vector and the Poynting vector are in the same direction, showing the existence of the usual medium. As the values of z and ω grow, k_x decreases, and hence v_px and v_gx are negative in this region. Although v_px > v_gx for the region 0 ≤ ω ≤ 10^-15, they are nearly equal in the rest of the region. The refractive index is greater than one in the whole region. The refraction increases as the waves move towards the pair production region. The change in the refractive index with respect to the angular frequency is negative for the region 0 < ω ≤ 0.6, which shows anomalous dispersion of the waves. For the rest of the region it is positive, indicating normal dispersion.
Figure 5 shows that k_x is positive for the whole region. Thus the wave vector is opposite in direction to the Poynting vector, which indicates the presence of a Veselago medium. The quantity k_x increases with increasing z and ω, except for waves with negligible angular frequency. v_px and v_gx are nearly equal and take positive values which, being directed against the Poynting vector, correspond to negative phase and group velocity propagation in the whole region. The refractive index is negative and decreases as the values of z and ω increase. The change in the refractive index with respect to the angular frequency is negative for the regions (i) −2 ≤ z ≤ −1, 2.15 ≤ ω ≤ 10, (ii) −3 ≤ z < −2, 5.5 ≤ ω ≤ 10 and (iii) −4 ≤ z < −3, 9.5 ≤ ω ≤ 10, which indicates anomalous dispersion in these regions. It is positive for the rest of the region, indicating normal dispersion except for waves with negligible angular frequency.
In Figure 6, the x-component of the wave vector is positive for the whole region and increases with increasing angular frequency. As the waves move away from the pair production region, the x-component of the phase velocity decreases slightly and then increases. In contrast, the x-component of the group velocity increases a little and then decreases. The refractive index is negative because the Poynting vector is opposite in direction to the wave vector, which shows the existence of a Veselago medium. The positive values of the x-components of the phase and group velocities, taken against the Poynting flux, show the existence of negative phase and group velocity propagation regions. The change in the refractive index with respect to the angular frequency is negative throughout the region and hence shows anomalous dispersion, except for waves with negligible angular frequency.
Conclusion
It is well known that charged particles are created in the pair production region. Some of these particles, which are pushed onto orbits with negative energy by the Lorentz force, move towards the event horizon, while others move towards the outer end of the magnetosphere. These particles reach their destinations only if the plasma existing in the neighborhood of the pair production region allows them to do so. The generation of plasma is necessary to support the magnetically dominated MHD flow. Due to particle generation, waves are produced in the neighborhood of the pair production region. The dispersion relations of these waves help us understand to what extent the surrounding medium lets the waves propagate through it.
This paper studies the wave properties of an isothermal plasma moving with velocity V and threaded by a constant magnetic field that permeates the Kerr black hole magnetosphere. The gravitomagnetic waves and the particle pairs are produced in the z = 0 region. If the medium around the pair production region allows the particles and waves to pass through, energy extraction from the black hole is possible. This can be well understood by investigating the properties of the waves in this region.
We have considered a black hole immersed in a rarefied plasma with a uniform magnetic field, which provides support for the currents flowing across the magnetic field lines. Owing to the strength of the magnetic field, a large amount of energy can be extracted, because the plasma particles that fall through the black hole's horizon have negative energy. The 3+1 GRMHD equations are derived for the neighborhood of the pair production region, and two-dimensional perturbations are discussed in the context of the perfect MHD condition. We assume that the rotation is in the x-direction and the horizon is at z = −∞. The perturbations are taken only in the xz-plane. The dispersion relations are formulated by assuming the perturbations to be plane waves. We solve these relations by taking the wave vector as (k_x, 0, −k_x) and obtain the x-component of the wave vector. This component leads to the properties of the isothermal plasma in the neighborhood of the pair production region.
We have discussed these relations for the regions 1 ≤ z ≤ 5 and −5 ≤ z ≤ −1. The dispersion relations for the region 1 ≤ z ≤ 5 are shown in Figures 1 and 2. These figures indicate that near the pair production region the plasma admits the properties of a Veselago medium. The region nearest to the pair production region does not allow the waves to pass through; thus the particles and waves cannot get out of this region. The small regions far away from the pair production region admit normal dispersion of waves, which indicates that the waves pass through them. As we go away from the pair production region, normal dispersion occurs more frequently, as shown in Figure 1.
The region −5 ≤ z ≤ −1 allows us to investigate whether there is a possibility for the waves to move towards the black hole event horizon. The dispersion relations for this region are shown in Figures 3, 4, 5 and 6. From Figures 3, 4 and 5, we find that there are chances for the waves to pass through the neighborhood of the pair production region when the plasma admits the properties of either the usual or a Veselago medium. Figure 6 indicates that there can also be situations in which the plasma admits the properties of a Veselago medium in the neighborhood of the pair production region and yet does not allow the waves to pass through.
It is interesting to note that Figures 2 and 3 show an irregular dependence of the wave vectors on the angular frequency and z. Mathematically, this irregularity is due to the nature of the roots obtained for these graphs. Physically, the irregular behavior may be due to a disturbance of the equilibrium between outward and inward directed forces: in the low-frequency regime, the outward forces are caused by the particle pressure and the curvature drift due to the non-uniform magnetic field, while the inward forces are exerted by the tangential stress of the magnetic field lines.
For the high-frequency regime, there is a class of MHD instabilities which sometimes develop in a thin plasma column carrying a strong current. If a kink begins to develop in such a column, the magnetic forces on the inside of the kink become larger than those on the outside, which leads to the growth of the perturbation. The column then becomes unstable and a disruption results. Both the ballooning and kink modes are ideal MHD instabilities.
In Figures 1, 2, 5 and 6, we obtain the properties of a Veselago medium. The phase and group velocity vectors propagate in the direction opposite to the Poynting vector, which verifies the results of Mackay et al. [22], according to which the rotation of a black hole is required for negative phase velocity propagation.
We conclude that waves produced in the pair production region due to pair creation cannot get out of its neighborhood towards the outer end of the magnetosphere. The same result was obtained for the cold plasma case [29]. We find some cases in which favorable conditions exist for energy to move towards the black hole horizon. For the cold plasma these conditions are present only for the usual medium, whereas for the isothermal plasma they occur for the usual as well as the Veselago medium. Thus, for plasmas with pressure, the black hole can draw in particles and waves in both the usual and the Veselago medium, whereas for cold plasmas this holds only for the usual medium.
The strong magnetic coupling forces the accreting particles to fall into the black hole with negative energy and negative angular momentum. This indicates that energy and angular momentum flow from the black hole into the disk. When particles fall into the black hole with negative energy, energy is extracted from the black hole [31]. When a particle with positive energy and positive angular momentum leaves the pair production region and moves towards the event horizon, much of its energy and momentum is lost, and ultimately the particle has negative energy and angular momentum [32]. Thus, if a particle with either negative or positive energy leaves the pair production region and gets a chance to reach the event horizon, the result is the extraction of energy from the black hole, transmitted to the accretion disk. This shows that when the magnetosphere is filled with isothermal plasma admitting the properties of a Veselago as well as the usual medium, energy extraction is possible. | 2007-09-15T06:11:39.000Z | 2007-09-15T00:00:00.000 | {
"year": 2007,
"sha1": "32075ea5f9e28634d810cc435442b36562b2a81d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0709.2628",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f8aa0b43c67bf8e4058ed25adbd58945b6c17c4b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
21745257 | pes2o/s2orc | v3-fos-license | SEEDS WITH HIGH MOLYBDENUM CONCENTRATION IMPROVED GROWTH AND NITROGEN ACQUISITION OF RHIZOBIUM-INOCULATED AND NITROGEN-FERTILIZED COMMON BEAN PLANTS
Seeds of common bean (Phaseolus vulgaris) with high molybdenum (Mo) concentration can supply Mo plant demands, but to date no studies have concomitantly evaluated the effects of Mo-enriched seeds on plants inoculated with rhizobia or treated with N fertilizer. This work evaluated the effects of seed Mo on growth and N acquisition of bean plants fertilized either by symbiotic N or mineral N, by measuring the activities of nitrogenase and nitrate reductase and the contribution of biological N2 fixation at different growth stages. Seeds enriched or not with Mo were sown with two N sources (inoculated with rhizobia or fertilized with N), in pots with 10 kg of soil. In experiment 1, an additional treatment consisted of Mo-enriched seeds with Mo applied to the soil. In experiment 2, the contribution of N2 fixation was estimated by 15N isotope dilution. Common bean plants grown from seeds with high Mo concentration flowered one day earlier. Seeds with high Mo concentration increased the leaf area, shoot mass and N accumulation, with both N sources. The absence of effects of Mo application to the soil indicated that the Mo contents of Mo-enriched seeds were sufficient for plant growth. Seeds enriched with Mo increased nitrogenase activity at the vegetative stage of inoculated plants, and nitrate reductase activity at late growth stages with both N sources. The contribution of N2 fixation was 17 and 61 % in plants originating from low- or high-Mo seeds, respectively. The results demonstrate the benefits of sowing Mo-enriched seeds on growth and N nutrition of bean plants inoculated with rhizobia or fertilized with mineral N fertilizer.
INTRODUCTION
Besides being a component of the nitrogenase enzyme of diazotrophic microorganisms, the transition element molybdenum occurs in four enzymes catalyzing diverse redox reactions in plants (Mendel & Hänsch, 2002). Nitrate reductase catalyzes the key step in N assimilation, aldehyde oxidases catalyze the last step in the biosynthesis of abscisic acid, xanthine dehydrogenase is involved in purine catabolism including ureide synthesis in legume nodules, and sulphite oxidase is probably involved in detoxifying excess sulphite (Mendel & Hänsch, 2002). Molybdenum deficiency can occur in very weathered soils due to continuous cropping, soil erosion, reduction of soil organic matter, and adsorption by soil colloids, particularly at low pH (Kaiser et al., 2005). Legumes that depend on N2 fixation for their N supply require more Mo than plants fertilized with mineral N, since more Mo is needed for symbiotic N2 fixation than for general plant metabolism (Parker & Harris, 1977). Root nodules of common bean act as a strong sink for Mo derived from seed or external sources in order to maintain adequate rates of N2 fixation (Brodrick & Giller, 1991).

Since crops require low amounts of Mo, this nutrient can be provided by seed pellets; on the other hand, seed pelleting with molybdate can impair seed respiration, reduce the survival of the rhizobia inoculated on seeds, and reduce plant nodulation and the efficiency of N2 fixation (Campo et al., 2009). Alternatively, Mo-enriched seeds can be harvested from crops treated with Mo foliar applications, since the translocation of Mo from leaves occurs rapidly and efficiently (Brodrick & Giller, 1991). Bean yields are not affected by high Mo rates applied to foliage, and Mo fertilizer in the amount required to increase seed content is relatively inexpensive (Vieira et al., 2005; Campo et al., 2009).

In legumes, seeds at maturity contain a large proportion of the plant-accumulated Mo, often in amounts much higher than the plant demand over an entire growth cycle, indicating that seed may supply enough Mo to crops to achieve high yields without additional Mo fertilization (Meagher et al., 1952; Jongruaysup et al., 1997). Common bean plants raised from seeds with high Mo concentration accumulated more biomass and N in shoots, had a higher root nitrogenase activity (Brodrick et al., 1992; Kubota et al., 2008) and also yielded more grain in soil with low N content (Vieira et al., 2005). Moreover, sowing bean seeds with high Mo contents did not require additional Mo supply via foliar fertilization to ensure adequate grain yields (Vieira et al., 2011). Nevertheless, in these previous studies the effects of Mo-enriched seeds on bean plants simultaneously inoculated with rhizobia and N-fertilized were not evaluated. Surveying the ontogenetic variations of the activities of the nitrate reductase and nitrogenase enzymes, together with estimates of biological N2 fixation by techniques such as 15N isotope dilution, can help establish the relevance of the Mo supply provided by enriched seeds for the growth and N metabolism of bean plants relying either on symbiotic or mineral N.
Therefore, this study aimed to evaluate the effects of Mo-enriched seeds on growth and N acquisition of common bean plants inoculated with rhizobia and/or fertilized with mineral N, by measuring the nitrate reductase activity in leaves, the nitrogenase activity in the root system, and the contribution of biological N2 fixation, at different growth stages.
Experimental conditions
Two pot experiments were carried out at the National Research Center in Agrobiology (Embrapa Agrobiologia), in Seropédica - RJ, Brazil. Experiment 1 was conducted from August to October 2008, and Experiment 2 from May to July 2009. Seeds of the common bean cultivar Carioca were obtained in a previous field experiment, in which the leaves were sprayed with 120 g Mo ha-1 as (NH4)6Mo7O24.4H2O, 52 and 71 days after emergence (DAE). Samples of harvested seeds were dried, ground, nitric-perchloric digested and analyzed for Mo concentration by plasma emission spectrometry (ICP-EAS, Perkin-Elmer). Seeds used in the experiments were sieved to uniform size, with an average weight of 317 mg/seed and Mo concentrations of 0.2 and 10.9 µg g-1 for low- and high-Mo seeds, respectively. Thus, Mo-enriched seeds contained 3.5 µg Mo/seed.
Both experiments were arranged in randomized blocks with five replications. Experiment 1 had a 5 x 4 factorial design, with five treatments (low-Mo seed inoculated with rhizobia; high-Mo inoculated seed; low-Mo seed plus mineral N; high-Mo seed plus mineral N; high-Mo inoculated seed with additional Mo in the soil) harvested at four growth stages (20, 35, 45, and 55 DAE). These periods corresponded to the developmental stages: fully expanded third trifoliate, flowering, early pod filling, and mid-pod filling. Experiment 2 had a 4 x 2 factorial design with four treatments (low-Mo inoculated seed; high-Mo inoculated seed; low-Mo seed plus mineral N; high-Mo seed plus mineral N) and two harvest times (38 and 51 DAE, corresponding to pod setting and mid-pod filling). In experiment 2, the non-nodulating bean genotype NORH-54, sunflower and sorghum were used as non-fixing control crops, with three replications per species at each harvest. A previous test with the same soil showed abundant nodulation in common bean plants without rhizobia inoculation, so control treatments without inoculation were not run in either experiment.
The substrate was sieved soil (< 6 mm) from the Ap horizon of a Typic Ultisol (Red Yellow Podzolic) in 10 kg pots. Soil chemical analysis, as described by Embrapa (1997), showed: water pH 5.3, Al: 1 mmolc dm-3, Ca: 17 mmolc dm-3, Mg: 16 mmolc dm-3, K: 33 mg dm-3, P: 8 mg dm-3 (Mehlich-1), and C: 9.1 g kg-1. The soil in each pot was limed with 500 mg kg-1 CaCO3 and wetted for three weeks to allow enough time for the lime to react. In experiment 2 the soil received 2.5 mg kg-1 N as urea enriched with 5 atom% 15N excess 100 days before liming, to estimate the contribution of N2 fixation by isotope dilution. In both experiments, the following water-diluted nutrients were applied (in mg kg-1 soil): 80.0 P as KH2PO4, 10.0 Mg as MgSO4.7H2O, 2.0 Cu as CuSO4.5H2O, 1.0 Zn as ZnSO4.7H2O, and 0.05 B as H3BO3. Pots of the inoculated and mineral N treatments received 30 and 60 mg kg-1 N as (NH4)2SO4, respectively. In experiment 1, pots of the treatment with soil-applied Mo received 0.5 mg kg-1 Mo as (NH4)6Mo7O24.2H2O. After the nutrient additions, the soil of each pot was homogenized. At sowing, the soil contained 0 mmolc dm-3 Al, 26 mmolc dm-3 Ca, 17 mmolc dm-3 Mg, 192 mg dm-3 K, and 36 mg dm-3 P, at pH 5.5.
Seeds were sown six days after soil fertilization. In the inoculated treatments, each seed received 1.0 mL of liquid inoculant containing the strains CIAT899 (or BR322) and PR-F81 (or BR520) of Rhizobium tropici from the collection of Embrapa Agrobiologia. After thinning, three plants were left to grow per pot. Pots were placed in the open air, on tiles distributed on a grass sward, and irrigation was provided whenever necessary. Treatments with mineral N also received two applications of 300 mg N per pot as urea (23 and 37 DAE in experiment 1 and 24 and 41 DAE in experiment 2).
Assays
The trait days to flowering, i.e., the day when the three plants of each pot had one fully opened flower, was evaluated daily in both experiments. At each harvest, the nitrate reductase activity in leaves was measured in vivo by the method described by Jaworski (1971), with adaptations. In the morning, the first fully expanded trifoliate of each plant was cut, placed on ice and transferred to the laboratory. A 200 mg sample of 2-cm leaf discs was placed in vials containing 5 mL of incubation medium, consisting of 0.1 mol L-1 K-phosphate buffer (pH 7.5), 20 mmol L-1 KNO3 and 1 % n-propanol, and then incubated for 1 h at 30 °C. After incubation, 0.4 mL of the medium was sampled, and the nitrite released was measured by a colorimetric reaction with the addition of 0.3 mL of 1 % sulfanilamide in 3 mol L-1 HCl and 0.3 mL of 0.02 % N-naphthyl-ethylenediamine, with readings performed at a 540 nm wavelength. Nitrate reductase activity was expressed as µmol NO2- per g of fresh leaf weight per hour.
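For concreteness, the activity calculation implied by this assay can be written out; the numerical values below are illustrative only, not taken from the paper:
$$\text{NRA} = \frac{C_{\mathrm{NO_2^-}} \times V_{\text{medium}}}{m_{\text{leaf}} \times t}.$$
For example, if 0.2 µmol mL-1 of NO2- accumulates in the 5 mL medium released by 0.2 g of leaf discs over 1 h, then NRA = (0.2 × 5)/(0.2 × 1) = 5 µmol NO2- g-1 h-1.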
At harvest, shoots were cut at ground level and separated into leaves, stems and pods. In experiment 1, expanded trifoliates (including petioles) were detached and the leaf area was measured photometrically (Li-Cor 3100 Area Meter). In experiment 2, abscised, fallen leaves from each pot were collected daily after 38 DAE, oven-dried and weighed.
In both experiments, the soil of each pot was placed in a plastic box, and the root system and detached nodules were carefully collected. Nitrogenase activity was measured in the root systems by the acetylene reduction assay (Hardy et al., 1973). Whole root systems were placed in 250 mL closed glass recipients and 30 mL of acetylene was injected with a syringe. After 30 min of incubation, 1 mL of the air inside the recipient was sampled and used to measure the ethylene concentration by gas chromatography (Perkin-Elmer with Flame Ionization Detector). Values of ethylene produced were converted to µmol C2H4 h-1 per plant, representing the nitrogenase activity. The specific nitrogenase activity was calculated as the ratio between nitrogenase activity and nodule dry mass.
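A sketch of the unit conversion behind this assay; the GC calibration in ppm, the incubation temperature and pressure, and all numerical values are assumptions made for illustration, not the paper's calibration:

```python
R = 0.08206  # gas constant, L atm mol^-1 K^-1

def nitrogenase_activity(c2h4_ppm, vessel_ml=250.0, t_incub_h=0.5,
                         temp_k=298.15, pressure_atm=1.0):
    """Return acetylene-reduction activity in umol C2H4 h^-1 per plant."""
    total_mol = pressure_atm * (vessel_ml / 1000.0) / (R * temp_k)  # mol of gas in vessel
    c2h4_umol = c2h4_ppm * 1e-6 * total_mol * 1e6                   # umol of C2H4
    return c2h4_umol / t_incub_h

activity = nitrogenase_activity(c2h4_ppm=120.0)  # illustrative GC reading
specific = activity / 0.35                       # 0.35 g nodule dry mass, illustrative
print(f"activity = {activity:.2f} umol/h, specific = {specific:.2f} umol/h/g")
```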
Roots and nodules were washed, and the nodules were detached and counted. The mass of one nodule was calculated as the ratio between nodule mass and nodule number. In experiment 2, senescent leaves were collected from each pot. Leaves, senescent leaves, stems, pods, roots and nodules were oven-dried, weighed and finely ground using a roll-mill. The total N concentration in each plant portion was measured by the semi-micro Kjeldahl procedure. Accumulation of N was computed as the product of N concentration and dry mass. In experiment 2, pots with the control crops (non-nodulating bean, sorghum and sunflower) were harvested together with the common bean plants, 52 and 71 DAE. Shoots were cut at ground level and separated into stems and leaves, and each plant portion was dried, weighed and finely ground.
In experiment 2, the 15N isotope composition was initially measured (one replication per treatment) in leaves, stems and roots of beans at both harvests, and also in pods at the second harvest, using a continuous-flow isotope-ratio mass spectrometer (Finnigan DeltaPlus, Finnigan MAT, Bremen, Germany) at Embrapa Agrobiologia. A t-test at 0.05 probability was used to check the similarity of 15N enrichment among plant parts for each harvest. In the different bean plant parts, 15N enrichment was similar (Table 1), as verified by Wolyn et al. (1991) in the best N2-fixing bean lines and by Chagas et al. (2010) in a pot experiment. The leaves contained 67 and 40 % of the total N accumulated by inoculated plants 52 and 71 DAE, respectively, averaged across all treatments. Therefore, measurements of the N isotopic composition were continued only in leaves of inoculated bean plants, and in whole shoots of the non-fixing control crops, for all replications. The percentage of N derived from the atmosphere (%Ndfa) in leaves was estimated by the formula: %Ndfa = [1 − (atom% 15N excess of bean / atom% 15N excess of non-fixing)] × 100. The atom% 15N excess of the non-fixing reference was obtained as the average of the three control species at each harvest time.
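The isotope-dilution formula above is a one-liner in code; the atom% values used below are invented for the example, not taken from the paper:

```python
def ndfa_percent(fixing_excess, nonfixing_excess):
    """%N derived from the atmosphere by 15N isotope dilution."""
    return (1.0 - fixing_excess / nonfixing_excess) * 100.0

# Illustrative atom% 15N excess for bean leaves and the non-fixing reference
print(ndfa_percent(0.12, 0.31))  # ~61 %, the order reported at mid-pod filling
```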
For both experiments, an analysis of variance was performed for each harvest time in a two-factor design considering the combinations between seed Mo and N source, including the data on days to flowering. Data on the percentage of N derived from the atmosphere in inoculated plants of experiment 2 were analyzed in a two-factor design between seed Mo and harvest time. The least significant difference between treatments was estimated by the Tukey test at 0.05 probability.
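A minimal sketch of this two-factor analysis in Python with statsmodels; `df` and all of its values are hypothetical, standing in for one response (e.g. shoot dry mass) measured per pot:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: 8 pots, seed Mo level x N source, shoot dry mass (g)
df = pd.DataFrame({
    "shoot_mass": [8.1, 9.4, 10.2, 11.8, 7.9, 9.1, 10.5, 12.0],
    "seed_mo":    ["low", "low", "high", "high"] * 2,
    "n_source":   ["rhizobia", "mineral"] * 4,
})

# Two-factor ANOVA with interaction (seed Mo x N source)
model = ols("shoot_mass ~ C(seed_mo) * C(n_source)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey comparison of the four treatment combinations at alpha = 0.05
groups = df["seed_mo"] + "/" + df["n_source"]
print(pairwise_tukeyhsd(df["shoot_mass"], groups, alpha=0.05))
```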
Experiment 1
Averaged across all replications, common bean plants flowered 36.5 and 35.5 days after emergence (DAE) for plants grown from seeds with low or high Mo concentration, respectively, with no effect of the N sources on days to flowering. Therefore, high-Mo seeds induced bean flowering one day earlier.
For all treatments, the leaf area of the bean plants increased until 45 DAE and decreased thereafter (Figure 1), illustrating the process of leaf senescence during mid-pod filling. Shoot mass increased continuously during the experiment. Inoculated plants grown from seeds with high Mo concentration had a greater leaf area 35, 45 and 55 DAE and greater shoot mass 45 and 55 DAE than inoculated plants raised from low-Mo seeds (Figure 1). For mineral-N-fertilized plants, seeds with high Mo concentration increased leaf area and shoot mass relative to plants grown from low-Mo seeds, but only 55 DAE. The intense shoot growth of inoculated plants originating from high-Mo seeds between 35 and 45 DAE, in the early pod filling stage, was noteworthy (Figure 1). The leaf area and shoot mass of inoculated plants originating from high-Mo seeds were not affected by the additional Mo supplied to the soil during the whole experiment (Figure 1). In plants originating from high-Mo seeds, the pod mass was higher 55 DAE for both N sources than in plants from low-Mo seeds (Table 2). Root dry mass was higher for mineral-N-fertilized than inoculated plants during the whole experiment, without effects of seed Mo concentration (data not presented).
The nodule mass of bean plants increased from 20 to 45 DAE, whilst nodule number increased only until 35 DAE (Figure 1). Thus, the intense increase in nodule mass between 35 and 45 DAE resulted mainly from expanded individual nodule size, which almost doubled during this period (Table 2). The inoculated plants raised from high-Mo seeds had the highest nodule mass and number 20 DAE (Figure 1). Between 35 and 45 DAE, nodulation was not significantly different among treatments. Inoculated plants raised from high-Mo seeds had the lowest nodule number and mass 55 DAE, with and without soil-applied Mo, partially owing to a strong decrease in nodule mass after 45 DAE (Figure 1). The variation in nodule size was complex: 20 DAE, inoculated plants had larger nodules than plants under mineral N fertilization, regardless of seed Mo, whereas at the end of the experiment the inverse occurred, when plants fertilized with mineral N had larger nodules than the inoculated ones (Table 2).
Values of nitrate reductase activity in bean leaves were highest at the beginning of the experiment, decreased between 20 and 35 DAE and remained almost stable thereafter (Figure 2). Plants fertilized with mineral N showed higher nitrate reductase activity than inoculated plants at both seed Mo concentrations 35 DAE. Mineral-N-supplied plants grown from high-Mo seeds had higher nitrate reductase activity 45 DAE. In inoculated plants originating from high-Mo seeds, with or without soil-applied Mo, the nitrate reductase activity was higher 55 DAE than in plants from low-Mo seeds (Figure 2).
Nitrogenase activity in the root systems was highest 35 DAE, at flowering, whereas this activity was lower during the vegetative period and also during pod filling (Figure 2).
Accumulation of N in shoots increased continuously during the experiment, except for plants raised from low-Mo seeds under mineral N after 45 DAE (Figure 2). Accumulation of N increased rapidly between 35 and 45 DAE, i.e. at the beginning of pod filling, particularly in inoculated plants originating from high-Mo seeds. Seeds with high Mo concentration improved N accumulation in shoots at 35, 45 and 55 DAE, both in plants under inoculation and under mineral N (Figure 2). Mo-enriched seeds doubled the amount of N accumulated in pods 55 DAE for both N sources (Table 2). The nodule N concentration of inoculated plants was higher than that of non-inoculated plants 45 and 55 DAE (Table 2).
Experiment 2
Seeds enriched with Mo advanced bean flowering by one day: plants raised from seeds with low or high Mo concentration flowered 35.8 and 34.9 DAE, respectively, with no effect of N source. Shoot growth was not significantly affected by seed Mo concentration 38 DAE, whereas 51 DAE high-Mo seeds increased shoot and pod mass with both N sources (Table 3). Addition of mineral N increased root mass 51 DAE, whereas root growth was not affected by seed Mo. The different treatments did not affect the mass of senescent leaves accumulated between 38 and 51 DAE (Table 3). High-Mo seeds enhanced the nodule mass of inoculated plants 38 DAE but did not significantly affect nodule number with either N source. Plants originating from high-Mo seeds had fewer nodules for both N sources 51 DAE, but their greater nodule size resulted in a nodule mass similar to that of plants from low-Mo seeds (Table 3). Inoculated plants had higher nodule mass than plants under mineral N 51 DAE, at both seed Mo concentrations (Table 3).
Activities of nitrate reductase in leaves and of nitrogenase in root systems decreased as the plants aged, although the nitrogenase decay was much more intense, and the specific nitrogenase activity decreased by two orders of magnitude between 38 and 51 DAE (Table 4). Nitrate reductase activity was not affected by treatments 38 DAE, whereas 51 DAE plants raised from high-Mo seeds had higher nitrate reductase activity with both N sources (Table 4). Nitrogenase activity was highest 38 DAE in inoculated plants regardless of seed Mo, whereas 51 DAE plants originating from low-Mo seeds under mineral N fertilization had the lowest nitrogenase activity and specific nitrogenase activity (Table 4). Seeds enriched with Mo increased the N concentration in leaves and the N accumulation in shoots (Table 4).
Plants originating from seeds with high Mo concentration showed a higher contribution of biological N2 fixation in leaves at both evaluation times, but the stimulus of Mo-enriched seeds was more pronounced 51 DAE (Table 5). The contribution of N2 fixation increased between 38 and 51 DAE at both seed Mo concentrations (Table 5).
DISCUSSION
Two experiments evaluated the effects of sowing seeds with low or high Mo concentrations on common bean plants fertilized by either symbiotic N or mineral N. The evaluations covered most of the reproductive development of the cultivar Carioca, which has a growth cycle of nearly 85 days in the field. Considering that a low starter N fertilization may stimulate plant growth and N2 fixation (Rennie & Kemp, 1984), inoculated plants received 30 mg kg-1 N at sowing, whereas non-inoculated plants were fertilized with 60 mg kg-1 N at sowing plus two cover applications of 300 mg N per pot. The very similar results of the two experiments make our findings more meaningful. Additionally, bean plant development was strong in both experiments, yielding 10 g of shoots at pod setting (Figure 1, Table 3).
Plant growth and nodulation
Sowing seeds with high Mo concentration increased the shoot mass of inoculated plants by 41 and 26 % at the end of experiments 1 and 2, respectively (Figure 1, Table 3). These yield increases were greater than those reported by Brodrick et al. (1992) and Kubota et al. (2008), who evaluated only plants grown under symbiotic N. Furthermore, the shoot mass of mineral-N-fertilized plants from Mo-enriched seeds increased by 20 % at the pod-filling stage (Figure 1, Table 3). It should be stressed that the seeds with different Mo concentrations tested by Brodrick et al. (1992) had been produced under different growth conditions, i.e. on fertile soil or perlite, and these seeds were likely to have different physiological properties.
The additional Mo applied to the soil, a treatment included in experiment 1 only for inoculated plants raised from high-Mo seeds, had no significant effect on plant growth and nodulation, enzyme activities or N accumulation at any growth stage (Figures 1 and 2). This confirmed that the Mo-enriched seeds, which contained close to the 3.6 µg/seed proposed by Vieira et al. (2011) as sufficient for beans, provided enough Mo to achieve adequate plant growth without supplemental Mo fertilization. Although the additional soil Mo was not tested in plants raised from high-Mo seeds receiving mineral N, the lack of response of these plants to soil-applied Mo was expected, since plants under mineral N usually require less Mo than plants relying on symbiosis (Parker & Harris, 1977).
Mo-enriched seeds accelerated the reproductive development of the bean plants, be it under inoculation or mineral N, advancing flowering by one day and almost doubling the pod mass at the end of the experiments (Tables 2 and 3). Under field conditions, Vieira et al. (2005) also verified that the growth cycle of bean plants originating from seeds with high Mo concentration was a few days shorter. This rapid development of plants originating from high-Mo seeds could be associated with a higher biosynthesis of abscisic acid due to increased activity of aldehyde oxidases (Mendel & Hänsch, 2002). Nodule mass increased up to early pod filling (Figure 1), mainly as a result of enlarged individual nodule size (Table 2). During mid-pod filling (45-55 DAE), nodulation was impaired as the pods developed, evidencing the process of nodule senescence at late reproductive stages, which coincided with the onset of leaf senescence after 45 DAE (Figure 1). A deeper comprehension of the intricate ontogenetic variation of the nodulation of bean plants interacting with the studied treatments requires plant evaluations at different growth stages, as proposed by Araújo & Teixeira (2000).
Our results provide insights into the complex interplay between seed Mo and late nodulation. In both experiments, inoculated plants raised from Mo-enriched seeds had higher nodule mass at the first evaluation but fewer nodules at mid-pod filling (Figure 1, Table 3). Other results also indicated a somewhat deleterious effect of Mo on the late nodulation of bean plants. Brodrick et al. (1992) observed that plants raised from seeds with a high Mo content produced lower nodule mass when the growth medium was not supplemented with Mo. Kubota et al. (2008) verified a reduced nodule number at pod filling in bean plants raised from Mo-enriched seeds, and Chagas et al. (2010) noticed that seeds with high Mo concentration reduced the nodule mass of beans at high soil P availability. Additionally, Vieira et al. (1998b) reported that a foliar Mo application 25 DAE reduced the nodule number of field-grown beans, although without affecting nodule mass. Schulze (2004) reviewed the mechanisms associated with the regulation of N2 fixation in legumes, concluding that the hypothesis of competition between growing pods and nodules for available assimilates being limiting for nitrogenase activity during pod filling seems to be clearly excluded for common bean, at least under non-stress conditions. Alternatively, it is proposed that a product of N fixation or assimilation could exert a feedback regulatory impact during pod filling (Schulze, 2004), and the N remobilized from the lower leaves circulating within the plant during leaf senescence may be involved in causing the drop of N2 fixation during pod filling in common bean (Fischinger et al., 2006). The absence of a treatment effect on senescent leaf mass (Table 3) obscured a possible association between leaf senescence and nodulation. Nevertheless, the improved reproductive development of bean plants originating from high-Mo seeds, which advanced flowering by one day and doubled pod mass in both experiments (Tables 2 and 3), could be associated with an earlier N remobilization from leaves and the reduced nodule number at late growth stages.
Nitrogen metabolism
Although N acquisition by shoots increased continuously during plant development, the most intense shoot N accumulation of inoculated plants occurred between 35 and 45 DAE, i.e., at early pod filling (Figure 2). The maximal rate of N accumulation was observed between 30 and 48 days in field-grown bean lines (Kipe-Nolt & Giller, 1993), whereas the greatest amount of N2 was fixed between early and late pod filling (55-75 days) by a climbing cultivar (Kumarasinghe et al., 1992) or during pod filling (60-77 days) in a bush cultivar (Kimura et al., 2004). Based on evaluations of grain yield and nutrient accumulation at various growth stages in two field experiments with different cultivars, Araújo & Teixeira (2008) concluded that bean grain yield is not intrinsically associated with the vegetative vigor at flowering and that the acquisition of N and P during pod filling can strongly influence the final crop yield. Therefore, early pod filling is a critical stage in the N acquisition of common bean. This rapid accumulation of plant N at early pod filling coincided with the highest nitrogenase activity of the nodulated root systems (Figure 2). Since young reproductive organs require large amounts of N, the nitrogenase activity usually peaks during early pod filling in legumes, although in some species nitrogenase activity drops sharply at later growth stages (Schulze, 2004). Although the acetylene reduction assay is of limited use for measuring N2 fixation over whole growth periods, the technique can be effective to study the symbiosis at different points in time under defined experimental conditions (Unkovich & Pate, 2000).
The improved growth of bean plants from Mo-enriched seeds was associated with a greater shoot N accumulation, larger amounts of N in the double-sized pods, and a higher leaf N concentration, both in inoculated and mineral-N-supplied plants (Figure 2, Table 4). Inoculated plants originating from high-Mo seeds showed an even more rapid N acquisition during early pod filling (Figure 2). Larger N accumulation of inoculated bean plants originating from high-Mo seeds was also verified by Kubota et al. (2008) in pots with soil and by Brodrick et al. (1992) in the field. Furthermore, the benefits of Mo-enriched seeds for the N accumulation of bean plants receiving N fertilizer were also verified (Figure 2, Table 4).
Assessments of the ontogenetic variations of the activities of nitrogenase and nitrate reductase in plants under symbiotic or mineral N contribute to elucidating the relevance of the mechanisms of N2 fixation and N assimilation from the soil. Seeds with high Mo concentration increased the nitrogenase activity of inoculated plants 20 DAE, without significant effects at reproductive stages (Figure 2, Table 4). Kubota et al. (2008) also verified that plants of the cultivar Carioca raised from Mo-enriched seeds had higher nitrogenase activity 30 DAE but not 45 DAE, whereas Brodrick et al. (1992) found no effect of seed Mo concentration on the nitrogenase activity of bean plants grown in nutrient solution for 38 days. Besides the effects on nitrogenase activity, the additional Mo supply given by enriched seeds may stimulate ureide synthesis via increased activity of xanthine dehydrogenase in the plant fraction of nodules of ureide-producing species such as common bean (Kaiser et al., 2005).
On the other hand, seeds with high Mo concentration increased the nitrate reductase activity in leaves mainly at the reproductive stages: 45 DAE in mineral-N-fertilized plants and 55 DAE in inoculated plants in experiment 1 (Figure 2), and for both N sources 51 DAE in experiment 2 (Table 4). Vieira et al. (1998a) and Pessoa et al. (2001) verified that the foliar application of Mo 25 DAE in the field increased the nitrate reductase activity in bean leaves at reproductive stages, extending the period of high enzyme activity. In field-grown common bean, Franco et al. (1979) observed that nitrate reductase activity per leaf fresh weight peaked in the early stages of leaf development, but per plant it peaked at early pod filling, indicating the relevance of N assimilation after flowering.
Nitrogenase activity was affected more strongly by the N sources than by the seed Mo concentrations. In both experiments, inoculated plants had higher nitrogenase activity near flowering than plants receiving mineral N, whereas after 45 DAE inoculated plants had higher specific nitrogenase activity, irrespective of seed Mo levels (Figure 1, Tables 2 and 4). Therefore, the addition of mineral N during plant growth inhibited nitrogenase activity and reduced the N concentration in nodules, which confirms the high sensitivity of the bean symbiosis to soil nitrate (Leidi & Rodríguez-Navarro, 2000). Conversely, plants receiving mineral N had higher nitrate reductase activity than inoculated plants, irrespective of seed Mo concentration, 45 DAE in experiment 1 and 51 DAE in experiment 2 (Figure 2, Table 4). Of the total nitrate reductase activity of common bean, approximately 95 % is localized in the leaves, and this activity responded positively to increasing exogenous nitrate levels, indicating the presence of a nitrate-inducible form (Silveira et al., 2001).
Estimates of the percentage of N derived from the atmosphere in leaves of inoculated plants raised from high-Mo seeds increased from 32 % at pod setting to 61 % at mid-pod filling (Table 5). This confirmed that N2 fixation during early pod filling is extremely important for common bean (Kumarasinghe et al., 1992; Kimura et al., 2004). Estimates of the contribution of N2 fixation assessed by 15N isotope dilution by other authors, in bean plants grown in pots with soil, were rather similar: 50 to 72 % (Rondon et al., 2007) or 54 to 79 % (Chagas et al., 2010). Using the 15N natural abundance method, Mortimer et al. (2008) estimated contributions of biological fixation of 50 to 55 % in beans grown in nutrient solution. In plants originating from low-Mo seeds, the contribution of N2 fixation was lower (Table 5), indicating that a limited Mo supply acutely impairs the symbiosis. This demonstrates the benefits of sowing Mo-enriched seeds to improve the biological N2 fixation of common bean.
Concluding remarks
The results clearly demonstrated the benefits of sowing Mo-enriched seeds to improve the growth and N nutrition of common bean plants, whether inoculated with rhizobia or receiving N fertilization. Mo-enriched seeds increased the nitrogenase activity at the vegetative stage, the nitrate reductase activity at reproductive stages, and the contribution of biological N2 fixation. Therefore, plants originating from seeds with high Mo concentration, either inoculated or receiving mineral N, accumulated more biomass and N in pods and shoots at mid-pod filling. These Mo-enriched seeds were harvested from crops treated with two foliar sprays of 120 g ha-1 Mo at the reproductive stages, which elevated the Mo amount per seed to 3.5 µg. Such enriched seeds, sown in soils with low Mo and N levels, could enhance yields and improve the contribution of biological N2 fixation to common bean. This technique could be used to produce Mo-enriched bean seeds for distribution to small family farms by government agencies, or such seeds could be produced by co-operatives and associations of farmers, which would raise common bean yields in areas where chemical fertilizers are currently rather rare, or even increase the efficiency of mineral N fertilizers.
CONCLUSIONS
1. Common bean plants raised from seeds with high Mo concentration had a greater leaf area, shoot mass and N accumulation than plants raised from seeds with low Mo concentration, whether inoculated with rhizobia or fertilized with mineral N.
2. Bean seeds enriched with Mo increased nitrogenase activity at the vegetative stage in plants inoculated with rhizobia, and nitrate reductase activity at late growth stages in plants inoculated with rhizobia or fertilized with mineral N. 3. The contribution of biological N2 fixation increased from 17 % in plants grown from low-Mo seeds to 61 % in plants originating from high-Mo seeds.
4. The results demonstrate the benefits of sowing Mo-enriched seeds on the growth and N nutrition of bean plants, whether inoculated with rhizobia or fertilized with mineral N.
Figure 1. Leaf area, shoot dry mass, nodule dry mass and nodule number of common bean plants originating from seeds with low or high Mo concentration grown at two N sources (inoculated with rhizobia or mineral N) or with additional Mo applied to the soil, at four growth stages (Experiment 1); insets display values at 20 DAE. Vertical bars represent the least significant difference by Tukey's test at 0.05 probability and compare treatments on each evaluation day.
Figure 2. Nitrate reductase activity in leaves, nitrogenase activity in root systems, and N accumulated in shoots, of common bean plants originating from seeds with low or high Mo concentration grown at two N sources (inoculated with rhizobia or mineral N) or with additional Mo applied to the soil, at four growth stages (Experiment 1); inset displays values at 20 DAE. Vertical bars represent the least significant difference by Tukey's test at 0.05 probability and compare treatments on each evaluation day.
Table 1. Enrichment of 15N (in atom% 15N excess) of different plant tissues of common bean and of shoots of non-fixing crops in experiment 2; means of four replicates for non-fixing crops and eight replicates for inoculated bean plants originating from seeds with low or high Mo concentration
Values of different bean plant tissues or non-fixing crops for each harvest did not differ by the t test at 5 % probability.
(Table 4). Nitrogenase activity was highest 38 DAE in inoculated plants regardless of seed Mo, whereas at 51 DAE plants originating from low-Mo seeds under mineral N fertilization had the lowest nitrogenase activity and specific nitrogenase activity (Table 4). Seeds enriched with Mo increased N
Means followed by the same letter in columns do not differ by Tukey's test at 0.05 probability. Concentration of N in nodules was not measured 20 DAE due to the small amount of material available.
Table 3. Mass of shoots, roots, senescent leaves, pods and nodules, number of nodules, and mass of one nodule, of common bean plants originating from seeds with low or high Mo concentration grown at two N sources (inoculated with rhizobia or mineral N), at two growth stages (in days after emergence - DAE), in experiment 2
Means followed by the same letter in columns do not differ by Tukey's test at 0.05 probability.
Table 4. Nitrate reductase activity, nitrogenase activity, specific nitrogenase activity, concentration of N in leaves and nodules, and accumulation of N in shoots, of common bean plants originating from seeds with low or high Mo concentration grown at two N sources (inoculated with rhizobia or mineral N), at two growth stages (in days after emergence - DAE), in experiment 2
Means followed by the same letter in columns do not differ by Tukey's test at 0.05.
in pots with | 2018-05-20T16:01:49.248Z | 2013-04-01T00:00:00.000 | {
"year": 2013,
"sha1": "055a485db8715b55d175692ae0fb69472b052559",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbcs/a/smKHhrT9kqj7Ps4KkS9rKqs/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "055a485db8715b55d175692ae0fb69472b052559",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
265103028 | pes2o/s2orc | v3-fos-license | Infographic summaries for clinical practice guidelines: results from user testing of the BMJ Rapid Recommendations in primary care
Objectives Infographics have the potential to enhance knowledge translation and implementation of clinical practice guidelines at the point of care. They can provide a synoptic view of recommendations, their rationale and supporting evidence. They should be understandable and easy to use. Little evaluation of these infographics regarding user experience has taken place. We explored general practitioners' experiences with five selected BMJ Rapid Recommendation infographics suited for primary care. Methods An iterative, qualitative user testing design was applied to two consecutive groups of 10 general practitioners for five selected infographics. The physicians used the infographics before clinical encounters, and we performed hybrid think-aloud interviews afterwards. The 20 interviews were analysed using the Qualitative Analysis Guide of Leuven. Results Many clinicians reported that the infographics were simple and rewarding to use, time-efficient and easy to understand. They were perceived as innovative, and their knowledge basis as trustworthy and supportive for decision-making. The interactive, expandable format was preferred over a static version, as general practitioners focused mainly on the core message. Rapid access through the electronic health record was highly desirable. The main issues concerned the use of complex scales and terminology. Understanding terminology related to evidence appraisal, as well as the interpretation of statistics and unfamiliar scales, remained difficult despite the infographics. Conclusions General practitioners perceive infographics as useful tools for guideline translation and implementation in primary care. They offer information in an enjoyable and user-friendly format and are used mainly for rapid, tailored and just-in-time information retrieval. We recommend that future infographic producers provide information as concisely as possible, carefully define the core message and explore ways to enhance the understandability of statistics and difficult concepts related to evidence appraisal. Trial registration number MP011977.
BACKGROUND
The body of evidence in healthcare is growing rapidly and is becoming impossible for each individual healthcare provider to manage. There is a constant need for integration of evidence-based care into daily practice.[3][4][5] Clinical practice guidelines (CPG) have been trying to fill this gap, particularly when their development process is trustworthy.[6][7] Lack of time,
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ By adopting a hybrid approach, combining think-aloud interview techniques with a previously set interview guide, we balanced capturing immediate reactions and flow of thought with affording participants the opportunity to reflect more thoroughly, thereby facilitating a more nuanced understanding of their experiences.
⇒ The infographics were used and evaluated in a natural environment, making the findings more applicable to a real-life setting.
⇒ The infographics were evaluated with a specific target group, making the findings of this study user-centred and directly applicable to infographic development for general practitioners (GPs).
⇒ Experiences and opinions of Dutch-speaking, Belgian GPs may not be generalisable to other professions, cultures or healthcare systems.
⇒ We did not evaluate effectiveness on knowledge retention or impact on guideline adherence or health outcomes, nor did we compare the infographics to different formats.
To enhance the translation and implementation of evidence, formats such as infographics have been proposed.[17] Infographics use visuals such as charts, icons and illustrations to convey information and data with a minimum of text.[18] Through visualisation, infographics have the potential to convey health statistics in a transparent and understandable way, as it is known that statistical illiteracy is common not only in patients but also in physicians, resulting in serious health consequences.[19] By presenting information in a visual manner, they are believed to result in superior understanding and knowledge retention by decreasing the cognitive load required of readers.[18][20][21][23][24] Infographics are also supposed to increase dissemination and readership of research among practitioners.[25][26] Infographics have been perceived positively by physicians.[28][29] Physicians prefer certain infographic summaries over text-only summaries[30] and online abstracts,[29] and perceive less cognitive load when using them.[30] While some physicians have claimed that infographics are more likely to support long-term retention of knowledge,[29] study results are not unequivocal in this regard. One comparison with text-only summaries could not find a difference in knowledge retention after 4 weeks in emergency physicians, with retention being poor in both formats.[30] Also, immediately after having studied an infographic, physicians had no difference in retained knowledge when compared with scientific abstracts or plain language summaries.[28] Some authors even suggest that infographics might be harmful, as information is not read in depth and results might be presented inaccurately or in an oversimplified manner.[25] Even though it is difficult to draw conclusions on the effectiveness of infographics in general, due to their heterogeneity in format and the varying outcome measures,[31] these findings generate concern, especially since infographics are costly and time-consuming to produce.[22] It is clear that there is a need to explore how, when and in what circumstances infographic summaries can be useful to physicians. This is especially true in primary care, where guideline infographics hold great potential, as there is wide variety in healthcare needs while time to keep up is scarce.
Recently, The BMJ started providing interactive infographics for each of their published Rapid Recommendations (also known as 'RapidRecs').[12] The RapidRecs are an international project led since 2016 by the MAGIC Evidence Ecosystem Foundation (www.magicevidence.org), in collaboration with The British Medical Journal (BMJ).[12] Their aims are to create and disseminate a new generation of trustworthy, timely and actionable recommendations on the basis of new practice-changing evidence, as well as complex ignored evidence.[33] The RapidRecs follow GRADE methodology and standards (a systematic approach to rating certainty of evidence in evidence syntheses),[34] summarise the whole body of evidence in one or more systematic reviews, and involve panels including experts, general practitioners and patients. They also digitally structure the guideline in the MAGICapp, an authoring and publication software for evidence summaries, guidelines and patient decision aids. Using digitally structured data in the MAGICapp, MAGIC and The BMJ have co-created their guideline infographics, drawing on the skills of information designers, editors, clinicians and experts in evidence-based medicine.
This study aims to evaluate the experiences of general practitioners (GPs) when using a selection of the BMJ Rapid Recommendations infographics suited for primary care. Through user testing, we try to find out how BMJ Rapid Recommendation infographics are used by GPs and how they may support translation and implementation of evidence. Based on our findings, we aim to provide recommendations for future development of infographic summaries for CPGs that can be used by GPs.
Study design
We applied an iterative qualitative user testing design to evaluate the user experiences of GPs using the BMJ Rapid Recommendations infographics in real clinical practice (figure 1). GPs used five RapidRecs infographics that were translated from English to Dutch as evidence-based background information for clinical encounters. After a period of 2-4 months, we conducted interviews with the GPs, using a hybrid think-aloud method. Small refinements were made to the infographics based on the first round of user testing (see below). The refined infographics were then used by a different group of GPs for the same period, followed by a second round of interviews. This second group of GPs also used the infographics as support for linked patient decision aids (PDAs), developed within the MAGICapp. User testing of the PDAs took place in a parallel study.[35]
Intervention
Five out of 20 BMJ RapidRecs were chosen, based on their relevance for general practice, during a consensus meeting of the research team. The topics were the following: thyroid hormone treatment for subclinical hypothyroidism,[36] prostate cancer screening,[37] antibiotics for uncomplicated skin abscesses,[38] corticosteroids for treatment of sore throat[39] and arthroscopic surgery for degenerative knee arthritis and meniscal tears.[40] We performed a forward-backward translation, in which five GP-trainees translated the infographics to Dutch, followed by a backward translation by an independent native English speaker to check for translation issues. This forward-backward method was performed prior to both rounds of user testing. In the first user testing round, we did not have access to the original BMJ templates. Hence, we had to recreate the infographics ourselves in Dutch, preserving the original layout as much as possible. We made the Dutch infographics available in both a digital static pdf version and a paper version, and we encouraged the GPs to also explore the publicly available interactive versions, which were not translated. The interactive versions offered enhanced functionality, including the ability to collapse and expand more detailed information in certain sections and to hover over specific terms for explanatory text boxes. An example of one interactive infographic can be accessed through this link. We did not investigate how many GPs actually explored these or how much they used them. After the first round, small refinements were made based on the results of the interviews. These refinements were mainly related to small graphical or translational errors we discovered during the first round of evaluation, which could be resolved once we were able to use the real, static BMJ templates to generate translated versions. The graphical errors related to issues such as lower resolution of images or outlines that were not as they should be. We also emphasised that GPs could use the online interactive formats if they were comfortable with the English language. In the second round, the refined static pdf formats were also available through the 'evidence linker', a tool integrated in the electronic health record (EHR) that provides direct online access to clinical guidelines connected to certain coded diagnoses, facilitating evidence-based care.[41] We provide an example of one original infographic (figure 2). All original, translated and refined infographics are provided as supplementary materials (see online supplemental materials 1-3, respectively).
Participants, recruitment and setting
The setting of this study was primary healthcare in the Dutch-speaking part of Belgium (Flanders), and the study was performed by 10 GP-trainees as part of their 3-year postgraduate programme thesis. The GP-trainees invited all of their 21 GP-trainers to participate. The invitation was done by phone or in person, with an information letter. We aimed to recruit at least 10 GPs in each round, striving for a representative sample with heterogeneity in gender, age and geographical spread. Eventually, 20 out of 21 eligible GP-trainers were enrolled. User testing took place in the office of the participating GP. Round one of the user testing lasted from December 2019 to January 2020, and round two from October 2020 to January 2021.
Data collection
Prior to each round, the supervising team (BA, MVe, TA and ND) instructed the GPs on how to use the translated infographics in daily practice through an online training of approximately 1.5 hours. The training course consisted of an explanation of the study design and a short introduction to the concept of infographics, how they are made and how to use them. It was made clear that they are guideline summaries meant to be used by physicians to keep up to date, and are not meant as decision aids for patients. By providing the course, GPs were able to start using the infographics immediately, and we could focus mainly on experiences in daily practice. After each test period, the user experience of the GP-trainers was collected through an interview.
Figure 2 Example of an original infographic: 'Arthroscopic surgery for degenerative knee arthritis and meniscal tears'. This material has been reproduced with permission from BMJ. BMJ, British Medical Journal.
For this interview, we adopted a hybrid methodology that merged aspects of the traditional think-aloud method with the use of an interview guide. The think-aloud method enables participants to verbalise thoughts that would otherwise often remain silent.[42][43] We asked the GPs to express their thoughts when going through the infographics while preparing for fictional clinical scenarios (see online supplemental material 4), acting as if it were a real-life situation. Typically, when using the think-aloud method, researchers refrain from asking prepared questions. Recognising the inherent limitations of relying solely on the think-aloud method, however, we decided to introduce an interview guide as well (see online supplemental material 5) to delve deeper into aspects of experience and usability that might not naturally surface through real-time verbalisation alone. The interview guide prompted participants to discuss various usability dimensions and provide context-rich insights into their interactions with the tool. Interviewers were trained in this hybrid technique by the supervisory team prior to the interviews. By adopting this approach, we aimed to balance capturing immediate reactions with affording participants the opportunity to reflect more thoroughly, thereby facilitating a more nuanced understanding of their experiences. We sought to enhance the breadth and depth of the data collected, ultimately contributing to a more robust and multifaceted analysis of GPs' tool usage experiences.
In the first round, each interview focused on two different infographics: the one translated by the GP-trainee conducting the interview and a second one assigned by consensus. By limiting the number of infographics discussed, we were able to explore the experiences of the GPs in more detail. User testing in round two involved the refined infographics, where refinement was based on insights from round one. In contrast to round one, the clinical encounters were observed by the GP-trainees and were used in the interview, which took place within days after completing at least three consultations. All interviews were audio-recorded and transcribed verbatim prior to data analysis. From this exercise, qualitative data regarding the user's perspective were obtained and analysed.
Data processing
Audio-recorded interviews were transcribed verbatim. Transcriptions were made anonymously, and references to the identity of the interviewed GP were avoided. Certain characteristics (age, gender, type of practice, etc) were noted on the transcriptions, as they were believed to contribute to the quality of the later analysis.
Data analysis
Analysis of the transcripts was based on the Qualitative Analysis Guide of Leuven.[44] This guide, containing 10 main stages, outlines the process of coding qualitative interview data through review, discussion and convention. During this process, each comment was coded using a three-layered structure. First, we took sentiment into account by labelling through connotation (ie, positive comments, minor and major frustrations, show stoppers, suggestions and ways of use). Second, all notes were classified into overall themes, which were created inductively through collaboration of all team members. The third layer involved six different categories that were deductively created by means of Morville's honeycomb model: usability, usefulness, desirability, findability, accessibility and credibility (table 1, see also online supplemental material 6).[45][46] We used the online software programme 'AtlasTi' for coding the three layers.[47] Each transcript was analysed and coded by two team members other than those who conducted the interview, to limit the impact of individual perspective. Subsequently, the coded fragments were pooled into overall concepts through constant iterative comparison, discussion and consensus among all team members to enhance the reliability of the resulting findings.
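For readers who wish to operationalise such a coding scheme, the three-layered structure can be represented compactly in code. The sketch below is our own illustration only; the class and field names are assumptions and are not part of the Qualitative Analysis Guide of Leuven or of the AtlasTi software.

# Minimal sketch of the three-layered coding structure described above.
# All names are illustrative; the actual coding was performed in AtlasTi.
from dataclasses import dataclass
from enum import Enum

class Connotation(Enum):  # layer 1: sentiment label
    POSITIVE = "positive comment"
    MINOR_FRUSTRATION = "minor frustration"
    MAJOR_FRUSTRATION = "major frustration"
    SHOW_STOPPER = "show stopper"
    SUGGESTION = "suggestion"
    WAY_OF_USE = "way of use"

class Facet(Enum):  # layer 3: Morville's honeycomb categories
    USABILITY = "usability"
    USEFULNESS = "usefulness"
    DESIRABILITY = "desirability"
    FINDABILITY = "findability"
    ACCESSIBILITY = "accessibility"
    CREDIBILITY = "credibility"

@dataclass
class CodedComment:
    quote: str                 # verbatim fragment from the transcript
    connotation: Connotation   # layer 1
    theme: str                 # layer 2: inductively created overall theme
    facet: Facet               # layer 3

example = CodedComment(
    quote="At a glance, I have the information I want to know.",
    connotation=Connotation.POSITIVE,
    theme="time-efficient information retrieval",  # hypothetical theme label
    facet=Facet.USABILITY,
)
print(example.facet.value)  # -> usability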
Patient and public involvement
No patients were involved in this study, as we aimed to evaluate only the user experiences of GPs using already developed infographic guidelines. We hence saw no added value in patient involvement for the design, conduct, reporting or dissemination plans of our research.

Table 1. Morville's facets of user experience: definitions (adapted from [46][65])
Usability: Refers to how simple and easy to use the product is. The product should be designed in a way that is familiar and easy to understand. The learning curve a user must go through should be as short and painless as possible.
Usefulness: Refers to how much the product fills or answers an information need. If the product is not useful or fulfilling the user's wants or needs, then there is no real purpose for the product itself.
Desirability: Refers to the visual aesthetics of the product, which needs to be attractive and easy to translate. Design should be minimal and to the point.
Findability: Refers to how easy to navigate the product is. If the user has a problem they should be able to quickly find a solution within the product, and the navigational structure should also be set up in a way that makes sense.
Accessibility: Refers to how accessible and adapted the tool is, even to users with special needs, so that they can have the same user experience as others.
Credibility: Refers to how trustworthy the product is. Note that this may refer to the product itself, as well as to content that informs it (which is not necessarily an attribute of the design).
Overview of the user experiences
We performed 10 think-aloud sessions in the first group and 10 more in the second group, interviewing 10 GPs in each round (table 2). In both rounds, most comments and suggestions from the GPs concerned the usability, usefulness and desirability of elements in the infographics. Overall, there were more positive experiences than negative ones, and comments tended to be relatively more positive in the second round than in the first, probably due to the small refinements we made (see Methods), as well as to the access through the EHR, which had been suggested many times during the first round of user testing.
We discuss our findings of the two rounds for each of the facets of the honeycomb below, with illustrative quotes from the interviews. A summary of the most important results can be found in table 3.
Usability
Many GPs reported that the tool was simple to use and time-efficient. Retrieving information required little effort once familiarised with the tool. However, some infographics were perceived as too confusing and complicated (eg, prostate-specific antigen (PSA) screening and subclinical hypothyroidism). Especially in the first round of our study, different physicians reported the design as too crowded, with too much text. They felt reducing the amount of information might help highlight the core message. These observations were made in round two as well, but to a lesser extent, acknowledging that in round two most of the GPs found their way to the interactive, English-language format, where further details could be retrieved, but only if required.
At a glance, I have the information I want to know. It is clear and concise. (round one, 62-year old man, rural duo practice)
I only looked at the printed version. Uhm… gosh… pff… in itself it… In itself I think there is way too much on one page. It's unclear. The real core messages, they (emphasis) must pop out. For me, they can be bigger and preferably more to the point. (round one, 70-year old man, rural duo practice)

Many GPs said the infographics were easy to understand. Some terminology was not well understood by the GPs in the first group; examples were 'values and preferences' and 'resourcing'. After optimising the Dutch translation, these comments on the understandability of terms were not repeated in round two. A couple of GPs noted that it was not very clear how to interpret the scales used in the infographics, as these were not used in daily practice.

Erm 'mean score', yes. I suspect it's also in 1000 patients… Erm… Or is it in percentages? (round two, 48-year old woman, urban duo practice)

Table 3. Summary of main findings
Usefulness: Rewarding provision of needed knowledge. Limited perceived applicability due to content defining the population too narrowly or too broadly. Less useful when content is not adapted to local guidelines. Not sufficient on its own to persuade physicians to change practice.
Desirability: Importance of colours; overuse can be confusing or distracting. Uniformity in design is valued. Influence of font style on sense of importance.
Findability: Quick and easy access if provided through the electronic health record (EHR).
Accessibility: More easily readable with an interactive, expandable design. Difficult to use with limited digital literacy.
Credibility: Credible due to seeing a trustworthy source. Trustworthy due to access through the EHR. Even with an infographic, the concepts of weak recommendations and low quality of evidence remain difficult to understand.
Usefulness
Many of the physicians perceived the topics of the infographics as innovative and rewarding in meeting one's information needs. They acknowledged their potential for use in primary care, as the infographics supported them in the subsequent shared decision-making process with their patients. Development of new infographics was supported and seen as bringing added value to the guideline content. Although the infographic was not designed primarily to support direct shared decision-making (there were specific decision aids for that), GPs perceived that the infographic may help for some, yet not all, clinical topics. Several GPs claimed patient profile (eg, level of education) and context to be of particular importance when attempting to discuss options with patients.
To seek confirmation for your own choice and then at the same time perhaps be able to persuade your patient with a few very specific criteria that might uhm… help to convince the patient a little more to do or not do something. (round two, 34-year old man, urban solo practice)

Some physicians found the infographics not to be adapted to local guidelines (eg, the recommended antibiotic regimen was atypical in their context), which clouded their judgement in choosing a particular treatment. Physicians noted they would be less eager to use recommendations that were not in line with their current practice. One physician did not even consider reading the infographic for this reason.
But for the time being, it's not changing practice, no.
(round two, 48-year old woman, urban duo practice) The GPs felt supported in their knowledge of the topic, as the infographics displayed the most relevant information. Only some suggested adding more information to certain infographics. This contrasts with the many statements about the extra information in the summary-of-findings tables, which was perceived as overwhelming by the majority of the physicians. It was therefore often categorised as less important and unclear. The physicians noted that this information can be useful when one decides to dive further into it.
With some patients we don't even make an incision, yet do start the antibiotics, so that's also a possible option which wasn't actually there.(about the infographic 'antibiotics for skin abscesses', round two, 62-year old woman, rural duo practice)
Below is the underlying evidence… Yes, I think that's especially… It's useful indeed if you want to delve deeper.
(round two, 32-year old man, urban group practice) Different clinicians perceived the section 'population' as clearly defined, but others thought it was too heterogeneous in some of the infographics. This may reflect their need for more recommendations or evidence summaries stratified by type of patient (assuming the body of evidence allows it). For example, one clinician commented that the patient characteristics were not fully considered.
The age of the patient, the profession of the patient. Those are all things that matter. The fact that it does not mention them… It doesn't take them into account. (round one, 70-year old man, rural duo practice)
Desirability
In general, clinicians responded positively to the overall layout. Opinions regarding the choice of colour, however, were mixed. Most seemed fine with the use of colours, even describing them as clear and appealing. Others found the variety too great, with some even distracted by the number and type of colours displayed. They preferred a more straightforward and contrasting colour scheme, as they struggled to infer meaning from the chosen colours.
I have to say I don't understand the color code immediately.
(round one, 42-year old woman, urban duo practice) Some GPs expressed minor frustrations with the lack of uniformity in layout between the different infographics, while one GP was glad that the bar displaying the recommendation had the same layout throughout the different infographics.
Here there's recommendations, here we have comparison and here there's recommendations with quotation marks. (Referencing the headings for the recommendations of knee arthroscopy, PSA screening and thyroid hormones, respectively) (round two, 45-year old man, rural group practice)

The physicians thought the sequence of the different components, namely 'population', 'interventions compared' and 'recommendation', arranged in that order, was very logical.
Findability
Both a printed version and a link to the online static version of the infographics were provided to the GPs. Therefore, it was difficult to evaluate the experience of GPs actually searching for and finding a given infographic.
When asked if they could find these infographics again in the future without them being given by us, some GPs stated they would not be able to. As it is hard for GPs to keep up to date on all that is new, some indicated they preferred the guidelines to be brought to them rather than having to search for them single-handedly. Some of the older GPs preferred to have a printed version in their drawer, to be able to find it more easily. Repeatedly, suggestions were made to integrate the infographics in the EHR through the evidence linker for instant access. In the second round, where access through the EHR was provided, all but one GP considered the evidence linker an added value and even a necessity to reach this tool.
And also through the evidence linker in the EHR? Well, that would be a big added value. If it is approved and supported, I think it would otherwise vanish into nothing. Well, as a young GP, I get in touch with this through you, but otherwise I'm not going to search this on Google, you know. We also don't have the time as GPs to seek out every guideline and check if it's correct. That should actually be done by scientists who want to sacrifice their time for this, you know. (round one, 45-year old man, rural group practice) In addition, some physicians proposed to be notified when new infographics become available, for example through the EHR. One GP proposed using 'recent updates' on the website of BMJ Rapid Recommendations. Another GP proposed the use of a mobile application to be kept up to date.
So that would be nice, possibly through the EHR, that there would be a possibility to be informed about 'this is an available rapid recommendation that is usable for primary care'. (round one, 60-year old woman, urban group practice) Some GPs experienced difficulties in scrolling through the long list of existing infographics, describing the process as time-consuming. One GP proposed arranging the infographics alphabetically and by discipline.
Accessibility
The main comments focused on the printed design used in the first round. Although the infographics were designed to be used as interactive, expandable tools, the printed version was a practical necessity in the first round. As a result, the large amount of information summarised on one A4 page led to a small font size, limiting the readability of the content. This gave one GP the impression that the content was less important and even negligible. However, clinicians had overall positive feelings towards the refined visual design in the second round, where mostly the digital infographics were used. Another concern about the layout was the atypical and inconsistent colour combinations and lack of contrast.
Without glasses, I can't read it. (round one, 46-year old woman, rural group practice)
When it is color on color, it is more difficult to read and you drop out more quickly. (round one, 64-year old man, urban solo practice) The preference for a paper or digital medium was mainly based on habit, except for one clinician who was less digitally skilled. There were no concerns about the availability of the digital platform.
How come that this is more time consuming for me? Because I'm less skilled with the computer. (round one, 46-year old woman, rural group practice)
Credibility
The vast majority of clinicians perceived the infographics as trustworthy, and they were unanimous in their confidence in The BMJ. This led most GPs to focus on the 'main message', as they were overall less concerned with the underlying evidence. Beneficial to trustworthiness was the inclusion of the infographic in the evidence linker of the EHRs in round two.
… but since it's made by BMJ, which for me is a very trustworthy source. (round one, 60-year old woman, urban group practice) The GPs expressed more trust in the data and recommendations if they aligned with their own standard of care. However, this could backfire when they disagreed with the recommendation. Confusion about certain data or scales shown in the infographic could also diminish trust in the infographic as a whole.
It confirms somehow what I do in practice, so that's why I can have confidence in it.
(round one, 38-year old woman, rural group practice) I myself can't agree that people with meniscal tears are treated conservatively. (round one, 70-year old man, rural duo practice) Several clinicians struggled to interpret the different degrees of evidence supporting the recommendations. While some even admitted to having glossed over this aspect completely, those who did pay attention had a clear preference for strong recommendations. Weak recommendations were often perceived as a validation of lingering doubt regarding the subject (eg, PSA screening). Several doctors described discomfort with the inclusion of weak evidence. Other doctors, however, saw weak recommendations as beneficial, as these gave them more flexibility in their interpretation.
If the conclusion is less clear or, let me put it this way, less pronounced, well yeah then it raises doubts a bit. (round one, 38-year old woman, rural group practice)
Main findings
We tested the user experience of Belgian GPs using five translated RapidRecs infographics, in two consecutive iterations. To our knowledge, this is the first study to perform iterative and comprehensive user testing using a hybrid think-aloud method for the evaluation of infographics as evidence summaries for GPs. A summary of the results can be found in table 3.
The GPs had an overall positive experience using the infographics. The infographics provided the right information quickly and were easy, pleasant and intuitive to use. The digital interactive versions were preferred, as they provided expandable information where necessary while also presenting a clear core message at a glance. Complex colour schemes were found to be confusing, as meaning was sought in them. Even though information was graphically represented, GPs still had trouble understanding terminology related to evidence appraisal and unfamiliar scales, as well as applying statistics to the individual patient. Access through the EHR was found to be very supportive. The infographics were found to be very trustworthy, and GPs recognised their potential in daily practice. Discordance with local guidelines or with GPs' own views appeared to be an important barrier to implementation of the recommendations illustrated by the infographics.
Comparison with other literature
As supposed by previous authors, the infographics did seem to offer rapid information retrieval and hence are potentially promising tools to increase the ease of keeping up to date with guidelines.[17][23] Many of the comments we observed can be related to the impact of the infographics on the time investment of GPs. They wanted to be able to see the core message at a glance, wanted rapid access through the EHR, and desired a clear colour scheme and uniformity between infographics so as to be able to move quickly towards the needed information. GPs often face time constraints and need to gather answers to clinical questions rapidly, as these questions often occur at the point of care.[48][49] For that matter, GPs prefer short guideline recommendations that are easy to understand.[7][50][51] In our study, GPs tended to focus mainly on the core message and to follow the recommendation depicted by it. This echoes the previous concern that infographics risk conveying information in an oversimplified manner, losing sight of important nuances of the underlying studies.[25] This is particularly important for 'weaker' recommendations, where more information, such as health-related and risk-related statistics, is needed to be able to make shared decisions with patients.[52] Providing an expansion with deeper explanation prompted some GPs to delve into this when necessary, though some found the recommendation alone to be sufficient. This is of great importance for future infographic development for GPs, as effort should be put into conveying a clear and correct core message, while also encouraging GPs to delve into the specifics, especially when recommendations are not strong and unambiguous.
In our study, GPs found the infographics rewarding in meeting their information needs. They felt they had learnt new things and were able to provide more information to their patients. This perception of increased knowledge was also found in another study using infographics.[29] It contrasts, however, with yet another study in which only a poor increase in knowledge was actually measured when infographics were used.[30] Infographics have also failed to stand out against other, simpler formats in that regard.[28][30] An explanation for the discrepancy between the perception of increased knowledge and actual knowledge retention might be found in the time course of information needs. Primary care physicians encounter an enormous variety of clinical cases every day. Infographics might be more supportive of their decision-making by providing just-in-time, tailored and evidence-based information, rather than serving as tools to increase their general knowledge in the long term. Integrating access to the infographics in the EHR, linked to a coded diagnosis, was hence found very useful by physicians. Previous studies have also introduced and evaluated so-called 'infobuttons' and found that physicians use these EHR-integrated infobuttons for short, tailored searches.[53][54] Other formats, such as scientific abstracts or plain language summaries, might be as effective as infographics for long-term knowledge retention and are less costly and time-consuming to make.
Lack of agreement, lack of adaptation to local guidelines and lack of strong recommendations were mentioned as barriers to the use and implementation of the infographics and their recommendations. These barriers concur with those seen for guidelines in general and are hence not specific to the infographic format.[14][50] Lack of agreement can be provoked by a lack of adaptation to local guidelines, as geographical variations in healthcare delivery have been widely documented.[55] Unclarity or ambiguity is also known to decrease adherence.[50] This might explain why weak recommendations were less readily adopted, as they provided less confidence and even caused confusion. Ambiguity is, however, intended in weak recommendations, as no single answer is the right one and patient values and preferences should be taken into account. It is possible that the GPs in this study were used to following strong guidelines and still have to become accustomed to weak recommendations.
It is striking that, even with the graphical representation, GPs had difficulties understanding terminology related to GRADE (such as strength of recommendations, or evidence certainty and quality), unfamiliar scales and certain statistics. This means that the way these formats are displayed, or even the choice of words conveyed by a translation, plays a role in understanding.[57][58][59] Statistical (and even scientific) illiteracy will probably not be solved by infographics alone. It is caused by a plurality of issues, such as the still existing paternalistic nature of the doctor-patient relationship, where trust in authority makes statistical literacy 'unnecessary', as well as the influence of determinism, where physicians seek causes instead of probabilities, and the illusion of certainty, where patients seek certainty even when there is none.[19] Even though maximal effort should be put into making these terms and numbers as understandable as possible, infographics might not be the ideal tool to also educate GPs on these issues, especially since GPs preferred the infographics to be as concise as possible. They could, however, provide support by linking to training courses or further explanation and by encouraging physicians to explore these.
Strengths and limitations
The strengths of our study include the thorough process of user testing analysis. By repeatedly analysing the interviews in different cycles, with at least two researchers for each interview, followed by group discussion and reflection, we made sure no findings could be lost in the process. We also performed a sufficient number of user tests, in which we were able to vary age, gender and type of GP (see table 2), to obtain a broad range of opinions and experiences. None of the members who performed the interviews and the analysis were part of the organisation that designed the infographics, which added to objectivity. By combining think-aloud interview techniques with a previously set interview guide, we balanced capturing immediate reactions and flow of thought with affording participants the opportunity to reflect more thoroughly, thereby facilitating a more nuanced understanding of their experiences. The infographics were also used and evaluated in a natural environment, making the findings more applicable to a real-life setting.
Previous investigations indicate that the design of knowledge transfer tools should be based on the specific preferences and needs of the users.[27][28][60][61] User testing in a specific setting such as primary care is hence very informative for further development of the infographics. In the past, most guidelines have had a content-based approach to testing, to check whether appropriate information is being given. User testing, however, analyses quantitative and qualitative findings of the experience of healthcare professionals and permits modification afterwards, over many consecutive cycles.[62] It has been proven to increase information retrieval and comprehension by healthcare professionals and has resulted in safer care as well.[63][64] Our study is limited in what we can learn from it due to the methods used. We explored the experiences of GPs after they had used the infographics. We did not investigate whether these tools actually succeeded in improving knowledge. We did not collect quantitative measures, such as actual use or impact on physicians' behaviour or on health outcomes. We also did not compare the infographics to different formats, so we cannot draw any conclusions on how they relate to experiences with other tools.
Our study was limited to Dutch-speaking, Belgian GPs. A similar study might yield different results in other countries with different cultures or healthcare systems. Furthermore, our GPs were all trainers of GP trainees.
They might possibly be more open to new and innovative approaches than the average GP.
Another limitation is that we handed out static, non-interactive infographics to the GPs. Findability, one of Morville's honeycomb facets, was therefore difficult to evaluate. Another study design should be set up to investigate this further.
The second round of this study was performed during the COVID-19 pandemic. To our knowledge, the COVID-19 measures had no substantive effect on our results, with a good variety in the topics discussed and in the consultations with patients. The interviews could still be done face-to-face, although wearing face masks.
Finally, our study design included a training session before using these infographics, which is not how they will be accessed in real life by practitioners online. This was necessary to use the more complex printed version in the first round, but may have affected the user experience in an unpredictable way. Nevertheless, the main findings of our study likely apply regardless of whether such training has occurred, and they are consistent with previous findings.
Implications for further research and development
Based on our findings, we are able to provide some rules that can guide future developers of guideline infographics targeted at GPs:
1. Have time in mind. GPs only have limited time and use the infographics mainly as rapid, just-in-time information for decision-making. Provide a clear core message at a glance, rapid access through an EHR and an intuitive design where further information can be accessed on demand.
2. Carefully describe the core message. GPs tend to focus mainly on the core message and often neglect more detailed information. Make sure the core message conveys what is meant, and avoid the risk of divergent interpretation or of apparent certainty in recommendations where there is none. Encourage GPs to delve into the details for a more nuanced overview of the evidence.
3. Consider statistical and scientific illiteracy. Like many other physicians, GPs often have insufficient skills in statistics and evidence appraisal. Even though important, presentation format alone might not be sufficient to support correct understanding of the recommendations. An additional course or other form of education should ideally be encouraged.
4. Adapt to local guidelines. Many GPs follow local guidelines. Credibility and usefulness of the infographic might decrease significantly if it is not adapted. Incorporate local guidelines or justify why you deviate from them.
Further user experience evaluation of infographics developed for physicians is needed. Many infographics are being developed, yet follow-up on their impact is lacking. In this study, we investigated only one format of infographics. Comparison with different formats might be contributive, as well as investigation of knowledge retention, guideline adherence and health outcomes.
This study should be seen as part of a continuous process, with each iteration necessitating further user testing.[62] We recommend further user testing in a broader range of GPs and specialists, as well as in researchers and policymakers, who might benefit from infographics as well. We are aware of another group working with the BMJ Rapid Recommendations conducting similar user testing among hospital-based doctors.
CONCLUSIONS
Infographics can be useful tools in daily primary care, as they offer an enjoyable and visually appealing format for rapid retrieval of information on guidelines and recommendations. The infographics were found to be most useful for rapid, tailored and just-in-time information retrieval, supported by a clear core message, an intuitive design and integration in the EHR. Terminology regarding evidence appraisal and statistics remained difficult even with the infographics. Lack of consideration of local guidelines led to frustration. Further user testing in different contexts, comparison with different formats, and assessment of impact on quantitative measures such as knowledge retention and health outcomes are needed.
Figure 1 Iterative qualitative user testing design.
| 2023-11-11T06:18:32.646Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "337e50e061db76e97f36bba491dd09a2b959f1cb",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/13/11/e071847.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9c102f707298a69f050b0d83f17a93195db2150b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
39497120 | pes2o/s2orc | v3-fos-license | World of Radiology Imaging of Gaucher disease
Gaucher disease is the prototypical lysosomal storage disease. It results from the accumulation of undegraded glucosylceramide in the reticuloendothelial system of the bone marrow, spleen and liver due to deficiency of the enzyme glucocerebrosidase. This leads to hematologic, visceral and skeletal manifestations. Build-up of glucosylceramide in the liver and spleen results in hepatosplenomegaly. The normal bone marrow is replaced by the accumulating substrate, leading to many of the hematologic signs, including anemia. The visceral and skeletal manifestations can be visualized with various imaging modalities including radiography, computed tomography, magnetic resonance imaging (MRI) and radionuclide scanning. Prior to the development of enzyme replacement therapy, treatment was only supportive. However, once intravenous enzyme replacement therapy became available in the 1990s, it quickly became the standard of care. Enzyme replacement therapy leads to improvement in all manifestations. The visceral and hematologic manifestations respond more quickly, usually within a few months or years. The skeletal manifestations take much longer, usually several years, to show improvement. In recent years, newer treatment strategies, such as substrate reduction therapy, have been under investigation. Imaging plays a key role in both initial diagnosis and routine monitoring of patients on treatment, particularly volumetric MRI of the liver and spleen and MRI of the femora for evaluating bone marrow disease burden.
INTRODUCTION
Gaucher disease (GD) is the most common of the lysosomal storage diseases [1]. It results from accumulation of undegraded glucosylceramide in lysosomes within macrophages of the reticuloendothelial cell system due to a deficiency of the enzyme glucocerebrosidase. Consequently, these macrophages, enlarged with a buildup of glycolipids, are called Gaucher cells and are most abundant in the bone marrow, spleen and liver. GD is inherited in an autosomal recessive manner [2].
Three clinical subtypes of GD have been described [3]. Type 1 does not have any involvement of the central nervous system and is the most common. It was formerly referred to as the "adult type". However, this is a misnomer, since type 1 can occur at any age, and it is currently known as the non-neuronopathic type. Although it is most common in the Ashkenazi Jewish population, it can occur in all ethnic groups. Type 2 was formerly referred to as the "infantile type". This type manifests with grave involvement of the central nervous system. It is rapidly progressive, usually leading to death within 2 years. It is now known as the acute neuronopathic type. Type 3 also has central nervous system involvement, but is less severe and more indolent than type 2, leading to the current terminology, subacute neuronopathic type.
The clinical manifestations of GD are due to the accumulation of Gaucher cells in the reticuloendothelial system of the bone marrow, spleen and liver. There can be marked variability in the severity of symptoms and the course of the disease. This is particularly true for type 1, where some patients can remain asymptomatic throughout life. Although the visceral changes can be dramatic, the more debilitating symptoms arise from infiltration of the bone marrow and bone changes. Since type 1 is the most common and most widely studied variant of Gaucher disease, it will be the primary focus of this review.
GENETICS
Gaucher disease is inherited in an autosomal recessive manner. The diagnosis of GD is made by the demonstration of decreased glucocerebrosidase enzymatic activity in peripheral blood leukocytes or fibroblasts cultured from a skin biopsy. Generally there is a 70%-90% reduction in the enzyme activity when compared to normal [4] .
Molecular testing by targeted mutation analysis is used for confirmation of diagnosis and may be helpful for genotype-phenotype correlations. More than 300 mutations in the glucocerebrosidase gene are known to cause Gaucher disease [2]. However, four common mutations (N370S, IVS2+1, 84GG and L444P) account for approximately 96.5% of disease in the Ashkenazi Jewish population in the western hemisphere and approximately 50%-60% in non-Jewish populations [5].
Genotyping is helpful for testing at-risk family members, for genetic counseling and for prognosis. However, genotype-phenotype correlations are limited due to the clinical heterogeneity of the disease. Moreover, the majority of the work on genotype-phenotype correlation was based on a heavily Ashkenazi Jewish population, which could skew the results, since many affected individuals with the N370S/N370S homozygous genotype may remain asymptomatic and never come to medical attention [2]. Nevertheless, a few generalities can be made, as illustrated below: (1) the presence of at least one N370S allele precludes development of neuronopathic disease; and (2) the presence of the L444P allele is strongly (but not exclusively) associated with neuronopathic involvement. In general, those homozygous for the N370S allele tend to have less severe manifestations of disease, and compound heterozygotes with one copy of N370S and a second mutation of L444P, 84GG or IVS2+1 tend to have more severe disease. In fact, adults homozygous for the L444P mutation (L444P/L444P genotype) typically have type 3 neuronopathic disease. However, these rules are not hard and fast due to the limited genotype-phenotype correlation. Some patients with the N370S/N370S genotype have profound symptomatic disease, whereas a type 1 patient with the N370S/L444P genotype may have mild symptoms.
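Purely as an illustration of these rules of thumb, they can be written down as a small rule set. This is our own sketch, not a clinical tool; the function name and its output labels are assumptions, and the limited genotype-phenotype correlation stressed above applies in full.

# Illustrative sketch only: encodes the rough genotype-phenotype
# generalities described in the text. Genotype-phenotype correlation in
# Gaucher disease is limited, so this must not be used clinically.
def rough_phenotype_hint(allele1: str, allele2: str) -> str:
    alleles = {allele1, allele2}
    if "N370S" in alleles:
        # At least one N370S allele precludes neuronopathic disease.
        if alleles == {"N370S"}:
            return "type 1 (non-neuronopathic), often mild or even asymptomatic"
        return "type 1 (non-neuronopathic); often more severe when the second allele is L444P, 84GG or IVS2+1"
    if alleles == {"L444P"}:
        # L444P homozygotes typically have type 3 disease in adults.
        return "typically type 3 (subacute neuronopathic)"
    return "no generality applies; correlation is limited"

print(rough_phenotype_hint("N370S", "N370S"))  # milder type 1 tendency
print(rough_phenotype_hint("L444P", "L444P"))  # typically type 3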
HEMATOLOGIC MANIFESTATIONS
Hematologic abnormalities of GD are exceedingly common. Almost all patients with symptoms present with anemia and thrombocytopenia. The etiology can be explained by depressed hematopoiesis resulting from substitution of the bone marrow by Gaucher cells. However, hypersplenism or sequestration within the spleen can be a cause as well. Symptoms that arise due to the hematologic abnormalities include fatigue, easy bruising and frequent nosebleeds. Additional blood chemistries can be elevated in GD including angiotensin converting enzyme, chitotriosidase, and tartrate resistant acid phosphatase [6] . Changes towards normalization of the anemia, thrombocytopenia and blood chemistries can be used to monitor treatment response [7] .
VISCERAL MANIFESTATIONS
The viscera most commonly involved with accumulation of Gaucher cells are the liver and spleen. The pulmonary system can be involved as well, although this is very rare. The current recommendation for evaluating and monitoring visceral involvement is volumetric MRI (preferred due to lack of ionizing radiation) or CT every 12 to 24 mo [6].
Gaucher cells accumulate in the Kupffer cells of the liver, leading to hepatomegaly (Figure 1). Liver volumes in type 1 patients are typically approximately 2 times normal [8]. It is notable that glycolipid does not accumulate in the hepatocytes [8,9]. The Gaucher cells can conglomerate into nodules that can be seen with sonography or MRI. These nodules may be hypoechoic, hyperechoic, or mixed on sonography [10,11]. On MRI the nodules typically appear isointense or of low signal intensity (SI) on T1 weighted imaging (WI) and of high SI on T2 WI. Focal areas of extramedullary hematopoiesis can have a similar appearance and can also be seen due to the accompanying anemia. Hepatic infiltration can also lead to fibrosis and cirrhosis [12].
Splenomegaly results from accumulation of Gaucher cells within the spleen (Figure 1). Spleen volumes in type 1 GD are typically 5-15 times normal, but the spleen can be significantly enlarged in some cases and may be over 50 times normal [13]. Focal splenic masses are common and may represent clusters of Gaucher cells or extramedullary hematopoiesis. They may be detected with sonography, CT or MRI. Similar to the liver, Gaucher masses in the spleen may be hypoechoic, hyperechoic, or of mixed echogenicity [11,14]. On CT the masses are of low density [15] and occasionally peripherally calcified (Figure 2). These masses are most commonly imaged with MRI. They typically are of low SI or isointense on T1 WI and of high SI on T2 WI [16] (Figure 3). Low SI on gradient recalled echo imaging in these masses is thought to be secondary to iron contained in the Gaucher cells [17]. Splenic infarcts can occur as well due to massive splenomegaly and can be detected with imaging (Figure 4). Organ volumes in GD are commonly expressed as such multiples of normal, as sketched below.
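As a minimal sketch of how such "times normal" figures are derived from volumetric MRI or CT, the calculation below assumes the convention used in much of the Gaucher literature that normal liver volume is about 2.5% and normal spleen volume about 0.2% of body weight, with a tissue density of roughly 1 g/cc; the organ volumes are taken from the Figure 1 caption, while the body weight is hypothetical.

# Sketch: organ volume expressed as multiples of normal (MN).
# The 2.5% (liver) and 0.2% (spleen) of body weight constants are
# literature conventions, not values stated in this article.
def multiples_of_normal(measured_cc: float, body_weight_kg: float,
                        normal_fraction: float) -> float:
    expected_cc = body_weight_kg * 1000.0 * normal_fraction  # 1 kg ~ 1000 cc
    return measured_cc / expected_cc

body_weight_kg = 80.0  # hypothetical patient weight
liver_mn = multiples_of_normal(3235.0, body_weight_kg, 0.025)
spleen_mn = multiples_of_normal(2923.0, body_weight_kg, 0.002)
print(f"liver: {liver_mn:.1f} x normal, spleen: {spleen_mn:.1f} x normal")
# -> liver: 1.6 x normal, spleen: 18.3 x normal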
An infrequent manifestation of GD is pulmonary involvement, which is more commonly seen in type 1 patients who have undergone splenectomy and in those with type 3 [18]. The lung findings are thought to be secondary to direct infiltration by Gaucher cells into the interstitial spaces, alveolar spaces and capillaries [19], as well as to indirect causes such as hepatopulmonary syndrome related to the liver manifestations and/or aspiration associated with neurologic manifestations. Chest radiographs generally are normal or demonstrate a reticulo-nodular pattern. The findings are best imaged by high resolution CT and include interstitial thickening (both interlobular and intralobular), ground glass opacity, consolidation and bronchial wall thickening [20,21]. Pulmonary hypertension can be the result of lung involvement [22][23][24]. Symptomatic pulmonary involvement is generally seen in patients with more striking visceral and skeletal findings.

Figure 1 Hepatosplenomegaly. Coronal T1 WI (A) and axial T2 WI (B) images in a male type 1 GD patient with N370S/N370S genotype demonstrate marked hepatosplenomegaly. The liver volume measured 3235 cc. The spleen volume measured 2923 cc.

Figure 2 Splenic mass on computed tomography. Axial computed tomography image shows a large low density mass with patchy foci of soft tissue density within it in the medial aspect of the spleen. Additional smaller low density masses are present as well (arrows).

SKELETAL MANIFESTATIONS

The skeletal manifestations of GD lead to the most debilitating complications of the disease and significant morbidity. Gaucher cells infiltrate and accumulate in the bone marrow. The pathophysiology of how the infiltration leads to the bone changes is not well understood. Proposed mechanisms include altered bone formation and resorption, as well as increased intra-osseous pressure due to the infiltration leading to vascular occlusion [3,25]. An array of bone findings is seen in GD, including growth retardation in children, osteopenia, lytic lesions (Figure 5), pathologic fractures, bone pain, osteonecrosis (Figure 6), cortical and medullary infarcts (Figure 7) and evidence of bone crises [3,26]. The severity of bone findings in GD depends on the extent of medullary cavity substitution. Marrow replacement with Gaucher cells can lead to expansion of the medullary cavity with thinning of the cortex and endosteal scalloping (Figure 8) and consequent diffuse osteopenia. In addition, the medullary expansion leads to a failure of remodeling in the distal femurs, resulting in the so-called Erlenmeyer flask deformity (Figure 9). These manifestations can be imaged using a variety of modalities including radiography, MRI, dual energy X-ray absorptiometry (DEXA) and radionuclide imaging. However, the mainstay of skeletal imaging in GD is MRI.

Bone crises are most common in childhood and adolescence, presenting as episodes of severe bone pain associated with fever and leucocytosis. The signs and symptoms are indistinguishable from osteomyelitis; however, no infection exists. The terms "pseudo-osteomyelitis" and "aseptic osteomyelitis" have been historically used to describe these episodes [3,27] (Figure 10).
Osteonecrosis, otherwise known as avascular necrosis, is due to lack of blood supply and consequent bone death. It is most commonly seen in the femoral heads ( Figure 6), proximal humeri and vertebral bodies. The vertebral body can "cave in" leading to the "H-shaped" vertebra, i.e., Reynolds phenomenon, similar to sickle cell disease ( Figure 11). While the end result is similar in both diseases the mechanism of formation is different. In GD the entire vertebral body collapses followed by peripheral regrowth while in sickle cell disease the deformity is secondary to central growth arrest [28] . The necrotic bone crumples and leads to malformation and/or fracture, sometimes requiring treatment with a bone prosthesis or joint replacement.
Radiography is used primarily to image cortical bone. It can detect lytic (Figure 5) or sclerotic lesions within bones. Fractures, both traumatic and pathologic, are readily detected on radiographs. In addition, endosteal scalloping (Figure 8) and the Erlenmeyer flask deformity (Figure 9) due to marrow expansion are also detected with radiography. Although changes in cortical bone secondary to marrow infiltration can be detected with this modality, the marrow space itself cannot be evaluated by radiography.
Osteopenia, a reflection of decreased bone mineral density, is nearly universal in GD. A significant decrease in bone density must occur before osteopenia becomes perceptible on radiographs, which accounts for the poor sensitivity of radiography for detecting this abnormality. DEXA is the current modality of choice for evaluation of osteopenia and bone mineral density. However, care must be taken to avoid areas of osteonecrosis during DEXA evaluations.
The bone marrow itself is best assessed with MRI. Normal yellow (fatty) marrow is seen as high signal on T1 WI and T2 WI. The infiltration of the marrow by Gaucher cells replaces the normal yellow marrow. Marrow infiltration generally follows the distribution of cellular red marrow, progressing from the axial to the peripheral skeleton and from the proximal to the distal aspects of the long bones, with a tendency to spare the epiphyses [29]. This is recognized as a change to low SI on both T1 and T2 WI [29,30] (Figure 12). On short tau inversion recovery (STIR) images the infiltration appears slightly high SI [31]. High SI within the marrow on T2 or STIR images suggests edema within the marrow and the presence of an "active" process such as a bone crisis or infection [32]. Evaluation of bone marrow infiltration in children is complicated by the fact that normal red marrow, which is seen in this age group, manifests with low SI on both T1 and T2 WI.
Radionuclide imaging is useful for evaluating bone changes in GD. Bone scintigraphy utilizing technetium 99m-methylene diphosphonate (99mTc-MDP) can be used to evaluate for fractures that are not readily apparent on radiography. In addition, this tracer can be used to help differentiate a bone crisis (aseptic infarction) from osteomyelitis: in a bone crisis, bone scintigraphy performed within 1-3 d of the onset of pain will demonstrate decreased tracer uptake at the involved site, unlike infection, which shows increased uptake [33,34]. The same agent can be used to help evaluate for complications related to joint prostheses, such as loosening [31]. Scintigraphy using leucocytes labeled with Indium-111 is commonly used to image areas of suspected infection, including osteomyelitis and around joint prostheses.
Prior to the advent of MRI, radionuclide imaging was also used for evaluation of the bone marrow. Technetium 99m sulfur colloid (99mTc-SC) accumulates in normal bone marrow; therefore, in marrow infiltrated and replaced by Gaucher cells there will be decreased uptake or an abnormal pattern of uptake compared to normal [34], giving an indirect sign of infiltration. Another tracer, technetium 99m sestamibi, has the advantage of accumulating in areas of Gaucher cell deposition [35,36]. Mariani et al [35] imaged 74 Italian patients with Gaucher disease using technetium 99m sestamibi and showed that 71 of 74 demonstrated uptake, predominantly in the distal femur. An undisclosed number of these patients had MR imaging performed at approximately the same time, revealing low SI in the same regions. Sestamibi imaging is therefore a method of direct visualization of infiltration, which can be advantageous when imaging children and trying to differentiate Gaucher infiltration from normal red marrow. Positron emission tomography has become widely available in recent years. Imaging with the most common radiotracer, fluorine-18 fluorodeoxyglucose, has not proven beneficial for detecting marrow involvement in GD. However, a newer tracer, fluorine-18 fluoro-L-thymidine, shows promise for imaging bone marrow [37]; since it is not yet FDA approved, its use is limited to clinical trials, and its role, if any, in GD has not been established. Overall, MRI remains the modality of choice for imaging bone marrow due to the poor spatial resolution of scintigraphy as well as the radiation dose associated with radionuclide imaging.

An essential problem of imaging is that it only gives a qualitative assessment of bone marrow infiltration. An MRI shows decreased SI on T1 WI, but there is no way to measure the "amount" of signal. A visual assessment of improvement or worsening can be made on the basis of the MR image, but that is qualitative and of limited use to clinicians. One way of directly measuring bone marrow disease has been developed: Dixon's quantitative chemical shift imaging [38]. Chemical shift imaging leverages the difference in resonance frequencies between water and fat molecules, thereby defining the amount of fat, or fat fraction, within bone marrow. The fat fraction of normal marrow decreases as the infiltration replaces the triglyceride-rich fat cells of normal marrow. Studies have shown that a low marrow fat fraction as measured by quantitative chemical shift imaging corresponds to worse clinical disease and more bone complications [39-42]. However, the technique is complex and not widely used outside academic centers. To overcome this problem, several semi-quantitative methods have been developed, including the Rosenthal staging system [29], the Dusseldorf score [43], the Terk classification [44] and the bone marrow burden (BMB) score [45]. All use conventional MR imaging technology and assign points based on changes in marrow signal intensity at different anatomic locations.
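To make the fat-fraction computation concrete, the following minimal Python sketch applies the standard two-point Dixon arithmetic to in-phase (IP) and opposed-phase (OP) magnitude images; all arrays and values here are hypothetical, and real implementations must additionally correct for phase errors and fat-water swaps, which this sketch ignores.

```python
import numpy as np

def dixon_fat_fraction(in_phase: np.ndarray, opposed_phase: np.ndarray) -> np.ndarray:
    """Two-point Dixon: water = (IP + OP) / 2, fat = (IP - OP) / 2,
    fat fraction = fat / (water + fat)."""
    water = (in_phase + opposed_phase) / 2.0
    fat = (in_phase - opposed_phase) / 2.0
    return fat / np.maximum(water + fat, 1e-9)  # guard against division by zero

# Hypothetical magnitude images and a marrow region-of-interest mask.
ip = np.array([[200.0, 180.0], [220.0, 210.0]])
op = np.array([[100.0, 60.0], [140.0, 50.0]])
roi = np.array([[True, True], [True, False]])
ff = dixon_fat_fraction(ip, op)
print(f"mean marrow fat fraction in ROI: {ff[roi].mean():.2f}")
```

For the first voxel (IP = 200, OP = 100), water = 150 and fat = 50, giving a fat fraction of 0.25; a falling ROI-mean fat fraction over follow-up scans would correspond to increasing marrow infiltration.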
The BMB score is the most widely used and validated [45,46]. It incorporates both the visual interpretation of SI and the geographic location of the disease on conventional MR images of the lumbar spine and femora. The SI of the bone marrow in the femora is compared to the subcutaneous fat on both T1 and T2 WI sequences; the SI of the bone marrow in the lumbar spine is compared to a non-diseased intervertebral disc on both T1 and T2 WI sequences. The SI is scored as hyper-intense, slightly hyper-intense, iso-intense, slightly hypo-intense, or hypo-intense, and a numeric value ranging from 0-2 is assigned based on the SI. Point values are also assigned based on the location/distribution of the marrow infiltration. In the femora, sites of involvement including the diaphysis, proximal epiphysis/apophysis and distal epiphysis are evaluated. In the lumbar spine, the distribution of infiltration is evaluated as patchy or diffuse, with special attention given to absence of fat in the basivertebral vein region. The scores for the femora (0-8) and the lumbar spine (0-8) are added together for a total score ranging from 0 to 16; a higher score indicates more severe bone marrow involvement. In their study of 12 patients with Gaucher disease, Maas et al [45] demonstrated good correlation of the BMB score with fat fraction determination by the Dixon technique. That study also showed a decrease in the BMB score in patients on enzyme replacement therapy (ERT); however, the BMB score was less sensitive for detection of marrow improvement than the Dixon method.
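The additive structure of the BMB score lends itself to a simple illustration. The Python sketch below mirrors the scheme just described (SI scored 0-2 per sequence plus site/distribution points, femora and lumbar spine each capped at 8 for a 0-16 total), but the specific point mappings shown are hypothetical placeholders; the authoritative scoring tables are those of Maas et al [45].

```python
# Hypothetical SI point mapping (0-2); the authoritative per-sequence tables
# differ between femora and spine and are given in Maas et al [45].
SI_POINTS = {"hyperintense": 0, "slightly hyperintense": 0, "isointense": 1,
             "slightly hypointense": 1, "hypointense": 2}

def bmb_score(femur_t1, femur_t2, femur_site_points,
              spine_t1, spine_t2, spine_site_points):
    """Bone marrow burden: femora (0-8) + lumbar spine (0-8) = total 0-16;
    a higher score indicates more severe marrow involvement."""
    femur = SI_POINTS[femur_t1] + SI_POINTS[femur_t2] + femur_site_points
    spine = SI_POINTS[spine_t1] + SI_POINTS[spine_t2] + spine_site_points
    return min(femur, 8) + min(spine, 8)

# Example: diffusely hypointense femoral marrow with epiphyseal involvement
# (3 site points) and patchy spinal infiltration (1 site point).
print(bmb_score("hypointense", "hypointense", 3,
                "slightly hypointense", "isointense", 1))  # 7 + 3 = 10
```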
TREATMENT
Until the early 1990s, only supportive treatment or surgical intervention, including splenectomy and joint replacement, was available for patients with Gaucher disease. In 1991, ERT with the placentally derived enzyme alglucerase (Ceredase®, Genzyme Corporation, Cambridge, MA) became available. Following this, in 1994, recombinant mannose-terminated human glucocerebrosidase (Cerezyme®, Genzyme Corporation) received FDA approval. The goal of ERT is to treat the symptoms of the disease as well as to prevent complications, particularly the skeletal complications [6].
The visceral and hematologic manifestations of GD respond relatively quickly to ERT [47,48] (Figure 13). The anemia and thrombocytopenia can improve within 6 mo to one year of initiating ERT. The liver and spleen volumes generally decrease by approximately 50% within the first 2 years but rarely return to a normal volume even with long term treatment. Although some improvement in pulmonary involvement has been reported with ERT, the response is generally slow, and sometimes no improvement is seen [23,49]. The marrow infiltration responds to ERT as well but takes much longer to become apparent [50,51] (Figures 14 and 15). Some skeletal manifestations, such as osteonecrosis, osteosclerosis and vertebral body collapse, remain irreversible. The neurologic manifestations of GD do not respond to ERT since the enzyme does not cross the blood-brain barrier [52].
An alternative to ERT for treatment of GD is substrate reduction therapy. Whereas ERT works by replacing the deficient enzyme, substrate reduction works by decreasing the production of the substrate glucosylceramide. Since most type 1 GD patients have some residual enzyme activity, reducing the amount of substrate may allow the native enzyme to cope with the remaining substrate load. The first medication of this kind was N-butyldeoxynojirimycin (miglustat), approved by the FDA in 2003. It is an oral medication for use in mild to moderate type 1 GD patients who cannot tolerate ERT. Studies have shown improvement of anemia, platelet count, liver volume and spleen volume alone or in combination with ERT [53-55]. One study also showed improvement of bone disease on miglustat [56,57]. However, side effects, particularly diarrhea, weight loss and tremors, were significant [52], and its use is limited in the United States. Unlike ERT, miglustat can cross the blood-brain barrier [52] and has shown promising results in treating neurologic manifestations in combination with ERT [58,59]. A study has also shown improvement of pulmonary manifestations with single drug treatment [60]. A newer oral substrate reduction therapy agent, eliglustat tartrate, has demonstrated efficacy in GD type 1 patients with a more favorable side effect profile [61] and is currently undergoing phase III trials.
Both ERT and substrate reduction therapy can lead to improvement in the signs and symptoms of GD. However, there is a significant cost to the treatment, ranging from US$100,000 to $250,000 per year [62]; thus judicious use is warranted. Imaging plays a significant role in determining which patients need treatment and in surveillance of those on treatment. Current recommendations call for abdominal MRI for determination of liver and spleen volume, MRI of the bilateral femora, radiography of the spine and DEXA of the hips and lumbar spine in the initial assessment [6]. For patients being followed off treatment and those on treatment, the same imaging protocol is recommended every 12-24 mo or at the time of a dosage change or significant clinical complication [6].
CONCLUSION
Gaucher disease is the most common lysosomal storage disease and affects all ethnic groups. The accumulation of glycolipids in the reticuloendothelial system leads to symptoms. Anemia, thrombocytopenia, and hepatosplenomegaly are commonly seen and respond relatively rapidly to treatment with ERT. The skeletal manifestations are due to buildup of the glycolipids in the bone marrow and lead to the most debilitating aspects of GD. Treatment results in improvement of marrow infiltration; however, this takes much longer than for the visceral and hematologic manifestations. Imaging plays a key role in both initial diagnosis and treatment monitoring. MRI of the abdomen is used to monitor liver and spleen volumes. MRI of the femora and lumbar spine is used for evaluation of bone marrow infiltration burden. | 2018-04-03T04:33:33.547Z | 2014-09-28T00:00:00.000 | {
"year": 2014,
"sha1": "d8864793e044f4603a3cba100b1ef58291144ff2",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4329/wjr.v6.i9.657",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "b8267337d66bb25cfd348aa15a0d0ccc4796b8bf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246895696 | pes2o/s2orc | v3-fos-license | Detection of Splenic Tissue Using 99mTc-Labelled Denatured Red Blood Cells Scintigraphy—A Quantitative Single Center Analysis
Background: Red blood cell (RBC) scintigraphy can be used not only for the detection of bleeding sites but also for the detection of splenic tissue. However, there is no established quantitative readout. Therefore, we investigated uptake in suspected splenic lesions in direct quantitative correlation to sites of physiologic uptake in order to objectify the readout. Methods: 20 patients with Tc-99m-labelled RBC scintigraphy and SPECT/low-dose CT for assessment of suspected splenic tissue were included. Lesions were rated as vital splenic or non-splenic tissue, and lesional uptake as well as the physiologic uptake of bone marrow, pancreas, and spleen were then quantified using a volume-of-interest based approach. Hepatic uptake served as a reference. Results: The median uptake ratio was significantly higher in splenic (2.82 (range, 0.58–24.10), n = 47) compared to other lesions (0.49 (0.01–0.83), n = 7), p < 0.001, and 5 lesions were newly discovered. The median pancreatic uptake was 0.09 (range 0.03–0.67), bone marrow 0.17 (0.03–0.45), and orthotopic spleen 14.45 (3.04–29.82). Compared to orthotopic spleens, the pancreas showed the lowest uptake (0.09 vs. 14.45, p = 0.004). Based on pancreatic uptake we defined a cutoff (0.75) to distinguish splenic from other tissues. Conclusion: As the uptake in extra-splenic regions is invariably low compared to splenules, it can be used as a comparator for evaluating suspected splenic tissues.
Introduction
Scintigraphy with administration of technetium-99m (99mTc)-labelled denatured red blood cells (RBCs) is a commonly applied tool for detection of unclear bleeding sites. However, because splenic tissue physiologically sequesters damaged RBCs, RBC scintigraphy can also be used for detection of ectopic spleen tissue and splenules, e.g., following splenic trauma or surgery [1]: selective spleen scintigraphy with intravascular administration of 99mTc-labelled RBCs is indicated for assessing size, shape, and position of the spleen, detecting and measuring spleen masses, identifying functioning splenic tissue, as well as evaluating suspected functional asplenia [2-4]. Compared to spleen detection using hepatobiliary scintigraphy or technetium-99m sulfur colloid, selective spleen scintigraphy with RBCs has a higher diagnostic value due to the absence of liver uptake and its increased specificity [2,5-7].
Accidental implantation of splenic tissue (splenosis) can be demonstrated reliably by selective spleen scintigraphy, especially with concomitant SPECT [8,9]. Experimental studies have demonstrated that splenic tissue can survive and grow when transplanted to abnormal sites [10]. In this context, splenosis is a frequent finding after trauma to the spleen with rupture of the splenic capsule, with an incidence of 16-67% [1,9,11], and can be defined as the heterotopic autotransplantation of splenic tissue onto the peritoneal surfaces, in the splenic fossa, in the gastrointestinal tract, the liver, the subcutaneous region and, rarely, in the lungs or pleural space if splenic rupture is associated with rupture of the diaphragm [9,11]. Cases of heterotopic spleen tissue in the head after an accident are even known [10,11]. Patients with splenosis are often asymptomatic but, for example, intraabdominal lesions may stimulate adhesions which may lead to intestinal obstruction [9]. In this context, splenosis can be mistaken for endometriosis, intrathoracic neoplasm, or angioma, but also for a primary intraabdominal tumor [12,13]. Splenosis has indeed been reported to mimic tumor recurrence in patients with known prior tumors and can also mimic metastatic peritoneal spread of a known tumor by means of peritoneal carcinomatosis [14-16], a diagnosis which can lead to a fulminant change of oncological treatment [17,18]. Despite this major clinical impact, splenosis remains diagnostically challenging in conventional imaging. Thus, RBC scintigraphy can be of particular relevance for the evaluation of suspected splenic tissue and for subsequently guiding patient management. However, there is no established quantitative readout for rating suspected splenic lesions as either vital splenic tissue or as other tissues, such as metastases, in RBC scintigraphy. Hence, until now, clinical decision-making has been based mainly on individual, reader-dependent qualitative assessment.
Therefore, in this study, we demonstrate RBC scintigraphy using a newly defined quantitative cutoff as a highly feasible method for the detection of splenic tissues.
Study Design and Inclusion Criteria
This retrospective analysis was approved by the institutional ethics committee of the LMU Munich (# 21-0325). Patients who received 99mTc-labelled denatured red blood cell scintigraphy with available concomitant single photon emission computed tomography (SPECT)/low-dose computed tomography (ldCT) for evaluation of suspected splenic tissue from 2013 to 2021 were included. All patients gave written informed consent prior to scintigraphy and concomitant SPECT/ldCT as part of the clinical routine.
First, uptake in sites of physiologic uptake was quantified. Second, uptake in suspected splenic lesions was quantified and compared to uptake in sites of physiologic uptake. Finally, a potential quantitative cutoff for the differentiation of vital splenic tissue from other tissues was evaluated based on the previously performed measurements; here, a series of unequivocal cases with, if available, histology or otherwise imaging/clinical follow up served as control.
Radiopharmaceutical and Imaging Protocol
For evaluation of suspected splenic tissue, a median activity of 148 MBq (Q1-Q3 interquartile range, 140-153 MBq) of technetium-99m heat-damaged RBCs was prepared as described previously [4] and administered intravenously. SPECT and planar imaging were performed on a dual-headed Siemens Symbia T2 SPECT/CT or a Siemens Symbia Intevo T16 SPECT/CT system (Siemens Healthineers, Erlangen, Germany), using a low-energy high-resolution collimator. Planar images of the abdomen were acquired over 10 min. The SPECT/low-dose CT scan was initiated 30 min after tracer injection. SPECT acquisition parameters were 32 projections per head with 25 s per projection and a projection matrix of 128 × 128 pixels (4.7952 × 4.7952 mm²). SPECT reconstruction was performed via Hermes (Hermes Medical Solutions, Stockholm, Sweden) using the OSEM algorithm (3 iterations and 16 subsets) with Gaussian post-filtering of 1.10 cm full-width-half-maximum and included CT-based attenuation correction as well as resolution compensation. For further evaluation, SPECT images were transferred to a Hermes workstation.
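For reference, the acquisition and reconstruction settings stated above can be collected into a machine-readable form; the Python dictionary below simply restates the reported parameters (the field names are our own shorthand, not Hermes or Siemens terminology).

```python
# The reported RBC SPECT/ldCT protocol, restated as a settings dictionary.
RBC_SPECT_PROTOCOL = {
    "tracer": "99mTc heat-damaged RBCs, i.v.",
    "median_activity_MBq": 148,          # IQR 140-153
    "planar_acquisition_min": 10,
    "spect_start_min_post_injection": 30,
    "projections_per_head": 32,
    "seconds_per_projection": 25,
    "matrix": (128, 128),
    "pixel_size_mm": 4.7952,
    "reconstruction": {
        "algorithm": "OSEM",
        "iterations": 3,
        "subsets": 16,
        "gaussian_postfilter_fwhm_cm": 1.10,
        "ct_based_attenuation_correction": True,
        "resolution_compensation": True,
    },
}
```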
Data Analysis
For uptake quantification, hepatic uptake served as reference tissue and was quantified using a spherical 3.0 cm diameter volume-of-interest (VOI) in the non-diseased right hepatic lobe [19]. Uptake in sites of physiologic uptake was quantified using spherical VOIs (diameter 1.5 cm: bone marrow and, if available, pancreas; diameter 3.0 cm: orthotopic spleen [20]). Figure 1 illustrates the VOI definition. For evaluation of suspected splenic tissue, scans were reviewed by 4 nuclear medicine physicians and rated either as vital splenic or non-splenic lesions. The uptake characteristics of the lesions were then quantified using a VOI-based approach consisting of a manually drawn VOI in three adjacent axial layers centered on the maximum uptake of the lesion. Based on uptake quantification in sites of physiologic uptake, a quantitative cutoff for the differentiation of splenic from non-splenic tissue was defined.
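As an illustration of the quantification step, the following Python sketch (NumPy only, synthetic data) builds spherical VOI masks such as those used for the reference tissues and computes a lesion-to-liver uptake ratio; the actual lesional VOIs in this study were drawn manually over three adjacent axial slices, which the sketch does not reproduce.

```python
import numpy as np

def spherical_voi_mask(shape, center_vox, radius_mm, voxel_size_mm):
    """Boolean mask of a spherical VOI on an isotropic volume."""
    zz, yy, xx = np.indices(shape)
    dist_mm = voxel_size_mm * np.sqrt((zz - center_vox[0]) ** 2 +
                                      (yy - center_vox[1]) ** 2 +
                                      (xx - center_vox[2]) ** 2)
    return dist_mm <= radius_mm

def uptake_ratio(volume, lesion_mask, liver_mask):
    """Lesion-to-liver ratio: mean counts in the lesion VOI divided by
    mean counts in the hepatic reference VOI."""
    return volume[lesion_mask].mean() / volume[liver_mask].mean()

# Synthetic SPECT volume with 4.7952 mm isotropic voxels (as reported above).
vol = np.random.default_rng(0).poisson(5.0, size=(64, 64, 64)).astype(float)
liver = spherical_voi_mask(vol.shape, (32, 20, 20), radius_mm=15.0,
                           voxel_size_mm=4.7952)   # 3.0 cm diameter reference
lesion = spherical_voi_mask(vol.shape, (32, 44, 44), radius_mm=7.5,
                            voxel_size_mm=4.7952)  # 1.5 cm diameter example
print(f"uptake ratio: {uptake_ratio(vol, lesion, liver):.2f}")
```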
Data Statistics
SPSS for Windows (version 25.0; SPSS, Chicago, IL, USA) was used for statistical analyses. Normal distribution was assessed using the Shapiro-Wilk test. The Wilcoxon signed-rank test was used to compare dependent and not-normally distributed continuous parameters. The unpaired Mann-Whitney U test was used to compare independent and not-normally distributed continuous parameters. Statistical significance was defined as a two-tailed p-value < 0.05.
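Although the study used SPSS, the same test battery is available in open-source tools; a minimal SciPy equivalent is sketched below, with illustrative input values only (not the study data).

```python
from scipy import stats

# Illustrative uptake ratios only (not the study data).
splenic = [2.82, 8.10, 4.50, 24.10, 6.00]
non_splenic = [0.49, 0.01, 0.74, 0.83]

# Normality check; small p-values argue for non-parametric tests.
print("Shapiro-Wilk:", stats.shapiro(splenic))

# Independent groups (splenic vs. other lesions): Mann-Whitney U.
print("Mann-Whitney U:",
      stats.mannwhitneyu(splenic, non_splenic, alternative="two-sided"))

# Paired, within-patient comparisons (e.g., pancreas vs. bone marrow):
# Wilcoxon signed-rank.
pancreas = [0.09, 0.19, 0.03, 0.67, 0.30]
marrow = [0.17, 0.22, 0.03, 0.45, 0.28]
print("Wilcoxon:", stats.wilcoxon(pancreas, marrow))
```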
Patient Characteristics
20 patients were included in the study. Characteristics of the patients scanned for evaluation of suspected splenic tissue are shown in Table 1. One patient was excluded from quantitative analyses due to an outlying low applied radioactivity (#18; 77 MBq).

Table 1 Patient characteristics and uptake characteristics. Uptake values are referenced to liver uptake. 1 If information not available, lesions were considered as previously known ("0"). Pos. = positive. Neg. = negative. * = one additional lesion excluded due to spill-in from the adjacent spleen. ** = excluded due to exocrine pancreatic insufficiency secondary to cystic fibrosis. M = male. F = female. PPPD = pylorus-preserving pancreaticoduodenectomy. n/a = not available.
Sites of Physiologic Uptake
The median ratio of physiologic uptake in the pancreas was 0.09 (range 0.03-0.67), in the bone marrow 0.17 (range 0.03-0.45), and in the orthotopic spleen 14.45 (range 3.04-29.82). Compared to orthotopic spleen tissues, the pancreas showed the lowest uptake characteristics (median 0.09 vs. 14.45, p = 0.004), followed by the bone marrow (median 0.17 vs. 14.45, p = 0.002). The uptake characteristics in bone marrow and pancreas were rather similarly low (median 0.09 vs. 0.17, p = 0.261), see Figure 2A. All values for physiologic uptake are shown in Table 1.
Suspected Splenic Lesions
55 abdominal lesions were investigated. One lesion was excluded from quantitative analyses due to spill-in from the adjacent orthotopic spleen. The number of lesions per patient rated as either positive or negative for splenic tissue in RBC scintigraphy/SPECT are displayed in Table 1. Median VOI size for quantification of suspected splenic lesions was 1.32 mL (range 0.33-13.45 mL). The median uptake ratio was significantly higher in splenic lesions (2.82 (range, 0.58-24.10), n = 47) compared to other lesions (0.49 (range, 0.01-0.83), n = 7), p < 0.001. The uptake quantification of positive and negative lesions is shown in Figure 2B.
Interestingly, at least 5 new, previously unnoted lesions were discovered due to increased uptake in RBC scintigraphy (see Table 1).
Definition of a Quantitative Cutoff in Relation to Physiologic Uptake in a Subgroup of Unequivocal Cases
First, only cases with unequivocal results based on uptake intensity in clinical reading (n = 5 with clearly suspected splenic tissue and n = 5 with clearly suspected metastases or other non-splenic tissues), additionally confirmed where available by histology or otherwise by imaging or clinical follow-up (n = 3 histology, n = 3 ultrasonography follow-up, n = 1 PET/CT follow-up, n = 3 clinical follow-up of >18 months each), were selected. These cases showed even more distinctly than the overall group (see Section 3.2.2) that the median uptake ratio was significantly higher in splenic lesions (8.10 (range, 4.50-24.10)) than in other lesions (0.49 (range, 0.01-0.74)), p = 0.004, see Figure 3A. In lesions rated as spleen tissue, the median uptake ratio was higher than the uptake in the pancreas (0.19 (range, 0.03-0.67), p = 0.063) and bone marrow (0.22 (range, 0.03-0.45), p = 0.031), but still somewhat lower than in the orthotopic spleen (19.95 (range, 3.04-29.32), p = 0.500), see Figure 3A,B. In lesions rated as non-splenic tissue, the median uptake ratio was rather similar to that of the pancreas (0.19 (range, 0.03-0.67), p = 0.156) and the bone marrow (0.22 (range, 0.03-0.45), p = 0.156), but lower than in the orthotopic spleen (19.95 (range, 3.04-29.32), p = 0.125), see Figure 3A,B.

Figure 2 Uptake referenced to liver uptake (uptake ratio). Positive = consensually rated as splenic tissue. Negative = consensually rated as non-splenic tissue. n.s. = not significant. ** p < 0.01, *** p < 0.001.

With regard to the inferior uptake intensity of suspected splenic lesions in comparison to the orthotopic spleens, approaches using an objective cutoff based on physiologic uptake in the orthotopic spleen were rejected for further analyses. Yet, the pancreas showed the lowest uptake characteristics, with clearly inferior uptake compared to positive lesions (median 0.19 vs. 8.10, p = 0.063) and even slightly inferior uptake compared to the bone marrow (0.19 vs. 0.22, p = 0.469). Therefore, the following quantitative cutoff, defined in relation to the pancreatic uptake, is proposed for the differentiation of splenic from non-splenic lesions:

Cutoff = (median uptake ratio in pancreas + 2 × SD) = (0.19 + 2 × 0.28) = 0.75 (1)
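Equation (1) and its application can be restated in a few lines of Python; the median and SD are the subgroup values reported above, and the example lesional ratios (0.49, 1.22, 6.0) are taken from the worked cases presented later in this paper.

```python
# Subgroup values reported above.
median_pancreas_ratio = 0.19
sd_pancreas_ratio = 0.28

cutoff = median_pancreas_ratio + 2 * sd_pancreas_ratio  # Equation (1): 0.75

def classify(lesion_ratio: float) -> str:
    """Rate a suspected lesion against the pancreas-derived cutoff."""
    return "splenic" if lesion_ratio > cutoff else "non-splenic"

for ratio in (0.49, 1.22, 6.0):  # lesional ratios from the worked cases
    print(f"uptake ratio {ratio:.2f} -> {classify(ratio)} (cutoff {cutoff:.2f})")
```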
Proof-of-Principle Quantitative Reading in All Cases in Direct Comparison to Standard Qualitative Clinical Reading
When using the objective, reader-independent cutoff based on physiologic pancreatic uptake as proposed above in all cases included in the quantitative analysis, the lesions were classified concordantly with standard clinical reading in 50 out of 54 cases. Of the positive lesions, 44 out of 47 were concordant with the clinical reading, and 6 out of 7 negative lesions were concordant with the clinical reading, see Figure 4.
Exemplary Use of the Quantitative Cutoff in Histology-Proven Cases
To illustrate the benefit of RBC scintigraphy using a quantitative cutoff for the differentiation of splenic from non-splenic lesions in clinical routine, a case with negative reading (i.e., lesional uptake below the cutoff), a case with positive reading (i.e., lesional uptake above the cutoff), as well as two challenging cases in which the cutoff aided the clinical reading are presented. Figure 5 illustrates the case of a lesion rated as non-splenic tissue. A 67-year-old woman with an incidental lesion in the pancreatic tail presented for somatostatin receptor (SSTR)-targeted PET/CT (left). Strong focal tracer uptake in PET/CT suggested a neuroendocrine pancreatic neoplasm (see left arrow). The latter, however, cannot be differentiated from an intrapancreatic splenule in PET/CT, since both entities display a comparably high SSTR-ligand uptake despite their diametrically opposed prognoses (note the high physiologic SSTR-ligand uptake in the spleen; see asterisk). Yet, the pancreatic lesion was not visible in endoscopy and, therefore, not accessible to ultrasound-guided fine needle aspiration cytology and biopsy. Hence, the interdisciplinary tumor board recommended 99mTc-labelled denatured red blood cell scintigraphy to rule out a splenule (right; SPECT fused to low-dose CT). Here, no increased uptake was noted in the pancreatic lesion (see right arrow). Taking into account the scintigraphy results, a laparoscopic distal pancreatectomy was performed, and histopathology ultimately confirmed the diagnosis of a neuroendocrine tumor of the pancreas. Using the quantitative reading in selective spleen scintigraphy, the lesion would have been correctly classified (lesional uptake ratio lower than the cutoff, i.e., 0.49 < 0.75).
Figure 6 illustrates the case of a lesion rated as splenic tissue. A 63-year-old man with newly diagnosed esophageal adenocarcinoma presented with multiple unclear abdominal lesions. The patient had experienced traumatic rupture of the spleen several years before. 99mTc-labelled denatured red blood cell scintigraphy was performed (SPECT fused to low-dose CT) in order to differentiate splenosis secondary to rupture of the spleen from potential lymph node metastases and peritoneal carcinomatosis secondary to the esophageal carcinoma. The lesions showed pronounced tracer uptake and, therefore, suggested splenic tissue (e.g., see the perigastric lesion in the figure). Hence, after 3 neoadjuvant cycles of fluorouracil plus leucovorin, oxaliplatin, and docetaxel (FLOT) chemotherapy, a thoracoabdominal partial esophagectomy with total gastrectomy and locoregional lymphadenectomy was performed with curative intent. Histopathology showed no evidence of lymph node metastases or peritoneal carcinomatosis (ypT0, ypN0 (0/34), L0, V0, Pn0, UICC stage 0, locally R0). Using the quantitative reading in selective spleen scintigraphy, the lesion would have been correctly classified (lesional uptake ratio higher than the cutoff, i.e., 1.22 > 0.75).

Figure 7 illustrates two challenging cases with visually similar uptake in spleen scintigraphy. In both cases, use of the newly proposed quantitative cutoff approach would have led to the correct diagnosis and, therefore, would have helped significantly with the unclear visual findings in clinical reading. Figure 7A shows the scan of a 60-year-old woman referred for spleen scintigraphy due to an incidental, unclear perisplenic lesion. Using merely visual reading, the lesion would have been prone to being classified as negative, as it has a significantly lower uptake than the nearby orthotopic spleen. Using the cutoff, however, the lesion would have been correctly rated as positive with a ratio of 6.0 (clearly > 0.75). Clinical follow-up of >18 months showed no evidence of progression of the perisplenic lesion. Figure 7B shows the scan of a 56-year-old woman with an unclear lesion near the pancreatic tail. Although displaying minimally lower uptake in visual reading as compared to the lesion in Figure 7A, there is still more uptake than in a clearly negative case (e.g., Figure 5) and, therefore, the lesion would have been at risk of being classified as positive in standard clinical reading. Using the newly proposed quantitative cutoff approach, however, the lesion would have been correctly rated as negative (ratio < 0.75). In this case, an endosonography was performed and the histopathological findings were most compatible with a partially sclerosed lymph node.
Discussion
Splenosis is the heterotopic autotransplantation of splenic tissue following splenic trauma or splenectomy and is mostly found in the peritoneal, pelvic, or even thoracic cavity. The incidence of splenosis varies from 26-65% in patients with traumatic splenic rupture [21,22]. Heterotopic splenic tissue is diagnostically challenging, as it can easily be mistaken for metastatic peritoneal spread in conventional imaging, especially in patients with a known tumoral disease, and could, therefore, lead to unnecessary surgeries or other procedures with potential harm to the patient. Radiolabeled heat-damaged red blood cell (RBC) scintigraphy is a reliable and non-invasive imaging method to confirm the presence of splenic tissue [1]. However, until now there has been no defined standard quantitative approach to clinical reading of RBC scintigraphy.
Therefore, to distinguish between splenic tissue and metastatic abdominal sites in inconclusive cases in the future, we conducted a single-center analysis investigating the uptake characteristics in suspected splenic lesions in direct quantitative correlation to sites of physiologic uptake. To the best of our knowledge, this is the first study to describe a visual and quantitative method for identifying splenic tissue in relation to the surrounding tissues.
Compared to other lesions, splenic lesions showed a significantly higher uptake ratio in RBC scintigraphy. Interestingly, however, suspected splenic lesions had inferior uptake compared to orthotopic spleens (e.g., see Figure 3A,B). Therefore, due to the quantitatively inferior uptake intensity of suspected splenic lesions in comparison to the orthotopic spleens, approaches using an objective cutoff based on physiologic uptake in the orthotopic spleen were rejected for further analyses. The cause of this differential uptake remains unclear; one might speculate that a partial volume effect contributes to this phenomenon (the median VOI size for quantification of suspected splenic lesions was low at only 1.32 mL, with a range of 0.33-13.45 mL). In addition, the range of uptake in orthotopic spleens was rather high (3.04-29.32, see error bars in Figure 3). Instead, the uptake characteristics in the pancreas and bone marrow seemed more suitable for the definition of an objective quantitative cutoff, as they were (1) comparably low in all cases (low ranges of 0.03-0.67 and 0.03-0.45, respectively); (2) similarly low as the uptake in negative lesions (range, 0.01-0.87); and (3) invariably lower than the uptake in positive lesions (median 8.10 vs. 0.19, p = 0.063; see Figure 3A,B). As the pancreatic uptake proved to be even slightly lower than the uptake in the bone marrow (0.19 vs. 0.22, p = 0.469), and as intrapancreatic splenules vs. metastases are an important clinical differential diagnosis to be addressed with RBC scintigraphy, the uptake characteristics of the pancreas were eventually used for definition of the cutoff. Thus, we proposed a quantitative cutoff based solely on the physiologic uptake of the pancreas (and not a cutoff centered between the latter and the orthotopic spleen uptake) for the differentiation of splenic from non-splenic lesions, i.e., 0.75. This cutoff was established in a subgroup of unequivocal cases with, where available, histology or imaging/clinical follow-up for control.
The proof-of-principle application of this cutoff showed a concordant classification of the corresponding lesions in 92.6% of cases with regard to standard clinical reading. To illustrate the potential application of the cutoff in clinical routine, illustrative cases with available histology were presented. Here, applying the cutoff could indeed exclude or confirm a splenule, with subsequent clinical benefit; e.g., in one case it supported the suspected diagnosis of an intrapancreatic neoplasia and subsequently led to the decision to perform a distal pancreatectomy. Histopathology ultimately confirmed the diagnosis of a neuroendocrine tumor of the pancreas and not a splenule, as correctly predicted by RBC scintigraphy. Of note, the newly proposed cutoff method may especially support decision-making in challenging cases which would potentially have been misclassified using standard visual reading alone (see Figure 7), illustrating the presumable added value of quantitative reading in clinical routine.
Limitations of this study arise from the rather small sample size and the merely internal validation in a single-institution setting, which do not per se support a more general applicability of the established method. Yet it has to be noted that this is probably the largest number of RBC scans quantitatively analyzed to date in one study for the assessment of splenic tissue, and future studies can therefore relate to and build upon these initial results. Another limitation is the lack of histologic validation for most cases, which is partly due to the retrospective study design and mainly to the much higher number of positively rated lesions, which regularly do not result in a biopsy or resection within routine clinical practice. Therefore, although 20 patients were enrolled in total, the quantitative cutoff was established in a subgroup of only 10 patients with, where available, histological or otherwise imaging/clinical follow-up. However, further prospective studies for the detection of splenic tissues are underway to verify this newly defined cutoff in more histologically verified cases.
This study shows for the first time that using a defined quantitative cutoff in RBC scintigraphy is a feasible method for the assessment of suspected splenic tissue as well as for the detection of previously unknown sites of splenosis. This method could avoid unnecessary biopsies, surgical exploration, further imaging, or misguided oncological procedures.
Informed Consent Statement:
Patients gave written consent prior to imaging as part of the clinical routine. Written informed consent for the retrospective data analysis was waived by the local Ethics Committee.
Data Availability Statement:
The numerical data presented in this study are available in Table 1. Further requests can be addressed to the corresponding author. | 2022-02-17T16:08:19.511Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "9a6c7ce57d49ec3f8b505c00b42ed00d201f5717",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4418/12/2/486/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "82ff5b5eda2a951bd017a0cad63ec1b6be78af1c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237508410 | pes2o/s2orc | v3-fos-license | Pediatric Weight Management Through mHealth Compared to Face-to-Face Care: Cost Analysis of a Randomized Control Trial
Background Mobile health (mHealth) may improve pediatric weight management capacity and the geographical reach of services, and overcome barriers to attending physical appointments using ubiquitous devices such as smartphones and tablets. This field remains an emerging research area with some evidence of its effectiveness; however, there is a scarcity of literature describing economic evaluations of mHealth interventions. Objective We aimed to assess the economic viability of using an mHealth approach as an alternative to standard multidisciplinary care by evaluating the direct costs incurred within treatment arms during a noninferiority randomized controlled trial (RCT). Methods A digitally delivered (via a smartphone app) maintenance phase of a pediatric weight management program was developed iteratively with patients and families using evidence-based approaches. We undertook a microcosting exercise and budget impact analysis to assess the costs of delivery from the perspective of the publicly funded health care system. Resource use was analyzed alongside the RCT, and we estimated the costs associated with the staff time and resources for service delivery per participant. Results In total, 109 adolescents participated in the trial, and 84 participants completed the trial (25 withdrew from the trial). We estimated the mean direct cost per adolescent attending usual care at €142 (SD 23.7), whereas the cost per adolescent in the mHealth group was €722 (SD 221.1), with variations depending on the number of weeks of treatment completion. The conversion rate for the reference year 2013 was $1=€0.7525. The costs incurred for those who withdrew from the study ranged from €35 to €681, depending on the point of dropout and study arm. The main driver of the costs in the mHealth arm was the need for health professional monitoring and support for patients on a weekly basis. The budget impact for offering the mHealth intervention to all newly referred patients in a 1-year period was estimated at €59,046 using the assessed approach. Conclusions This mHealth approach was substantially more expensive than usual care, although modifications to the intervention may offer opportunities to reduce the mHealth costs. The need for monitoring and support from health care professionals (HCPs) was not eliminated using this delivery model. Further research is needed to explore the cost-effectiveness and economic impact on families and from a wider societal perspective. Trial Registration ClinicalTrials.gov NCT01804855; https://clinicaltrials.gov/ct2/show/NCT01804855
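As a quick arithmetic restatement of the headline figures, the sketch below (Python) computes the incremental direct cost per adolescent and converts it with the stated 2013 exchange rate; all inputs are taken from the abstract above.

```python
# Headline figures from the abstract (reference year 2013).
usd_to_eur = 0.7525          # stated conversion rate, $1 = EUR 0.7525
cost_usual_eur = 142.0       # mean direct cost per adolescent, usual care
cost_mhealth_eur = 722.0     # mean direct cost per adolescent, mHealth arm

incremental_eur = cost_mhealth_eur - cost_usual_eur
print(f"incremental direct cost per adolescent: EUR {incremental_eur:.0f} "
      f"(about USD {incremental_eur / usd_to_eur:.0f})")
```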
1a-i) Identify the mode of delivery in the title
Identify the mode of delivery. Preferably use "web-based" and/or "mobile" and/or "electronic game" in the title. Avoid ambiguous terms like "online", "virtual", "interactive". Use "Internet-based" only if Intervention includes non-web-based Internet components (e.g. email), use "computer-based" or "electronic" only if offline products are used. Use "virtual" only in the context of "virtual reality" (3-D worlds). Use "online" only in the context of "online support groups". Complement or substitute product names with broader terms for the class of products (such as "mobile" or "smart phone" instead of "iphone"), especially if the application runs on different platforms.
Does your paper address subitem 1a-i? * Copy and paste relevant sections from manuscript title (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

1a-ii) Non-web-based components or important co-interventions in title
Mention non-web-based components or important co-interventions in title, if any (e.g., "with telephone support").
Does your paper address subitem 1a-ii?
No. As the mobile arm of the trial predominantly involved use of the app at home, this was not included in the title descriptor.
1a-iii) Primary condition or target group in the title
Mention primary condition or target group in the title, if any (e.g., "for children with Type I Diabetes").
Example: A Web-based and Mobile Intervention with Telephone Support for Children with Type I Diabetes: Randomized Controlled Trial
Does your paper address subitem 1a-iii? *
Yes: "Pediatric Weight Management" is the term used in the title, which is aimed at conveying that the population is children with obesity. This is clarified in the text.
Does your paper address subitem 1b-i? *
"A digitally delivered (via a smartphone app) maintenance phase of a pediatric weight management program was developed iteratively with patients and families using evidence-based approaches." We do not elaborate on the details of the intervention because there is a published protocol (referenced in the main text) and, additionally, a manuscript reporting the primary outcomes in preparation.

1b-ii) Level of human involvement in the METHODS section of the ABSTRACT
Clarify the level of human involvement in the abstract, e.g., use phrases like "fully automated" vs. "therapist/nurse/care provider/physician-assisted" (mention the number and expertise of providers involved, if any). (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.)
Does your paper address subitem 1b-ii?
This is not addressed in the abstract. Because this manuscript focuses on the cost analysis only, we avoided duplicating the detailed descriptions of the intervention that are reported elsewhere.
1b-iii) Open vs. closed, web-based (self-assessment) vs. face-to-face assessments in the METHODS section of the ABSTRACT
Mention how participants were recruited (online vs. offline), e.g., from an open-access website, from a clinic, or from a closed online user group (closed user-group trial), and clarify whether this was a purely web-based trial or there were face-to-face components (as part of the intervention or for assessment). Clearly say if outcomes were self-assessed through questionnaires (as is common in web-based trials). Note: In traditional offline trials, an open trial (open-label trial) is a type of clinical trial in which both the researchers and participants know which treatment is being administered. To avoid confusion, use "blinded" or "unblinded" to indicate the level of blinding instead of "open", as "open" in web-based trials usually refers to "open access" (i.e., participants can self-enrol). (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.)

Does your paper address subitem 1b-iii?
Yes: "A digitally delivered (via a smartphone app) maintenance phase of a pediatric weight management program". This sentence conveys that this trial was undertaken within an existing clinical service. "We undertook a microcosting exercise and budget impact analysis to assess the costs of delivery from the perspective of the publicly funded health care system. Resource use was analyzed alongside the RCT, and we estimated the costs associated with the staff time and resources for service delivery per participant." No details of outcome measures are provided other than costs, as they are the only outcome assessed in this manuscript.
1b-iv) RESULTS section in abstract must contain use data
Report the number of participants enrolled/assessed in each group, the use/uptake of the intervention (e.g., attrition/adherence metrics, use over time, number of logins, etc.), in addition to primary/secondary outcomes. (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.)
Does your paper address subitem 1b-iv?
Yes: "In total, 109 adolescents participated in the trial, and 84 participants completed the trial (25 withdrew from the trial). We estimated the mean direct cost per adolescent attending usual care at €142 (SD 23.7), whereas the cost per adolescent in the mHealth group was €722 (SD 221.1)."
Does your paper address subitem 1b-v?
This paper does not report the primary outcome of the trial but rather the secondary outcome of cost. This is discussed in the abstract: "This mHealth approach was substantially more expensive than usual care, although modifications to the intervention may offer opportunities to reduce the mHealth costs."
2a-i) Problem and the type of system/solution
Describe the problem and the type of system/solution that is the object of the study: intended as a stand-alone intervention vs. incorporated in a broader health care program? Intended for a particular patient population? Goals of the intervention, e.g., being more cost-effective than other interventions, replacing or complementing other solutions? (Note: Details about the intervention are provided in "Methods" under 5.)

Does your paper address subitem 2a-i?
Yes: "There is evidence that telemedicine interventions can support self-management of nutrition and physical activity in children and adolescents [5]; however, there is a scarcity of studies focusing on the economic evaluations of such interventions, particularly for mHealth interventions developed to incorporate evidence-based approaches [1,5,6]."

Does your paper address CONSORT subitem 2b? *
Yes: "We aimed to assess the direct costs of delivering the mHealth intervention to participants in the trial relative to usual care participants to inform future designs of mHealth trials to assess effectiveness and cost-effectiveness within this population as well as contribute to the evidence base for the economic viability of integrating mHealth into pediatric weight management services in future."

Does your paper address CONSORT subitem 3a? *
"The pilot noninferiority RCT"
Does your paper address CONSORT subitem 3b? *
This was not addressed, as there were no relevant changes to report.

3b-i) Bug fixes, downtimes, content changes
Bug fixes, downtimes, content changes: eHealth systems are often dynamic systems. A description of changes to methods therefore also includes important changes made to the intervention or comparator during the trial (e.g., major bug fixes or changes in the functionality or content) (5-iii) and other "unexpected events" that may have influenced the study design, such as staff changes, system failures/downtimes, etc. [2].

Does your paper address subitem 3b-i?
No, this aspect is not addressed in this paper, as it was not a major issue during the trial; such issues will be reported with the primary outcome findings.

4a) Eligibility criteria for participants

Does your paper address CONSORT subitem 4a? *
These are not provided, as this paper focuses on cost; however, a reference to the published protocol, which details the eligibility criteria, is provided in the main text (reference 13).
4a-i) Computer / Internet literacy
Computer / Internet literacy is often an implicit "de facto" eligibility criterion; this should be explicitly clarified.
Does your paper address subitem 4a-i?
4a-ii) Open vs. closed, web-based vs. face-to-face assessments
Mention how participants were recruited (online vs. offline), e.g., from an open-access website or from a clinic, and clarify whether this was a purely web-based trial or there were face-to-face components (as part of the intervention or for assessment), i.e., to what degree the study team got to know the participant. In online-only trials, clarify whether participants were quasi-anonymous and whether having multiple identities was possible or whether technical or logistical measures (e.g., cookies, email confirmation, phone calls) were used to detect/prevent these.

Does your paper address subitem 4a-ii? *
"Eligible trial participants were recruited from the W82GO Child and Adolescent Weight Management Service, which is the only dedicated Tier 3 service for children and adolescents with obesity in the Republic of Ireland. All new adolescent referrals made to the service by a pediatrician were screened against the inclusion/exclusion criteria. Those eligible were invited to participate in the study following the consideration of the study by their parents and upon receipt of parental consent and adolescent assent forms."
"In total, 109 adolescent participants with clinical obesity (40 boys, 69 girls) were recruited through the W82GO service and received phase 1 of the treatment face to face before being randomized to receive the maintenance phase (phase 2) of treatment either through usual care (three additional face-to-face booster sessions with the multidisciplinary team either through one-to-one sessions or group sessions) or remotely via the mHealth app (Reactivate) [13]." "The sensitivity analysis showed that the main driver of costs for the mHealth group was the HCP time spent managing the mHealth service arm of the trial (platform administration, individualized care plans, providing feedback, troubleshooting, checking in). This was estimated to be approximately 12 hours per adolescent over 46 weeks (approximately 15 minutes per adolescent per week)"
4a-iii) Information given during recruitment
Information given during recruitment. Specify how participants were briefed for recruitment and in the informed consent procedures (e.g., publish the informed consent documentation as an appendix; see also item X26), as this information may have an effect on user self-selection and user expectations, and may also bias results.
Does your paper address subitem 4a-iii?
These details will be reported separately with the primary outcomes for the study once the analyses are completed.
4b) Settings and locations where the data were collected

Does your paper address CONSORT subitem 4b? *
Yes, details of the data collection relevant to the cost analysis only are provided: "Participant data including trial group data, whether they commenced one-to-one or group treatment, the number of sessions attended, and records of treatment completion or withdrawal stages, were collected during the trial and used for this analysis to ascertain variations in costs per patient. Cost data were obtained from multiple sources. For face-to-face maintenance sessions, we used a time-driven activity-based microcosting method [18] to capture the direct costs associated with the face-to-face time of health care professionals (HCPs) with patients. We also included administrative time associated with appointment preparation. We interviewed personnel to map workflow processes associated with usual care to accurately assess the unit costs of program appointments and dropout/nonattendance costs. A record of the trial costs was maintained by the principal investigator, and it included invoices received for contracted mHealth service delivery, the related expenses, and the time allocated for checking in, monitoring, and processing participants. During baseline data collection, parents/carers were asked to provide details of their annual income, current occupation, the make and model of their car (if any), mode of transport, and distance traveled to attend hospital appointments."

4b-i) Report if outcomes were (self-)assessed through online questionnaires
Clearly report if outcomes were (self-)assessed through online questionnaires (as is common in web-based trials) or otherwise.
Does your paper address subitem 4b-i? *
As the outcomes reported in this paper are related to cost only, this is explained in the data collection section.
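For readers unfamiliar with the time-driven activity-based microcosting method quoted under item 4b, the core calculation multiplies each staff member's time on an activity by a per-minute capacity cost rate (salary divided by available working minutes) and sums across activities. The sketch below illustrates that logic only; the roles, salaries, and durations are hypothetical placeholders, not figures from the study.

```python
# Minimal sketch of time-driven activity-based costing (TDABC) for one
# face-to-face appointment. All roles, salaries, and durations are
# hypothetical placeholders, not study figures.

AVAILABLE_MINUTES_PER_YEAR = 1680 * 60  # assumed 1680 working hours/year, in minutes

def cost_per_minute(annual_salary_eur: float) -> float:
    """Capacity cost rate: annual salary divided by available working minutes."""
    return annual_salary_eur / AVAILABLE_MINUTES_PER_YEAR

# (role, hypothetical annual salary in EUR, minutes spent on the appointment)
activities = [
    ("dietitian",       55_000, 30),  # face-to-face session
    ("physiotherapist", 52_000, 30),  # face-to-face session
    ("administrator",   35_000, 10),  # appointment preparation
]

appointment_cost = sum(
    cost_per_minute(salary) * minutes for _, salary, minutes in activities
)
print(f"Estimated direct cost per appointment: EUR {appointment_cost:.2f}")
```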
4b-ii) Report how institutional affiliations are displayed
Report how institutional affiliations are displayed to potential participants [on ehealth media], as affiliations with prestigious hospitals or universities may affect volunteer rates, use, and reactions with regard to an intervention. (Not a required item; describe only if this may bias results.)
Does your paper address subitem 4b-ii?
5) The interventions for each group with sufficient details to allow replication, including how and when they were actually administered
5-i) Mention names, credentials, affiliations of the developers, sponsors, and owners
Mention names, credentials, and affiliations of the developers, sponsors, and owners [6] (if the authors/evaluators are owners or developers of the software, this needs to be declared in a "Conflict of interest" section or mentioned elsewhere in the manuscript).
Does your paper address subitem 5-i?
"Acknowledgments: This study was funded by the RCSI University of Medicine and Health Sciences StAR program (grant 2151) and carried out as part of the Health Research Board (HRB) SPHeRE training program (SPHeRE/2013/1). The randomized controlled trial on which this study was based was funded by the HRB (HFP/2011/54) and the Children's Fund for Health & National Children's Research Centre of Ireland (PAC11-58). The funders had no role in the design of this study, including the collection, analyses, or interpretation of data, or in the preparation of the manuscript." Conflict of interest: We have declared that the senior author was responsible for the design and owns the Reactivate system.
5-ii) Describe the history/development process
Describe the history/development process of the application and previous formative evaluations (e.g., focus groups, usability testing), as these will have an impact on adoption/use rates and help with interpreting results.
Does your paper address subitem 5-ii?
"The platform was designed using a participatory approach with the intended end users (adolescents with obesity and their parents) [14,15]. The findings of the RCT suggested that substituting face-to-face maintenance care with the mHealth intervention did not adversely affect the change in the primary outcome (BMI-SDS) of the overall treatment. Although study attrition was substantial and was similar to other pediatric trials [16], there was insufficient power to statistically confirm noninferiority [17]."
5-iii) Revisions and updating
Revisions and updating. Clearly mention the date and/or version number of the application/intervention (and comparator, if applicable) evaluated, or describe whether the intervention underwent major changes during the evaluation process, or whether the development and/or content was "frozen" during the trial. Describe dynamic components such as news feeds or changing content which may have an impact on the replicability of the intervention (for unexpected events see item 3b).
This is not described within the manuscript.
5-iv) Quality assurance methods
Provide information on quality assurance methods to ensure accuracy and quality of information provided [1], if applicable.
Does your paper address subitem 5-iv?
We used a bottom-up costing approach (microcosting), as well as financial records from the trial, and conducted sensitivity analyses to ensure that our cost analysis was as pragmatic as possible. We also declared our economic perspective to be that of the publicly funded health care system only.

5-v) Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used
Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used. Replicability (i.e., other researchers should in principle be able to replicate the study) is a hallmark of scientific reporting.
Does your paper address subitem 5-v?
This is not described within the manuscript.
5-vi) Digital preservation
Digital preservation: Provide the URL of the application, but as the intervention is likely to change or disappear over the course of the years, also make sure the intervention is archived (Internet Archive, webcitation.org, and/or publishing the source code or screenshots/videos alongside the article). As pages behind login screens cannot be archived, consider creating demo pages that are accessible without login.
Does your paper address subitem 5-vi?
5-vii) Access
Access: Describe how participants accessed the application, in what setting/context, whether they had to pay (or were paid) or not, and whether they had to be a member of a specific group. If known, describe how participants obtained "access to the platform and Internet" [1]. To ensure access for editors/reviewers/readers, consider providing a "backdoor" login account or demo mode for reviewers/readers to explore the application (also important for archiving purposes, see vi).
Does your paper address subitem 5-vii? *
This is not described within this manuscript but will be reported with the primary outcome for the study.

Does your paper address subitem 5-viii? *
These are described in detail within the published protocol that we reference within the text.
5-ix) Describe use parameters
Describe use parameters (e.g., intended "doses" and optimal timing for use). Clarify what instructions or recommendations were given to the user, e.g., regarding timing, frequency, heaviness of use, if any, or was the intervention used ad libitum.
Does your paper address subitem 5-ix?
This is not described within the manuscript.
5-x) Clarify the level of human involvement
Clarify the level of human involvement (care providers or health professionals, also technical assistance) in the e-intervention or as a co-intervention (detail the number and expertise of professionals involved, if any, as well as the "type of assistance offered, the timing and frequency of the support, how it is initiated, and the medium by which the assistance is delivered"). It may be necessary to distinguish between the level of human involvement required for the trial and the level of human involvement required for a routine application outside of an RCT setting (discuss under item 21, generalizability).
Does your paper address subitem 5-x?
"...the HCP time spent managing the mHealth service arm of the trial (platform administration, individualized care plans, providing feedback, troubleshooting, checking in). This was estimated to be approximately 12 hours per adolescent over 46 weeks (approximately 15 minutes per adolescent per week) during the trial."
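The quoted effort can be sanity-checked with simple arithmetic: 15 minutes per week over 46 weeks is 690 minutes, or about 11.5 hours, consistent with the reported "approximately 12 hours". The sketch below also shows how that time translates into a per-patient monitoring cost; the hourly rate is a hypothetical placeholder, not a study figure.

```python
# Sanity check of the reported HCP monitoring effort, and a rough view of
# how it drives per-patient cost. The hourly rate is hypothetical.

minutes_per_week = 15
weeks = 46

total_hours = minutes_per_week * weeks / 60
print(f"Monitoring time per adolescent: {total_hours:.1f} h")  # ~11.5 h

hcp_hourly_rate_eur = 40  # hypothetical placeholder
monitoring_cost = total_hours * hcp_hourly_rate_eur
print(f"Implied monitoring cost per adolescent: EUR {monitoring_cost:.0f}")
```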
5-xi) Report any prompts/reminders used
Report any prompts/reminders used: Clarify if there were prompts (letters, emails, phone calls, SMS) to use the application, what triggered them, frequency etc. It may be necessary to distinguish between the level of prompts/reminders required for the trial, and the level of prompts/reminders for a routine application outside of a RCT setting (discuss under item 21 -generalizability).
Does your paper address subitem 5-xi? *
These are described in detail within the published protocol that we reference within the text.
Does your paper address subitem 5-xii? *
"In total, 109 adolescent participants with clinical obesity (40 boys, 69 girls) were recruited through the W82GO service and received phase 1 of the treatment face to face before being randomized to receive the maintenance phase (phase 2) of treatment either through usual care (three additional face-to-face booster sessions with the multidisciplinary team either through one-to-one sessions or group sessions) or remotely via the mHealth app (Reactivate)." The interventions are described in further detail within the published protocol that we reference within the text.
Does your paper address CONSORT subitem 6a? *
These are described in detail within the published protocol that we reference within the text; however, they are not detailed in this manuscript, as it assessed costs only. We acknowledge in the limitations that this analysis was exploratory.
Does your paper address subitem 6a-i?
These are described within the published protocol that we reference within the text and will also be reported with the primary outcome.
6a-ii) Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored
Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored (logins, logfile analysis, etc.). Use/adoption metrics are important process outcomes that should be reported in any ehealth trial.

Does your paper address subitem 6a-ii?
Stage of dropout was recorded (or, if applicable, the stage at which the participant became unavailable for contact). This was accounted for in the cost analysis: "Withdrawal from or partial completion of the mHealth intervention was estimated to cost €310 to €680, depending on when the participant dropped out (see Multimedia Appendix 1). Accounting for partial completion and attrition costs, the mean cost incurred for those in the usual care arm was €142 (SD 23.7) (group participants: mean €133, SD 12.2; one-to-one participants: mean €177, SD 22.4). The mean cost for those randomized to use mHealth was estimated to be €722 (SD 221)."

6a-iii) Describe whether, how, and when qualitative feedback from participants was obtained
Describe whether, how, and when qualitative feedback from participants was obtained (e.g., through emails, feedback forms, interviews, focus groups).
Does your paper address subitem 6a-iii?
Participant feedback was not addressed in the analysis presented in this paper.

6b) Any changes to trial outcomes after the trial commenced, with reasons

Does your paper address CONSORT subitem 6b? *
This is not addressed in the manuscript, as there were no relevant changes to report. We do, however, explain why a direct cost analysis was completed as opposed to a cost-effectiveness analysis: "Although study attrition was substantial and was similar to other pediatric trials [16], there was insufficient power to statistically confirm noninferiority [17]. As a result of this, and in addition to high levels of missing secondary outcome data including health-related quality-of-life data, a full cost-effectiveness analysis was not possible despite conducting a clinical trial."
7a-i) Describe whether and how expected attrition was taken into account when calculating the sample size
Describe whether and how expected attrition was taken into account when calculating the sample size.
Does your paper address subitem 7a-i?
Yes: "The null hypothesis in the trial protocol was that the mHealth intervention would have a positive effect on change in the BMI-SDS but that this change will be inferior to that observed in usual care. Based on a reduction of 0.21 in the BMI-SDS at 12 months, an SD of 0.24 in the usual care group, and a noninferiority limit of 0.12, the sample size at 80% power was calculated to be 50 per group or 100 in total. To allow for expected attrition, the target recruitment sample size was 134 [13]."
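The quoted figure is consistent with the standard two-group noninferiority sample-size formula, n = 2(z_alpha + z_beta)^2 * sigma^2 / delta^2. The check below assumes a one-sided alpha of 0.05; the protocol excerpt does not state the significance level, so that is an assumption.

```python
# Reproducing the protocol's per-group sample size under the standard
# noninferiority formula. One-sided alpha = 0.05 is assumed, not stated.
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf
z_alpha = z(0.95)        # one-sided alpha = 0.05 (assumed)
z_beta = z(0.80)         # 80% power
sd, margin = 0.24, 0.12  # SD of BMI-SDS change; noninferiority limit

n_per_group = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / margin ** 2
print(ceil(n_per_group))  # 50 per group, i.e., 100 in total
```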
7b) When applicable, explanation of any interim analyses and stopping guidelines

Does your paper address CONSORT subitem 7b? *
This was not addressed in the manuscript.

8a) Method used to generate the random allocation sequence (NPT: When applicable, how care providers were allocated to each trial group)

Does your paper address CONSORT subitem 8a? *
We do not report this but rather reference the published protocol, which states: "Adolescents who are eligible will be randomised to either a W82GO (usual care) group or a smartphone experimental group by the research team using a secure online randomisation system with full allocation concealment. They will be stratified by gender and parental obesity, and 134 adolescents will be randomised in total using a 1:1 randomisation ratio. After randomisation, adolescents will receive a study code, which will be used to analyse all the data related to that child."

8b) Type of randomisation; details of any restriction (such as blocking and block size)

Does your paper address CONSORT subitem 8b? *
This was not addressed in the manuscript, as it is not applicable.

9) Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned

Does your paper address CONSORT subitem 9? *
We do not report this but rather reference the published protocol, as quoted under item 8a above.

10) Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions

Does your paper address CONSORT subitem 10? *
This will be reported with the primary outcome.

11a) If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how (NPT: Whether or not those administering co-interventions were blinded to group assignment)

11a-i) Specify who was blinded, and who wasn't
Specify who was blinded, and who wasn't. Usually, in web-based trials it is not possible to blind the participants [1,3] (this should be clearly acknowledged), but it may be possible to blind outcome assessors, those doing data analysis, or those administering co-interventions (if any).
Does your paper address subitem 11a-i? *
This will be reported with the primary outcome and is also detailed in the published protocol, which we reference in the text: "After randomisation and assignment to the study group, participants will be given a study identification number. The assessor of the primary outcome will not know the allocated intervention for the participant, to reduce the risk of bias. Given the nature of the intervention, it is not possible to blind the participants or the care providers. An independent data analyst will be blinded during the trial by the use of a fully anonymised dataset."

11a-ii) Discuss e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator"
Informed consent procedures (4a-ii) can create biases and certain expectations; discuss, e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator".
Does your paper address subitem 11a-ii?
This was not addressed in this manuscript.

11b) If relevant, description of the similarity of interventions (this item is usually not relevant for ehealth trials, as it refers to the similarity of a placebo or sham intervention to an active medication/intervention)

Does your paper address CONSORT subitem 11b? *
This was not addressed in detail in this manuscript, though the format of the interventions is described.

12a) Statistical methods used to compare groups for primary and secondary outcomes (NPT: When applicable, details of whether and how the clustering by care providers or centers was addressed)

Does your paper address CONSORT subitem 12a? *
This was not addressed in this manuscript, as it is not applicable to the cost analysis.

12a-i) Imputation techniques to deal with attrition / missing values
Imputation techniques to deal with attrition / missing values: Not all participants will use the intervention/comparator as intended, and attrition is typically high in ehealth trials. Specify how participants who did not use the application or dropped out from the trial were treated in the statistical analysis (a complete case analysis is strongly discouraged, and simple imputation techniques such as LOCF may also be problematic [4]).
Does your paper address subitem 12a-i? *
We describe how the cost analysis accounted for attrition by calculating the actual costs incurred by those who withdrew.
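One way to implement that accounting is to accrue each participant's cost only up to the stage at which they dropped out and then average over everyone randomized. The sketch below illustrates the idea only; the stage costs and dropout pattern are hypothetical placeholders and do not reproduce the study's figures (mHealth withdrawals were reported to cost €310 to €680, depending on the dropout point).

```python
# Illustrative attrition-adjusted mean cost: each participant accrues the
# cumulative cost of the stages they actually completed. The stage costs
# and the dropout pattern below are hypothetical placeholders.

stage_costs_eur = [310, 150, 120, 100]  # incremental cost of each stage

def cost_incurred(stages_completed: int) -> int:
    """Cumulative cost for a participant who completed this many stages."""
    return sum(stage_costs_eur[:stages_completed])

# Stages completed per randomized participant (4 = finished the program)
stages_completed_per_participant = [4, 4, 4, 1, 2, 4, 3, 4]

mean_cost = sum(map(cost_incurred, stages_completed_per_participant)) / len(
    stages_completed_per_participant
)
print(f"Attrition-adjusted mean cost: EUR {mean_cost:.0f}")
```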
Does your paper address CONSORT subitem 12b? *
This was not addressed in this manuscript, as it is not applicable.

Does your paper address subitem X26-i?
"The pilot noninferiority RCT was approved by the ethics committee of Children's Health Ireland at Temple Street (reference number 11-033)"
X26-ii) Outline informed consent procedures
Outline informed consent procedures, e.g., whether consent was obtained offline or online (how? checkbox, etc.), and what information was provided (see 4a-ii). See [6] for some items to be included in informed consent documents.
Does your paper address subitem X26-ii?
"Those eligible were invited to participate in the study following the consideration of the study by their parents and upon receipt of parental consent and adolescent assent forms."
X26-iii) Safety and security procedures
Safety and security procedures, incl. privacy considerations, and any steps taken to reduce the likelihood or detection of harm (e.g., education and training, availability of a hotline).

Does your paper address subitem X26-iii?
Whilst this is not reported in this manuscript, the published protocol describes the procedures: "At each visit, participants and their parents will complete an adverse events form in order to capture any unintended effects of the trial. Spontaneous reporting of adverse events will be possible by calling the study phone line."

RESULTS

13a) For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome (NPT: The number of care providers or centers performing the intervention in each group and the number of patients treated by each care provider in each center)

Does your paper address CONSORT subitem 13a? *
These details, as relevant to the cost analysis, are shown in Figure 1.

Does your paper address CONSORT subitem 13b? *
These details, as relevant to the cost analysis, are shown in Figure 1. "In total, 109 adolescents and their families provided consent for participation in the trial; only 84 participants completed the trial, as 25 adolescents withdrew from the study (13 from the usual care group and 12 from the mHealth group) after allocation, as shown in Figure 1."
13b-i) Attrition diagram
Strongly recommended: An attrition diagram (e.g., proportion of participants still logging in or using the intervention/comparator in each group plotted over time, similar to a survival curve) or other figures or tables demonstrating usage/dose/engagement.
Does your paper address subitem 13b-i?
Yes, this is presented in Figure 1.

Does your paper address CONSORT subitem 14a? *
Specific dates are not provided; however, the study year is given: "The conversion rate for the reference year 2013..."

14a-i) Indicate if critical "secular events" fell into the study period
Indicate if critical "secular events" fell into the study period, e.g., significant changes in Internet resources available or "changes in computer hardware or Internet delivery resources".
Does your paper address subitem 14a-i?
This is not reported in the manuscript.
14b) Why the trial ended or was stopped (early)

Does your paper address CONSORT subitem 14b? *
This is not reported in the manuscript.

Does your paper address CONSORT subitem 15? *
This is not provided for this cost analysis but will be reported with the primary outcome.
15-i) Report demographics associated with digital divide issues
In ehealth trials it is particularly important to report demographics associated with digital divide issues, such as age, education, gender, social-economic status, computer/Internet/ehealth literacy of the participants, if known.
Does your paper address subitem 15-i? *
This is not provided for this cost analysis but will be reported with the primary outcome.

16-i) Report multiple "denominators" and provide definitions
Report multiple "denominators" and provide definitions: Report Ns (and effect sizes) "across a range of study participation [and use] thresholds" [1], e.g., N exposed, N consented, N used more than x times, N used more than y weeks, N participants who "used" the intervention/comparator at specific pre-defined time points of interest (in absolute and relative numbers per group). Always clearly define "use" of the intervention.
Does your paper address subitem 16-i? *
This is not provided for this cost analysis but will be reported with the primary outcome.
16-ii) Primary analysis should be intent-to-treat
Primary analysis should be intent-to-treat, secondary analyses could include comparing only "users", with the appropriate caveats that this is no longer a randomized sample (see 18-i).
Does your paper address subitem 16-ii?
That is not reported within this manuscript, as it is not relevant to the cost analysis.

17a) For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval)

Does your paper address CONSORT subitem 17a? *
That is not reported within this manuscript, as it is not relevant to the cost analysis.
17a-i) Presentation of process outcomes such as metrics of use and intensity of use
In addition to primary/secondary (clinical) outcomes, the presentation of process outcomes such as metrics of use and intensity of use (dose, exposure) and their operational definitions is critical. This does not only refer to metrics of attrition (13-b) (often a binary variable), but also to more continuous exposure metrics such as "average session length". These must be accompanied by a technical description how a metric like a "session" is defined (e.g., timeout after idle time) [1] (report under item 6a).
Does your paper address subitem 17a-i?
That is not reported within this manuscript, as it is not relevant to the cost analysis.
17b) For binary outcomes, presentation of both absolute and relative effect sizes is recommended

Does your paper address CONSORT subitem 17b? *
That is not reported within this manuscript, as it is not relevant to the cost analysis.

18) Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory
Does your paper address CONSORT subitem 18? *
Yes, the limitations state that "Cost was not [a] prespecified outcome for this trial and this study was undertaken as an exploratory analysis after completion of the trial."
18-i) Subgroup analysis of comparing only users
A subgroup analysis of comparing only users is not uncommon in ehealth trials, but if done, it must be stressed that this is a self-selected sample and no longer an unbiased sample from a randomized trial (see 16-iii).
Does your paper address subitem 18-i?
That is not reported within this manuscript, as it is not relevant to the cost analysis, which accounted for the costs incurred by all participants, including those who withdrew.
19) All important harms or unintended effects in each group
(For specific guidance, see CONSORT for harms.)

Does your paper address CONSORT subitem 19? *
That is not reported within this manuscript, as it is not relevant to the cost analysis, but it will be reported with the primary outcome.
19-i) Include privacy breaches, technical problems
Include privacy breaches, technical problems. This does not only include physical "harm" to participants, but also incidents such as perceived or real privacy breaches [1], technical problems, and other unexpected/unintended incidents. "Unintended effects" also includes unintended positive effects [2].
Does your paper address subitem 19-i?
There were none to report. Such incidents are not otherwise discussed within this manuscript, as they are not relevant to the cost analysis, but they will be reported with the primary outcome.
Does your paper address subitem 19-ii?
That is not reported within this manuscript.
22-i) Restate study questions and summarize the answers suggested by the data, starting with primary outcomes and process outcomes (use)
Restate study questions and summarize the answers suggested by the data, starting with primary outcomes and process outcomes (use).

Does your paper address subitem 22-i?
"...where dropouts are common owing to the intensive nature of these interventions. The results show that this mHealth intervention, developed using evidence-based approaches, is associated with higher health care costs than face-to-face pediatric weight management."
22-ii) Highlight unanswered new questions, suggest future research
Highlight unanswered new questions, suggest future research.
Does your paper address subitem 22-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "This highlights the need for further cost studies incorporating a wider perspective that is not limited to the health care provider." "Our results highlight the importance of conducting further research to explore the cost-effectiveness of evidence-informed mHealth interventions in treating chronic diseases such as obesity across multiple centers."
20-i) Typical limitations in ehealth trials
Typical limitations in ehealth trials: Participants in ehealth trials are rarely blinded. Ehealth trials often look at a multiplicity of outcomes, increasing risk for a Type I error. Discuss biases due to non-use of the intervention/usability issues, biases through informed consent procedures, unexpected events.
Does your paper address subitem 20-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Yes, a detailed limitations section is provided: "This study had several strengths and limitations...the study did not meet the target recruitment number within the available time period, and coupled with the attrition rate, this led to insufficient power for demonstrating statistically significant noninferiority. In addition, low response rates for health-related quality-of-life measures used contributed to the decision of undertaking only a direct cost comparison. As a result, our cost analysis does not provide a full economic evaluation. Further, although it was the only treatment center available nationwide, we acknowledge the limited external validity of our findings given the recruitment through a single center for obesity management. Cost was also not prespecified outcome for this trial and this study was undertaken as an exploratory analysis after completion of the trial."
21-i) Generalizability to other populations
Generalizability to other populations: In particular, discuss generalizability to a general Internet population, outside of a RCT setting, and general patient population, including applicability of the study results for other organizations
Does your paper address subitem 21-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "Further, although it was the only treatment center available nationwide, we acknowledge the limited external validity of our findings given the recruitment through a single center for obesity management"
21-ii) Discuss if there were elements in the RCT that would be different in a routine application setting
Discuss if there were elements in the RCT that would be different in a routine application setting (e.g., prompts/reminders, more human involvement, training sessions or other co-interventions) and what impact the omission of these elements could have on use, adoption, or outcomes if the intervention is applied outside of a RCT setting.
Does your paper address subitem 21-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "We undertook the cost analysis under pragmatic "real-world" conditions and their cost implications (ie, estimates of implementing the intervention outside of a research trial) [19], as the trial costs included additional expenses that would not represent the cost of telemedicine if provided as part of usual care (eg, provision of smartphones and mobile data packages to trial participants)."
Does your paper address CONSORT subitem 23? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "The pilot noninferiority RCT was approved by the ethics committee of Children's Health Ireland at Temple Street (reference number 11-033; ClinicalTrials.gov trial registration: NCT01804855)."
Does your paper address CONSORT subitem 24? * Cite a Multimedia Appendix, other reference, or copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
Does your paper address CONSORT subitem 25? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "This study was funded by the RCSI University of Medicine and Health Sciences - StAR program (grant 2151) and carried out as part of the Health Research Board (HRB) SPHeRE training program (SPHeRE/2013/1). The randomized controlled trial on which this study was based was funded by the HRB (HFP/2011/54) and the Children's Fund for Health & National Children's Research Centre of Ireland (PAC11-58). The funders had no role in the design of this study, including the collection, analyses, or interpretation of data, or in the preparation of the manuscript."
X27-i) State the relation of the study team towards the system being evaluated
In addition to the usual declaration of interests (financial or otherwise), also state the relation of the study team towards the system being evaluated, i.e., state if the authors/evaluators are distinct from or identical with the developers/sponsors of the intervention.
Does your paper address subitem X27-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Yes, we have reported the potential conflict of interest arising from the senior author's role in designing and retaining ownership of the app. | 2021-09-15T06:18:00.701Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "2d6a9e6d9762140af39fd6a61a68a051e1537fb8",
"oa_license": "CCBY",
"oa_url": "https://jmir.org/api/download?alt_name=mhealth_v9i9e31621_app2.pdf&filename=6a927fdf93d0de47272ffd266dbe3df7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "188d90dc92fd30909c4754a1bc9f62c7e25fa50e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247532616 | pes2o/s2orc | v3-fos-license | EXPLORING THE EFFECTIVENESS OF THEATRE FOR PEACE BUILDING IN GLOCAL CONFLICTS
Conflict is an inevitable phenomenon in any human society because humans are driven by varied ideologies, interests and positions, which might clash time and again. Since the return to democratic rule in 1999, Nigeria has witnessed many conflicts and much violence fostered by issues of identity, citizenship and participation in national dispensations. The worst manifestation of this has been the Boko Haram insurgency, ongoing since 2009. Conflict in Nigeria has thrived mainly because the approaches adopted to address it have been inadequate and unsuitable. This article adopts Participatory Communication Theory to discuss the effectiveness of applying more subtle approaches, such as the Theatre-for-Development (TfD) framework, for peace building in Nigeria today. It focuses on the conflict between herders and farmers in Barkin Ladi and Riyom Local Government Areas (LGAs) of Plateau State and an experimentation with the TfD framework to facilitate dialogue and reconciliation between herders and farmers, and presents qualitative data to this effect. The paper concludes that conflict and conflict-related issues can be addressed without the use of force. Therefore, TfD is the alternative strategy for entrenching peace and building inclusive societies. The study recommends that the Theatre-for-Development framework is pertinent for peace building as it is an investment in humans, both physically and psychologically, towards reconciliation and durable peace. It also recommends that there is the need for capacity building within government agencies to improve their fundamental understanding of conflict-related issues and enhance their ability to contribute to just and lasting solutions.
Introduction
The end point of development practice is the wellbeing of citizens. However, there are always obstacles to surmount in order to reach the goal of improved standards of living. The absence of good governance and of concrete efforts to forge national integration and economic growth are part of the development obstacles in Nigeria. Corruption, mismanagement of public funds, and increased poverty and unemployment have led to hardship for the majority of citizens in the pursuit of better living standards. The frailty of Nigeria's school system and the failures of some religious organisations and bodies in their expected roles have combined to impede Nigeria's social and economic development (Emelonye, 2011, p. 23). While these developments would not in and of themselves lead to conflict, it would be unwise to neglect the fact that they exacerbate tensions.
A more pressing obstacle to development in Nigeria at the moment is conflict and violence, which have characterised the socio-political life of the nation since 1999. Flashpoints in Nigeria have covered a wide area, including the Niger Delta, the North Central, the North West and the North East. The worst manifestations of this are the Boko Haram insurgency, ongoing since 2009, and the herdsmen conundrum. It is possible to say that the issues that lie at the base of these problems are economic and political in nature, since it appears that the politics of Nigeria revolves around two key factors: indigeneship and religion on the one hand, and resource control and sharing on the other, which have "aggravated inter-communal tensions that are dangerously volatile in and of themselves" (Human Rights Watch, 2015, p. 2). Therefore, to make sense of and promote development in Nigeria, these background issues need to be properly analysed, understood and acted upon for an enabling environment to prevail. Timely investments in capacity building to provide basic services or address grievances concerning human rights and the rule of law can make a difference in building confidence among citizens as well as between citizens and the State (Golberg, 2012).
Since the return to democratic rule in 1999, Nigeria has witnessed a lot of conflicts and violence fostered by issues of identity, participation and citizenship "perhaps because of the freer atmosphere it brings along with it" (Alubo, 2006, p.6). The term 'freer' suggests a sense of belonging and participation between and among all ethnicities and tongues in the affairs of the country. However, what obtains in practice is a continuous deficit of inclusive participation at federal, state and local levels. Part of the problem is that the leaders have not been able to properly harness the various voices of the citizens, to implement people-informed and guided decision making processes, policy formulation, and other endeavours expected of public office holders especially in a supposed democratic setting. While this is the case, the citizens themselves are yet to recognise the strength embedded in working together, perhaps to upturn this situation.
Plateau State, Nigeria, which defines itself as the 'Home of Peace and Tourism', was created in February 1976 when it was carved out of Benue-Plateau State. It is the twelfth largest state in Nigeria, and gets its name from the Jos Plateau. To its north are Kaduna and Bauchi States, while Benue State is at its southern border. It is flanked in the east by Taraba State and in the west by Nasarawa State. Plateau State is said to be a miniature Nigeria primarily because nearly all the ethnic groups of Nigeria are present there. The State has 17 Local Government Areas: Barkin Ladi, Bassa, Bokkos, Jos East, Jos North, Jos South, Kanam, Kanke, Langtang North, Langtang South, Mangu, Mikang, Pankshin, Qua'an Pan, Riyom, Shendam, and Wase (plateaustate.gov.ng; onlinenigeria.com/plateau-state). Plateau State has one of the largest concentrations of ethnic minorities in the Nigerian federation, with over 58 relatively small ethnic communities spread across its seventeen local government areas and an estimated population of 3.5 million (National Population Commission, 2006). Being home to over 58 of Nigeria's ethnic groups earned it the name "mini Nigeria" (Best, 2007; Isa-Odidi, 2004). The main city and administrative capital is Jos, with an approximate population of 1 million (plateaustate.gov.ng; Search for Common Ground, 2015).
The first major riot in Jos broke out in 2001. Tensions rose in June 2001, when the Federal Government appointed a Hausa Muslim politician, Alhaji Mukhtar Mohammed, as local coordinator of the Federal Poverty Alleviation Programme, leading indigenes to protest his appointment. The indigenes saw his appointment as an imposition and a ploy to marginalise them on their own soil (Best, 2007; Emelonye, 2011). Violence broke out in September 2001, when a Christian woman attempted to cross a barricaded street outside a mosque during Friday prayers. What began as a fight between the woman and a group of Muslims in Congo-Russia, a street in Jos, eventually spread to other parts of the city and nearly all over the state, with the greatest intensity in the northern zone of the State (Minchakpu, 2001; Human Rights Watch, 2001; Krause, 2011). The 2001 Jos riot subsequently led to the expression of long-standing tensions within smaller towns and villages in Plateau State (Krause, 2011), extending beyond the city mainly because the conflict parties embarked on what is considered a 'defensive action' against their ethnic and religious kin (Best, 2007). Conflict in the rural areas also escalated as a result of exaggerated or largely falsified information, especially in the northern zone of the state, about the conflict scenario in the city (Best, 2007).
While the conflict in Jos city was largely a result of the contest for political control between the indigenes (Berom, Afizere, Anaguta) and the Jasawa (people of Jos, constituting the Hausa/Fulani group), the rural conflict is between the indigene Christian farmers and Fulani Muslim pastoralists, largely around access to land, and gained momentum after the 2001 crisis. The term Hausa/Fulani refers both to the amalgamated Hausa and urban Fulani, also referred to as Jasawa (People of Jos), who speak Hausa, and the Fulani cattle herders within Plateau State, who speak Fulfulde and constitute a separate group. These two groups have been labeled thus as a result of the common religion they share and their association with the development of the post-1804 Dan Fodio jihad principles and religious communities of Northern Nigeria (Best, 2007; Krause, 2011). Some Local Government Areas within the state have shown higher conflict incidence compared to others, probably because some groups are more defensive of indigene rights compared to others.
Barkin Ladi and Riyom LGAs have significant conflict risks, with frequent communal violence between pastoralists and farmers. Riyom was also one of the four LGAs clustered around the city of Jos that were under a state of emergency from December 2011 to June 2012 (The Fund for Peace, 2013). Meanwhile, violence in Barkin Ladi spiked considerably after the state of emergency was lifted in June 2012 (Nigerian Watch, the Council on Foreign Relations and the Armed Conflict Location and Event Data Project (ACLED)). As Taft and Haken (2015) noted, the issues are complex: from indigene rights to resource control, compounded by religion. So far, the strategies employed in Plateau State towards conflict management and peace building have been inadequate. These have included military deployments, curfews and commissions of inquiry, which in practice do not necessarily capture deep-seated issues and/or grudges that might be responsible for relapses into conflict and violence. Given this situation, there needs to be a working strategy that primarily takes into account the human encounter and allows for participation, dialogue and shared decision making, so as to enhance the ability to establish durable peace in Plateau State. The study, therefore, proposes that the Theatre-for-Development framework, centred on humans encountering humans, is the alternative strategy for conflict management and peace building, especially because it promotes participation, negotiation and inclusiveness, which are indispensable for facilitating peace in any conflict-ridden society.
Participatory Communication and Peace Building
Paulo Freire (1970) is the most influential scholar to apply liberation theology to education and communication practice in development contexts. Melkote and Steeves (2002, p. 297) affirm that Freire's ideas drew mostly from the Christian liberation theology of Latin America and from the teachings of liberation leaders from other traditions, such as Gandhi. Freire's literacy work in Brazil, which empowered landless peasants to formulate their demands for a better life and liberate themselves from oppressive conditions, grew into participatory communication theory and practice (Tufte & Mephalopulos, 2009, p. 56). Indeed, Freire (2005, p. 81) writes about problem-posing education, "a constant unveiling of reality, the emergence of consciousness and critical intervention… through dialogic relations." The dialogic process Freire talks about requires communication that involves a process of shared meaning between people. For Freire (2005, p. 14), "…communication should be practised not as message communication but rather as emancipatory dialogue, a particular form of non-exploitive, egalitarian dialogue which is carried out in an atmosphere of profound love and humility…and the focus should be face-to-face emancipatory dialogue". One of the perceived undertones of Freire's position is the value placed on human relationships. Perhaps, if this dialogue of love and humility is transferred to a conflict situation, there is likely to be a transformation in relationship from a chaotic one to an integrative one. Closely tied to the Freirian methodology is the conflict management style described as the problem-solving technique. The problem-solving technique in conflict management is "a situation in which the parties to a conflict, either by themselves or through the assistance of a third party, find solutions to their problems in a cordial environment" (Agbu et al., 2006, p. 11). This, therefore, means that communicating peace is a matter of finding a common ground between the conflicting parties. The problem-solving procedure, according to Agbu et al. (2006, p. 12), is: [It is] non-judgmental and highly participatory in character. It promotes cooperation between conflict parties who jointly analyse the structure of the conflict and carefully work out strategies for reconciling with each other. Peace and conflict scholars and practitioners consider problem solving the best method of dealing with conflict as its outcomes are usually self-supporting in the sense that it is advantageous to all parties in the conflict [despite it being a somewhat difficult process].
It is upon these very principles that the Participatory Communication Theory is premised. Freire (2005, p. 17) writes that: "I engage in dialogue because I recognise the social and not merely the individualist character in the process of knowing." Fundamentally, the goal of dialogic teaching is to create a process of learning and knowing that invariably develops into shared experiences. Slachmuijlder (2009, p. 7) asserts that the process of learning and knowing is "central to the Freirian methodology, setting all parties on an equal playing field and encouraging collaboration and the development of critical skills of analysis, interpretation and articulation." Drawing from Freire's work, Tufte and Mephalopulos (2009, p. 11) aver that there are "fundamental principles to participatory communication, to lead practitioners and stakeholders", invariably, towards a critical perception of the world. These include: Voice, Dialogue, Liberating Pedagogy and Action-Reflection-Action. These principles are interwoven and operate in the circular, open-ended process of engaging people or stakeholders to explore problems and/or situations and reach a collective solution or needed change. Central to "voice" is the understanding of the power it holds in human relationships and realities; to steer voice is "to give groups, time and space to articulate their concerns, define their problems, formulate solutions and act on them" (Tufte & Mephalopulos, 2009, p. 11). The essence is to support and strengthen individuals and communities, ensuring that groups have a platform to voice their concerns, engage in debate and solve problems collectively. This process leads to free and open dialogue. Freire identifies dialogue as the conversation between people or groups in a community or across communities in order to name their world. Accordingly, Freire (2005, p. 88) notes that: Human existence cannot be silent, nor can it be nourished by false words, but only by true words, with which men and women transform the world. To exist, humanly, is to name the world, to change it. Once named, the world in its turn reappears to the namers as a problem and requires of them a new naming. Human beings are not built in silence, but in word, in work, in action-reflection.
A situation of free and open dialogue where people can "name their world" entrenches conversations as well as meaningful exchanges that inculcate collaboration, collectivism, voice and horizontal communication, equity and equality. Defining situations this way leads to problem identification and to informed and collective decision making, which paves the way for meaningful restructuring and development. This follows Slachmuijlder's (2009, p. 7) observation that dialogue "encourage(s) cooperation, building social capital through networks of communication and understanding, and developing proactive and collaborative ideas for community progress". Liberating pedagogy for Freire can be interpreted as the psychological process of liberating the self from negativity to positivity, from disempowerment to empowerment, from being objects to being subjects. Sometimes, communities are tied down by negative tendencies, which can be transformed through participatory learning and sharing processes where collective problem identification and solution take place. This is one thing Nigeria appears to lack in the effort to work collectively to identify common problems and proffer solutions to such problems. Hitherto, the focus has been on ethnicities or ethnic groups and the problems associated with them. Sometimes, even when dialoguing occurs within and among groups, the will to translate the outcome into action remains a problem, because there are hardly any structures to facilitate and sustain action, owing to the fact that the people still see themselves as objects rather than subjects in their own realities. Freire (2005, p. 70) claims that "the result of the liberating pedagogy is based on dialogue translating into action-oriented awareness and action-reflection-action." Action-reflection-action, according to Freire (1970), implies a participatory communication process based on reflection on a problem as well as on the integration of action, that is, the attempt to collectively act on an identified problem in real-life situations. The key results of participatory communication are the articulation of awareness raising and commitment to action. First, it becomes a process of empowerment for the communities involved, which feel commitment to and ownership of identified problems. Secondly, the emphasis on the collective nature of the process speaks to actual issues as well as encourages the need for mutual commitment (Tufte & Mephalopulos, 2009). It is against this background that this paper examines peace building, which also forms the core of Theatre for Development practice.
The Effectiveness of Applying TfD in Peace Building Discourses
Theatre-for-Development (TfD) has over time been used synonymously with 'Popular Theatre for Community Development', 'Theatre for Integrated Rural Development', 'Popular Theatre in Development', 'Theatre of the Oppressed', 'Participatory Theatre' and 'Grassroots Theatre', among others (Osofisan, 2004; Omoera, 2010). These several nomenclatures are reflective of the usage of theatre among targeted audiences in both rural and urban settings. TfD as it is known today has undergone a series of revisions and redefinitions. The initial method of developing scripts on campus and taking them to communities has since evolved, primarily because this method was not always reflective of the concerns of the people under study or of their contribution to the process. In the face of this, practitioners sought a process hinged on the value of targeted communities becoming programme participants, so as to achieve greater effect, ownership and sustainability even after the TfD workers are gone.
TfD then became known as "theatre by the people, about the people and for the people" (Hansel N. Eyoh, 1986, cited in Osofisan, 2004), a practice that dwells on people's participation in order to effect positive change in the lives of communities (Osofisan, 2004), of which Illah (2004, p. 11) asserts that it should "…anticipate certain developments in which barriers between people, their cultures and their reality are removed so that they engage in dialogue and capacity building for genuine development…" because: (i) it is participatory, meaning that it is part of a family of approaches and techniques that enable community groups to share, enhance and analyze their knowledge of life and conditions, to formulate appropriate and empowering action; (ii) [when it makes use of drama], the play-making process is improvisational, not inherently literary, and about community and people-centred problems; and (iii) sometimes, performance is in the open and open ended, to allow for meaningful intervention by the audience or the spectating participants [or spect-actors, as coined by Augusto Boal]. In practice, TfD is a collective participatory process of creating knowledge, "which is a far better and lasting procedure for change to occur" (Onuekwe, 2015, p. xiii), that makes use of a simple process consisting of the following: 1. information gathering in the target community; 2. analysis of information; 3. story creation and improvisation; 4. rehearsals; and 5. community performance (Daniel & Bappa, 2004). The democratic nature of these processes not only helps to spark conversations; the processes are also premised on the value of hearing localised voices and standpoints, paying attention to rooted idioms and metaphors, and the dignity manifested in creating new worlds for all: the facilitators and the communities (Onuekwe, 2015). The highlighted processes also capitalise on and inculcate the values of hearing, seeing and doing simultaneously, which account for 90% of effective learning and behaviour change (Iorapuu & Bamidele, 2004). This further enhances the sustainability of the TfD encounter. Furthermore, "it does not intimidate but rather, elicit and sustain participation in generating and obtaining as much helpful information as possible for and about the community" (Daniel & Bappa, 2004, p. 20). Breed (2002, p. 1) noted that: Theatre for Development (TfD) is used as an egalitarian method to access and distill information, working with communities to create a self-sustaining tool for dialogue and from that dialogue to affect policy. TfD creates an infrastructure for communities to define themselves by developing systems of communication that identify key issues, implement solutions, and establish partnerships between resource groups.
What this implies is that Theatre-for-Development (TfD) creates an inclusive platform that relies on the communality that defines communities, such that everybody becomes a key player in affecting the community positively. Consequently, TfD comes into being as another way of looking at things: encouraging communities to express their concerns and to reflect upon the causes and possible solutions as a collective. More so, the TfD methodology is an exciting process, as it is non-formal and not based on the literacy of its participants, giving everyone the opportunity to participate and be heard towards the actualisation of community goals. It is upon this democratic optimization that TfD becomes a viable vehicle to facilitate dialogue in peace building. A workshop was conducted with herders and farmers on the basis of collective information gathering and problem identification, plenary sessions, brainstorming, storytelling and drama creation processes, drama presentation and post-presentation discussions; afterwards, copies of a questionnaire were completed by the participants to reflect on the effectiveness of applying Theatre-for-Development in peace building.
Chart 1: Military Deployment as an Effective Approach in Peace Building in Plateau State
All 36 respondents (100%) attested to the use of military deployment as an approach to conflict management and peace building. True to this, military checkpoints can be seen at the entrances into Barkin Ladi and Riyom LGAs; security agents live within the communities and can be seen walking or driving around. While their presence within the communities and at the borders embodies some level of calm and orderliness, communities still live in fear and suspicion. Some of the respondents were satisfied with the presence of security agents and some were not. For example, a respondent stated: Who are you a bloody civilian to talk to a military personnel? You must dance to their tune even when you are an indigene of the land who is supposed to guide the 'visitors'… I'm not saying that these security agents haven't put in any effort, they are trying but the question is how come we haven't had any peace yet? What the common man needs is peace, nothing more than that. (IDI Jol community, November 18, 2018) Another asserted that: We appreciate the presence of the security agents in our community. They help us in numerous ways; they safeguard our community ... (FGD Gashish, November 23, 2018). And yet another: They have been very helpful. They help maintain law and order especially when they sense foul play. They also accompany us to various places when we are not sure of our safety (a respondent pointed to a military van with some security agents and community members standing by, and said the security agents would accompany those people to their former homes to retrieve some items that were left behind when they fled) (FGD Jol community, November 18, 2018).
The feelings of 'fear' and 'suspicion' are borne out of the fact that guerilla attacks are still being perpetrated despite the presence of security agents within the communities. This buttresses the point that the security agents may be able to halt physical confrontations, but are not likely to address deep-seated emotions, sentiments, grudges and/or grievances. The use of force in conflict management and peace building is not always necessary. It might be required to restore order, but it should be followed by other, subtler strategies that restore human relations.
Chart 2: Declaration of Curfew as an Effective Approach to Peace Building in Plateau State.
Compared to military deployment, curfews are not popular, as shown in Chart 2. But like military deployment, curfews are declared to keep people off the streets and to manage the occurrence of any confrontations, which, fortunately or unfortunately, is still limited to halting physical violence, not bad in itself. However, there have been instances of reprisals following the lifting of curfews. Beyond keeping conflict parties away from each other by geographical or physical boundaries, more attention needs to be paid to other, relational aspects of conflict management and peace building, such as town hall meetings, sports activities, concerts/musical shows, inter-faith conferences, etc. (Emelonye, 2011).
Chart 3: Commissions of Inquiry as an Effective Approach to Peace Building in Plateau State
The commission of inquiry, a third common response, is still not popular among the people for whom the inquiries were carried out. Furthermore, recommendations from the investigations and findings have hardly been considered and applied (Krause, 2011). This raises questions about the viability, efficacy and relevance of certain approaches in certain contexts with regard to conflict management and peace building. While commissions of inquiry, "if conducted properly are an acceptable and useful means of investigating factual situations and obtaining policy recommendations from an independent and impartial source to prevent future tragedies" (Gomery, 2006, p. 784), such cannot be said here, as the results from respondents prove otherwise. The danger remains that people may not be able to move beyond sentiments and emotions despite providing evidence to the commissions, which can lead, and in some contexts has led, to relapses into conflict and violence.
Chart 4: Overall, I understood the message and the Technique employed
Chart 4 shows that when people contribute first-hand in providing raw data and creating a story by themselves for themselves, they are more likely to understand the underpinnings of a message/performance. 25% of respondents agreed, and 75% strongly agreed, that they understood the message and the technique used. Here, because the nature of the work being done required no formal training, acting and reenactment did not appear alien to the participants, especially as theatre is sociable and immediate. It was instinctual for the participants to seize the opportunity to tell their stories: instinctual in the sense that the ground was already being prepared through question and answer interactions, games, exercises, etc., to build confidence and trust leading up to the creation and presentation of the performances. Whether this art form is popular with spect-actors or not, people across cultures easily identify with theatre. To this effect, it can be noted here that, though herders and farmers are suspicious of one another, they readily participated in the story making and in the messages embedded in the drama skits. No force or coercion was required to gain support or cooperation from participants, which lends credence to the viability of theatre as an alternative tool for conflict management and peace building.
Chart 5: The Performance(s) addressed Issues of Conflict and Peace Building Satisfactorily
Chart 5 is a representation and reflection of the efforts made by the facilitators and spect-actors, culminating in the performances. In view of the fact that the spect-actors provided information and determined what formed the plot and sequence of the performances, a contrary summation would have been worrisome. While the performances were not exhaustive or end products in themselves, as reflected by 8.3% of the study population, they brought to light some of the fears, worries and likely solutions to the conflict situation and peace building in the study locations. Therefore, TfD aids in providing a multiplicity of perspectives for the interrogation and understanding of issues while offering a platform for the expression of the self and the collective. Being able to express the self and/or the collective also means that people can "name their world", invariably naming their problems and discerning ways to tackle them, which could bring about healing and the likely dispelling of deep-seated grudges.
Chart 6: The Use of Drama is Significant in Peace Building
The respondents strongly agreed, at 88.9%, that drama is important in conflict management and peace building, having participated in the processes of information gathering and sharing, brainstorming, play creation/drama and post-performance discussions. These processes involve the physical, psychological and emotional contributions of the spect-actors, in the sense that they actively took part using their bodies and voices, and thought through the sequence of the drama and the images they wanted to present, encapsulating their emotions, concerns and hopes. Drama is important not because it takes away pain or anger or feelings of revenge, but because it engenders reality, people encountering people in a face-to-face manner, which helps them unburden themselves, using the platform as a buffer for ill feelings.
Chart 7: During and after the Performance(s), I felt a Sense of Connection with other Participants
As pointed out earlier, drama or theatre performances are tied to the presence and emotions of the audience, in this case, the spect-actors, as theatre cannot play to itself. 69.4% of respondents strongly agreed, and 30.6% agreed, that they felt a sense of connection with others, their fellow spect-actors. Chart 7 shows that, through the vivid recreation of events made possible by drama, people can witness and feel first-hand what a neighbour sees and feels and how this affects them: putting oneself in the shoes of others. By so doing, a sense of responsiveness and brotherliness is cultivated, and hence the ability to connect with others, to understand others and perhaps to shift positions or change perspectives, all of which are essential for peace building. Interests and emotions are two factors that inflame conflict and violence. When negative interests and emotions are replaced by positive ones, farmers and herders can look beyond their differences and move on. Thus, TfD plays a role as a catalyst for improved relationships. Amidst deteriorating relationships between herders and farmers, the idea is not only to engage people but also to have them consider their strengths as a community rather than the weaknesses that tear them apart.
Chart 8: My Eyes Were Opened to Issues, Ideas or Points of View that I had not Fully Considered Previously
Empowerment and consciousness remain at the core of any TfD encounter. From Chart 8, 47.2% of respondents agreed and 52.8% strongly agreed that they are now more open to considering other perspectives. This simply means that drama has the ability to inform, educate and entertain; in this tripartite function, issues are raised and various sides of a story are laid bare, creating deeper meaning and understanding. This makes it possible to begin to see things in a new light, beyond superficial manifestations, which is important in the discourses of conflict management and peace building in the sense that people and communities can begin to move from negative to positive positions when they own a process, when they feel that their opinions not only matter, but are duly considered in the events that lead to or bring about development or improvement in their lives.
Chart 9: During and after the Performance(s), I began to think about Drama as another Medium to Address Peace Building in my Community
Chart 9 shows that the use of drama in peace building is not only effective in addressing issues both physically and psychologically, but is also a process people are willing to adopt, which further affirms its effectiveness. 75% of respondents agreed and 16.7% strongly agreed on the likelihood of adopting drama as a strategy for peace building. Although the respondents are enthusiastic about drama and the TfD process, probably because it is a sociable process, fluid and informal, and does not entail any element of force or coercion, it is worth noting that some of the respondents, at 8.3%, do not fail to realise that the process is one to approach carefully and grow into gradually for effective application and successful outings. A respondent noted thus: I must congratulate you for being able to organise this workshop with all these people here! I don't know how you did it but I am sure a lot of work went into it which is commendable…I don't imagine having the capacity to do this…. (TfD Workshop, Legislative Chambers, Barkin Ladi LGA Secretariat, Plateau State. November 30, 2018) As 'informal' and 'emergent' as the process might be or appear to be, it takes a whole lot of patience and careful planning to implement. Accordingly, Titterton and Smart (2008) noted that community engagements are not about quick wins, but about leaving a lasting impression on facilitators and beneficiaries even when projects and/or programmes are long over. While drama is significant to TfD engagements, not all TfD engagements have to encompass drama; other tools include music, puppetry, cartooning, etc. The general idea is to assist people and communities to organise themselves and create a structure with which they are able to consult one another and mobilise for immediate and future endeavours.
Chart 10: The Performance(s) have Spurred me to Take Action or Make Changes Necessary for Peace Building
Chart 10 provides insights into the respondents' genuineness and intention to use TfD in conflict management and peace building, having been asked three times at different points. Here, respondents showed undoubting interest in, and the likelihood of, applying drama in future endeavours. This also validates the rationale for applying TfD stated in the discussion following Chart 9, which is to leave a lasting impression on both facilitators and beneficiaries.
Chart 11: Theatre should be considered as an Alternative Approach to Discuss Peace Building
Chart 11 justifies the overall claim of this study, which is to advance TfD as an alternative approach to building inclusive societies and entrenching peace in Plateau State. Building inclusive societies goes hand in hand with democratisation: promoting some level of free-handedness rather than the use of excessive force. Freedom and liberty to express oneself in a tranquil atmosphere pave the way for participation; participation leads to negotiation, negotiation leads to reflection, and reflection to action, all embedded in TfD. Illah (2004, p. 16) noted that "Theatre-for-Development is a mode of popular theatre that seeks to dialogue and participate with and not just for communities." Thus, in working with communities, responses show that participants have attained some level of self-consciousness and the ability to work as a collective. Working as a collective also implies that room will be made for the accommodation of 'others', which suggests inclusiveness and, in this manner, cements relationships.
Conclusion
This article concludes that people encountering people is essential for engendering peace building. TfD processes are usually entertaining, dialogic, thought-provoking and inclusive (getting people to cooperate without the use of force). In practice, therefore, TfD is an investment in people and operates as a strategy that reflects the realities of people in diverse circumstances and supports them towards achieving improved living standards, especially in the context of peace building, as the study has demonstrated. Based on this, the following recommendations are made: (i) there is the need for capacity building within government agencies to improve their fundamental understanding of conflict-related issues and enhance their ability to contribute to just and lasting solutions. One way to go about this is to work closely with, or consider hiring, experts with specific social skills, including development communicators, ethnographers and anthropologists; (ii) in addressing conflict, its management and peace building, the opinions, efforts and cooperation of all stakeholders are basic requirements. Thus, there is the need to conduct proper stakeholder engagements among other existing measures. In light of this, the TfD framework should be employed to ensure widespread participation in the decision making process, especially at the grassroots; and (iii) true and meaningful democracy needs to be operational to harness the various voices of people in the multi-cultural set-up of Plateau State. There is the need to expand the frontiers of participation of people and communities beyond the partisan divide or the use of force for durable peace to be achieved. | 2022-03-19T15:11:28.127Z | 2022-03-09T00:00:00.000 | {
"year": 2022,
"sha1": "6bad2d677ab704af8104c211577422e23b21e8ce",
"oa_license": null,
"oa_url": "https://www.ajol.info/index.php/ejotmas/article/download/222638/210053",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "32ee710856cb2a4e5020fb7f3e5c988efc7f0538",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
42260027 | pes2o/s2orc | v3-fos-license | Exponents, symmetry groups and classification of operator fractional Brownian motions
Operator fractional Brownian motions (OFBMs) are zero mean, operator self-similar (o.s.s.), Gaussian processes with stationary increments. They generalize univariate fractional Brownian motions to the multivariate context. It is well-known that the so-called symmetry group of an o.s.s. process is conjugate to subgroups of the orthogonal group. Moreover, by a celebrated result of Hudson and Mason, the set of all exponents of an operator self-similar process can be related to the tangent space of its symmetry group. In this paper, we revisit and study both the symmetry groups and exponent sets for the class of OFBMs based on their spectral domain integral representations. A general description of the symmetry groups of OFBMs in terms of subsets of centralizers of the spectral domain parameters is provided. OFBMs with symmetry groups of maximal and minimal types are studied in any dimension. In particular, it is shown that OFBMs have minimal symmetry groups (and thus unique exponents) in general, in the topological sense. Finer classification results of OFBMs, based on the explicit construction of their symmetry groups, are given in the lower dimensions 2 and 3. It is also shown that the parametrization of spectral domain integral representations is, in a suitable sense, not affected by the multiplicity of exponents, whereas the same is not true for time domain integral representations.
Introduction
This work is about the class of operator fractional Brownian motions (OFBMs). Denoted by B_H = {B_H(t)}_{t∈R} = {(B_{H,1}(t), . . . , B_{H,n}(t))′ ∈ R^n, t ∈ R}, these are multivariate zero mean Gaussian processes with stationary increments which are operator self-similar (o.s.s.) with a matrix exponent H. Operator self-similarity means that, for any c > 0,

{B_H(ct)}_{t∈R} ≐ {c^H B_H(t)}_{t∈R},  (1.1)

where c^H = exp(H log c) and ≐ denotes the equality of finite-dimensional distributions. Under the assumption

0 < ℜ(h_k) < 1, k = 1, . . . , n,  (1.2)

on the eigenvalues h_k of the matrix exponent H, any OFBM B_H admits the so-called integral representation in the spectral domain,

B_H(t) = ∫_R ((e^{itx} − 1)/(ix)) (x_+^{−(H−(1/2)I)} A + x_−^{−(H−(1/2)I)} Ā) B(dx).  (1.3)

Here, x_± = max{±x, 0},

A = A_1 + iA_2  (1.4)

is a complex-valued matrix with real-valued A_1, A_2, Ā indicates the complex conjugate of A, B(x) = B_1(x) + iB_2(x) is a complex-valued multivariate Brownian motion satisfying B_1(−x) = B_1(x), B_2(−x) = −B_2(x), and B_1 and B_2 are independent with induced random measure B(dx) satisfying E B(dx) B(dx)* = dx. Thus, according to (1.3), OFBMs are characterized (parametrized) by the matrices H and A.
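To make the parametrization (H, A) concrete, the following sketch approximates a sample path of B_H by truncating and discretizing the spectral integral (1.3). This is only a crude Riemann-sum illustration under a real exponent H, not one of the exact simulation schemes from the literature, and all function names are our own.

```python
import numpy as np
from scipy.linalg import expm

def ofbm_path(H, A, ts, x_max=50.0, n_freq=4096, seed=0):
    """Crude Riemann-sum approximation of the spectral representation (1.3).

    H : (n, n) real matrix whose eigenvalues have real parts in (0, 1), cf. (1.2).
    A : (n, n) complex matrix; ts : 1d array of time points.
    Returns an array of shape (len(ts), n) holding one approximate sample path.
    """
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    D = H - 0.5 * np.eye(n)                  # the exponent of x_+, x_- in (1.3)
    xs = np.linspace(x_max / n_freq, x_max, n_freq)
    dx = xs[1] - xs[0]
    path = np.zeros((len(ts), n))
    for x in xs:
        M = expm(-D * np.log(x)) @ A         # x^{-(H - I/2)} A at frequency x > 0
        # complex Gaussian increment with E Z Z* = dx (identity covariance)
        Z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(dx / 2)
        kern = (np.exp(1j * ts * x) - 1) / (1j * x)
        # the contribution at -x is the complex conjugate of that at +x
        # (by the reality constraints on B(dx)), whence the factor 2*Re(.)
        path += 2 * np.real(np.outer(kern, M @ Z))
    return path
```

For instance, H = diag(0.3, 0.7) and A = I_2 yield two approximately independent fractional Brownian motions with Hurst indices 0.3 and 0.7.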
In this work, we continue the systematic study of OFBMs started in Didier and Pipiras (2011). We now tackle the issues of the symmetry structure of OFBMs and of the non-uniqueness (multiplicity) of the exponents H, which, in our view, are essentially unexplored. Such issues are strongly connected. Since the fundamental work of Hudson and Mason (1982), it is well known that one given o.s.s. process X may have multiple exponents. More specifically, if we denote the set of exponents of X by E(X), we have that

E(X) = H + T(G_X),  (1.5)

where H is any particular exponent of the process X. Here,

G_X = {C ∈ GL(n) : {X(t)}_{t∈R} ≐ {CX(t)}_{t∈R}}  (1.6)

is the so-called symmetry group of the process X (where GL(n) is the multiplicative group of invertible matrices), and

T(G_X) = {C : C = lim_{n→∞} (C_n − I)/d_n, for some {C_n} ⊆ G_X, 0 < d_n → 0}  (1.7)

is the tangent space of the symmetry group G_X. By a result for compact groups (e.g., Hoffman and Morris (1998), p. 49, or Hudson and Mason (1982), p. 285), it is known that

G_X = W O_0 W^{−1}  (1.8)

for some positive definite matrix W and some subgroup O_0 of the orthogonal group. As a consequence, the knowledge about (1.5) is subordinated to that about the symmetry group G_X of X. For example, the exponent is unique for the process X if and only if the symmetry group G_X is finite.
The description and study of symmetry groups beyond the decomposition (1.8) is a reputedly difficult and interesting problem (see, for instance, Billingsley (1966), p. 176, and Jurek and Mason (1993), p. 60, both in the context of random vectors; see also Veeh (1993, 1995)). In this paper, we take up and provide some answers for this challenging problem in the context of OFBMs. The main goal of this paper is two-fold: to study the symmetry groups of OFBMs in as much detail as possible, and based on this, to closely examine (1.5) for OFBMs X = B_H. We emphasize again that, to the best of our knowledge, this is the first work where symmetry groups are examined for any large class of o.s.s. processes (e.g., for the notion of symmetry groups of Markov processes, see Liao (1992) and references therein). Indeed, since its publication, the scope of the work of Hudson and Mason (1982) appears to have remained only of a general nature, the same being true for the main result (1.5).
The integral representation (1.3) provides a natural and probably the only means to consider (almost) the whole class of OFBMs. Section 3 is dedicated to the reinterpretation and explicit representation of symmetry-related constructs in terms of the spectral parametrization H, A. One of our main results provides a decomposition of the symmetry groups of OFBMs into the intersection of (subsets of) centralizers, i.e., sets of matrices that commute with a given matrix. For example, in the case of time reversible OFBMs (corresponding to the case when AA* = conj(AA*), i.e., ℑ(AA*) = 0), we show that the symmetry group G_{B_H} is conjugate to

∩_{x>0} G(Π_x).  (1.9)

Here, G(Π) denotes the centralizer of a matrix Π in the group O(n) of orthogonal matrices, i.e.,

G(Π) = {O ∈ O(n) : OΠ = ΠO},  (1.10)

and the matrix-valued function Π_x has the frequency x as the argument and is parametrized by H and A. Moreover, and this is key for many technical results in this paper, we actually express the positive definite conjugacy matrix W in (1.8) in terms of the spectral parametrization. This is a substantial improvement over previous works on operator self-similarity, where only the existence of such a conjugacy is obtained, e.g., as in (1.8).
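To give a first feel for the objects in (1.9)-(1.10): when a matrix Π is symmetric with simple (pairwise distinct) eigenvalues, any O ∈ G(Π) must preserve each one-dimensional eigenspace of Π, so G(Π) reduces to the finite group of sign changes in the eigenbasis of Π. The sketch below is our own illustration of this special case (a general Π_x need not be symmetric nor have simple spectrum).

```python
import itertools
import numpy as np

def orthogonal_centralizer_simple(Pi, tol=1e-10):
    """Enumerate G(Pi) = {O in O(n) : O Pi = Pi O} for a symmetric Pi with
    simple eigenvalues: then O = S diag(eps) S^T with eps in {-1, +1}^n,
    where Pi = S Lam S^T is the spectral decomposition of Pi."""
    lam, S = np.linalg.eigh(Pi)              # eigenvalues in ascending order
    if np.min(np.diff(lam)) < tol:
        raise ValueError("repeated eigenvalues: the centralizer is larger")
    group = [S @ np.diag(eps) @ S.T
             for eps in itertools.product((-1.0, 1.0), repeat=len(lam))]
    assert all(np.allclose(O @ Pi, Pi @ O) for O in group)   # sanity check
    return group                                             # 2**n elements
```

In this situation each G(Π_x) is finite with 2^n elements, and as the eigenbases of Π_x rotate with x, the intersection in (1.9) can collapse to {I, −I}, anticipating the minimality results of Section 4.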
In view of (1.9) and (1.10), it is clear that the symmetry structure of OFBMs is rooted in centralizers. The characterization of the commutativity of matrices is a well-studied algebraic problem (e.g., MacDuffee (1946), Taussky (1953), Gantmacher (1959), Suprunenko and Tyshkevich (1968), Lax (2007)). We apply the available techniques in a variety of ways to provide a detailed study of the symmetry groups and the associated tangent spaces (Sections 4 and 5), as well as of the consequences of the non-uniqueness (non-identifiability) of the parametrization for integral representations (Section 6).
Our study of the symmetry structures of OFBMs is carried out from two perspectives: first, by looking at the extremal cases, i.e., maximal and minimal symmetry for arbitrary dimension, and second, by conveying a full description of all symmetry groups in the lower dimensions n = 2 and n = 3.
Section 4 is dedicated to the first perspective. We completely characterize OFBMs with maximal symmetry, i.e., those whose symmetry groups are conjugate to O(n). We establish the general form of their covariance function and of their spectral parametrization. However, as is intuitively clear, maximal symmetry corresponds to a strict subset of the parameter space of OFBMs. In view of this, one can naturally ask what the most typical symmetry structure for OFBMs is, in a suitable sense. A related question is whether the multiplicity of exponents (and, thus, the non-identifiability of the parametrization) is a general phenomenon. Section 4 contains our answer to both questions, which is, indeed, one of our main results. We prove that, in the topological sense, OFBMs with minimal symmetry groups (i.e., {I, −I}) form the largest class within all OFBMs. As a consequence, in the same sense, OFBMs generally have unique (identifiable) exponents. To establish this result, in our analysis of the centralizers G(Π_x), we bypass the need to deal with the major complexity of the eigenspace structure of the function Π_x by looking at its behavior at the origin of the Lie group (i.e., as x → 1), where a great deal of information about Π_x is available through the celebrated Baker-Campbell-Hausdorff formula.
Section 5 contains a full description of the symmetry structure of low-dimensional OFBMs, namely, for dimensions n = 2 and n = 3. We provide a classification of OFBMs based on their symmetry groups. For example, when n = 2, the symmetry group of a general OFBM can be, up to a conjugacy, of only one of the following types: (i) minimal: {I, −I}; (ii) trivial: {I, −I, R, −R}, where R is a reflection matrix; (iii) rotational: SO(2) (the group of rotation matrices); (iv) maximal: O(2).
Such classification of types for n = 2 stands in contrast with the situation with random vectors, for which SO(n) cannot be a symmetry group (Billingsley (1966)). Nevertheless, we show that the latter statement is almost true for OFBMs, since SO(n) cannot be a symmetry group if n ≥ 3. In both n = 2 and n = 3, we provide examples of OFBMs in all identified classes, and also discuss the structure of the resulting exponent sets E(B H ).
In Section 6, we examine the consequences of non-identifiability for integral representations of OFBMs. We show that the multiplicity of the exponents H does not affect the parameter A in (1.3) in the sense that the latter can be chosen the same for any of the exponents. Intriguingly, this turns out not to be the case for the parameters in the time domain representation of OFBMs, and points to one advantage of spectral domain representations.
In summary, the structure of the paper is as follows. Some preliminary remarks and notation can be found in Section 2. Section 3 concerns structural results on the symmetry groups of OFBMs. OFBMs with maximal and minimal symmetry groups are studied in Section 4. The classification of OFBMs according to their symmetry groups in the lower dimensions n = 2 and n = 3 can be found in Section 5. Section 6 contains results on the consequences of the nonuniqueness of the parametrization for integral representations. The appendix contains several auxiliary facts for the reader's convenience.
Notation
We shall use throughout the paper the following notation for finite-dimensional operators (matrices). All are with respect to the field R: M(n) or M(n, R) is the vector space of all n × n operators (endomorphisms), GL(n) or GL(n, R) is the general linear group (invertible operators, or automorphisms), O(n) is the orthogonal group of operators O such that OO* = I = O*O (i.e., the adjoint operator is the inverse), SO(n) ⊆ O(n) is the special orthogonal group of operators (rotations) with determinant equal to 1, and so(n) is the vector space of skew-symmetric operators (i.e., A* = −A). Similarly, M(m, n, R) is the space of m × n real matrices. Analogous notation with C in place of R will indicate the change to the field C. For instance, M(n, C) is the vector space of complex endomorphisms. Whenever it is said that A ∈ M(n) has a complex eigenvalue or eigenspace, one is considering the operator embedding M(n) ֒→ M(n, C). U(n) is the group of unitary matrices, i.e., UU* = I = U*U. S(n, R) is the space of symmetric matrices. We will say that two endomorphisms A, B ∈ M(n) are conjugate (or similar) when there exists P ∈ GL(n, C) such that A = PBP^{−1}. In this case, P is called a conjugacy. The expression diag(λ_1, . . . , λ_n) denotes the operator whose matrix expression has the values λ_1, . . . , λ_n on the diagonal and zeros elsewhere. We make no conceptual distinction between characteristic roots and eigenvalues. We also write S^{n−1} := {v ∈ R^n : ‖v‖ = 1}; in particular, we denote the complex sphere by S^{2n−1}. 0 represents a matrix of zeroes of suitable dimension. Whenever necessary, we will specify the dimension of the identity matrix by writing I_n. Unless otherwise stated, we consider the so-called spectral matrix norm ‖·‖, i.e., ‖A‖ is the square root of the largest eigenvalue of A*A. For {A_n}_{n∈N}, A ∈ M(n, C), we write A_n → A when ‖A_n − A‖ → 0. V^⊥ is the subspace perpendicular to a given vector subspace V. For a set of (column) vectors v_1, . . . , v_n, A := (v_1, . . . , v_n) is the matrix whose columns are such vectors. We denote the i-th Euclidean vector by e_i, i = 1, . . . , n.
Throughout the paper, we set

D := H − (1/2) I  (2.1)

for an operator exponent H. We shall also work with the real part ℜ(AA*) and the imaginary part ℑ(AA*) of the matrix AA*. For the real part, in particular, we will use the decomposition

ℜ(AA*) = S_R Λ_R² S_R*  (2.2)

with an orthogonal S_R, a diagonal Λ_R and a positive (semi-)definite

W := S_R Λ_R S_R*.  (2.3)

We shall use the assumption that

ℜ(AA*) has full rank,  (2.4)

in which case Λ_R in (2.2) has the inverse Λ_R^{−1}. As shown in Didier and Pipiras (2011), the condition (2.4) is sufficient (though not necessary) for the integral in (1.3) to be proper and hence to define an OFBM.
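As a side illustration (not part of the paper's argument), the decomposition (2.2)-(2.3) can be computed numerically from an eigendecomposition of ℜ(AA*); the matrix A below is an arbitrary full-rank example.

```python
# Computing the decomposition (2.2)-(2.3) numerically for an illustrative
# full-rank Re(AA*); numpy's eigh returns the orthogonal S_R and eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
ReAA = (A @ A.conj().T).real                 # Re(AA*), symmetric

eigvals, S_R = np.linalg.eigh(ReAA)          # Re(AA*) = S_R diag(eigvals) S_R*
Lam_R = np.diag(np.sqrt(eigvals))            # Lambda_R, so Lambda_R^2 gives eigvals
W = S_R @ Lam_R @ S_R.T                      # positive definite W of (2.3)

assert np.allclose(W @ W, ReAA)              # W^2 = Re(AA*), as in (2.2)
print("full rank (assumption (2.4)):", (eigvals > 1e-12).all())
```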
All through the paper, we assume n ≥ 2.
Remarks on the multiplicity of matrix exponents
In this section, we make a few remarks for a reader less familiar with the subject of this work. It may appear a bit surprising that an o.s.s. process may have multiple exponents, as formalized in (1.5). This can be understood from at least two inter-related perspectives: the properties of operator (matrix) exponents and the distributional properties of o.s.s. processes. From the first perspective, consider for example matrices of the form

L_s = [0, −s; s, 0], s ∈ R.  (2.5)

Being normal, these matrices can be diagonalized as L_s = U_2 Λ_s U_2*, where U_2 ∈ U(2) and Λ_s = diag(is, −is). In particular, exp(L_{2πk}) = I, k ∈ Z, since e^{i2πk} = 1. Since L_s and L_{s′} commute for any s, s′ ∈ R, this yields

exp(L_s) = exp(L_{2πk}) exp(L_s) = exp(L_{2πk} + L_s),  (2.6)

and shows the potential non-uniqueness of operator exponents stemming from purely operator (matrix) properties. Note also that the situation here is quite different from the 1-dimensional case: in the latter, the same is possible but only with complex exponents, whereas here the matrices L_{2πk} have purely real entries. From the perspective of distributional properties, we can illustrate several ideas through the following simple example. The OFBMs in this example will appear again in Section 4 below.
Thus, a single parameter OFBM has multiple exponents. From another angle, for a given c > 0 and L ∈ so(n), we have L log(c) ∈ so(n) and hence exp(L log(c)) = c^L ∈ O(n) = G_{B_H}. Then, since c^L ∈ G_{B_H},

{B_H(ct)}_{t∈R} and {c^{hI+L} B_H(t)}_{t∈R} have the same finite-dimensional distributions,  (2.7)

which also shows that the exponents are not unique. For later use, we also note that an equivalent way to define a single parameter OFBM is to say that it has the spectral representation (1.3) with H = hI and A = C I, where C is an appropriate normalizing constant and B(dx) is as in (1.3).
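As a quick numerical companion to the discussion above (purely illustrative), the following sketch verifies exp(L_{2πk}) = I, the identity (2.6), and the fact that c^L = exp(L log(c)) is orthogonal for skew-symmetric L; the form of L_s is the one reconstructed in (2.5).

```python
# Minimal numerical check of the non-uniqueness of matrix exponents;
# requires numpy and scipy.
import numpy as np
from scipy.linalg import expm

def L(s):
    """Skew-symmetric generator L_s with eigenvalues +/- i s."""
    return np.array([[0.0, -s], [s, 0.0]])

I2 = np.eye(2)

# exp(L_{2*pi*k}) = I, so matrix exponents are not unique:
assert np.allclose(expm(L(2 * np.pi)), I2, atol=1e-12)

# L_s and L_{s'} commute, hence exp(L_s + L_{2*pi}) = exp(L_s), as in (2.6):
s = 0.7
assert np.allclose(expm(L(s) + L(2 * np.pi)), expm(L(s)), atol=1e-10)

# For any L in so(n) and c > 0, c^L := exp(L * log(c)) is orthogonal:
c = 3.0
cL = expm(L(s) * np.log(c))
assert np.allclose(cL @ cL.T, I2, atol=1e-12)
print("non-uniqueness of matrix exponents verified numerically")
```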
Basics of matrix commutativity
We now recap some key facts and results about matrix commutativity that are repeatedly used in the paper. In short, two matrices A, B ∈ M(n, C) commute if and only if they share a common basis of generalized eigenvectors (see Lax (2007), p. 74). This means that there exists a matrix P ∈ GL(n, C) such that we can write A = P J_A P^{−1} and B = P J_B P^{−1}, where J_A and J_B are in Jordan canonical form. In particular, if A, B are diagonalizable, then they must share a basis of eigenvectors. When, for example, A = I, this is consistent with the fact that A commutes with every B = P J_B P^{−1} ∈ M(n, C), since A = P I P^{−1} for any P ∈ GL(n, C).
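As a small numerical illustration of this criterion (not from the paper), diagonalizable matrices built from a common eigenvector basis commute, while generic pairs do not; the basis P below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))           # generic, hence invertible, basis
D1 = np.diag([1.0, 2.0, 3.0])
D2 = np.diag([-0.5, 4.0, 0.0])

A = P @ D1 @ np.linalg.inv(P)
B = P @ D2 @ np.linalg.inv(P)

# A and B share the eigenvector basis given by the columns of P, so they commute:
assert np.allclose(A @ B, B @ A, atol=1e-10)

# By contrast, two generic matrices almost surely do not commute:
C = rng.standard_normal((3, 3))
print("generic pair commutes:", np.allclose(A @ C, C @ A))
```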
A related issue is that of the characterization of the set of all matrices that commute with a given matrix A, the so-called centralizer C(A). In particular, one is often interested in constructing the latter based on the Jordan decomposition of A.
Before enunciating the main theorem on C(A), we look at an example adapted from Gantmacher (1959).
Example 2.2 Assume the matrix A ∈ M(10, C), with Jordan representation A = P J_A P^{−1}, has elementary divisors (i.e., the characteristic polynomials of the Jordan blocks) involving pairwise distinct eigenvalues λ_1, λ_2, λ_3. Then, C(A) consists of matrices of the form (2.9). The blocks on the diagonal correspond to the Jordan blocks of J_A and the empty entries are zeroes.
We now turn to the structure of the blocks for the general case, for which (2.9) serves as an illustration. We say that a matrix X_{αβ} ∈ M(p_α, p_β, C) has regular lower triangular form (e.g., as each of the blocks in (2.9) with letters a through j) if, up to padding with zero rows or columns, it coincides with a Toeplitz lower triangular matrix T_{p_α} ∈ M(p_α, C). Also denote by N_{p_α} ∈ M(p_α, C) the nilpotent single Jordan block, i.e., the matrix with ones on the first off-diagonal and zeros elsewhere. The next theorem characterizes C(A). The proof can be found in Gantmacher (1959), p. 219 (see also pp. 220-224).
Theorem 2.1 Let A ∈ M(n, C), where A = P J_A P^{−1} and J_A is in Jordan canonical form, i.e., J_A = diag(λ_1 I_{p_1} + N_{p_1}, ..., λ_u I_{p_u} + N_{p_u}) with (not necessarily distinct) eigenvalues λ_1, ..., λ_u. Then, the general solution to the equation AX = XA is given by the formula X = P X_{J_A} P^{−1}, where X_{J_A} is the general solution to the equation J_A X_{J_A} = X_{J_A} J_A. Here, X_{J_A} can be decomposed into blocks X_{αβ} ∈ M(p_α, p_β, C), α, β = 1, ..., u, where X_{αβ} has regular lower triangular form if λ_α = λ_β, and X_{αβ} = 0 otherwise.

Corollary 2.1 Let A ∈ S(n, R) with spectral decomposition A = O diag(η_1, ..., η_n) O*, where O ∈ O(n). Assume η_{j_1} = ... = η_{j_k} (possibly k = 1) for some subset of eigenvalues of A, and η_{j_1} ≠ η_i for any other eigenvalue η_i of A. If another matrix M ∈ M(n, R) commutes with A, then M can be represented as M = (o_{j_1}, ..., o_{j_k}, o_{i_1}, ..., o_{i_{n−k}}) diag(K_{11}, K_{22}) (o_{j_1}, ..., o_{j_k}, o_{i_1}, ..., o_{i_{n−k}})*, where K_{11} ∈ M(k, R), K_{22} ∈ M(n − k, R), o_{j_1}, ..., o_{j_k} are the column vectors in O associated with the eigenvalues η_{j_1}, ..., η_{j_k}, respectively, and o_{i_1}, ..., o_{i_{n−k}} are the column vectors in O associated with the eigenvalues η_{i_1}, ..., η_{i_{n−k}}, respectively. Consequently, span_R{o_{j_1}, ..., o_{j_k}} is an invariant subspace with respect to M.
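For a single Jordan block, Theorem 2.1 says the centralizer consists of the (regular) triangular Toeplitz matrices. The sketch below checks this numerically, under the assumption (our convention, to match the "lower triangular" wording in the text) that N_p carries ones below the diagonal.

```python
import numpy as np

p, lam = 4, 2.5
N = np.diag(np.ones(p - 1), k=-1)          # nilpotent block, ones below diagonal
J = lam * np.eye(p) + N

# A lower triangular Toeplitz matrix is a polynomial in N, so it commutes with J:
a = [1.0, -0.3, 0.8, 2.0]
T = sum(a[k] * np.linalg.matrix_power(N, k) for k in range(p))
assert np.allclose(J @ T, T @ J)

# A generic lower triangular (non-Toeplitz) matrix typically fails to commute:
X = np.tril(np.random.default_rng(1).standard_normal((p, p)))
print("generic lower triangular commutes:", np.allclose(J @ X, X @ J))
```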
In view of Theorem 2.1 (and Corollary 2.1), it is intuitively clear that, if a matrix Γ commutes with two matrices A and B which exhibit completely different invariant subspaces, then Γ can only be a multiple of the identity. This is accurately stated for the case of symmetric matrices in the next lemma, which is used in Section 4.
Lemma 2.1 Let

A, B ∈ S_= := {S ∈ S(n, R) : S has pairwise distinct eigenvalues}.  (2.11)

Assume A and B have no k-dimensional invariant subspaces in common for k = 1, ..., n − 1. If M ∈ M(n, R) commutes with both A and B, then M is a scalar matrix.
The proof of Corollary 2.1 and Lemma 2.1 can be found in Appendix A, together with some additional results on matrix commutativity and matrix representations.
Symmetry groups of OFBMs
Consider an OFBM B H with the spectral representation (1.3). In this section, we provide some structural results on the nature of the symmetry group G B H (see (1.6)). In particular, we explicitly express it as an intersection of subsets of centralizers.
For notational simplicity, denote G_{B_H} by G_H. Since OFBMs are Gaussian and two Gaussian processes with stationary increments have the same law when (and only when) their spectral densities are equal a.e., we obtain that

G_H = G_{H,1} ∩ G_{H,2},  (3.1)

where G_{H,1} and G_{H,2} consist of those matrices C that preserve, respectively, the real and the imaginary parts of the spectral density of B_H, namely

C x^{−D} ℜ(AA*) x^{−D*} C* = x^{−D} ℜ(AA*) x^{−D*}, x > 0,  (3.2)

C x^{−D} ℑ(AA*) x^{−D*} C* = x^{−D} ℑ(AA*) x^{−D*}, x > 0.  (3.3)

Consider first the set G_{H,1}. Using the decomposition (2.2) and working under the assumption (2.4), we have that the condition (3.2) is equivalent to

(W^{−1} C W) Π_x (W^{−1} C W)* = Π_x, x > 0.  (3.4)

Taking x = 1 and using the fact that S_R is orthogonal, C ∈ G_{H,1} necessarily has the form

C = W O W^{−1}  (3.5)

with O ∈ O(n) (see also Remark 3.1 below). Substituting (3.5) back into (3.4), we can now express G_{H,1} as

G_{H,1} = W { ∩_{x>0} G(Π_x) } W^{−1},  (3.6)

where we use the definition (1.10) of G(Π_x), and

Π_x := x^{−M} x^{−M*}, M := W^{−1} D W.  (3.7)

Remark 3.1 A simpler way to write (3.5) and (3.6) would be to replace W = S_R Λ_R S_R* by S_R Λ_R. Note that, with our choice, W is positive definite. The relation (3.6) then takes the form (1.8).
The relation (3.6) describes the first set G_{H,1} in the intersection (3.1). Instead of describing the second set G_{H,2} separately, it is more convenient to think of the latter as imposing additional conditions on the elements of G_{H,1}. In this regard, observe first that, for any y > 0, a matrix C ∈ G_{H,1} satisfies the relation (3.3) for all x > 0 if and only if it satisfies it at the single point x = y ((3.8)-(3.10)). Using this reduction, C ∈ G_{H,1} satisfies the relation defining the set G_{H,2} if and only if C ∈ G_{H,1} satisfies the same relation with x = 1. Considering the form (3.5) of C ∈ G_{H,1}, this imposes additional conditions on the orthogonal matrices O. Substituting (3.5) into the relation (3.3) with x = 1, we obtain that O must satisfy

O Π_I O* = Π_I,  (3.11)

where

Π_I := W^{−1} ℑ(AA*) W^{−1}.  (3.12)

By the expressions (3.1), (3.6) and the discussion above, we arrive at the following general result on the structure of symmetry groups of OFBMs, and in particular, on the form of the conjugacy matrix W.
Theorem 3.1 Consider an OFBM given by the spectral representation (1.3), and suppose that the matrix A satisfies the assumption (2.4). Then, its symmetry group G_H can be expressed as

G_H = W { ( ∩_{x>0} G(Π_x) ) ∩ G(Π_I) } W^{−1},  (3.13)

where W is defined in (2.3), and Π_x and Π_I are given in (3.7) and (3.12), respectively.
The intersection over uncountably many x > 0 in (3.13) can be replaced by a countable intersection in a standard way. We have O ∈ ∩_{x>0} G(Π_x) if and only if

O Π^{(m)} O* = Π^{(m)}, m ∈ N,  (3.14)

where Π^{(m)} denotes the m-th coefficient in the power series expansion of Π_x in log(x) around x = 1 (see (3.16)). Equivalently, Theorem 3.1 can now be reformulated as follows.
Theorem 3.2 Consider an OFBM given by the spectral representation (1.3), and suppose that the matrix A satisfies the assumption (2.4). Then, its symmetry group G_H can be expressed as

G_H = W { ( ∩_{m∈N} G(Π^{(m)}) ) ∩ G(Π_I) } W^{−1},  (3.17)

where W is defined in (2.3), and Π^{(m)} and Π_I are given in (3.16) and (3.12), respectively.
Remark 3.2 Note that the matrix Π_x in (3.7) is positive definite. On the other hand, the matrix Π^{(m)} in (3.16) is symmetric, because so are the terms appearing in its expansion, but it is not positive definite (not even for m = 1). Note also that Π_I is skew-symmetric, hence normal and diagonalizable.
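The following sketch (illustrative; the matrix M is an arbitrary choice rather than anything from the paper) computes Π_x = x^{−M} x^{−M*} numerically and tests membership in G(Π_x), contrasting the maximal-type situation M = dI + L with a generic M.

```python
import numpy as np
from scipy.linalg import expm

def Pi(x, M):
    xM = expm(-np.log(x) * M)              # x^{-M}
    return xM @ xM.T                        # x^{-M} x^{-M*} for real M

def in_G(O, P, tol=1e-9):
    return np.allclose(O @ P @ O.T, P, atol=tol)

# M = d*I + L with L skew-symmetric gives Pi_x = x^{-2d} I, so every
# orthogonal O lies in G(Pi_x) for all x (the maximal-type situation):
d = 0.3
L = np.array([[0.0, -1.0, 0.5], [1.0, 0.0, 0.2], [-0.5, -0.2, 0.0]])
M = d * np.eye(3) + L
O = np.linalg.qr(np.random.default_rng(2).standard_normal((3, 3)))[0]
print(all(in_G(O, Pi(x, M)) for x in (0.5, 1.7, 4.0)))   # True

# A generic M with distinct symmetric-part eigenvalues breaks this:
M2 = np.diag([0.2, 0.35, 0.45]) + L
print(all(in_G(O, Pi(x, M2)) for x in (0.5, 1.7, 4.0)))  # False, generically
```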
On maximal and minimal symmetry groups
An operator self-similar process X is said to be of maximal type, or elliptically symmetric, if its symmetry group G X is conjugate to O(n). At the other extreme, a zero mean (Gaussian) o.s.s. process is said to be of minimal type if its symmetry group is {I, −I}. We shall examine here these symmetry structures in the case of OFBMs. First, we characterize maximal symmetry in terms of the spectral parametrization of OFBMs. Second, we analyze minimal symmetry OFBMs through a topological lens.
OFBMs of maximal type
The following theorem is the main result of this subsection. Recall the definition of single parameter OFBMs in Example 2.1.
Theorem 4.1 Consider an OFBM given by the spectral representation (1.3), and suppose that the matrix A satisfies the assumption (2.4). If an OFBM is of maximal type, then it is a single parameter OFBM up to a conjugacy by a positive definite matrix. Moreover, this happens if and only if

Π_x = x^{−2d} I, x > 0,  (4.1)

for some real d.
Remark 4.1 Conversely, an OFBM which is a single parameter OFBM (up to a positive definite conjugacy) is of maximal type (see Example 2.1). We also point out that we have a proof of the first claim in Theorem 4.1 which does not make use of spectral representations and dispenses with the assumption (2.4). In the proof of Theorem 4.1 we use spectral representations in order to illustrate how the main results of Section 3 can be used.

Proof of Theorem 4.1: If B_H is of maximal type, then every Π_x commutes with all of O(n) and is therefore a scalar matrix, say

Π_x = λ_x I, x > 0.  (4.2)

Note that

λ_{x_1 x_2} = λ_{x_1} λ_{x_2},  (4.3)

which implies that, for any x_1, x_2 > 0, the function log(λ_{exp(·)}) is additive over R, and it is measurable (since it is continuous). As a consequence, by Theorem 1.1.8 in Bingham et al. (1987), p. 5, there exists a real d such that log(λ_{exp(·)}) = −2d(·), i.e.,

λ_x = x^{−2d}, x > 0.  (4.5)

Relations (4.3) and (4.5) imply that the covariance structure of the OFBM coincides with that of a single parameter OFBM conjugated by W. In view of the relations (2.2) and (2.7), this shows that B_H is a single parameter OFBM up to a conjugacy. Finally, note from the above that Π_x = x^{−2d} I, which is (4.1). □

Remark 4.2 Theorem 6 in Hudson and Mason (1982) shows that every maximal symmetry o.s.s. process has an exponent of the form hI, h ∈ R. For the case of OFBMs, the proof of Theorem 4.1 retrieves this result (see expression (4.5)). Moreover, it is clear that, for a maximal symmetry OFBM B_H (or, as a matter of fact, for any maximal symmetry o.s.s. process) and for any H ∈ E(B_H), W^{−1}HW is normal, since W^{−1}(H − hI)W ∈ so(n) (see also Section 5 for further results on the structure of exponents for dimensions n = 2 and n = 3). For the discussion of minimal types below, the relevant relation is

∩_{x>0} G(Π_x) = {I, −I}.  (4.6)
OFBMs of minimal type: the topologically general case
We shall focus here on the relation (4.6) with the following related goals in mind. The first goal is to provide (practical) conditions for (4.6) to hold and, hence, for an OFBM to be of minimal type. This is a non-trivial problem. The structure of G(Π_x) depends on both the eigenvalues and the invariant subspaces of Π_x, which are arbitrary in principle. Moreover, their explicit calculation becomes increasingly difficult with dimension. To shed light on (4.6), we take up an idea from Lie group theory: a lot of information about M in the expression Π_x = x^{−M} x^{−M*} (see (3.7)) is available in the vicinity of the identity in the Lie group, i.e., as x → 1. The general approach we take is to study the behavior of the logarithm of Π_x through the Baker-Campbell-Hausdorff formula, i.e., by looking at the associated Lie algebra. The characterization of the behavior of the eigenvectors of Π_x will then be retrieved by turning back to the Lie group through the exponential map.
Initially, our conditions for the relation (4.6) to hold are in terms of the matrix M, and not directly in terms of H and A. Our second goal in this section is to show that these conditions on M yield "most" OFBMs in terms of the parametrization M, and then relate them back to H and A. The term "most" is in the topological sense, i.e., except on a meager set. This result should not be surprising: if ∩_{x>0} G(Π_x) has non-trivial structure, then this imposes extra conditions on M (or D, W) as in Section 4.1. Though not surprising, formalizing this fact is not straightforward, as shown here. This second goal leads to the main result of this section, which, for the sake of clarity, we now briefly describe. In analogy with the assumption (1.2), consider the set

D := {D ∈ M(n, R) : the real parts of d_1, ..., d_n lie in (−1/2, 1/2)},  (4.7)

where d_1, ..., d_n denote the characteristic roots of D. Theorem 4.2 below states the existence of a set M ⊆ M(n, R) such that, for all D and positive definite W such that W^{−1}DW ∈ M ∩ D, the OFBM with spectral parametrization D and ℜ(AA*) := W² has minimal symmetry. Moreover, M ∩ D is an open set (of parameters), and it is dense in D. Therefore, M^c ∩ D is a meager set. Conversely, every M ∈ M ∩ D gives a minimal symmetry OFBM through an appropriate spectral parametrization. In order to provide easy access to the mathematical content of this section, Remark 4.6 below contains a short heuristic proof of a weaker version of this claim, i.e., that OFBMs have identifiable (unique) exponents for every parametrization except on a meager set. The rest of this section is dedicated to developing these ideas, as well as the framework behind them. Hereinafter, unless otherwise stated, we impose no restrictions on the eigenvalues of M, i.e., the expression Π_x = x^{−M} x^{−M*} is taken for any M ∈ M(n, R). Consider the decomposition of the latter space into the direct sum

M(n, R) = S(n, R) ⊕ so(n), M = S + L,  (4.8)

where S = (M + M*)/2, L = (M − M*)/2 are, respectively, the symmetric and skew-symmetric parts of M. The next proposition shows that for an appropriately chosen M, the centralizer of the family Π_x is minimal. In the proof, the symbol [·, ·] denotes the commutator.

Proposition 4.1 Let M = S + L with S ∈ S_= and L ∈ so(n), and assume that S and L have no k-dimensional real invariant subspaces in common for k = 1, ..., n − 1. Then ∩_{x>0} G(Π_x) = {I, −I}.

Since the point x = 1 is a singularity in the sense that all the information about M from Π_x = x^{−M} x^{−M*} is lost at it, the idea is to analyze the behavior of Π_x for x in a close vicinity of 1. The proof of Proposition 4.1 is based on the idea that, for a matrix parameter M = S + L, S ∈ S_= ⊆ S(n, R) and L ∈ so(n), the existence of non-trivial solutions for the matrix equations OΠ_x = Π_x O, x > 0, implies the coincidence of some invariant subspace of S and L, which, as we will see afterwards, is a very special situation in the topological sense.
Since the mapping M → exp(M ) is a C ∞ homeomorphism of some neighborhood of 0 in the Lie algebra of GL(n, R) onto some neighborhood U of the identity I in GL(n, R), then its inverse function Log is well-defined on U . Therefore, by the Baker-Campbell-Hausdorff formula, for small enough log(x) we have (see Hausner and Schwartz (1968), p. 63 and pp. 68-69).
We would like to show that there exists δ > 0 such that the symmetric matrices

(−1/log(x)) Log Π_x, 0 < |log(x)| < δ,  (4.10)

do not share k-dimensional real invariant subspaces with the symmetric matrix M + M*, k = 1, ..., n − 1. If no such δ exists, then there is a sequence {x_i}, x_i → 1, such that each associated expression in (4.10) shares a k(i)-dimensional invariant subspace with M + M*. Since the number of possible real invariant subspaces of M + M* ∈ S_=(n, R) is finite (by Corollary C.2, they are generated by real eigenvectors of M + M*), by passing to a subsequence if necessary, we can assume that each shared subspace is the same. Write the spectral decomposition M + M* = O Υ O*, where Υ is diagonal, and assume without loss of generality that the invariant subspace in question is span_R{o_1, ..., o_k} (i.e., the span of the first k columns of O ∈ O(n)). This implies that we can write

(−1/log(x_i)) Log Π_{x_i} = O J_i O*,  (4.11)

where by Corollary C.1 J_i is block-diagonal of the form J_i = diag(J_i^{11}, J_i^{22}). Therefore, the right-hand side of (4.11) must converge as x_i → 1. Denote the limit by O Ĵ O*, where the entries of Ĵ below the main diagonal are equal to the corresponding ones above the main diagonal. From expressions (4.13) and (4.12), we conclude that the upper right (non-square) block of the corresponding representation is zero. Therefore, since the λ's are pairwise different, span_R{o_1, ..., o_k} is also an invariant subspace of L (note that L_{11} and L_{22} may contain zeroes off the main diagonal, but this is inconsequential), which is a contradiction.

For the remaining claim, since M + M* ∈ S_=(n, R) and the number of possible subspaces of the form span_R{v_{j_1}, ..., v_{j_k}} is finite, we can assume that the shared invariant subspace is the same for all i. For notational simplicity, write it as span_R{v_1, ..., v_k}; complete this basis with orthonormal vectors v_{k+1}, ..., v_n and let P = (v_1, ..., v_k, v_{k+1}, ..., v_n) ∈ O(n). Then, by Corollary C.1, the corresponding matrices take a block-diagonal form with respect to P, and the analogous convergence statement follows. Bearing in mind Proposition 4.1, we would like to construct a set of matrices M based on which we can apply the proposition, and whose topology we can characterize. We now take a closer look at an appropriate set of skew-symmetric matrices.
Definition 4.1 Given an orthonormal basis o_1, ..., o_n of R^n, let L_invar(o_1, ..., o_n) denote the set of matrices L ∈ so(n) for which some subspace span_R{o_{j_1}, ..., o_{j_k}}, 1 ≤ k ≤ n − 1, is invariant with respect to L.  (4.14)

Example 4.1 For instance, a block-diagonal matrix L = diag(L_{11}, L_{22}) with L_{11} ∈ so(3) and L_{22} ∈ so(2) satisfies L ∈ L_invar(e_1, e_2, e_3, e_4, e_5), since the real subspace span_R{e_1, e_2, e_3} is invariant with respect to L (the same being true for span_R{e_4, e_5}).
Remark 4.3
The representations of the sets L invar may use different arguments (vectors). For instance, L invar (v 1 , v 2 ) = {0} for every orthonormal pair v 1 , v 2 ∈ R 2 . However, this is inconsequential for the developments in this section.
Proposition 4.2 below establishes the topological properties of the class of "well-behaved" exponents M, i.e., those that will eventually be associated with minimal symmetry OFBMs. Its proof is based on the next two lemmas.

Lemma 4.1 (i) The set S_= is open and dense in S(n, R). (ii) For any fixed real orthonormal basis o_1, ..., o_n, the set (L_invar(o_1, ..., o_n))^c is open and dense in so(n).

Proof: (i) Openness stems from the fact that, for {S_k} ⊆ S(n, R) such that S_k → S_0 ∈ S_=, by Lemma B.1, the eigenvalues of S_k converge to those of S_0. Thus, for large enough k, the latter are pairwise distinct. For denseness, take S_0 ∈ (S_=)^c. One can obtain a sequence {S_k} ⊆ S_= such that S_k → S_0 simply by appropriately perturbing the eigenvalues of S_0, for a fixed conjugacy O ∈ O(n) of eigenvectors of S_0.
(ii) By contradiction, fix a real orthonormal basis o_1, ..., o_n and assume that (L_invar)^c := (L_invar(o_1, ..., o_n))^c is not open. Then there exists L ∈ (L_invar)^c such that, for some {L_i}_{i∈N} ⊆ L_invar, L_i → L. Since there are finitely many k-tuples (o_{j_1}, ..., o_{j_k}), k = 1, ..., n − 1, we can extract a (convergent) subsequence {L_{i′}} for which all L_{i′}'s share one invariant subspace span_R{o_{j_1}, ..., o_{j_k}} (i.e., k is not a function of i). This means that we can form an orthogonal matrix O with o_{j_1}, ..., o_{j_k} as its first k columns and, by Corollary C.1, write L_i = O diag(L^i_{11}, L^i_{22}) O*, where L^i_{11} ∈ so(k), L^i_{22} ∈ so(n − k). Since lim_{i→∞} L^i_{11} and lim_{i→∞} L^i_{22} must exist, L ∈ L_invar (contradiction).
As for denseness, once again fix a real orthonormal basis o_1, ..., o_n. Take any L ∈ L_invar := L_invar(o_1, ..., o_n) and write it in a block-diagonal form as L = O diag(L_{11}, ..., L_{jj}) O*, where L_{11}, ..., L_{jj} are skew-symmetric matrices. Now form the sequence of matrices {L_i} by replacing all the zero entries above the main diagonal of L with 1/i, and correspondingly, the zero entries below the main diagonal with −1/i (this may include entries in the blocks L_{11}, ..., L_{jj} themselves). Then, by Corollary C.1, we must have L_i ∈ L_invar^c, and L_i → L. □

We now define a correspondence (set-valued function) that maps the set S_= into sets of skew-symmetric matrices in the classes (4.14), namely

l(S) := (L_invar(o_1, ..., o_n))^c, S ∈ S_=,  (4.15)

where o_1, ..., o_n is a basis of real orthonormal eigenvectors of S. In the next lemma, we show that the graph of the correspondence (4.15) is open and dense in S(n, R) × so(n). The topology under consideration is that generated by the open rectangles {(S, L) : ‖S − S_0‖ < ε, ‖L − L_0‖ < ε}, where ‖·‖ is the spectral norm.

Lemma 4.2 The set Graph(l) := {(S, L) : S ∈ S_=, L ∈ l(S)} is open and dense in S(n, R) × so(n).

Proof: Openness is a consequence of the fact that, if S_0 ∈ S_=, then, as S_i → S_0, the eigenvalues of S_i converge to those of S_0 (in the sense of Lemma B.1). Indeed, assume by contradiction that there exists (S_0, L_0) ∈ Graph(l) such that, for some sequence (S_i, L_i) ∉ Graph(l), (S_i, L_i) → (S_0, L_0). Note that there cannot be a subsequence {S_{i′}} ⊆ S_=^c such that S_{i′} → S_0 (since this contradicts the openness of S_= established in Lemma 4.1). Thus, we can assume that {S_i}_{i∈N} ⊆ S_=. Consequently, we must have L_i ∈ L_invar(o^i_1, ..., o^i_n), where o^i_1, ..., o^i_n is a basis of real eigenvectors of S_i. Since S_0 ∈ S_=, by Lemma B.2 we can assume that o^i_1 → o_1, ..., o^i_n → o_n, where o_1, ..., o_n is a basis of real eigenvectors of S_0. Therefore, we can write L_i = O_i K_i O_i*, where, by Definition 4.1 and Corollary C.1, possibly after a permutation of the vectors o^i_1, ..., o^i_n, the matrix K_i ∈ so(n) can be made block-diagonal (see also Example 4.1). Define k* = min{k = 1, ..., n − 1 : infinitely many L_i's have a k-dimensional real invariant subspace}, i.e., the minimal non-trivial dimension for invariant subspaces over infinitely many terms of the sequence {L_i}. Now for the associated sequence of vectors o^i_1, ..., o^i_n, for each i choose one subset of indices j_1(i), ..., j_{k*}(i) ⊆ {1, ..., n} such that o^i_{j_1(i)}, ..., o^i_{j_{k*}(i)} generates a real invariant subspace of L_i (there may be more than one choice, but this is inconsequential). Since there are at most (n choose k*) such choices, by passing to a subsequence if necessary, we can fix a subset of indices j_1, ..., j_{k*} such that o^i_{j_1}, ..., o^i_{j_{k*}} generates a real invariant subspace of L_i for every i in this (sub)sequence. Since we can change at will the order of columns in the conjugacy matrix, without loss of generality we can assume that j_1 = 1, ..., j_{k*} = k*. Therefore, L_i = O_i diag(L^i_{11}, L^i_{22}) O_i*, where L^i_{11} ∈ so(k*), L^i_{22} ∈ so(n − k*). Since O_i = (o^i_1, ..., o^i_n) → (o_1, ..., o_n) and L_i → L_0, the limits lim_{i→∞} L^i_{11} and lim_{i→∞} L^i_{22} exist, and thus L_0 ∈ L_invar(o_1, ..., o_n). Therefore, (S_0, L_0) ∉ Graph(l) (contradiction).
We now show denseness. Assume (S_0, L_0) ∉ Graph(l). We will break up the argument into cases. Let o_1, ..., o_n be a basis of real orthonormal eigenvectors of S_0.
• S_0 ∉ S_=, L_0 ∈ L_invar(o_1, ..., o_n): As in the previous case, generate a sequence S_i = O D_i O* by appropriately perturbing the repeated eigenvalues of S_0 so that S_i ∈ S_= and S_i → S_0. Without loss of generality, assume that the subset of vectors o_1, ..., o_k generates a real invariant subspace with respect to L_0. Now apply the same argument as in the proof of Lemma 4.1 and obtain a sequence (S_i, L_i) ∈ Graph(l) (since S_i and S_0 share the eigenvector basis o_1, ..., o_n), with (S_i, L_i) → (S_0, L_0).

In order to make the claim about the general minimality of the symmetry groups of OFBMs, we need to restrict the parameter space, as in (1.2). For this purpose, we consider the set D in (4.7). The following is the main result of this section. It shows that, except possibly when the parametrization is taken on a meager set, OFBMs are of minimal symmetry. The converse is an immediate consequence of Proposition 4.2. □

Remark 4.6 A simple heuristic argument may shed light on a weaker version of the claim of Theorem 4.2, i.e., with identifiability (uniqueness) of the matrix exponent H in place of the minimality of the symmetry group. Consider a matrix parameter S + L = M = W^{−1}DW such that S ∈ S_= and L ∈ so(n). For x ≠ 1, but close to 1, the Baker-Campbell-Hausdorff formula gives

Π_x ≈ exp(−2 log(x) S) = P_x diag(x^{−2λ_1}, ..., x^{−2λ_n}) P_x*,

where P_x ∈ O(n) is a matrix of eigenvectors of Π_x and λ_1, ..., λ_n are the (pairwise distinct) eigenvalues of S. In particular, G(Π_x) is finite and thus the matrix exponent of the associated OFBM is identifiable. Moreover, the set S_= is open and dense in S(n, R), thus implying that S_= ⊕ so(n) is open and dense in the subset of M(n, R) whose eigenvalues have real parts between −1/2 and 1/2.

5 Classification in dimensions n = 2 and n = 3

Theorems 3.1 and 3.2 describe the general structure of symmetry groups of OFBMs. The cases of maximal and minimal symmetry groups were studied in Section 4. In this section, we are interested in identifying all the possible "intermediate" symmetry groups. We shall describe their structure in dimensions n = 2 and n = 3, and make some comments about higher dimensions.
Dimension n = 2
When n = 2, the contribution of the term G(Π_I) in Theorems 3.1 and 3.2 can be easily described, as the next two results show.

Lemma 5.1 Let n = 2 and Π_I ≠ 0. Then G(Π_I) = SO(2).

Proof: Since Π_I is skew-symmetric, we have Π_I = λ [0, −1; 1, 0] for some λ ≠ 0. Thus, Π_I/λ is a rotation matrix, and thus G(Π_I) = SO(2). □

Theorems 3.1 and 3.2 can now be reformulated as follows.
Corollary 5.1 For n = 2, under the assumptions and notation of Theorems 3.1 and 3.2, if Π_I ≠ 0, then

G_H = W { ( ∩_{x>0} G(Π_x) ) ∩ SO(2) } W^{−1}  (5.1)
    = W { ( ∩_{m∈N} G(Π^{(m)}) ) ∩ SO(2) } W^{−1}.  (5.2)

Next, we study the possible structures of the groups G(Π) when Π is symmetric (and hence potentially positive definite, as the matrix Π_x in (5.1)). Let π_1, π_2 be the two real eigenvalues of Π. Two cases need to be considered:

Case 2.1: π_1 = π_2, Case 2.2: π_1 ≠ π_2.  (5.3)

In Case 2.1, Π = π_1 I and hence G(Π) = O(2). In Case 2.2, we can write Π = S diag(π_1, π_2) S*, where the columns of the orthogonal matrix S = (p_1 p_2) consist of the orthonormal eigenvectors p_1, p_2 of Π. By Theorem 2.1, B ∈ O(2) commutes with such Π if and only if B = SGS*, where G is a diagonal matrix such that G² = I (G² = I is a consequence of the fact that B ∈ O(2)). Combining these observations leads to the following classification.

Theorem 5.1 For n = 2, under the assumptions of Theorems 3.1 and 3.2, the symmetry group G_H is conjugate by a positive definite matrix to a group of exactly one of the following types: (2.a) trivial type: {I, −I}; (2.b) reflection type: {I, −I, Σ, −Σ} for a reflection Σ = S diag(1, −1) S*; (2.c) rotational type: SO(2); (2.d) maximal type: O(2).
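A two-line numerical check of Lemma 5.1 above (purely illustrative): any nonzero 2 × 2 skew-symmetric Π_I is λ times the rotation by π/2, so it commutes with every rotation but is conjugated to −Π_I by a reflection.

```python
import numpy as np

lam = 1.3
Pi_I = np.array([[0.0, -lam], [lam, 0.0]])   # generic 2x2 skew-symmetric

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# Pi_I / lam is the rotation by pi/2; SO(2) is abelian, so R Pi_I R* = Pi_I:
for t in (0.3, 1.2, 2.9):
    R = rot(t)
    assert np.allclose(R @ Pi_I @ R.T, Pi_I)

# A reflection, by contrast, conjugates Pi_I to -Pi_I:
F = np.diag([1.0, -1.0])
print(np.allclose(F @ Pi_I @ F.T, -Pi_I))    # True
```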
All the types of subgroups described in Theorem 5.1 are non-empty, as we show next. Since OFBMs of maximal and minimal types were studied in general dimension n in Section 4, we now provide examples of OFBMs of only the two remaining types for dimension n = 2.
Example 5.2 (Trivial type) Consider an OFBM with parameters D = diag(d_1, d_2) and A such that ℑ(AA*) ≠ 0, where d_1 ≠ d_2 are real. Then, Π_x = diag(x^{−2d_1}, x^{−2d_2}), implying that, for x ≠ 1, G(Π_x) = {±I, ±diag(1, −1)}; intersecting with G(Π_I) = SO(2) yields the trivial symmetry group {I, −I}.

The exponent sets relate to the symmetry groups through

E(B_H) = H + T(G_H),

where H is any exponent of the OFBM B_H and T(G_H) denotes the tangent space of G_H. This relation can be further refined, as the following proposition shows. For this purpose, we need to consider a so-called commuting exponent H_0 ∈ E(B_H), i.e., an exponent H_0 such that

C H_0 = H_0 C  (5.10)

for all C ∈ G_H. The existence of this useful exponent is ensured by Lemma 2 of Maejima (1998).
Proposition 5.1 Consider an OFBM given by the spectral representation (1.3), and suppose that the matrix A satisfies the assumption (2.4). If the exponent is not unique (i.e., E(B_H) is not a singleton), then the commuting exponents are of the form

H_0 = W U_2 diag(h, h̄) U_2* W^{−1},  (5.11)

where U_2 is as in (5.12) and h ∈ C. In particular,

E(B_H) = H_0 + W so(2) W^{−1}.  (5.13)

Proof: Writing a commuting exponent as H_0 = W U_2 diag(h_1, h_2) U_2* W^{−1}, since h_1, h_2 are also the eigenvalues of U_2 diag(h_1, h_2) U_2*, which must only have real entries, a simple calculation shows that h_1 = h̄_2, and thus (5.11) holds. This also yields (5.13). □

For H ∈ E(B_H), (5.13) implies that W^{−1}HW is normal.

Remark 5.1 In the case of OFBMs of rotational type, which have multiple exponents, every exponent is a commuting exponent (compare with Meerschaert and Veeh (1993), p. 721, for the case of operator stable measures).
Remark 5.2 For general proper Gaussian processes, one can define symmetry sets (groups) in the same way as for o.s.s. processes, and, in particular, show that they are also compact subgroups of GL(n, R). By applying the argument of the proof of Theorem 4.5.3 in Didier (2007), which is based on general commutativity results, instead of spectral filters, one can show that the classification provided by Theorem 5.1 actually holds for the wide class of proper bivariate Gaussian processes.
Dimension n = 3
We will make use of the partition of O(3) into classes of rotations Rot_θ(p) by an angle θ about an axis p, reflections Ref_0(p) through the plane p^⊥, and rotations composed with such reflections, Ref_θ(p), where for a vector p ∈ S², p^⊥ denotes the orthogonal plane. (5.14) From a matrix perspective, for some p ∈ S^{n−1},

Rot_θ(p) ≅ diag(R_θ, 1), Ref_0(p) ≅ diag(I_2, −1), Ref_θ(p) ≅ diag(R_θ, −1),

where R_θ ∈ SO(2) and ≅ indicates conjugacies by orthogonal matrices.
Remark 5.3
The subscript θ in Rot θ or Ref θ only indicates that the angle in question is not 0 or π. Here, θ does not refer to a specific angle. Indeed, even in the case of a fixed p, Rot θ (p) and Ref θ (p) are classes of matrices. Also, in the expression Ref 0 we use the subscript 0 to indicate that there is no rotation before reflection through the plane in question.
Proof: Recall that, under (5.20), the symmetry group G H is conjugate to ∩ x>0 G(Π x ). By Proposition 5.2, G(Π x ) can only be of the forms (5.16), (5.17) and (5.18). The proof is now split into the following cases.
Case 2: all G(Π x ) are the same and have the form (5.17). This gives type (3.d).
Case 3: there are x_1 ≠ x_2 and two different G(Π_{x_1}) and G(Π_{x_2}) such that both have the form (5.17). Let r_1, r_2, r_3 and p_1, p_2, p_3 be the corresponding orthonormal vectors in (5.17). For G(Π_{x_1}) and G(Π_{x_2}) to be different, the corresponding axes span_R{r_3} and span_R{p_3} have to be different. We break up the proof into subcases. In all of them, based on necessary conditions for commutativity, we will construct a set C of "candidate" matrices such that C ⊇ G(Π_{x_1}) ∩ G(Π_{x_2}). As we will see, in all subcases, C has one of the forms (3.a), (3.b) or (3.c). Since all the possible symmetry groups which are subgroups of (3.c) (the most encompassing of the three) are of the forms (3.a), (3.b) or (3.c), ∩_{x>0} G(Π_x) must be of one of the latter forms, which completes the proof.
Thus, the subcases and the respective sets C are as follows.
(3.i) p_3 ∈ span_R{r_1, r_3} \ (span_R{r_1} ∪ span_R{r_3}) (or with r_2 in place of r_1): then, by Theorem 2.1 (cf. the proof of Proposition 5.2, case (ii)), both span_R{r_3} and span_R{p_3} are real invariant subspaces of any M ∈ G(Π_{x_1}) ∩ G(Π_{x_2}). Consequently, since r_3 and p_3 are not orthogonal, and two real eigenvectors of orthogonal matrices associated with the different eigenvalues 1 and −1 must be orthogonal, for M we must have that span_R{r_3, p_3} = span_R{r_1, r_3} is a two-dimensional real (proper or not) subspace of the eigenspace associated with either 1 or −1. Therefore, the remaining eigenvalue of M must be real, and by Lemma A.2 there exists an associated eigenvector which is orthogonal to span_R{r_1, r_3}. In particular, r_2 is an eigenvector. Therefore, we obtain the set of candidate matrices C = {±I, ±(r_1, r_2, r_3) diag(1, −1, 1) (r_1, r_2, r_3)*}.
(3.ii) p_3 ∈ span_R{r_1, r_2}: then for M ∈ G(Π_{x_1}) ∩ G(Π_{x_2}), by the same argument as in (3.i), both p_3 ⊥ r_3 are (real) eigenvectors of M associated with real eigenvalues. Thus, the third eigenvalue of M is also real and, by Lemma A.2, (span_R{p_3, r_3})^⊥ ⊆ span_R{r_1, r_2} contains a unit norm real eigenvector of M, which we can denote by q. Thus, we can set C = {M : M = (q, p_3, r_3) diag(±1, ±1, ±1)(q, p_3, r_3)^{−1}}.

(3.iii) p_3 ∉ ∪_{i≠j} span_R{r_i, r_j}: then, by the same argument as in (3.i), span_R{r_3} and span_R{p_3} are both real invariant subspaces for the solutions. However, since they are not orthogonal, for each M ∈ G(Π_{x_1}) ∩ G(Π_{x_2}), we have that span_R{p_3, r_3} is a two-dimensional real (proper or not) subspace of the eigenspace associated with either 1 or −1. Thus, by Lemma A.2, there exists q ∈ (span_R{p_3, r_3})^⊥, ‖q‖ = 1, such that we can write C = {M : M = (q, p_3, r_3) diag(±1, ±I_2)(q, p_3, r_3)^{−1}}.
Example 5.4 (Type (3.c)) Consider the OFBM with spectral representation parameters A = diag(a_1, a_2, a_3) and D = diag(d_1, d_1, d_3), where d_1 ≠ d_3. Then, AA* = diag(|a_1|², |a_2|², |a_3|²) = ℜ(AA*), ℑ(AA*) = 0 and Π_x = diag(x^{−2d_1}, x^{−2d_1}, x^{−2d_3}). This yields a symmetry group of type (3.c). (5.24)

We now extend Theorem 5.2 to the general case of OFBMs which are not necessarily time reversible, i.e., we drop the assumption (5.20). From the perspective of the structural result provided by Theorem 3.1, the lack of time reversibility manifests itself as an additional constraint which may reduce the symmetry group, and even generate a new type, as seen in the next theorem.

Theorem 5.3 Consider an OFBM given by the spectral representation (1.3), and suppose that the matrix A satisfies the assumption (2.4). Then, its symmetry group G_H is conjugate by a positive definite matrix W to the ones described in Theorem 5.2, plus the following:

Proof: If Π_I = 0, then G(Π_I) = O(n). So, assume Π_I ≠ 0. By the same argument as in Theorem 5.2, Case 3, intersecting G(Π_I) with any of the subgroups (3.a), (3.b) and (3.c) implies that, eventually, the resulting symmetry group must be of one of the forms (3.a), (3.b) or (3.c). Therefore, we may only look into the intersection of G(Π_I) with subgroups of the form (3.d). Assume that the latter are expressed with respect to an orthonormal basis r_1, r_2, r_3. Since Π_I ∈ so(3), it has a one-dimensional kernel (its axis), and the intersection in question is determined by the relative position of this axis with respect to r_1, r_2, r_3. □

Theorems 5.2 and 5.3 stand in contrast with Theorem 5.1 in that they show the much greater wealth of possible symmetry groups in dimension 3 as compared to dimension 2. In a certain sense, this enhances the claim of Theorem 4.2 in that, notwithstanding the increasing complexity of the possible symmetry structures as dimension increases, minimal type symmetry groups remain the topologically general case for any dimension.
We now provide the tangent spaces and exponent sets for each symmetry group with nontrivial tangent space. The proof is along the lines of that for Proposition 5.1.
Proposition 5.3
Under the assumptions of Theorem 5.3, for the symmetry groups associated with non-trivial tangent spaces, the tangent spaces, commuting exponents H_0 and sets of exponents have the following form: (3.d) for some orthonormal p_1, p_2, p_3 and the associated matrix S := (p_1, p_2, p_3),

H_0 = W S U diag(h_1, h̄_1, h_2) U* S* W^{−1},

where U = diag(U_2, 1), U_2 is as in (5.12), and h_1 ∈ C, h_2 ∈ R; (3.f) the same as for (3.d).
The case of type (3.e) is straightforward, since T(G_H) = T(SO(3)). □

Remark 5.4 In general dimension n, there are no additional difficulties in describing the structure of groups G(Π) for a fixed symmetric matrix Π. Equivalently, one can generalize Proposition 5.2 to the context of dimension n without much effort. Nevertheless, it is cumbersome to describe the structure of intersections G(Π_1) ∩ G(Π_2), which is needed for the full characterization of symmetry groups G_H as in (3.13) and (3.17). At this point, a full description of symmetry groups in general dimension n is an open question.
Remark 5.5 The classification given in Theorem 5.1 stands in contrast with the fact that SO(n) cannot be a symmetry group for R^n-valued random vectors (Billingsley (1966)). In particular, SO(2) is not a maximal element of its equivalence class of subgroups in the sense of Meerschaert and Veeh (1995), p. 2 (not to be confused with the symmetry group of maximal type in Theorem 5.1). However, it turns out that Billingsley's result is actually almost true for OFBMs and, more generally, proper zero mean Gaussian processes. In other words, for the latter class of processes, SO(n) can only be a symmetry group when n = 2 (cf. Theorem 5.3). Indeed, without loss of generality, assume W = I. Then, it suffices to show that SO(n) ⊆ G(X) implies O(n) = G(X) when n ≥ 3. The latter implication is a consequence of Proposition A.1 in the appendix.
On integral representations of OFBMs with multiple exponents
In this section, we show that when an OFBM has multiple exponents, the matrix A in (1.3) can be chosen the same, no matter what matrix exponent is used in the parametrization. We also show that, by contrast, such invariance of the parametrization does not hold for the so-called time domain representation of OFBM. We first consider the latter point. Under (1.2) and ℜ(h) ≠ 1/2 for any eigenvalue h of H, the OFBM {B_H(t)}_{t∈R} also admits an integral representation in the time domain, i.e.,

B_H(t) = ∫_R { ((t − u)_+^{D} − (−u)_+^{D}) M_+ + ((t − u)_−^{D} − (−u)_−^{D}) M_− } B(du),

where D = H − (1/2)I, M_+, M_− ∈ M(n, R), and {B(u)}_{u∈R} is a vector-valued process consisting of independent Brownian motions and such that EB(du)B(du)* = du (Didier and Pipiras (2011)). The following example shows that, in general, the matrix parameters M_+, M_− cannot be chosen independently of the exponent.
The following result shows that, for a given OFBM, one can take the same parameter A in the spectral representation (1.3) for all exponents H of the OFBM in question.

Proof: It is enough to show (6.7) with a commuting exponent H_η := H_0 (see (5.10)) and the associated matrix A_η := A_0. For simplicity, let H = H_λ, A = A_λ. We know that H_λ − H_0 ∈ W L_0 W^{−1}, where L_0 ⊆ so(n) is the corresponding tangent space. We can thus write ∆ := H_λ − H_0 = W L W^{−1} with L ∈ so(n). The uniqueness of the spectral density of OFBM implies that, for x > 0,

x^{−D_λ} A_λ A_λ* x^{−D_λ*} = x^{−D_0} A_0 A_0* x^{−D_0*}.

Since D_0 is a commuting exponent and ∆ ∈ T(G_H), D_0 and ∆ commute. Hence, A_λ A_λ* = A_0 A_0*. The last statement of the theorem follows from (6.7). □
A Auxiliary results on matrix commutativity
We begin by proving Corollary 2.1 and Lemma 2.1. The argument draws upon Theorem 2.1.
Proof of Corollary 2.1: Without loss of generality, we can assume that j_1 = 1, ..., j_k = k. Since M commutes with A, by Theorem 2.1 we can write M = OKO*, where K = diag(K_{11}, K_{22}) and K_{11} ∈ M(k, R), K_{22} ∈ M(n − k, R). The rest of the claim immediately follows. □

Proof of Lemma 2.1: Consider the spectral decomposition A = OΥO*, where O = (o_1, ..., o_n) ∈ O(n) and Υ is diagonal. Then, since M and A commute, by Corollary 2.1 and the assumption that A ∈ S_= we have M = O diag(λ_1, ..., λ_n) O* for some λ_1, ..., λ_n ∈ R. In particular, M ∈ S(n, R). Now assume by contradiction that λ_{j_1} = ... = λ_{j_k} (possibly k = 1) for some k < n, where λ_{j_1} ≠ λ_i for any other eigenvalue λ_i of M. Without loss of generality, we can assume that j_1 = 1, ..., j_k = k. But since B commutes with M, again by Corollary 2.1 span_R{o_1, ..., o_k} is a k-dimensional invariant subspace of B, which contradicts the assumption. Therefore, λ_1 = ... = λ_n, as claimed. □

The next proposition is used in Theorem 4.1 and Remark 5.5. It shows that, for n ≥ 3, the group SO(n) is so rich that only a matrix which is a multiple of the identity can contain it in its centralizer. For n = 2, one needs to consider instead the entire orthogonal group.

Proposition A.1 (i) Let n ≥ 2. If M ∈ M(n, R) commutes with every O ∈ O(n), then M = cI for some c ∈ R. (ii) Let n ≥ 3. If M ∈ M(n, R) commutes with every O ∈ SO(n), then M = cI for some c ∈ R.
Proof: We only prove (ii).
Lemma A.2 Let O ∈ O(3). Assume O has two (real) linearly independent eigenvectors p 1 , p 2 , both of which are associated with real eigenvalues. Then O has a third (real) eigenvector p 3 such that p 3 ⊥ span R {p 1 , p 2 }.
Proof: It is clear that, under the assumptions, O has only real eigenvalues. We break up the argument into subcases.
(i) p 1 , p 2 are eigenvectors associated with the same real eigenvalue (without loss of generality, assume the latter is 1): if the third eigenvalue of O is 1, then the proof ends. Otherwise, since O ∈ O(3), eigenvectors associated with different eigenvalues are orthogonal. Thus, there exists a vector p 3 (associated with −1) which is orthogonal to span R {p 1 , p 2 }. Moreover, since the eigenvalue in question is real and O ∈ O(3), one can assume that p 3 is real.
(ii) p_1, p_2 are eigenvectors associated with different real eigenvalues (without loss of generality, assume such eigenvalues are 1 and −1, respectively): since O ∈ O(3), p_1 ⊥ p_2. Moreover, since O ∈ O(3) has three real eigenvalues, each equal to ±1, the eigenspace associated with one of the eigenvalues 1 or −1 has dimension 2. Assume without loss of generality that such eigenvalue is 1. Thus, there must exist another eigenvector p_3 associated with 1, and since O ∈ O(3), we can assume that p_3 is real. We obtain that p_3 ⊥ p_2. Also, it is clear that one can choose p_3 so that p_3 ⊥ p_1, and thus p_3 ⊥ span_R{p_1, p_2}.
C Additional results on matrix representations
The results in this section are used in Section 4. The proof of Lemma C.1 is omitted because the argument is standard.
Lemma C.1 Let M be a matrix in M(n, R) and let p_1, ..., p_k be linearly independent vectors in R^n. Assume span_R{p_1, ..., p_k} is an invariant subspace of M. For p_{k+1}, ..., p_n such that p_1, ..., p_k, p_{k+1}, ..., p_n is a basis of R^n, define the conjugacy matrix P = (p_1, ..., p_k, p_{k+1}, ..., p_n) ∈ GL(n, R). Then we obtain the representation

M = P [M_{11}, M_{12}; 0, M_{22}] P^{−1},

where M_{11} ∈ M(k, R) and M_{22} ∈ M(n − k, R).

Corollary C.1 Let S ∈ S(n, R), and let p_1, ..., p_k be orthonormal vectors such that span_R{p_1, ..., p_k} is an invariant subspace of S. Complete them to an orthonormal basis p_1, ..., p_n of R^n and set P = (p_1, ..., p_n) ∈ O(n). Then S = P diag(S_{11}, S_{22}) P*, where S_{11} ∈ S(k, R) and S_{22} ∈ S(n − k, R). An analogous conclusion holds if we replace S ∈ S(n, R) with L ∈ so(n).
Corollary C.2 Let S ∈ S(n, R). Each k-dimensional real invariant subspace of S is generated by a set of k real eigenvectors of S. Moreover, the orthogonal subspace is also an (n − k)-dimensional invariant subspace.
| 2011-01-24T15:10:45.000Z | 2011-01-24T00:00:00.000 | {
"year": 2011,
"sha1": "0b3f4ad7c86ce4563b357113bfa2b174cc897966",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1101.4563.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0b3f4ad7c86ce4563b357113bfa2b174cc897966",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
269625691 | pes2o/s2orc | v3-fos-license | Relationship between visit-to-visit blood pressure variability and depressive mood in Korean primary care patients
Background High blood pressure variability (BPV) increases the risk of cardiovascular disease and may be a better prognostic factor than blood pressure itself. Depressive mood is a common symptom among patients visiting primary care. This study aimed to investigate the association between depressive mood and high BPV among Korean primary care patients. Methods The Family Cohort Study in Primary Care (FACTS), conducted from April 2009 to November 2017, utilized a prospective cohort of Korean primary care patients, with a median follow-up period of 7.25 years. Depressive mood was assessed as a score of 21 points or more on the Korean-type Center for Epidemiologic Studies Depression scale. BP was measured at the initial visit and the first and second follow-up visits. Visit-to-visit SBP variability was analyzed using four metrics: intra-individual standard deviation, coefficient of variation, variation independent of mean, and average real variability. Logistic regression analysis was used to estimate the association of high BPV with depressive mood and other variables. Results Among 371 participants, 43 (11.6%) had depressive mood based on depression scores. Older age (odds ratio [OR]: 1.04, 95% confidence interval [CI]: 1.01–1.07) was associated with high SBP variability regardless of antihypertensive medication use. Among participants taking antihypertensive medication, those with depressive mood had nearly three times the odds of high SBP variability compared with those without (OR: 2.95, 95% CI: 1.06–8.20). Conclusions Depressive mood was associated with high visit-to-visit SBP variability in primary care patients taking antihypertensive medication, potentially indicating increased cardiovascular risk. Primary care physicians should therefore closely monitor BPV in patients with depressive symptoms and provide appropriate interventions.
Introduction
Major depression is a prevalent condition that is frequently characterized by a chronic-recurrent course [1]. It affects approximately 2-4% of the general population and 10% of primary care patients [2]. Major depressive disorder stands as a leading cause of global disease burden [1,3], with depressive symptoms being associated with major chronic and cardiovascular diseases [4][5][6].
Blood pressure variability (BPV) pertains to the fluctuations in blood pressure (BP) occurring within a specified timeframe, such as minutes, over a period of 24 h, or longer. This phenomenon is thought to result from intricate interactions involving extrinsic behavioral factors and intrinsic cardiovascular regulatory mechanisms [7]. Recent research indicates that BPV is independently associated with cardiovascular events and target organ damage [8,9]. The 24-hour ambulatory BP monitoring method is commonly used to evaluate short-term BPV [10,11], whereas long-term BPV is typically evaluated based on BP measurements obtained during periodic visits to clinics, commonly conducted monthly or yearly [11].
Previous studies have suggested that high BPV is associated with an elevated incidence of cardiovascular events [12], heightened cardiovascular risk, and increased mortality [13]. Although a definitive threshold at which BPV elevates this risk remains undetermined, studies utilizing quartiles of the standard deviation (SD) of BPV [13] and those based on the median value of the SD of BPV [12] have consistently indicated that higher values are associated with an augmented risk of cardiovascular events. Additionally, another study has revealed that heightened BPV is associated with poor outcomes in cerebrovascular diseases [14]. This particular study highlighted that elevated BPV, measured by the average real variability (ARV) of BP, serves as a predictive factor for poor short-term outcomes in patients with minor ischemic stroke.
The relationship between BPV and emotional status, particularly regarding depressive symptoms or anxiety, has been consistently observed in the literature [15][16][17][18]; however, few studies have evaluated BPV as a factor. In the context of elderly-onset depression, evidence suggests an impact on diurnal variations in BP and an association with cerebral infarction [16]. Furthermore, a study reported a significant association between late-onset depression and higher systolic BPV [17]. Despite the established correlation between depression and BPV, there is a paucity of research on the association between long-term visit-to-visit BPV and depression [15]. Therefore, this study aimed to evaluate the influence of depressive mood on long-term visit-to-visit BPV among primary care patients in Korea.
Study participants
The Family Cohort Study in Primary Care (FACTS) was established to evaluate the effects of the familial environment on the health of primary care patients. The study cohort comprised couples and included married, cohabitating, separated, and divorced individuals. Both partners of the couples were recruited among individuals aged between 40 and 75 years who sought primary care physicians for periodic health checkups or treatment of chronic diseases such as hypertension, diabetes, and dyslipidemia. Follow-up began at the first visit to the Department of Family Medicine at one of 22 university hospitals nationwide from April 2009 to June 2011. The final date of follow-up was November 2017. The median follow-up period was 7.25 years. All participants provided written informed consent, and the survey received approval from the Institutional Review Board of Asan Medical Center (2016-1183).
Demographic characteristics of study participants
Demographic characteristics were prospectively collected by interviewers or primary care physicians using questions regarding education status, monthly income, and medical history, including hypertension, diabetes, and hyperlipidemia. Educational level was categorized into three groups: < 12 years, 12 years, and > 12 years. Monthly income was evaluated by total household income using a single question and stratified into four categories: < 2.00 million Won ($1715), 2.00-3.99 million Won ($1715-3430), 4.00-5.99 million Won ($3430-5145), and ≥ 6.00 million Won ($5145).
The presence of hypertension, diabetes, or dyslipidemia was determined from the medical records of the study participants, identifying instances when the participants were reported to have any of these diseases and when they started taking antihypertensive medications, oral hypoglycemic agents, insulin, or lipid-lowering agents. Height and body weight were measured to the nearest 0.1 cm and 0.1 kg by trained interviewers. Body mass index (BMI) was calculated as weight (kg) divided by the square of height (m) and categorized into three groups: < 23.0 kg/m2, 23.0-24.9 kg/m2, and ≥ 25.0 kg/m2. BP was measured from the left and right upper arm using a mercury manometer after a 10-minute resting period in a seated position [19]. These measurements were recorded as the average BP for each visit.
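As a trivial illustration of the categorization just described (cut-offs as stated above; the function name is ours):

```python
# BMI computation and the three categories used in the study.
def bmi_category(weight_kg: float, height_m: float) -> str:
    bmi = weight_kg / height_m ** 2          # weight (kg) / height (m) squared
    if bmi < 23.0:
        return "< 23.0"
    elif bmi < 25.0:
        return "23.0-24.9"
    return ">= 25.0"

print(bmi_category(70.0, 1.72))              # BMI ~ 23.7 -> "23.0-24.9"
```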
Definition of depressive mood and high visit-to-visit BPV
Depressive mood was assessed using the Korean-type Center for Epidemiologic Studies Depression (CES-D) scale, with a score of 21 points or more indicating the presence of depressive mood [20]. BP was measured at the initial visit and the first and second follow-up visits, with follow-up intervals ranging between 6 and 24 months. Four metrics were used to assess visit-to-visit SBP variability: intra-individual SD, coefficient of variation (CV), variation independent of mean (VIM), and ARV [21]. Among them, ARV was chosen for the primary analysis due to its comprehensive representation of visit-to-visit BPV. High visit-to-visit BPV was defined according to a previous study [14], noting elevated BPV as values higher than the average ARV.
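For illustration, a minimal sketch of how the four metrics can be computed for one participant's SBP readings is given below; the formulas follow the usual conventions (the paper does not spell them out), and the VIM exponent b is a placeholder for the cohort-level fit of SD against mean.

```python
import numpy as np

def bpv_metrics(sbp, b):
    """sbp: systolic BP at each visit; b: exponent from regressing SD on mean."""
    sbp = np.asarray(sbp, dtype=float)
    mean = sbp.mean()
    sd = sbp.std(ddof=1)                       # intra-individual SD
    cv = 100.0 * sd / mean                     # coefficient of variation, %
    arv = np.mean(np.abs(np.diff(sbp)))        # average real variability
    vim = sd / mean**b                         # variation independent of mean
    return {"SD": sd, "CV": cv, "ARV": arv, "VIM": vim}

# Three visits (initial, first and second follow-up), as in the study design:
print(bpv_metrics([138, 124, 131], b=1.0))
```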
Statistical analysis
Variables were presented as numbers with percentages or means with standard deviations (SDs). To compare characteristics between participants with and without depressive mood, the chi-square test was performed for categorical variables and Student's t-test was performed for continuous variables. Additionally, the comparison of four metrics for evaluating visit-to-visit BPV of SBP included intra-individual SD, CV, VIM, and ARV. High BPV was defined when an individual's ARV value exceeded the average ARV value of all participants. Binary logistic regression analysis was performed to estimate odds ratios (ORs) and 95% confidence intervals (CIs) for associations between high BPV and each variable, including depressive mood. Considering the potential influence of hypertension and hypertensive medication on BPV, the multivariable logistic analysis was conducted adjusting for these variables. Multivariable logistic regression analysis was performed to determine associations of high BPV with age, sex, BMI, and depressive mood. All analyses were performed using STATA version 18.0 (StataCorp, College Station, TX, USA) and SPSS ver. 21.0 (IBM Co., Armonk, NY, USA). A two-tailed P-value of < 0.05 was considered statistically significant.
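The adjusted model can be sketched as follows (here in Python with statsmodels rather than the Stata/SPSS used by the authors); the data frame and variable names are illustrative stand-ins, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 371
df = pd.DataFrame({
    "high_bpv": rng.integers(0, 2, n),        # ARV above cohort mean (0/1)
    "age": rng.normal(60, 8, n),
    "female": rng.integers(0, 2, n),
    "bmi": rng.normal(24, 3, n),
    "depressive_mood": rng.integers(0, 2, n)  # CES-D >= 21 (0/1)
})

X = sm.add_constant(df[["age", "female", "bmi", "depressive_mood"]])
fit = sm.Logit(df["high_bpv"], X).fit(disp=0)
print(np.exp(fit.params))                     # odds ratios
print(np.exp(fit.conf_int()))                 # 95% confidence intervals
```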
Characteristics of the participants
A total of 1040 participants were initially enrolled; however, 88 were excluded due to a lack of initial BP measurement. Among the remaining 952 participants, 485 were lost to the first or second follow-up, 44 were excluded for not undergoing follow-up BP checks, and 52 were excluded due to missing CES-D scores or medical history (Fig. 1). Among the remaining 371 participants, 43 (11.6%) had depressive mood according to their CES-D scores. The baseline characteristics of these participants are shown in Table 1. The overall mean age was 60.08 ± 8.06 years, with no significant difference between participants with and without depressive mood (58.98 ± 7.61 vs. 60.22 ± 8.12 years, P = 0.343). A higher proportion of women than men had depressive mood (16.1% vs. 7.0%, P = 0.009), and more than half of the participants (55.8%) were taking antihypertensive medication; 19.4% were taking an oral hypoglycemic agent or insulin, and 41.2% were taking lipid-lowering agents. There were no significant differences in the histories of medications for hypertension, diabetes, and dyslipidemia between participants with and without depressive mood.
Comparison of blood pressure variability according to depressive mood
Table 2 compares the four metrics of SBP variability according to depressive mood. Considering a mean ARV of 10.89 (7.91) across the entire cohort, high visit-to-visit BPV was defined as an ARV exceeding 10.89.
Logistic regression analysis of associations of high visit-to-visit BPV with participant characteristics and depressive mood
Table 3 presents the individual ORs for factors associated with high visit-to-visit BPV. We estimated univariate ORs for age, sex, BMI, education, income, use of antihypertensive medication, and depressive mood. Factors associated with high visit-to-visit BPV included age of 70 years or older (OR: 4.43, 95% CI: 1.63-12.04, P = 0.004), an education level lower than high school (OR: 1.82, 95% CI: 1.00-3.30, P = 0.049), and use of antihypertensive medication (OR: 1.55, 95% CI: 1.02-2.35, P = 0.040).
In the multivariable analysis, the entire cohort was stratified into two groups: those prescribed antihypertensive medication and those who were not (Table 4). The influence of each factor on high visit-to-visit BPV was then evaluated. A significant association with age 70 years or older was observed (OR: 7.32, 95% CI: 2.40-21.83); in the stratified analysis, the corresponding ORs were 11.63 (95% CI: 1.73-78.23) in the group not taking antihypertensive medication and 9.80 (95% CI: 0.76-125.89) in the group taking antihypertensive medication. Furthermore, a monthly income of less than 2 million Won was associated with high BPV in the antihypertensive medication group (OR: 3.10, 95% CI: 1.09-8.82), as was depressive mood in the antihypertensive medication group (OR: 2.95, 95% CI: 1.06-8.20).
Discussion
In this study, we observed a three-fold higher OR for high visit-to-visit BPV in patients taking antihypertensive medication when they had depressive mood. Additionally, we identified older age as a factor associated with high SBP variability. These findings suggest the importance of monitoring BPV in patients visiting primary care, particularly those showing symptoms of depressive mood or those of older age, especially among individuals taking antihypertensive medication.
A previous study demonstrated that elevated visit-to-visit BPV increases the risk of cardiovascular disease and is a significant predictor of cardiovascular outcomes [22]. Higher systolic BPV was associated with a higher incidence of cardiovascular events and mortality [23]. Additionally, another study indicated that the multivariable-adjusted hazard ratios (HRs) and 95% CIs for the quartiles of the SD of systolic BPV, compared with the first quartile, were incrementally higher for quartiles 2 through 4, demonstrating a progressive increase in risk [13]. Thus, monitoring BPV is crucial for assessing the risk of cardiovascular disease among patients visiting primary care.
Several studies have reported autonomic dysfunction in individuals with depression [24][25][26], characterized by elevated plasma or urinary levels of catecholamine compared with controls [24]. Additionally, individuals with depression may exhibit heightened heart rate responses to physical or psychological stressors, even in the absence of other medical conditions [24,26]. Building upon these findings, the present study suggests that depressive mood can impact BPV, potentially due to autonomic dysfunction in patients with depressive mood. In line with our research, a recent study indicated an association between depression and diastolic BPV [27]. However, it is important to note that the individuals in that study were derived from the Alzheimer's Disease Neuroimaging Initiative database. Similar to our findings, another study demonstrated elevated systolic BPV among adolescents with major depression [28], attributing it to an overactivity of the cardiovascular sympathetic nervous system. Furthermore, our results align with the findings from a separate study indicating an association between increased systolic BPV and the prevalence of late-onset depression [17].
Our findings revealed that older age was associated with high SBP variability, aligning with prior research that has consistently reported an association between BPV and advanced age [29][30][31][32]. This association could be attributed to the impact of increased arterial stiffness in older age, leading to alterations in the arterial vessel wall and subsequently contributing to increased BPV [11]. Notably, our study population comprised primary care patients with and without hypertension. Interestingly, we observed that participants taking antihypertensive medication were more likely to exhibit SBP variability than those who were not.
The impact of antihypertensive medication on BPV can vary based on the specific class of medication [33]. A meta-analysis has suggested that calcium channel blockers may decrease long-term BPV, while angiotensin receptor blockers, ACE inhibitors, and beta-blockers could be associated with an increase in BPV [34]. In our study, we categorized patients into two groups according to the use of antihypertensive medication, without specifying the type of medication. Therefore, future investigations may be warranted to consider the effects of specific classes of antihypertensive medication on BPV. Furthermore, SBP variability has been linked to mortality [30,35,36] and cardiovascular diseases [35,37,38]. Therefore, SBP variability may serve as a more valuable indicator of variations in morbidity and mortality than DBP variability.

This study has several limitations. First, BP was solely measured during clinic visits, without incorporating at-home BP measurements or daily BPV. Second, the assessment of depressive mood was conducted only once, at study recruitment, precluding the evaluation of mood changes over time and potential associations with changes in BPV during follow-up visits. The lack of data on evolving mood status limits the ability to establish a temporal relationship between mood fluctuations and BPV. Additionally, there is a potential for selection bias, given that only 40% of initial participants attended follow-up visits. This could be influenced by factors such as strong doctor-patient relationships and high adherence among those who attended follow-up appointments. However, it is important to note that this may not affect the association between depressive mood and BPV, given that BPV may not be directly associated with patient adherence. Finally, despite adjusting for several potential confounding factors such as age, sex, BMI, and socioeconomic status, the presence of unmeasured residual confounding factors cannot be ruled out. Furthermore, the inclusion of patients with chronic diseases such as diabetes [39,40] in the study population may introduce additional factors that may have affected their BPV.

Despite these limitations, our study is meaningful because it examined BPV among primary care patients, utilizing a standardized questionnaire (CES-D scale) to assess depressive mood [20]. The findings of our study underscore the importance of closely monitoring BPV in patients with depressive mood, of older age, and those prescribed antihypertensive medication. Notably, for primary care clinics treating patients with depressive mood, vigilant monitoring of visit-to-visit SBP may be crucial for optimizing patient outcomes. Additionally, although our study helps elucidate the association between depressive mood and BPV, we could not evaluate whether the improvement of depressive mood could decrease BPV in primary care patients. Further study is warranted to explore this association in a larger number of participants with extended follow-up to offer greater insights.
Conclusion
Based on our findings, close monitoring of BPV in patients with depressive mood, particularly those taking antihypertensive medication, is crucial for optimizing treatment outcomes. As symptoms of depression are commonly encountered in clinical practice, our study highlights the need for a comprehensive approach that not only manages depression but also monitors BPV in these patient populations.
BMI: Body mass index
BP: Blood pressure
Table 1
Baseline characteristics of study participants
Table 2
Comparison of systolic blood pressure variability according to depressive mood
Table 3
Comparison of characteristics according to high systolic BPV based on ARV of systolic blood pressure
Table 4
Multivariate logistic regression analysis of factors associated with high SBP variability | 2024-05-09T05:10:40.575Z | 2024-05-07T00:00:00.000 | {
"year": 2024,
"sha1": "2bbbbe44778a67c26887089c1e12776a7ad41964",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c0c5752c222467f5294606560463c1c300e628d3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237632024 | pes2o/s2orc | v3-fos-license | Neurovascular Compression in Arterial Hypertension: Correlation of Clinical Data to 3D-Visualizations of MRI-Findings
RESEARCH ARTICLE
Neurovascular Compression in Arterial Hypertension: Correlation of Clinical Data to 3D-Visualizations of MRI-Findings
Panagiota Manava, Peter Hastreiter, Roland E. Schmieder, Susanne Jung, Rudolf Fahlbusch, Arnd Dörfler, Michael M. Lell, Michael Buchfelder and Ramin Naraghi
1 Department of Neurosurgery, Friedrich-Alexander University Erlangen-Nuremberg (FAU), Erlangen, Germany
2 Department of Nephrology and Hypertension, Friedrich-Alexander University Erlangen-Nuremberg (FAU), Erlangen, Germany
3 Division of Neuroradiology, Friedrich-Alexander University Erlangen-Nuremberg (FAU), Erlangen, Germany
4 International Neuroscience Institute, Hannover, Germany
5 Institute of Radiology and Nuclear Medicine, Paracelsus Private Medical University, Salzburg, Austria
6 Department of Neurosurgery, German Armed Forces Hospital Koblenz, Koblenz, Germany
7 Department of Radiology, Friedrich-Alexander University Erlangen-Nuremberg (FAU), Germany
INTRODUCTION
Neurovascular compression (NVC) describes a pathological contact between the Root Entry/Exit Zone (REZ) of a cranial nerve and a vessel at the brain stem, resulting in a hyperactive dysfunction. A possible etiologic association between NVC at the left ventrolateral medulla (VLM) and arterial hypertension (HTN) was noticed by JANNETTA et al. [1]. A significant reduction of arterial HTN was observed after microvascular decompression (MVD) [1-3]. A microanatomical study classified the NVC at the VLM in HTN [4,5]. Animal models have defined the role of the sympathoexcitatory neurons of the rostral VLM and the inhibitory neurons of the caudal VLM in blood pressure regulation [4]. It seems that NVC at the VLM leads to a permanent irritation and activation of the C1 neurons located at the VLM. A possible explanation for the left-sided NVC in essential HTN is the additional compression of the REZ of the ninth and tenth cranial nerves (CN IX-X). The major part of the afferent inputs from the myocardial receptors of the left ventricle and atrium to the nucleus tractus solitarii is conducted by the low-myelinated cardiac C-fibers of the left vagus nerve [4]. Several imaging studies supported the evidence of NVC [6-10], opposed by other studies that could not find a significant association [11-17]. JANNETTA's surgical results were verified in multiple independent short- and long-term studies [2,3,6]. Elevated sympathetic nerve activity was observed in hypertensive patients with NVC at the VLM in magnetic resonance imaging (MRI) [18,19].
Physical parameters characterizing body size, such as Body Mass Index (BMI), are associated with an elevated BP. HTN itself results in target organ damage like myocardial hypertrophy and vascular changes, which also result in abnormal parameters detected by physical examination and cardiovascular imaging. Elevated BP is present in approximately 80% of patients with ischemic stroke [20,21]. Furthermore, BP variability (BPV) is an independent cardiovascular risk factor in hypertensive patients [22].
NVC of the left VLM appears to be correlated with HTN, but only a few clinical [1,2,23,24] or experimental [1,25-28] studies on the cause and the treatment have been reported in the literature, despite the possibility that such a neurogenic mechanism might be implicated in a relatively large number of patients [29]. The hypothesis of neurogenic hypertension therefore remains an object of debate.
The concept of vascular compression of the VLM as a mediator of increased BP via sympathetic stimulation is intriguing and may prove relevant in various clinical scenarios associated with elevated BP and its consequences, including stroke [30]. To date, no study has correlated 3D-visualizations of MRI findings for the detection of NVC at the VLM with hemodynamic and clinical parameters of patients with HTN. In this study, we attempted to identify clinical parameters predicting the absence or presence of NVC in HTN.
The study protocol (number 2633) was approved by the local ethics committee and performed according to the Declaration of Helsinki and the guidelines for "Good Clinical Practice" (GCP). Each patient provided written informed consent prior to inclusion in this observational study. The main results with respect to the outcome, BP and sympathetic activity have been published previously [18].
Clinical Data
The study cohort consisted of 44 patients (8 women and 36 men, age range 32-70 years, mean 57.5 years) with treatment-resistant HTN, defined as office BP ≥140/90 mmHg (mean of two measurements) while treated with at least 3 antihypertensive drugs, one of which was a diuretic [31]. True treatment-resistant HTN was confirmed by 24-hour ambulatory BP measurements (average 24-hour BP ≥ 130/80 mmHg). All patients were enrolled consecutively after referral to our University outpatient service to rule out secondary causes of HTN and to discuss alternative interventional therapies with the patient.
In this prospective observational study, the following clinical parameters were obtained: age, gender, height, weight, body surface (BSA), BMI, family history of HTN, known duration of HTN (months), and duration of medical therapy. Furthermore, with the aid of questionnaires the following parameters were assessed: alcohol consumption (g/week), sporting activity (hours/week) and consumption of nicotine (pack years).
The patients were on a typical Bavarian diet without any restrictions, including salt intake. Otherwise, we would have needed a metabolic ward, which requires a completely different setting.
Blood Pressure Measurements
Office systolic and diastolic BP as well as heart rate (HR) were captured according to European Society of Hypertension/European Society of Cardiology (ESH/ESC) guideline recommendations. Additionally, we measured the casual systolic and diastolic BP (in mmHg) and the HR (heartbeats/minute) 3 minutes after standing up. Ambulatory 24-hour BP measurements provided averages of 24-hour systolic and diastolic BP, daytime and nighttime systolic and diastolic BP, 24-h ambulatory HR and 24-h ambulatory mean arterial pressure. Daytime measurements were taken every 15 minutes and nighttime measurements every 30 minutes. As a quality criterion, at least 50 measurements had to be completed successfully over the 24-hour period for a recording to be entered into the database.
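As a minimal sketch of this post-processing, the snippet below computes the 24-hour, daytime and nighttime averages and enforces the 50-reading quality criterion; the day/night window hours and the reading tuples are illustrative assumptions, not values taken from the study:

from statistics import mean

def bp_averages(readings, min_valid=50, day_start=7, day_end=22):
    # Mean systolic/diastolic BP over 24 h, daytime and nighttime,
    # from ambulatory readings given as (hour_of_day, sbp, dbp) tuples.
    # The >=50-reading quality criterion follows the text; the window
    # hours are assumptions for illustration.
    if len(readings) < min_valid:
        raise ValueError(f"only {len(readings)} readings; {min_valid} required")
    def avg(subset):
        return (mean(r[1] for r in subset), mean(r[2] for r in subset))
    day = [r for r in readings if day_start <= r[0] < day_end]
    night = [r for r in readings if not (day_start <= r[0] < day_end)]
    return {"24h": avg(readings), "day": avg(day), "night": avg(night)}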
Cardiovascular Parameters
In all patients, a two-dimensional (2-D) guided amplitude mode (A-Mode) echocardiography was conducted to assess left ventricular structure and function. Septal wall thickness, posterior wall thickness and left ventricular mass were calculated, as previously recommended [32].
For an accurate and precise noninvasive determination of left ventricular mass and volume, cardiac MRI was applied [33,34]. The prognostic significance of left ventricular function is well established. All subjects were imaged on Siemens Symphony and Sonata MRI scanners (Siemens AG, Erlangen, Germany). Cine images were acquired in 6 equally spaced short-axis locations from apex to base and 3 long-axis locations orthogonal to the short axis and orientated at 60° increments around the left ventricular central axis [34].
A 12-lead electrocardiogram (ECG) was recorded at rest in supine position, and the electrocardiographic criteria of left ventricular hypertrophy (LVH) were analyzed: the Sokolow-Lyon voltage index [35] and the Cornell voltage-duration product (Cornell) [36]; heart rate and QT time (in seconds) were also obtained. To measure central systolic pressure and central pulse pressure, the radial arterial waveform was recorded with the Sphygmocor system (Atcor-CardieX, New South Wales, Australia), and the corresponding central aortic waveform was then automatically generated through a validated transfer function. In addition, pulse wave velocity was measured as a marker of arterial stiffness.
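For illustration, a minimal sketch of the two LVH voltage criteria named above; the >=3.5 mV Sokolow-Lyon cut-off, the +6 mm adjustment for women, and the 2440 mm*ms Cornell-product threshold are the commonly cited values and are assumptions here, as the study's exact cut-offs are not given in this text:

def sokolow_lyon_mv(sv1, rv5, rv6):
    # Sokolow-Lyon voltage index in mV: SV1 + max(RV5, RV6)
    return sv1 + max(rv5, rv6)

def cornell_product(ravl_mm, sv3_mm, qrs_ms, female=False):
    # Cornell voltage-duration product in mm*ms (1 mm = 0.1 mV)
    return (ravl_mm + sv3_mm + (6 if female else 0)) * qrs_ms

# Hypothetical amplitudes and QRS duration
print(sokolow_lyon_mv(1.2, 2.0, 1.8) >= 3.5)             # 3.2 mV -> False
print(cornell_product(11, 14, 100, female=True) > 2440)  # 3100 mm*ms -> True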
With this technique, we produced 3D representations of the brain stem and cranial nerves together with the vessels at the surface of the brain stem in each individual [38]. The relationships between cranial nerves and vessels, especially at the VLM, were analyzed for the existence of NVC by two independent expert observers. The location and type of NVC [4] were determined (Fig. 1). The presence or absence of NVC in the 3D representations divided the patients into two groups, and the obtained clinical and hemodynamic parameters were compared between the two groups. In addition, an axial T2-weighted turbo spin echo sequence (T2w TSE), a coronal fluid-attenuated inversion recovery sequence (FLAIR), an axial T1-weighted turbo spin echo sequence (T1w TSE) and a diffusion-weighted sequence (DWI) of the standard MRI protocol were acquired and analyzed. The coronal FLAIR and axial T2-weighted sequences were used to determine the degree of deep and periventricular white matter lesions (WMLs). The Fazekas score was used for the classification of WMLs [40].
Statistics
Clinical variables were compared between the two groups using the Mann-Whitney U test. Levene's test was applied across groups to assess the equality of variances. A standard level of statistical significance (p < 0.05) was used, and data were reported with 95% confidence intervals. The statistical analysis was performed using SPSS (IBM SPSS Statistics, Version 25).
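As an illustration of this comparison, a minimal sketch in Python with scipy; the per-group values below are hypothetical, not study data:

from scipy import stats

# Hypothetical per-group measurements (e.g., age in years)
nvc_left_all = [57, 54, 61, 49, 58, 63, 52]
no_nvc_left = [63, 65, 62, 66, 61, 64, 60]

# Levene's test for equality of variances between the groups
lev_stat, lev_p = stats.levene(nvc_left_all, no_nvc_left)

# Mann-Whitney U test comparing the two groups
u_stat, u_p = stats.mannwhitneyu(nvc_left_all, no_nvc_left,
                                 alternative="two-sided")

print(f"Levene p = {lev_p:.3f}, Mann-Whitney p = {u_p:.3f}")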
Anatomical Results
All 44 patients underwent brain MRI, and image processing was performed for all MRI images. Twenty-nine patients (66%) had evidence of NVC at the VLM. The left side was affected in 16 patients (36%) and the right side in 7 patients (16%). We observed a bilateral NVC in 6 (14%) patients. No NVC (either left or right) was seen in 15 (34%) patients (Table 1). The total number of NVC observed was 22 on the left and 13 on the right side (Table 1). According to a previously published classification [4], we found NVC type I in 16 patients, NVC type II in 4 patients and NVC type III in 2 patients on the left VLM. On the right VLM, 10 patients had an NVC type I, 2 patients an NVC type II and 1 patient an NVC type III.
Patients were separated into two groups: cases with left NVC including bilateral NVC ("NVC left all") and all cases without NVC on the left including cases showing right NVC ("no NVC left") (Table 1). Overall, there were 22 cases in "NVC left all" and 22 cases in "no NVC left".
Women were affected in 4 of the 22 cases of left NVC and men in 18 cases. The same distribution was found in the group without left NVC, with 4 women and 18 men; there was no association between gender and the detection of left NVC (χ2 = 0.00, p = 1.0).
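This reported value can be reproduced directly from the stated counts; a minimal sketch in Python with scipy, using the 2x2 table implied by the text and no continuity correction:

from scipy.stats import chi2_contingency

# 2x2 table: rows = (NVC left all, no NVC left), columns = (women, men)
table = [[4, 18], [4, 18]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # identical rows give chi2 = 0.00, p = 1.00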
Comparison of Body Characteristics, Behavior and Lifestyle
We analyzed age, BMI, alcohol consumption, sporting activity, nicotine consumption in pack-years and the duration of hypertension. Two patients lacked information about their smoking behavior. The group variances in age were not homogeneous (p=0.018) ( Table 2).
There was a significant difference in age between the two groups (median NVC left all = 57, no NVC left = 63; p = 0.034; Table 3a and Fig. 2).
Comparison of BP, ECG and Vascular Parameters
The systolic casual BP in patients with left NVC was measured with a mean value of 158.10 (SD 19.65) mmHg, compared with 161.68 mmHg in the cases without left NVC. In the variances of the 24-h-BP measurements, statistical significances were found between the two groups, especially in the night measurements: nighttime diastolic BP (NVC left all: 76.10 mmHg, SD 13.76; no NVC left: 77.41 mmHg, SD 5.7; p < 0.001), nighttime mean arterial pressure (MAP) (NVC left all: 94.71 mmHg, SD 14.81; no NVC left: 97.91 mmHg, SD 8.01; p = 0.010), day- and nighttime diastolic BP (NVC left all: 84.95 mmHg, SD 11.90; no NVC left: 85.32 mmHg, SD 5.96; p = 0.003) and day- and nighttime MAP (NVC left all: 104.14 mmHg, SD 12.76; no NVC left: 106.33 mmHg, SD 6.61; p = 0.020) (Figs. 3 and 4).
Average 24-h-BP measurements and office systolic and diastolic BP showed no statistical differences between the two groups (Tables 3b and 3c). One patient did not participate in the 24-h-BP measurements.
Comparison of Echocardiographic and Cardiac MRI Data
No differences were found between the two groups in any of the cardiac MRI and echocardiographic data, with a diastolic posterior wall thickness of 11.62 mm in the group NVC left all and 10.79 mm in the group without left NVC (Table 3d). The systolic posterior wall thickness was similarly close, with 18.04 mm in NVC left all and 17.68 mm in no NVC left. We obtained cardiac MRI data in 42 patients. The right ventricular wall could be measured in only 39 patients due to artifacts.
Comparison of Pulse Wave Analysis (PWA) and Pulse Wave Velocity (PWV)
There were no significant differences between the two groups in the measurements of PWA and PWV (Table 3e). Pulse wave velocity was measured in 35 patients. Table 4 presents the medication profile of the two groups, in which no differences could be found. In the group "NVC left all", 12 patients received ACEIs and 12 patients ARBs; a combination of both was given in only 2 cases. In the group without left NVC, 11 patients received ACEIs, 14 patients ARBs and a combination in 3 cases.
Data on Antihypertensive Medication
All patients received thiazide-type diuretics. CCBs were given to 18 cases in the group NVC left all and to only 8 cases in the group no NVC left. Beta-blockers were given to 15 patients with left NVC and 9 patients without NVC. Central sympatholytic drugs were given to six patients with left NVC and to four patients without NVC, and alpha-blockers to two and five patients, respectively.
Analysis of WMLs
WMLs were found in 11 cases in the group NVC left all, 9 of them with moderate findings (score 1) and 2 with severe findings (score 2). In the group without NVC, we found WMLs in 14 cases, 10 of them with moderate findings (score 1) and 4 with severe changes (score 2) according to Fazekas.
DISCUSSION
In this study, we tried to identify specific patterns of clinical and hemodynamic data of hypertensive patients that may indicate the presence of an NVC at the VLM. MVD can be a therapeutic option in patients with severe HTN refractory to medical therapy [41]. The long-term results after MVD in patients with essential HTN and NVC show that surgery could be a successful alternative therapy in certain subgroups of patients with HTN [3].
Several studies have already demonstrated an association between NVC and elevated sympathetic activity [10,19,42,43]. An elevated sympathetic activity is associated with an elevation of left ventricular mass [33,44]. Our data showed no significant difference between hypertensive patients with or without NVC in the chamber sizes and functions obtained by cardiac MRI (CMR) and echocardiography. This could be explained by the nearly equal duration of HTN in both groups (p = 0.301). LVH appears in all types of HTN, independent of the respective cause.
The correlation of vessel contact at the left retro-olivary sulcus with higher plasma norepinephrine concentrations and a depressor response to clonidine supports the theory that vascular contact on the VLM may contribute to neurogenically mediated essential HTN [19].
Our group cohorts were similar with regard to gender: the same numbers of women and men were affected in both groups, with a higher frequency of men.
The structural changes regarding the WMLs in brain MRI were also similar in the two groups.
The significantly lower median age (NVC left all = 57; no NVC left = 63; p = 0.034) and greater variance (p = 0.018) in the group NVC left all indicate that NVC can appear in younger patients and may progress faster to treatment-resistant HTN. Patients in the group "no NVC left" demonstrated a more homogeneous age distribution (Fig. 2).
From the 24-h-BP measurements, we identified a statistically significant difference of variance for daytime (p = 0.020) and nighttime diastolic BP (p < 0.001) as well as mean arterial pressure (p = 0.020) between the groups, rather than interindividual differences as previously reported [21] (Figs. 3 and 4). This might be explained by a still existing autonomic regulation of BP in cases with NVC.
We could not find any significant differences between the two groups for PWA and PWV. This might be explained by the nearly similar duration of HTN in both groups. Organ damage is the same in all types of HTN, independent of the cause of HTN.
Since JANNETTA's first suggestion of an association between arterial HTN and NVC of the left VLM and the REZ of the ninth and tenth cranial nerves (CN IX and X), a series of imaging studies with controversial results has been published [6,7,10-16,41,43,45,46]. The meta-analysis of BOOGAARTS et al. [47] showed a statistically significant higher prevalence of NVC at the left VLM in essential HTN compared with normal control subjects. At this point, one may question whether all these studies are comparable, because some of them are retrospective [13,16] and some are non-blinded [7,16]. Several further aspects have to be taken into consideration in the meta-analysis. The first aspect is the precise definition and localization of NVC at the VLM and which structures are involved. Some studies did not give a precise description of how they defined NVC and the respective region [11,16]. A detailed anatomical examination of the NVC at the left VLM was published in an anatomic cadaveric study that described three distinct types of NVC associated with HTN [4]. Another aspect is the different quality of imaging. Some studies used high-resolution MRI, while others used standard T1- or T2-weighted images with 3-5 mm slice thickness [7,11,13,16,45]. Nowadays, high-resolution MR sequences like CISS in combination with high-resolution MR angiography (TOF) are widely accepted and considered the appropriate sequences for evaluating the relationship between cranial nerves and vessels in the posterior cranial fossa [48-50]. One major aspect is the difference between prospective and retrospective studies and the quality of patient data collection regarding HTN. In our study, all patients received multiple workups over several years.
With the method used in this study, we introduced an advanced technique and standard for the evaluation of neurovascular relationships especially for the detection of NVC in essential HTN. With the use of this technique, it is possible to obtain 3D-presentations of NVC in HTN, which are reproducible.
When 2D slice images are used as the exclusive source of information, extensive observer experience is required to achieve a correct assessment of the relationship of vessels and cranial nerves in the posterior cranial fossa. Hyperactive cranial nerve dysfunction syndromes require a comprehensive 3D analysis [38]. In this study, we used for the first time an advanced technique of image processing to generate 3D visualizations of the posterior cranial fossa in patients with HTN [39]. This was achieved through explicit segmentation of the cerebrospinal fluid (CSF), the brain stem and the cranial nerves, followed by implicit segmentation and separation of the vessels and cranial nerves embedded in CSF. Lastly, 3D visualizations were generated. The highest quality of the 3D visualization was mostly achieved through fusion of the CISS and TOF data. With this technique, nearly all the technical artifacts that may appear in this region, such as the hemodynamic pulsation in the CSF, were eliminated. Using these steps of image processing as previously introduced [48] and described [38,39], we were able to obtain an exact and comprehensible anatomical representation of each patient (Fig. 1).
The literature affords robust anatomical and physiological evidence that compression of the REZ of CN IX-X and the adjacent VLM, especially on the left side, which is known to convey inputs from the baroreceptors of the (left-sided) cardiac atrium, could be the origin of systemic arterial HTN [29]. JANNETTA's results were confirmed in several surgical studies [1-3,29,51] with short- and long-term results.
NVC of the VLM has also been detected in brachydactyly associated with HTN [52]. The latest genetic studies on HTN and its association with brachydactyly strongly suggest that mutations in PDE3A are responsible for HTN by contributing to a general increase in peripheral vascular resistance in this syndrome [53]. Until now, we have no data regarding PDE3A from our patients.
Previous reports showed that compression of the VLM was closely associated with BP variability in the subacute ischemic stroke phase [21]. Elevated BP is present in approximately 80% of patients with ischemic strokes [54]. As with intracranial aneurysms, a wide range of BP variability also seems to be relevant for NVC in HTN [55,56]. Further studies are necessary to find possible correlations.
The small number of patients could be considered a limitation of our study. However, this can be explained by the highly selective criteria for the diagnosis of treatment-resistant HTN as well as the requirement that all clinical and hemodynamic parameters be available. Further large and detailed clinical studies will be needed to elucidate the clinical factors that influence the interaction between increased BP variance and the detection of NVC at the left VLM.
CONCLUSION
NVC of the VLM can be detected in patients with arterial HTN. Young patients with arterial, treatment resistant HTN should be examined for the presence of NVC at the VLM before permanent organ damage occurs. The possible association of NVC in HTN and the detailed analysis of 24 h BP regarding the BP variability should be further evaluated.
Clinical and hemodynamic parameters do not sufficiently indicate the presence of NVC. Possible candidates for MVD can only be identified after confirming a positive finding of NVC with the use of MRI.
AUTHORS' CONTRIBUTION
All authors contributed to the study's conception and design. Material preparation, data collection and analysis were performed by Panagiota MANAVA, Ramin NARAGHI and Peter HASTREITER. The first draft of the manuscript was written by Panagiota MANAVA and all authors commented on the previous versions of the manuscript. All authors read and approved the final manuscript.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
This study was approved by the ethics committee of the University Erlangen-Nuremberg (FAU) in Erlangen, Germany with the ethical approval number: 2633.
HUMAN AND ANIMAL RIGHTS
No animals were used in this research. All human research procedures followed were in accordance with the ethical standards of the committee responsible for human experimentation (institutional and national), Declaration of Helsinki and the guidelines for "good clinical practice" (GCP).
CONSENT FOR PUBLICATION
Each patient provided written informed consent prior to inclusion in this observational study.
AVAILABILITY OF DATA AND MATERIALS
The data supporting the findings of this article are available from the corresponding author [P.M.] upon reasonable request.
FUNDING
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
CONFLICT OF INTEREST
We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no financial support for this work that could have influenced its outcome. | 2021-09-25T15:55:55.366Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "46ce9fe9c2d4575341115aceefb8a6d7f28c9aff",
"oa_license": "CCBY",
"oa_url": "https://openneuroimagingjournal.com/VOLUME/14/PAGE/16/PDF/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f0bf812fd95434ebac271346e859579fb1f2f81a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
235684090 | pes2o/s2orc | v3-fos-license | Afatinib versus erlotinib as second-line treatment of patients with advanced squamous cell carcinoma of the lung: Final analysis of the randomised phase 3 LUX-Lung 8 trial
Background
LUX-Lung 8 was a randomised, controlled, phase 3 study comparing afatinib and erlotinib as second-line treatment of patients with advanced squamous cell carcinoma (SCC) of the lung. We report the final overall survival (OS) and safety analyses of LUX-Lung 8 and investigate the characteristics of patients who achieved long-term benefit (≥12 months' treatment).
Methods
LUX-Lung 8 (NCT01523587) enroled patients between March 2012 and January 2014 in 183 cancer centres located in 23 countries worldwide, and this final analysis had a data cut-off of March 2018. Eligible patients had stage IIIB or IV lung SCC and had progressed after at least four cycles of platinum-based chemotherapy. Patients were randomly assigned (1:1) to receive afatinib (40 mg per day) or erlotinib (150 mg per day) until disease progression. Endpoints included OS and safety; a post-hoc analysis of patients with long-term benefit (≥12 months on treatment) was also conducted.
Findings
795 eligible patients were randomly assigned (398 to afatinib, 397 to erlotinib). OS was significantly prolonged with afatinib compared with erlotinib (median 7·8 months vs 6·8 months; hazard ratio 0·84; 95% CI 0·73–0·97; p = 0·0193). These findings were consistent with those of the primary analysis and were consistent across subgroups. Adverse events (AEs) were manageable with dose interruption and reduction, with similar AEs being experienced in both groups. Twenty-one (5·3%) patients receiving afatinib and 13 (3·3%) patients receiving erlotinib achieved long-term benefit; median OS was 34·6 months and 20·1 months, respectively. Amongst 132 afatinib-treated patients who underwent tumour genetic analysis, ERBB family mutations were more common in patients with long-term benefit than in the overall population (50% vs 21%).
Interpretation
Afatinib is a treatment option for patients with SCC of the lung progressing on chemotherapy who are ineligible for immunotherapy, particularly those with ERBB family genetic aberrations. Afatinib has a predictable and manageable tolerability profile, and long-term treatment may be well tolerated.
Introduction
Squamous cell carcinoma (SCC) of the lung is the second most common histological subtype of non-small cell lung cancer (NSCLC) after adenocarcinoma, accounting for ~30% of cases [1]. Over the last 5 years, the available first-line treatment options for patients with advanced SCC of the lung have markedly expanded [2,3]. While first-line treatment for SCC has traditionally been platinum-doublet chemotherapy, the checkpoint inhibitor agents pembrolizumab and atezolizumab have recently demonstrated significant clinical activity in this setting, either as monotherapy (depending on PD-L1 expression) [4,5] or combined with chemotherapy [6,7]. Further options include the checkpoint inhibitor nivolumab combined with the anti-CTLA-4 monoclonal antibody ipilimumab [8], and nivolumab plus ipilimumab and chemotherapy [9]. Second-line treatment options have also expanded [2,3,10] but are dependent on the first-line treatment delivered and the patient profile. Potential options include nivolumab [11], atezolizumab [12], pembrolizumab [13]; single-agent docetaxel [14,15]; the vascular endothelial growth factor receptor (VEGFR)-2 targeted monoclonal antibody ramucirumab in combination with docetaxel [16]; and the irreversible ERBB family blocker afatinib [17]. However, a major challenge remains in identifying the optimal second-line treatment for patients with SCC of the lung, especially with immunotherapy moving into the first line in combination with chemotherapy.
Overexpression of the epidermal growth factor receptor (EGFR/ERBB1) is more commonly detected in SCC than in non-SCC NSCLC (~80% vs 44%) [18,19], although EGFR mutations are less prevalent in SCC (5% vs 20-50%) [20]. Overexpression may explain why some patients with SCC of the lung are sensitive to EGFR-targeted treatments, such as erlotinib [21], even though EGFR mutations occur infrequently in these patients [20]. In addition to EGFR, genetic alterations and deregulated expression of other members of the ERBB protein family, including HER2 (ERBB2), HER3 (ERBB3) [22], and HER4 (ERBB4) [22], have been identified in patients with lung SCC.
Afatinib is an irreversible tyrosine kinase inhibitor (TKI) that selectively blocks signalling from all homo-and heterodimers formed by EGFR, HER2, HER3, and HER4 [23,24]. Based on its broader mechanism of action [23] and encouraging activity in patients with cancers of squamous histology [25], it was hypothesised that afatinib would have improved efficacy compared with erlotinib in patients with advanced SCC of the lung. This was investigated in the randomised, phase 3 LUX-Lung 8 study (NCT01523587) [17]. Erlotinib was the only EGFR TKI approved in this setting at the time of study planning [26], but is no longer indicated for SCC in the United States or Europe [26,27].
A previously reported secondary analysis of LUX-Lung 8 described 245 patients who underwent tumour genetic analysis (TGA) using next-generation sequencing (132 in the afatinib arm and 113 in the erlotinib arm) [29]. ERBB family mutations were detected in 53/245 patients (21·6%) in the TGA cohort. Of note, PFS and OS were longer amongst afatinib-treated patients with ERBB mutations than in those without [29]. However, EGFR expression level did not appear to correlate with outcomes in these patients. ERBB mutations were detected in five of 10 evaluable patients (50%) with long-term benefit from afatinib (receiving ≥12 months of treatment), whereas amongst the three patients who achieved long-term benefit on erlotinib and underwent TGA, one had an EGFR mutation and two were ERBB wild type.
The LUX-Lung 8 data set was also used to investigate the ability of the VeriStrat® serum protein test, which classifies patients as either VeriStrat-Good (VS-G) or VeriStrat-Poor (VS-P), to predict outcomes in patients with SCC of the lung [30]. The analysis found that VS-G classification was strongly associated with favourable survival outcomes in patients treated with afatinib or erlotinib compared with VS-P classification, and, in VS-G patients, afatinib was associated with longer OS than erlotinib.
Based on the findings of the LUX-Lung 8 study, afatinib was approved for the treatment of patients with locally advanced or metastatic NSCLC of squamous histology progressing on or after platinum-based chemotherapy [31,32]. Here, we present the final analysis of OS and safety data from the LUX-Lung 8 study. In addition, to further explore factors that may be predictive of the efficacy of afatinib in this setting, we conducted a post-hoc analysis of clinical outcomes, safety, and biomarker status in patients who derived long-term benefit, defined as receiving treatment for ≥12 months.
Research in context
Evidence before this study
We reviewed PubMed and congress abstracts/presentations at the American Society of Clinical Oncology, European Society of Medical Oncology and World Congress of Lung Cancer congresses, up to January 2020, to identify the agents for the treatment of lung SCC that have emerged since LUX-Lung 8 was conducted and completed. The treatment landscape has markedly expanded, with many new, effective agents now recommended for the first- and second-line treatment of lung SCC, including several immuno-oncology agents. As such, the role of afatinib in the treatment of progressive lung SCC is now less clear than when LUX-Lung 8 was originally published. Therefore, while conducting the final survival analysis of LUX-Lung 8, we additionally investigated potential biomarkers that were suggestive of long-term benefit from afatinib.
Added value of this study
The analyses reported here suggest that patients with certain ERBB family genetic aberrations may be particular candidates for afatinib-containing treatment; although patient numbers were low, patients with long-term disease control were found to have these mutations more often than those with a shorter OS.
Implications of all the available evidence
While the role of afatinib in the treatment of progressive lung SCC is now less clear than when LUX-Lung 8 was originally published, data from this study suggest that patients with ERBB mutations are most likely to benefit from afatinib. Afatinib may therefore be a useful second-or third-line option for patients with SCC of the lung progressing on or following chemotherapy who are ineligible for immunotherapy or who have received chemo-immunotherapy, particularly those with ERBB family genetic aberrations.
Study design and patients
Full details of the study design have been reported previously [17]. Briefly, LUX-Lung 8 was a randomised, controlled, phase 3 study conducted worldwide. Eligible patients were aged ≥18 years and had a diagnosis of stage IIIB or IV NSCLC of squamous (including mixed) histology. Patients were to have received at least four cycles of platinum-based doublet chemotherapy as first-line treatment and have experienced disease progression. Other inclusion criteria included an Eastern Cooperative Oncology Group performance status (ECOG PS) of 0 or 1, measurable disease according to Response Evaluation Criteria in Solid Tumors (RECIST) version 1·1, and adequate organ function [17].
Exclusion criteria included previous treatment with EGFR TKIs or antibodies, active brain metastases, radiotherapy within 4 weeks before randomisation, and the presence of any other malignancy.
The study protocol was designed in accordance with the Declaration of Helsinki, the International Conference on Harmonisation Guideline for Good Clinical Practice, and applicable region-specific regulatory requirements. It was approved by independent ethics committees at each centre. All patients provided written informed consent. Eligible patients were randomly assigned (1:1) to afatinib or erlotinib. Randomisation was stratified by ethnic origin (eastern Asian vs non-eastern Asian). Neither clinicians nor patients were blinded as to treatment assignment.
Procedures
Patients in the afatinib arm received oral afatinib 40 mg once daily. The dose could be escalated to 50 mg once daily in the absence of treatment-related adverse events (AEs) of more than grade 1. Afatinib was paused for no more than 14 days if patients had any grade ≥3 treatment-related AE, grade 2 diarrhoea lasting ≥2 days, or nausea/vomiting for 7 or more consecutive days despite best supportive care. After treatment interruption and recovery to grade 1 or baseline grade, the afatinib dose was reduced in 10 mg decrements to a minimum dose of 20 mg. Treatment was permanently discontinued in patients who did not recover to grade 1 or baseline grade. Patients in the erlotinib arm received the approved daily oral dose of 150 mg. In the event of AEs, dose reduction of erlotinib was permitted according to the approved label instructions.
Tumour assessments were performed using computed tomography or magnetic resonance imaging of no more than five target lesions at baseline and at weeks 8, 12, 16, and every 8 weeks thereafter until confirmed progression or withdrawal. Scans were reviewed by a blinded independent central imaging group. AEs were graded according to Common Terminology Criteria for Adverse Events (CTCAE) version 3·0.
Outcomes
OS and safety were investigated at the final data cut-off March 19, 2018. To reduce the time to trial closure, and because other key efficacy endpoints had already been met, OS and safety were the only endpoints included in the final analysis [17]. As of April 2015 (primary data cut-off), only 9 (1%) patients remained on treatment, so minimal changes were anticipated. OS was defined as the time from randomisation to death. Safety was assessed based on the evaluation of AEs and laboratory findings.
A post-hoc analysis was conducted to investigate clinical outcomes in patients with long-term benefit, defined as patients with ≥12 months' treatment on the study drug. PFS was assessed in these patients and was defined as the time from randomisation to progression or death, whichever occurred first. PFS was assessed by a central independent review committee according to RECIST version 1·1. TGA of baseline tumour samples was conducted using next-generation sequencing (NGS) in a cohort enriched for patients with PFS of more than 2 months, as previously described [29]. The VeriStrat® serum protein test was used, as previously described, to assign a VeriStrat® status to each evaluable sample [30].
Statistical analysis
Survival in the two arms was compared using a log-rank test stratified by ethnic origin, with a two-sided α of 0·05. A Cox proportional hazards model was used to estimate the HRs and corresponding 95% CIs for survival. Greenwood's standard error estimate was used to calculate Kaplan-Meier estimates and 95% CIs. Efficacy analyses were performed in the randomised (intention-to-treat) population. Safety analyses included all patients receiving at least one dose of study drug. Analysis of AEs was descriptive. Statistical analyses were performed using SAS Version 9·2.
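For readers who wish to reproduce this kind of analysis, a minimal sketch in Python with the lifelines package; the file name and column names (os_months, death, arm, eastern_asian) are assumptions for illustration, not the trial's actual data structure:

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical flat file with one row per patient
df = pd.read_csv("lux_lung8_os.csv")

# Kaplan-Meier estimate of median OS per treatment arm
km = KaplanMeierFitter()
for arm, grp in df.groupby("arm"):
    km.fit(grp["os_months"], event_observed=grp["death"], label=arm)
    print(arm, "median OS:", km.median_survival_time_)

# Cox proportional hazards model for the treatment effect,
# stratified by ethnic origin as in the trial's log-rank test
df["afatinib"] = (df["arm"] == "afatinib").astype(int)
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "afatinib", "eastern_asian"]],
        duration_col="os_months", event_col="death",
        strata=["eastern_asian"])
cph.print_summary()  # hazard ratio = exp(coef), with 95% CI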
Role of the funding source
Employees of Boehringer Ingelheim played a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication and as such are included in the author list. To ensure independent interpretation of clinical study results, Boehringer Ingelheim grants all external authors access to all relevant material, including participant-level clinical study data, and relevant material as needed by them to fulfil their role and obligations as authors under the ICMJE criteria. All authors had access to the collated data and the funder, all the authors and the corresponding author took the decision to submit for publication.
Results
Between March 30, 2012 and Jan 30, 2014, 977 patients were screened and 795 were enroled (398 to the afatinib group and 397 to the erlotinib group; Fig. 1) at 183 cancer centres in 23 countries worldwide [17]. The cut-off date for this final analysis was March 19, 2018; at this time, all patients had discontinued treatment, the main reason being disease progression (Fig. 1).
Patient baseline demographics and disease characteristics have previously been reported [17]. The baseline characteristics were generally well balanced between the treatment arms (Table 1). Median age was 64 years, 84% (666 patients) were men, 22% (172 patients) were of eastern Asian origin, and 94% (751 patients) were current or ex-smokers (light or other).
The effect of afatinib on OS was consistent across subgroups (Fig. 2B). Improvements in OS with afatinib compared with erlotinib were observed in Eastern Asian patients, male patients, patients with a best response of stable disease to first-line chemotherapy, patients with less than a 16-week interval between the end of first-line and the beginning of second-line treatment, patients with squamous histology (versus those with mixed squamous histology), other current and ex-smokers with a history of more than 15 pack years, patients with an ECOG PS of 1 at baseline, patients aged <65 years, and patients who did not receive maintenance therapy.
The most common any-grade treatment-related AEs with afatinib were diarrhoea, rash/acne, and stomatitis, while rash/acne, diarrhoea, pruritus and fatigue were the most common with erlotinib ( Table 2). The incidences of treatment-related grade 3 diarrhoea and stomatitis were higher with afatinib than erlotinib, while the incidence of treatment-related grade 3 rash/acne was higher with erlotinib than afatinib.
Treatment-related AEs leading to death were reported for six patients in the afatinib arm (interstitial lung disease [n = 2], pneumonia, respiratory failure, acute renal failure, general physical health deterioration [n = 1 each]) and five patients in the erlotinib arm (interstitial lung disease, pneumonia, peritonitis, pneumonitis, intestinal obstruction [n = 1 each]).
In total, 21 (5·3%) patients in the afatinib arm and 13 (3·3%) in the erlotinib arm received treatment for ≥12 months and were defined as having long-term benefit. Baseline characteristics were broadly balanced between the treatment groups in patients with long-term benefit, with some notable exceptions (Table 1). Patients receiving afatinib were on average younger than those receiving erlotinib (median age 64 vs 71 years). All 13 patients in the erlotinib group had stage IV cancer at screening, but 14% of afatinib-treated patients had stage IIIB disease. Erlotinib-treated patients also had a better response to chemotherapy, with 62% of patients exhibiting a complete or partial response versus 48% in the afatinib group (the remaining patients all had stable disease).
Discussion
Results from this final analysis of LUX-Lung 8 were consistent with those previously reported for the primary analysis [17]. In the updated analysis, OS was significantly longer with afatinib than erlotinib (median 7·8 vs 6·8 months [HR 0·84, p = 0·0193]). As for the primary analysis, the clinical significance of the 1·0-month extension in OS could be debated. The effect was consistent across subgroups, including large groups such as those of eastern-Asian origin, patients with stable disease following first-line chemotherapy, patients with ECOG PS of 1, and younger patients (<65 years).
The nature of the AEs was similar between treatment arms, reflecting the similar mechanism of action of both drugs, and, overall, AEs were manageable with dose interruptions and reductions. The AEs leading to treatment discontinuation were mainly treatment class-related AEs, such as diarrhoea (of which the incidence was higher in the afatinib group) or rash/acne; although the overall rate of treatment discontinuation due to AEs was slightly higher with afatinib than erlotinib. The incidence of AEs leading to dose reduction was at the level expected.
Twenty-one patients in the afatinib group (5·3%) and 13 in the erlotinib group (3·3%) received long-term benefit from treatment. Afatinib-treated patients achieved a prolonged median OS of 34·6 months, with a median treatment duration of 19·0 months. There did not appear to be any demographic characteristics that predisposed patients to receive long-term benefit from these treatments. However, in the afatinib group, patients with long-term benefit were more likely to have ERBB family mutations than patients without long-term benefit. Similarly, in the primary analysis of the LUX-Lung 8 cohort, OS was longer amongst afatinib-treated patients with ERBB mutation-positive tumours than in those without [29]. Patient screening for these biomarkers may be a useful predictive tool, as it is possible that patients with ERBB mutations are more likely to respond to afatinib than those without.
The majority of long-term responders in this study had VS-G classification. The benefit of VS-G classification in terms of outcome with afatinib was previously observed in the primary analysis of the LUX-Lung 8 cohort; VS-G classification was strongly associated with favourable survival outcomes with afatinib or erlotinib, compared with VS-P classification [30]. Thus, the data support these previous findings that patients surviving for longer are more likely to be VS-G than VS-P. However, patient numbers in this post-hoc analysis were limited and the findings therefore need to be interpreted with caution. As such, further analysis will be required to establish whether VeriStrat® classification can provide prognostic information.
In summary, these data suggest that afatinib is a valid treatment option for patients with SCC of the lung who have progressed on or after chemotherapy. Patients with certain ERBB family genetic aberrations may be particular candidates for afatinib-containing treatment sequences; although patient numbers were low, patients with longterm disease control were found to have these mutations more often than those with a shorter OS. However, it is worth noting that as next-generation sequencing is not currently performed routinely in patients with SCC of the lung, this might compromise the potential use of afatinib as a second-line option in those patients most likely to benefit. Afatinib has a well-established, predictable safety profile, which is manageable with supportive care and tolerability-guided dose reductions, and long-term treatment may be well tolerated. Although this trial was performed when the standard of care for firstline treatment was chemotherapy, immunotherapy with a PD-L1 inhibitor with or without chemotherapy is now an established firstline treatment for eligible patients [2,3]. A recent real-world study showed that second-line afatinib was generally well tolerated and effective in patients with metastatic squamous NSCLC who had received first-line pembrolizumab plus platinum-based chemotherapy [33,34]. Further trials to determine the optimum second-line and third-line therapy in patients with SCC of the lung, particularly in those who have received prior chemo-immunotherapy, are warranted.
Contributors
SL and AA contributed to study conception and design. SL and KS contributed to methodologies and software used for study data analysis. DI and AA contributed to study data validation. SL, MC, KS, EG, DI, AM, YJM, AA and EF contributed to the formal analysis of the data. MC and KS contributed to study data visualisation. GDG, MC, SL, KS and AA contributed to study supervision. MC, KS, EG, DI, AM, YJM, AA and EF contributed to the study. MC, KS, KHL and VG contributed to study data curation. All authors were involved in the drafting and reviewing of the manuscript and provided approval for submission.

Table footnotes (long-term benefit table): * Patients were ordered and numbered by treatment duration (at data cut-off), with patient 1 being on treatment longest. † Patient transferred to commercial drug on discontinuation from study drug. ‡ Patient also had rearrangements in two genes. ¶ First observed response at time of tumour measurement. ** Mutation present in at least 3/10 patients with long-term benefit, or part of the ERBB family (EGFR, ErbB2, ErbB3, ErbB4). ERBB family mutations included: EGFR (n = 2; R1052K and unknown), ERBB2 (n = 2; Q57R, E395K) and ERBB4 (n = 1; G668V).
Data sharing
To ensure independent interpretation of clinical study results, Boehringer Ingelheim grants all external authors access to all relevant material, including participant-level clinical study data, and relevant material as needed by them to fulfil their role and obligations as authors under the ICMJE criteria. Furthermore, clinical study documents (e.g. study report, study protocol, statistical analysis plan) and participant clinical study data are available to be shared after publication of the primary manuscript in a peer-reviewed journal and if regulatory activities are complete and other criteria are met per the BI Policy on Transparency and Publication of Clinical Study Data: https://trials.boehringer-ingelheim.com/transparency_policy.html. Prior to providing access, documents will be examined and, if necessary, redacted, and the data will be de-identified, to protect the personal data of study participants and personnel and to respect the boundaries of the informed consent of the study participants.
Clinical Study Reports and Related Clinical Documents can be requested via this link: https://trials.boehringer-ingelheim.com/trial_results/clinical_submission_documents.html. All such requests will be governed by a Document Sharing Agreement.
Bona fide, qualified scientific and medical researchers may request access to de-identified, analysable participant clinical study data with corresponding documentation describing the structure and content of the datasets. Upon approval, and governed by a Data Sharing Agreement, data are shared in a secured data-access system for a limited period of 1 year, which may be extended upon request.
Researchers should use https://trials.boehringer-ingelheim.com to request access to study data.
Declaration of Competing Interest
GDG received travel expenses from AstraZeneca and advisory board honoraria from Celgene. SL received research support from | 2021-07-01T05:14:08.513Z | 2021-06-18T00:00:00.000 | {
"year": 2021,
"sha1": "b82294f2eb8ed30c5354197a69f6d91f946bb07a",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thelancet.com/article/S2589537021002200/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b82294f2eb8ed30c5354197a69f6d91f946bb07a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
139123122 | pes2o/s2orc | v3-fos-license | Characterization of Powder-Precursor HVOF-Sprayed Al2O3-YSZ/ZrO2 Coatings
Thermal spraying using liquid feedstock can produce coatings with very fine microstructures either by utilizing submicron particles in the form of a suspension or through in situ synthesis, leading, for example, to improved tribological properties. The focus of this work was to obtain a bimodal microstructure by using simultaneous hybrid powder-precursor HVOF spraying, in which nanoscale features from the liquid feedstock could be combined with the robustness and efficiency of spraying with powder feedstock. The nanostructure was achieved from YSZ and ZrO2 solution-precursors, and a conventional Al2O3 spray powder was responsible for the structural features on the micron scale. The microstructures of the coatings revealed some clusters of unmelted nanosized YSZ/ZrO2 embedded in a lamellar matrix of Al2O3. The phase compositions consisted of γ- and α-Al2O3 and cubic, tetragonal and monoclinic ZrO2. Additionally, some alloying of the constituents was found. The mechanical strength of the coatings was not optimal due to the excessive amount of the nanostructured YSZ/ZrO2 addition. An amount of 10 vol.% or 7 wt.% 8YSZ was estimated to result in a more desirable mixing of constituents that would lead to an optimized coating architecture.
Introduction
Thermally sprayed ceramic coatings are typically used in the aerospace industry for their low thermal diffusivity and high-temperature erosion resistance. Other applications are found, e.g., in components in the process industry, such as center rolls and dewatering elements for paper machines, mechanical seals and process valves (Ref 1). In these components, combined wear and corrosion resistance is the key factor and the main reason for choosing ceramic coatings over other material options. However, typically brittle behavior and interlamellar cracking prevent their use in many applications (Ref 2). Al2O3 is a widely used ceramic material in thermal spraying due to its low cost and various favorable properties, such as resistance to abrasive and sliding wear, and high dielectric strength (Ref 3, 4). However, its tribological properties are not up to par with other ceramic coatings, such as Cr2O3 (Ref 5), in more demanding conditions. From a technical perspective, a vast amount of research has gone into improving the fracture toughness of Al2O3 with additions of ZrO2, with the intent of strengthening the composite. This can be achieved by the well-known toughening effect of the phase transformation of tetragonal ZrO2 to monoclinic and the following volume change, as well as the ferroelastic domain switching in tetragonal ZrO2 (Ref 6-10). An undesirable side effect of the phase change is a large volume increase, which deteriorates the coating integrity. However, it can be countered by stabilizing the ZrO2 to either non-transformable tetragonal or cubic phases by adding stabilizing oxides, such as MgO (resulting in MSZ, magnesia-stabilized zirconia) or Y2O3 (leading to YSZ, yttria-stabilized zirconia). This practice is already widely utilized in top coats of thermal barrier coatings, where the coating's resistance to catastrophic failure is critical due to immense cyclic thermo-mechanical loading (Ref 1, 11).

This article is an invited paper selected from presentations at the 2018 International Thermal Spray Conference, held May 7-10, 2018, in Orlando, Florida, USA, and has been expanded from the original presentation. Corresponding author: Jarkko Kiilakoski, jarkko.kiilakoski@tut.fi, Laboratory of Materials Science, Tampere University of Technology, Tampere, Finland.
Nanocrystalline structures have been achieved in thermal spraying using suspension and solution-precursor feedstocks (Ref 13, 17-24), indicating the potential of novel processing routes in achieving the desired toughness increase. The toughening was achieved without compromising structural cohesion, for example, by increasing the crack propagation resistance when small unmelted ZrO2 particles are preserved in the coating matrix (Ref 13). However, alloying Al2O3 with ZrO2 is challenging in thermal spraying of nanoscale feedstock, as the composition can easily lead to the formation of an amorphous phase during processing that deteriorates mechanical properties due to a reduction in available slip planes, leading to increased brittleness (Ref 13, 25).
HVOF spraying of ceramics requires a combustible gas capable of producing a high flame temperature with oxygen to produce the energy required to melt the material. Due to the restriction of available energy combined with high velocities leading to short dwell times, the HVOF system is mainly used with low-melting ceramics, such as TiO2 (Ref 26) or Al2O3 (Ref 27). For higher-melting materials, such as ZrO2, particle size is the key between achieving a coating by melting or partly melting and partly sintering the material (Ref 28).
Solution-precursor HVOF (SP-HVOF) spraying is a novel spray process in which coating formation through in-flight nanoparticle synthesis and subsequent melting is attainable. The size of the synthesized particles in SP-HVOF is 10-500 nm (Ref 29). However, creating a thick, cohesive coating with SP-HVOF is not only tedious due to the relatively low deposition rate, but also difficult due to the necessity to melt the particles to form the coating without inducing excessive grain growth. This problem has been tackled by Joshi et al. In this study, the feasibility of a hybrid HVOF process utilizing a powder together with a second feedstock, in this case a solution-precursor, is examined for Al2O3-YSZ/ZrO2 composite coatings. The goal is to deposit nanosized YSZ/ZrO2 particles at the splat boundaries between the Al2O3 splats to bind the splats together and, thus, improve the mechanical properties of the coatings. Two separate variables were studied: the effect of the amount of YSZ (0, 20 and 40 wt.%), and the effect of stabilization of the ZrO2. The coating microstructures were characterized by FESEM and XRD, and their mechanical properties, hardness and cavitation erosion resistance were measured.
Coating Preparation
The powder feedstock material for the alumina (Al2O3) component was Amperit 740.001 (-25 + 5 µm) (H. C. Starck GmbH, Munich, Germany). The yttria-stabilized zirconia (YSZ) solutions were manufactured by mixing a saturated water-based solution (at 20 °C) of yttrium(III) nitrate hexahydrate (Acros Organics/Thermo Fisher Scientific Inc., Geel, Belgium) and a 16 wt.% zirconium acetate solution in dilute acetic acid (Sigma-Aldrich/Merck KGaA, Darmstadt, Germany) in proper ratios to achieve 8 wt.% of yttria in zirconia after pyrolysis in the flame, creating stabilized tetragonal zirconia (8YSZ). One solution was prepared without added yttria in order to examine the effect of stabilization of the zirconia.
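The required mixing ratio follows from simple oxide-yield arithmetic. The sketch below is illustrative only: it assumes that the "16 wt.%" figure refers to the Zr content of the acetate solution and that both precursors convert completely to their oxides; since the study dosed the nitrate as a saturated aqueous solution, the corresponding solution mass would follow from its concentration:

# Molar masses in g/mol
M_NITRATE = 383.01   # Y(NO3)3·6H2O
M_Y2O3 = 225.81
M_ZR = 91.22
M_ZRO2 = 123.22

def nitrate_per_gram_solution(y2o3_frac=0.08, zr_frac=0.16):
    # Grams of solid yttrium nitrate hexahydrate per gram of zirconium
    # acetate solution so that pyrolysis yields y2o3_frac Y2O3 in the
    # oxide (8 wt.% for 8YSZ).
    zro2 = zr_frac * M_ZRO2 / M_ZR                # g ZrO2 per g solution
    y2o3 = zro2 * y2o3_frac / (1.0 - y2o3_frac)   # g Y2O3 required
    return y2o3 * (2.0 * M_NITRATE) / M_Y2O3      # 2 mol nitrate -> 1 mol Y2O3

print(f"{nitrate_per_gram_solution():.3f} g nitrate per g Zr-acetate solution")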
The coatings were sprayed with a TopGun HVOF system (GTV GmbH, Luckenbach, Germany) modified for liquid feedstock spraying, using ethene as the fuel gas and oxygen as the oxidant, on 50 × 100 × 5 mm substrates of stainless steel (AISI 316) that were grit-blasted with 180-220 mesh alumina prior to deposition. The powder feedstock was fed with a commercial 9MP powder feeder (Oerlikon Metco AG, Wohlen, Switzerland), and the solution was fed with a diaphragm pump feeder made in-house, equipped with a closed-loop mass-flow meter to stabilize the solution flow rate. The powder and the liquid precursor were injected through the same injector, where they were mixed and injected together into the combustion chamber. Atomization of the mixture was brought about by the carrier gas of the powder. The feeding setup is explained in detail by Björklund et al. (Ref 33). The processing parameters were optimized in preliminary studies and are listed in Table 1. A schematic of the hybrid powder-precursor HVOF spray process is presented in Fig. 1.
Coating Characterization
The coatings were characterized with field emission scanning electron microscopes (FESEM) (Zeiss ULTRAplus and Zeiss Crossbeam 540, Carl Zeiss Microscopy GmbH, Jena, Germany). The FIB cross section and consequent analysis were performed with a Helios Nanolab 600 (FEI Company/Thermo Fisher Scientific Inc., Hillsboro, OR, United States). Compositional analyses of the coated surfaces were carried out using an Inca X-act 350 energy-dispersive spectrometer (EDS) attached to the FESEM (ULTRAplus), and the phase analysis with x-ray diffraction (XRD, Empyrean, PANalytical, Cu-Kα radiation, The Netherlands).
Mechanical Characterization
The coating hardness was determined from ten indentations on the coating cross section using a Vickers hardness tester (MMT-X7, Matsuzawa Co., Ltd., Akita, Japan) with a load of 300 gf (HV0.3). The cavitation erosion test was performed with an ultrasonic transducer (VCX-750, Sonics and Materials Inc., Newtown, CT, USA) according to the ASTM G32-10 standard for indirect cavitation erosion. The vibration tip, made of a Ti-6Al-4V alloy, vibrated at a frequency of 20 kHz with an amplitude of 50 µm at a distance of 500 µm from the surface. The sample surfaces were ground flat and polished with a polishing cloth and diamond suspension (3 µm). The samples were cleaned in an ultrasonic bath with ethanol and weighed after drying. Samples were attached to a stationary sample holder, and the head of the ultrasonic transducer was placed at a distance of 0.5 mm. Samples were weighed after 15, 30, 60 and 90 min. One sample per coating was tested; the long duration and high impact frequency of the test provide sufficient statistical confidence. Cavitation resistance of the coatings was calculated as the reciprocal of the mean depth of erosion per hour, which in turn is derived from the theoretical volume loss (presuming a fully dense coating) and the area of the vibrating tip.
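The conversion from mass loss to cavitation resistance is straightforward to script. The minimal Python sketch below uses hypothetical mass-loss readings and an assumed coating density and tip diameter, since the measured values are not tabulated here:

```python
import numpy as np

# Hypothetical cumulative mass-loss readings at the weighing intervals
# used in the test; the real measured values are not reported here.
t_min = np.array([15.0, 30.0, 60.0, 90.0])        # weighing times, min
mass_loss_mg = np.array([1.2, 2.6, 5.5, 8.4])     # cumulative mass loss, mg

rho_coating = 3.9e3       # assumed fully dense coating density, kg/m^3
tip_diameter = 12.7e-3    # assumed Ti-6Al-4V tip diameter, m
tip_area = np.pi * (tip_diameter / 2.0) ** 2      # eroded area, m^2

# Mean depth of erosion (MDE): theoretical volume loss divided by area.
volume_loss = (mass_loss_mg * 1e-6) / rho_coating  # m^3
mde_um = volume_loss / tip_area * 1e6              # µm

# Erosion rate from a linear fit over the weighed points, then the
# cavitation resistance as its reciprocal (h/µm).
rate_um_per_h, _ = np.polyfit(t_min / 60.0, mde_um, 1)
resistance = 1.0 / rate_um_per_h
print(f"MDE rate: {rate_um_per_h:.2f} um/h, resistance: {resistance:.3f} h/um")
```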
Microstructural Characterization
The cross sections of the coatings are presented in Fig. 2 and at higher magnification in Fig. 3. The coatings adhered well to the substrate, and in all hybrid coatings bimodality was achieved, with in situ synthesized YSZ/ZrO2 particles of < 200 nm embedded between the well-melted Al2O3 splats. The coating structures seemingly have some apparent porosity; however, it likely stems from pullouts, i.e., the removal of poorly bonded particles or agglomerates during sample preparation. According to the cross-sectional images, the synthesized nanoparticles obtained a round morphology, indicating a partially molten state, but a majority of the nanoparticles were not well integrated into the alumina splats, as presented in Fig. 4(a). In addition, some mixed-phase areas were found, shown as the light gray areas in Fig. 4(b). This indicates the possibility of strengthening the traditional powder-sprayed coating structure by interlocking the microscale splats, as the nanosized ceramics (with a very large surface area) enhance the diffusion of atoms at the splat-nanoparticle interface (Ref 34). Additionally, since it has been shown that reducing the primary particle size to the nanoscale can lead to significant reductions in the sintering temperature of stabilized ZrO2 (Ref 35), some amount of sintering and enhanced diffusion could occur during spraying. Therefore, in optimal conditions, the sintered YSZ could be chemically bonded by a diffusion/mixed phase with the surrounding Al2O3 splats, leading to an extremely coherent structure. Some amount of unmelted nanoparticles can potentially also improve the toughness and wear resistance of the coating when the nanostructured zones are well embedded in the coating and act as crack arresters (Ref 36). The EDS analyses of the coatings are presented in Table 2. The amount of ZrO2 was about half of the nominal amount from the feedstock. The low deposition efficiency of ZrO2 in comparison with Al2O3 can be attributed to its higher melting point as well as its small particle size, which leads to particles drifting along with the gas flow away from the substrate, as the high stagnation pressure zone near the surface slows the particles down (bow-shock effect) and drags them along the surface (Ref 37-39). Further optimization of the process parameters should improve this aspect, along with decreasing the amount of unmelted YSZ/ZrO2 particles.
The EDS map of the coating A40Y is presented in Fig. 5. As expected from the contrast in the FESEM images, the bright areas consist of YSZ and the darker areas of Al2O3. A mixed-phase area exists with a grayish color, which can be verified from the EDS map to consist of oxides of both aluminum and zirconium. In order to confirm that the mixed color did not arise from a brighter layer of zirconia under a darker layer of alumina, FIB-FESEM studies were carried out. The sample was inspected from a FIB-milled cross section, where a light gray area was located, and the sample was subsequently milled from that plane two more times, 2 µm at a time, as displayed in Fig. 6. It was thereby ensured that the mixed-phase splat was indeed continuous in the third dimension and not an artifact of the penetration depth of the electrons.
The x-ray diffraction patterns are presented in Fig. 7. As expected, the ZrO2 in the stabilized Al2O3-YSZ coatings was in the tetragonal form. A40Z also consisted of monoclinic and cubic ZrO2, which was unexpected, since the stable phase of ZrO2 at room temperature is monoclinic. The occurrence of all the phases of ZrO2 in the unstabilized coating can be explained either by their metastability or by the size dependence of the phase transformations: for example, cubic ZrO2 stays stable at room temperature with crystallite sizes < 2 nm (Ref 40, 41). Al2O3 was present in all cases as α- and γ-phases, as is typical in HVOF spraying of Al2O3, where the core of some particles of the feedstock powder presumably does not melt and remains in the α-phase. Interestingly, amorphous compounds, which are a common product of thermal spraying Al2O3-ZrO2 mixtures from powder (Ref 16, 25), were not found in significant quantities.
Mechanical Properties
Coating hardnesses were decreased by the addition of YSZ/ZrO2 to Al2O3, as presented in Table 3. However, the reduction in hardness was moderate in the case of A20Y, indicating the possibility of achieving a dense structure even with the addition of nanoparticles, as also evidenced by Goel et al. (Ref 31). The coatings with 40 wt.% nominal addition of YSZ/ZrO2 underwent a more severe reduction in hardness, as could be predicted from the large clusters of nanoparticles between the splats, seen in all hybrid coatings in the cross sections in Fig. 3 and in detail for A40Y in Fig. 4; the cohesion of the coatings was reduced as compared to the reference. Cavitation erosion resistance is typically a good measure of the cohesion of thermally sprayed coatings, and it is able to reveal weak links at the microscale (Ref 15, 42). The experiment revealed structural weakness in the hybrid powder-precursor-sprayed coatings: similarly to the hardness values, a reduction in cavitation resistance was observed with increasing amounts of YSZ/ZrO2, as can be observed in Fig. 8. However, this time the reduction is significant, dropping to less than half with 20 wt.% YSZ and to less than a quarter with 40 wt.% YSZ and ZrO2 additions as compared to pure Al2O3. This is supported by Fig. 9, where surface images of A40Y are presented as-sprayed and after the test. Clearly, there are vast amounts of YSZ nanoparticles on top of the surface (Fig. 9a), as was seen between the splats as well (Fig. 4a), that impair the cohesion of the coating, which is most exposed under fatiguing conditions. These particles were removed during the experiment, as can be seen from their lesser amount in Fig. 9(b), where mainly larger, well-melted splats are visible. Cavitation erosion resistance typically favors dense, well-cohered structures (Ref 15, 42). The hybrid HVOF-sprayed coatings aimed to achieve these beneficial qualities but missed the goal, as the nanoparticles were not successfully embedded in the coating and an insufficient amount of mixed-phase regions was created to strengthen the coating. These results are contradictory to, for example, the results of Murray et al. (Ref 32), who were able to increase the wear resistance of the reference Al2O3 coating in a dry-sliding ball-on-flat test by the addition of a YSZ suspension. The test conditions were 30 min with a 6.3-mm-diameter alumina ball, a load of 10 N and a sliding speed of 10 mm/s. In their case, however, the major difference was that an axial-feed plasma-spray system was used, which has enough power to melt the YSZ particles from the suspension, so there were no detrimental non-integrated particles embedded in the coating. Additionally, the scale in their test is some orders of magnitude larger, since the cavitating bubbles, being a few tens of microns, mainly nucleate on surface asperities and cavities of similar size (Ref 43). The tests thus measure somewhat different features.
Tailoring Possibilities of the Coating Architecture
In order to evaluate the next steps to optimize the coating architecture, calculations were performed on black/white histograms of a cross-sectional image (as visualized in Fig. 10) of the pure Al2O3 coating to obtain the amount of horizontal vacancies between the lamellae. These vacancies could be filled with the nanosized YSZ/ZrO2 particles or the Al2O3-YSZ/ZrO2 mixed phase in order to increase the structural integrity of the coating. Theoretically, packing the interlamellar vacancies with a sufficient amount of nanoparticles while avoiding overpacking would lead to the densest achievable coating with optimal properties, as pictured in Fig. 10(b). Five vertical areas of the image were selected, and the area fraction of interlamellar vacancies to lamellae was calculated from a black/white histogram, Fig. 10(c); a sketch of such a calculation is given below. By selecting narrow, vertical areas from regions with few vertical cracks, we aim to isolate the horizontal vacancies between the lamellae, which are the ones to be filled. An average of 10 vol.% of horizontal vacancies/splat boundaries was determined, which translates to a theoretically optimal mixture of 7 wt.% of the 8YSZ solution and 93 wt.% Al2O3 powder. This is not far from the actual amount obtained for A20Y due to the lower deposition efficiency of YSZ. Hence, a coating with this or a lower ratio should be manufactured and evaluated in the next phase of the study, in order to create a distinct enough difference from A20Y.
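The area-fraction calculation itself is easy to script; the following minimal Python sketch uses a synthetic image, an assumed gray-level threshold and assumed strip positions in place of the real micrograph:

```python
import numpy as np

# Hypothetical 8-bit grayscale cross-section image; in practice this
# would be loaded from the FESEM micrograph, e.g. with imageio.imread.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(600, 1200), dtype=np.uint8)

threshold = 60      # assumed gray level separating vacancies from lamellae
n_strips = 5        # narrow vertical strips, chosen away from vertical cracks
strip_width = 40    # px

fractions = []
for k in range(n_strips):
    x0 = 100 + k * 220                  # assumed strip positions, px
    strip = image[:, x0:x0 + strip_width]
    dark = strip < threshold            # black pixels = interlamellar vacancies
    fractions.append(dark.mean())

print(f"mean vacancy fraction: {np.mean(fractions):.1%}")
```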
Drawing from the results of this study, future research should steer toward lower-melting feedstocks or additives in the solution, such as citric acid or acetic acid, to increase the exothermic nature of the synthesis reaction (Ref 44). By combining solution and powder feedstocks in situ, it is possible to combine oxides into coatings jointly with other oxides, hard metals or metals in virtually any combination, without the limitations of conventional powder feedstocks.
Conclusions
Preliminary results on the characterization of coatings prepared via a hybrid powder-precursor HVOF spray process were presented in this study. The coatings were manufactured from a solution of zirconium acetate and yttrium nitrate hexahydrate, and a commercial powder feedstock of Al2O3. Microscopic characterization techniques were used to investigate the formed microstructures, and cavitation erosion was utilized to evaluate the cohesion and structural integrity of the coatings. The coating structures were macroscopically dense and bimodal, with nanosized YSZ/ZrO2 particles and agglomerates thereof occupying the interlamellar regions between the Al2O3 splats, along with some mixed phase of Al2O3-YSZ/ZrO2. However, the addition of YSZ/ZrO2 lowered the hardness of the coating slightly. The cause was determined to be the areas with agglomerates of unmelted YSZ/ZrO2, which were also found to weaken the coating in cavitation erosion, a test of the structural integrity and cohesion of the coating through fatigue from microscopic impacts. Thus, the desired improvement in mechanical properties was not yet achieved. Stabilization of t-ZrO2 with Y2O3, as compared to unstabilized ZrO2, had no effect on the hardness or cavitation resistance of the coating with the currently achieved microstructure.
The usability of the novel hybrid powder-precursor HVOF process has been successfully demonstrated, and with further process optimization the composition is believed to provide interesting results. The results indicate that by reducing the amount of solution-precursor-synthesized YSZ/ZrO2, the coating cohesiveness would be sufficient to bring out the toughening effect of the added nanostructured phase. Based on the calculations of vacancies between splats in the Al2O3 coating, an optimal feedstock mixing ratio would be 7 wt.% of 8YSZ solution of the total feed, presuming identical deposition efficiencies of the two feedstocks. Further studies are recommended to optimize the relationship between the feedstock and deposition parameters. Additionally, by utilizing different feedstocks with lower melting points than YSZ or ZrO2, it is foreseen to be possible to produce interesting nano-microcomposite coatings of various compositions with relative ease and reproducibility, bringing material tailoring of thermally sprayed coatings to new levels.
"year": 2018,
"sha1": "0818ab1f2e3125f0a764e440a4873d3f6d9d5f70",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11666-018-0816-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4dc4879bd36061ca9a26ff359b765bd0e26ba27b",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Blast Alleviation of Sacrificial Cladding with Graded and Uniform Cellular Materials
Graded cellular material is a superb sandwich candidate for blast alleviation, but it has a disadvantage for the anti-blast design of sacrificial cladding: the supporting stress of a graded cellular material cannot maintain a constant level. Thus, a density graded-uniform cellular sacrificial cladding was developed, and its anti-blast response was investigated theoretically and numerically. One-dimensional nonlinear plastic shock models were proposed to analyze wave propagation in density graded-uniform cellular claddings under blast loading. There are two shock fronts in a positively graded-uniform cladding, while there are three shock fronts in a negatively graded-uniform cladding. Response features of density graded-uniform claddings were analyzed, and a comparison with cladding based on uniform cellular material was then carried out. Results showed that cladding with uniform cellular materials is a good choice for the optimal mass design, while the density graded-uniform cladding is more advantageous from the perspective of the critical-length design indicator. A partition diagram for the optimal length of sacrificial claddings under a defined blast loading was proposed for engineering design. Finally, cell-based finite element models were applied to verify the anti-blast response results of density graded-uniform claddings.
Introduction
Cellular materials have been widely regarded as ideal anti-blast/cushioning materials due to their superior energy absorption capability and shock mitigation [1-4]. The primary applications of cellular materials are energy protectors (such as helmets and shields) and packaging of fragile components. Under blast/impact loading, cellular materials can undergo a large deformation and simultaneously dissipate shock energy with a low transmitted stress. Thus, sandwich structures with cellular materials as core materials have gradually come into focus. Cellular sacrificial cladding, a kind of sandwich structure, is composed of two cover plates and a cellular material core, where the function of the cover plates is to improve the flexural rigidity of the sacrificial cladding and to avoid nonuniform blast loading. When subjected to impact/blast loading, cellular sacrificial cladding can absorb enormous energy with a layer-wise deformation mode and can protect the main structure behind the cladding from destruction [5,6].
It is a crucial point to understand the mechanical behavior of cellular material for guiding the anti-blast design of cellular sacrificial cladding. The cladding considered here consists of a density-graded cellular layer backed by a uniform cellular layer, with the lower-density end of the graded layer equal to the density of the uniform foam. Thus, the density distribution of the graded-uniform cellular sacrificial cladding can be expressed as

ρ(X) = ρ0 (2 − γ + 2γX/L2)/(2 − |γ|) for 0 ≤ X ≤ L2, and ρ(X) = ρ0 for L2 < X ≤ L, (1)

where X is the Lagrangian coordinate, L2 is the length of the graded layer, L is the total length of the graded-uniform sacrificial cladding, ρ(X) represents the relative density distribution, ρ0 is the relative density of the uniform cellular cladding, γ = 2(ρ(L2) − ρ(0))/(ρ(L2) + ρ(0)) is the density-gradient parameter and ρs is the density of the base material. For the case of γ > 0, the density increases linearly along the graded foam rod with a slope of 2ρsρ0γ/((2 − |γ|)L2), and for the case of γ < 0, the density decreases linearly with the same slope. Two cover plates added to the cladding are used to increase the flexural capacity of the cladding, and the masses per unit area of the plates are m2 (proximal cover plate) and m1 (middle plate). The thicknesses of both cover plates are negligible in the theoretical analysis.
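For concreteness, the piecewise profile of Eq. (1) can be evaluated numerically. The following minimal Python sketch assumes the piecewise form written above; the sample grid is illustrative, and the lengths are those used later in Section 4:

```python
import numpy as np

def relative_density(X, rho0, gamma, L2, L):
    """Piecewise relative-density profile of Eq. (1): linear over the
    graded layer (0 <= X <= L2), constant rho0 over the uniform layer."""
    X = np.asarray(X, dtype=float)
    graded = rho0 * (2.0 - gamma + 2.0 * gamma * X / L2) / (2.0 - abs(gamma))
    return np.where(X <= L2, graded, rho0)

# Example: PG-U (gamma = 2/3) and NG-U (gamma = -2/3) profiles from Section 4.
L2, L, rho0 = 0.0916, 0.272, 0.1
X = np.linspace(0.0, L, 5)
print(relative_density(X, rho0, +2.0 / 3.0, L2, L))
print(relative_density(X, rho0, -2.0 / 3.0, L2, L))
```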
The blast loading can be approximated as a plane pressure wave when the explosion takes place at a certain distance away from the cladding. In general, the blast pressure can be assumed to be an exponential attenuation wave [32,33,35,36], written as

p(t) = P0 e^(−t/τ), (2)

where t is the time, P0 is the initial peak of the blast loading and τ is the decay time of the blast loading. The impulse of the blast can be obtained as P0τ. It should be noted that the blast loading considered here takes into account the fluid-structure interaction effects, which are the interactions between the blast pressure produced by the explosion and the motion of the cover plate of the cellular sacrificial cladding.
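A short numerical check of Eq. (2) and its impulse, using the blast parameters introduced later in Section 4, can be sketched as follows:

```python
import numpy as np

P0 = 20e6      # initial peak of the blast loading, Pa (Section 4 value)
tau = 0.15e-3  # decay time of the blast loading, s

def blast_pressure(t):
    """Exponentially decaying blast pulse of Eq. (2): p(t) = P0*exp(-t/tau)."""
    return P0 * np.exp(-t / tau)

# The total impulse of the pulse is P0*tau; check by trapezoidal quadrature.
t = np.linspace(0.0, 20.0 * tau, 20001)
p = blast_pressure(t)
impulse = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))
print(f"numerical impulse {impulse:.1f} Pa*s vs P0*tau = {P0 * tau:.1f} Pa*s")
```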
Stress-Strain Relations of Cellular Materials
Due to the difficulty of obtaining the dynamic stress-strain relations of cellular materials, the quasi-static stress-strain curve is often used to evaluate and design cellular cladding for engineering projects. The stress-strain curve of cellular materials under quasi-static compression presents three distinct regions, namely an elastic region, a long plateau plastic region and a densification region. Several simplified idealizations were therefore proposed to characterize the collapse behavior of cellular materials. The R-PP-L idealization [7,8] is the most popular one, and it only contains two material parameters, i.e., the plateau stress and the locking strain. However, it has been demonstrated that the R-PP-L idealization can only provide a first-order approximation of the deformation behavior of cellular materials, owing to the constant locking strain assumption [2]. The densification region of the quasi-static stress-strain curve presents a strongly nonlinear plastic hardening phenomenon [12,37], which may more directly reveal the unreasonableness of the R-PP-L idealization for cellular materials. Here, the rate-independent R-PH idealization proposed by Zheng [12] is implemented to characterize and predict the energy absorption of cellular materials, as shown in Figure 2. The R-PH idealization can depict the stress-strain curves well, and it can be given by

σ = σ0(ρ) + C(ρ) ε/(1 − ε)^2, (3)

where σ0(ρ) and C(ρ) are the initial crushing stress and the strain hardening parameter, respectively, which are related to the local relative density ρ(X).
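As an illustration of Eq. (3), the sketch below evaluates the R-PH curve with parameters computed from the power-law fits of Eq. (27) in Section 4 for a relative density of 0.1; the R-PP-L comparison values are illustrative:

```python
import numpy as np

def stress_rph(eps, sigma0, C):
    """R-PH idealization of Eq. (3): sigma = sigma0 + C*eps/(1 - eps)^2."""
    return sigma0 + C * eps / (1.0 - eps) ** 2

def stress_rppl(eps, sigma_pl, eps_lock):
    """R-PP-L idealization: constant plateau up to the locking strain."""
    return np.where(eps < eps_lock, sigma_pl, np.inf)

# Parameters from Eq. (27) with k1 = 0.439, k2 = 0.127, sigma_S = 175 MPa
# and rho = 0.1: sigma0 = 0.77 MPa, C = 0.22 MPa.
eps = np.linspace(0.0, 0.8, 9)
print(stress_rph(eps, 0.77e6, 0.22e6) / 1e6)       # MPa
print(stress_rppl(eps, 0.77e6, 0.7) / 1e6)         # illustrative R-PP-L
```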
Figure 2. The quasi-static nominal stress-strain curve obtained from the cell-based finite element (FE) model [31] and the R-PP-L and R-PH idealizations.
Blast Alleviation of Graded-Uniform Cellular Sacrificial Claddings
In the case of a material subjected to a velocity increasing from zero, if the material has an upward concave stress-strain relation, a series of plastic waves initially propagates from the impact end and transforms into a shock wave later, when the loading velocity is high enough [38]. Based on experience, the material at the blast end of a cellular sacrificial cladding could reach a high speed in a very short time. Following the treatment of the material models and blast loading in [6,23], we assume that the shock wave is formed as soon as an increasing velocity from zero acts on the cellular sacrificial cladding. The rationality of this hypothesis will be illustrated by the cell-based finite element results in Section 5.
According to the R-PH model, the elastic wave speed is infinite, and the shock wave speed is finite and determined by the slope of the Rayleigh line [8]. This indicates that the shock wave cannot catch up with the elastic wave, and the stress ahead of the shock front is equal to σ0(ρ). Thus, the physical quantities ahead of the shock front can be expressed as {vA, εA, σA} = {v, 0, σ0(ρ)}. If the velocity ahead of the shock front is determined, the physical quantities behind the shock front, {vB, εB, σB}, can be obtained with the aid of Equation (3) and the conservation relations of mass and momentum across the shock front [39], which are

εB − εA = (vB − vA)/vSh,  σB − σA = ρs ρ vSh (vB − vA), (4)

where vSh is the shock front speed and ρ is the local relative density of the cellular material at the shock front position. Several kinds of sacrificial claddings, serving a protective function against blast or impact loads, have been proposed and investigated theoretically, experimentally and numerically in the literature [5,6,19-34], such as single, double, triple and graded cellular sacrificial claddings. However, the deformation patterns of those sacrificial claddings under impact loading are different and highly dependent on the density distribution of the sacrificial cladding. Under impact loading, a single, uniform cellular layer deforms layer by layer, and only one shock wave is found, propagating from the proximal end to the distal end. To enhance the blast/impact resistance capacity, double and triple cellular claddings [21] were proposed, in which double and triple shock waves, respectively, can be found under impact loading. The deformation pattern of a graded cellular layer is more complicated [22,24,25,40], and a variety of forms of shock wave propagation are possible according to the density distribution in the graded cladding. Therefore, figuring out the deformation mode of the sacrificial layer is the top priority of the anti-blast analysis of sacrificial cladding.
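Combining Eq. (3) with the jump conditions of Eq. (4) yields a closed-form jump state for a given velocity jump across the front. A minimal Python sketch follows; the velocity jump of 100 m/s is illustrative, and the material parameters are those given in Section 4:

```python
import numpy as np

rho_s = 2700.0     # base-material density, kg/m^3
sigma_S = 175e6    # base-material yield stress, Pa
k1, k2 = 0.439, 0.127

def shock_state(dv, rho):
    """Jump state behind an R-PH shock for a velocity jump dv = vB - vA
    running into material of local relative density rho; obtained by
    eliminating v_Sh between the mass and momentum relations of Eq. (4)
    and the R-PH law of Eq. (3)."""
    sigma0 = k1 * sigma_S * rho ** 2
    C = k2 * sigma_S * rho ** 2
    a = dv * np.sqrt(rho_s * rho / C)
    eps_B = a / (1.0 + a)                       # strain behind the front
    v_sh = dv / eps_B                           # Lagrangian shock speed
    sigma_B = sigma0 + rho_s * rho * v_sh * dv  # stress behind the front
    return eps_B, v_sh, sigma_B

eps_B, v_sh, sigma_B = shock_state(dv=100.0, rho=0.1)
print(f"eps_B = {eps_B:.3f}, v_sh = {v_sh:.1f} m/s, "
      f"sigma_B = {sigma_B / 1e6:.2f} MPa")
```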
Positively Graded-Uniform Cellular Sacrificial Cladding (PG-U Cladding)
Consider a unit strip of a sacrificial cladding comprising a positively graded cellular layer and a uniform cellular layer, as shown in Figure 3. Once a blast loading is applied to the PG-U cladding, a shock wave initiates at the proximal end and travels through the cladding with a finite speed. If there were no other shock wave, a stress imbalance would appear in the distal layer. Thus, it is assumed that two plastic shock waves initiate simultaneously in a PG-U cladding, i.e., one traveling in the proximal layer and the other propagating in the distal layer, so that the proximal layer and the distal layer deform simultaneously. Before the shock wave arrives at the end of the distal layer, the stress at the support end (the stress transmitted to the protected structure) is limited to σ0(ρ0), which can be set as the allowable stress that keeps the protected structure from damage; this can be roughly achieved when the distal layer is sufficiently thick.
Assuming the cover plate is a rigid body and the origin of the X coordinate is at the proximal end, as shown in Figure 3a, X1 and X2 are used to denote the Lagrangian coordinates of Shocks 1 and 2. Thus, the velocities of the two shock fronts, S1 and S2, can be written as

S1 = dX1/dt,  S2 = dX2/dt. (5)

According to Equation (4), the mass and momentum conservation conditions across Shocks 1 and 2 are

ε1 = (v1 − v2)/S1,  σ1 = σ0(ρ(X1)) + ρs ρ(X1) S1 (v1 − v2) (6)

and

ε2 = v2/S2,  σ2 = σ0(ρ0) + ρs ρ0 S2 v2, (7)

where {v1, ε1, σ1} represent the velocity, strain and stress behind Shock 1, respectively, and {v2, ε2, σ2} are those behind Shock 2, respectively. The momentum conservation of the cover plate m2 and Region 1 at time t gives Equation (8), and the similar momentum conservation of Region 2, middle plate m1 and Region 3 through the time interval dt leads to Equation (9). Combining Equations (3) and (5)-(9), we obtain the governing equations of the anti-blast response of the PG-U cladding at Stage I, Equations (10) and (11). However, there is no explicit solution to those differential equations. In this study, they are solved numerically with a fourth-order Runge-Kutta scheme with the initial conditions X1 = 0, X2 = L2 and v1 = v2 = 0. At Stage I, the velocity of Region 1 increases sharply and then decreases slowly, reflecting the force competition between the blast loading p(t) and the stress behind Shock 1, σ1. The common velocity v2 of Regions 2 and 3 is increasing due to the difference between the stress ahead of Shock 1, σ0(ρ(X1)), and the stress behind Shock 2, σ2. When v1 = v2, Shock 1 vanishes, while Region 1, Region 2 and Region 3 with the cover plates m1 and m2 move as an entirety towards the distal end. At this moment, Stage II commences. With the application of the mass and momentum conservation relations across Shock 2, Equations (10) and (11) can be rewritten as Equations (12) and (13). The deformation at Stage II is governed by Equations (12) and (13), and the initial conditions are X1 = x11, X2 = x12 and v1 = v2 = v12, where x11 and x12 are the Lagrangian coordinates of Shocks 1 and 2 and v12 is the common velocity of Regions 1 and 2 at the end of Stage I. The typical stress distributions along the cladding at Stages I and II are depicted in Figure 3b. It is obvious that the stresses in the undeformed regions do not exceed the local initial crushing stress, which illustrates that the assumption of double shock waves is self-consistent.
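Since Equations (10) and (11) must be integrated numerically, a sketch of the fourth-order Runge-Kutta scheme is given below. The right-hand side here is only a toy stand-in for the actual governing equations, and the initial plate velocity is an illustrative substitute for the blast acceleration:

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2.0, y + dt / 2.0 * k1)
    k3 = f(t + dt / 2.0, y + dt / 2.0 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def rhs(t, y):
    # Placeholder right-hand side standing in for the Stage-I system of
    # Eqs. (10)-(11); y = [X1, X2, v1, v2]. The real expressions couple
    # the shock positions, region velocities and the blast pressure p(t).
    X1, X2, v1, v2 = y
    return np.array([1.2 * v1, 0.8 * v2, -5.0e3 * v1, 2.0e3 * (v1 - v2)])

t, dt = 0.0, 1.0e-6
y = np.array([0.0, 0.0916, 50.0, 0.0])  # X1 = 0, X2 = L2; v1 kick is illustrative
while y[2] > y[3] and t < 1.0e-3:       # Stage I ends when v1 drops to v2
    y = rk4_step(rhs, t, y, dt)
    t += dt
print(f"Stage I ends at t = {t * 1e3:.3f} ms with v = {y[3]:.1f} m/s")
```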
Negatively Graded-Uniform Cellular Sacrificial Cladding (NG-U Cladding)
It is supposed that there exist three shock fronts simultaneously propagating in the NG-U cladding at Stage I, as shown in Figure 4a. Under blast loading, Shock 1 propagates from the proximal end to the distal end, while Shock 2 and Shock 3 initiate at the interface between the graded layer and the uniform layer, i.e., they are the same disturbance, contributed by the crushing at the weakest point. Then Shock 2 propagates towards the proximal end, and Shock 3 propagates oppositely towards the distal end. This assumption will be validated by the FE simulations in Section 5. In the process of propagation, the compacted part of Region 1 and cover plate m2 move forward with a velocity of v1, Region 2 moves with velocity v2 and Region 3 with v3. These velocities satisfy the relationship given by Equation (14). Similar to the analysis in Section 3.1, the shock front velocities of S1, S2 and S3 are Si = dXi/dt (i = 1, 2, 3), where X1, X2 and X3 are the shock positions measured from the proximal end of the cladding (dX2/dt < 0, since Shock 2 moves toward the proximal end). Considering the fact that Shock 1 and Shock 3 are right-spreading waves and Shock 2 is a left-spreading wave, the conservation conditions across Shocks 1, 2 and 3 take the forms of Equations (15)-(17), where {v1, ε1, σ1}, {v2, ε2, σ2} and {v3, ε3, σ3} represent the velocity, strain and stress behind Shocks 1, 2 and 3, respectively. Further applying the momentum conservation relations to the compacted Region 1 with m2, to Region 2 and to Region 3, respectively, we obtain Equations (18)-(20). Therefore, substituting Equations (3) and (15)-(17) into Equation (14) yields the governing equations of the NG-U cladding at Stage I, Equations (21) and (22), with the initial conditions X1 = 0, X2 = X3 = L2 and v1 = v2 = v3 = 0. It is noteworthy that the acceleration of Region 2, dv2/dt, is a constant determined by the density-gradient parameter γ in the graded layer. Thus, an initial constraint dv1/dt|t=0 > dv2/dt|t=0 (i.e., (P0 − σ0(ρ(0)))/m2 > −4γk1σSρ0/((2 + γ)L2ρs) when using Equation (27)) should be satisfied to guarantee the shock propagation assumption in the NG-U cladding. When the speed v1 drops to equal v2, Shock 1 vanishes, while Shock 2 continues to propagate towards the proximal end owing to the velocity difference between Region 2 and Region 3, i.e., Stage II commences.
Thus, the governing equations for Stage II follow in the same manner, with the initial conditions X1 = x11, X2 = x12, X3 = x13 and the region velocities taken at the end of Stage I. The velocity of Region 2 continues to decline until it satisfies the condition v2 = v3 > 0, and then Shock 2 vanishes, i.e., Stage III commences. At this moment, Regions 1, 2 and 3 and the cover plates m1 and m2 move as an entirety towards the distal end, as shown in Figure 4a. Similarly, the governing equations of the deformation at Stage III are obtained, with the initial conditions X1 = x21, X2 = x22, X3 = x23 and the common velocity v23, where x21, x22 and x23 are the Lagrangian coordinates of Shocks 1, 2 and 3 and v23 is the velocity of Regions 1, 2 and 3 at the end of Stage II, respectively. The governing equations of Stage I, Stage II and Stage III can also be solved by using the Runge-Kutta method. The stress distributions along the cladding at Stages I, II and III are also checked in Figure 4b, and the stresses in the undeformed regions do not exceed the local initial crushing stress.
Critical Length
The critical length, defined as the minimum length of cellular sacrificial cladding needed to fully absorb the energy induced by the blast loading, is a vital target parameter for evaluating the cladding. However, the critical length of graded-uniform claddings cannot be directly obtained, because it depends on the initial length L2 and on the shock propagation behavior in the uniform layer within the end time.
The end time tm can be obtained from the law of momentum conservation: the impulse induced by the blast loading should equal the impulse absorbed by the sacrificial cladding, i.e., P0τ = σ0tm. In order to obtain the critical length of graded-uniform sacrificial cladding, two constraints [20] are introduced to optimize the design of the thickness distribution of the PG-U and NG-U claddings.
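With the blast and material parameters introduced later in Section 4, the end time follows directly; a one-line sketch:

```python
# End time from momentum conservation, P0*tau = sigma0(rho0)*t_m.
P0, tau = 20e6, 0.15e-3             # blast parameters used in Section 4
sigma_S, k1, rho0 = 175e6, 0.439, 0.1
sigma0 = k1 * sigma_S * rho0 ** 2   # initial crushing stress, uniform layer
t_m = P0 * tau / sigma0
print(f"t_m = {t_m * 1e3:.2f} ms")
```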
Constraint I is that the proximal layer (the graded cellular layer) is fully compacted exactly when the shock waves in this layer vanish. In detail, Shock 1 just reaches the end of the proximal layer when it vanishes for the PG-U cladding, and Shock 2 just reaches the stop position of Shock 1 in the proximal layer when it vanishes for the NG-U cladding. Applying this constraint, we can determine the optimal length of the proximal layer of the graded-uniform cladding.
Constraint II is that the distal layer is fully compacted at the instant when the blast loading is fully absorbed. Thus, the thickness of the distal layer can be determined by the final stop position of the shock front in the uniform layer, i.e., Shock 2 for the PG-U cladding and Shock 3 for the NG-U cladding.
By considering Constraints I and II, the critical thickness distribution of graded-uniform cladding can be determined when the blast loading and cover plates are defined.
Propagation of Shock Wave in the Cladding
It is reported that the properties of cellular material can be expressed as functions of its relative density [41-44]. The two material parameters of the R-PH model can also be expressed in power-law forms related to the relative density ρ, written as

σ0(ρ) = k1 σS ρ^n1,  C(ρ) = k2 σS ρ^n2, (27)

where σS is the yield stress of the matrix material, and k1, k2, n1, n2 are fitting parameters. Recently, Cai et al. [31] obtained the coefficients in Equation (27) by fitting the quasi-static compression curves of Voronoi honeycombs (as used in this paper to model the cellular materials) as k1 = 0.439, k2 = 0.127, and n1 = n2 = 2. The solid material of the Voronoi honeycombs in the present study is assumed to be elastic, perfectly plastic, with Young's modulus 66 GPa, Poisson's ratio 0.3, yield stress 175 MPa and density ρs = 2700 kg/m3. Sacrificial claddings with a specific density distribution are applied to demonstrate the propagation of shock waves in claddings for blast alleviation. Here, the mass per unit area of the cover plates is m1 = m2 = 2.7 kg/m2, and the initial peak and decay time of the blast loading are P0 = 20 MPa and τ = 0.15 ms. The density of the uniform cellular cladding is ρsρ0 = 270 kg/m3 (i.e., ρ0 = 0.1), and the density-gradient parameter is γ = 2/3 and −2/3 for the PG-U and NG-U claddings, respectively. Thus, the critical thickness distribution of the PG-U cladding can be determined as L2 = 91.6 mm and L = 272 mm, while the critical thickness of the NG-U cladding is L2 = 140 mm and L = 298 mm. For the PG-U cladding, the time histories of the velocities of Regions 1 and 2 (v1 and v2) and the propagation behaviors of the shock fronts are depicted in Figure 5a,b, respectively. At Stage I, the velocity of Region 1 (equal to the velocity of cover plate m2), v1, first increases and then decreases. Meanwhile, the velocity of Region 2, v2, increases from 0 until it reaches the velocity v1. Then, Region 1 and Region 2 move together, and Stage II commences. From the shock propagation, see Figure 5b, Shock 1 just reaches the end of the proximal layer when Stage I comes to an end, which satisfies the condition of Constraint I. At Stage II, there exists only one shock wave, Shock 2, propagating to the support end (the protected structure end). The cover plates m1 and m2 move with the same velocity v2, and the position difference of those two plates is the thickness of the compressed graded layer.
For the NG-U cladding, three shock waves propagate simultaneously in the cladding once the blast load is applied. Similarly, the duration of Stage I is determined by the condition that the increasing velocity of Region 2 reaches the decreasing velocity of Region 1, i.e., v2 = v1. Then Shock 1 vanishes and Stage II begins. The end time of Stage II is determined by v3 = v2, which is also the commencement of Stage III. At Stage III, the part behind Shock 3 moves together, and its velocity is gradually reduced to zero due to the alleviation of the undeformed uniform layer, as shown in Figure 5c. The shock front propagation behavior of the NG-U cladding is depicted in Figure 5d. The three shock fronts are described by the black, red and blue lines, respectively, and they disappear at the end times of Stage I, Stage II and Stage III, respectively. The sum of the distances swept by Shocks 1 and 2 is just the thickness of the graded layer, L2, and this is precisely caused by Constraint I as mentioned above.
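For the NG-U case just described, the initial shock-propagation constraint from Section 3.2 (the failure line) can be checked numerically with the parameters above. In the sketch below, the grouping of the printed inequality is assumed as written, σ0(ρ) follows Eq. (27), and ρ(0) follows the piecewise profile of Eq. (1):

```python
rho_s, sigma_S = 2700.0, 175e6
k1 = 0.439
P0, m2 = 20e6, 2.7                       # Pa, kg/m^2
rho0, gamma, L2 = 0.1, -2.0 / 3.0, 0.140 # NG-U parameters from this section

sigma0 = lambda rho: k1 * sigma_S * rho ** 2
rho_prox = rho0 * (2.0 - gamma) / (2.0 - abs(gamma))  # rho(0) from Eq. (1)

lhs = (P0 - sigma0(rho_prox)) / m2       # dv1/dt at t = 0
rhs = -4.0 * gamma * k1 * sigma_S * rho0 / ((2.0 + gamma) * L2 * rho_s)
print(f"dv1/dt(0) = {lhs:.3g} m/s^2 > bound {rhs:.3g} m/s^2: {lhs > rhs}")
```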
Parametric Analysis
The total mass and length of sacrificial cladding are two principal target indicators, which are associated with the cladding structural parameters, such as the attached mass distribution of the cover plates and the density-gradient parameter γ. Considering the case that the total mass per unit area of the two cover plates in density graded-uniform claddings, i.e., m = m1 + m2, is kept constant at m = 5.4 kg/m2, the critical length of the cladding decreases first and then rises with increasing |γ|. By comparison with the uniform cellular material cladding (called "SU cladding" for short) [2], where the cover mass is equal to m and the relative density of the cellular material is ρ0, the results indicate that the critical length of the density graded-uniform cladding is shorter than that of the SU cladding within a certain range of |γ|, as shown in Figure 6. Furthermore, there exists a minimum length for each density graded-uniform cladding under a defined loading and cover mass. Notably, the trend of the critical length of density graded-uniform cladding with increasing |γ| is not coincident under different ratios of the proximal cover plate mass to the middle one, η = m2/m1. From the perspective of the minimum critical length of cladding, the case of η = m2/m1 = 100 is more suitable for engineering design. In other words, the mass per unit area of the cover plates should be concentrated at the proximal end, which is advantageous for the design of the critical length.
In fact, the blast loading can be treated as an impulsive loading, so the mass per unit area of the cover plate indirectly decides the work done by the blast loading. Thus, a direct result can be deduced that the critical length of the sacrificial cladding decreases with increasing mass per unit area of the cover plates, and this is verified by the numerical results, as shown in Figure 6. However, the total mass of the cladding presents an opposite law with the variation of the cover plate mass. Under a fixed cover plate mass m, the critical length of the PG-U cladding decreases first and then increases until it reaches the critical length of the SU cladding with increasing density-gradient parameter γ. At the same time, the total mass of the PG-U cladding always shows a downward trend until it reaches the boundary of the SU cladding. Therefore, the SU cladding is an excellent choice for the mass design indicator. In contrast, the PG-U cladding is more advantageous when choosing a large γ from the perspective of the critical-length design indicator. There is a region in the total mass-critical length figure characterizing where the PG-U cladding should be preferred over the SU cladding in critical-length design. Two lines surround this region: one is indicated by the SU cladding, and the other is the boundary line where the critical length of the PG-U cladding is equal to that of the SU cladding. Similar results can be found for the NG-U cladding. However, its variation curves of total mass and critical length cannot reach the boundary of the SU cladding with increasing |γ|, and they are interrupted by the failure line, which is determined by (P0 − σ0(ρ(0)))/m2 = −4γk1σSρ0/((2 + γ)L2ρs), as mentioned in Section 3.2. In general, P0 is larger than σ0(ρ(0)), but this condition will be met when |γ| is large. Therefore, the advantageous region for the NG-U cladding is surrounded by the boundary line and the failure line, as shown in Figure 7.
Thus, partition diagrams in the total mass-critical length figure can be obtained when the minimum length of cladding is the design goal, where the blue, black and red shaded areas represent the SU cladding, the PG-U cladding and the NG-U cladding, respectively, while the orange shaded area is the dual area for the PG-U and NG-U claddings. Those shaded areas are distinguished by the lines representing the SU cladding, the failure line for the NG-U cladding, and the two boundary lines mentioned above. It is evident that the SU, PG-U and NG-U claddings occupy different regions in the total mass-critical length figure, as shown in Figure 8, and the appropriate sacrificial cladding should be identified according to the actual demand. However, the SU cladding is the superior one compared with the PG-U and NG-U claddings when minimum total mass is the design goal.
Comparison with Cell-Based Finite Element Model
2D irregular honeycombs with a uniform cell-wall thickness were used in this study, generated by utilizing the 2D Voronoi technique [45]. The methodology of the Voronoi technique can generally
2D irregular honeycombs with a uniform cell-wall thickness are used and generated by utilizing the 2D Voronoi technique [45] in this study. The methodology of the Voronoi technique can generally be described in four main stages [22,46]. At first, N nuclei were randomly scattered in a given region. Secondly, the nuclei were imaged to the surrounding regions to make sure the boundary density is consistent with the preconcerted design. Thirdly, Delaunay triangulation is constructed, and then the Voronoi diagram is determined. Finally, the part of the Voronoi diagram of the given region is reserved for further analysis. It is reported that 2D graded cellular structures can be realized by switching the nuclei scatter strategy in the first Stage [22], as shown in Figure 9. The distance between any two nuclei i and j is required to be where k is the cell irregularity, δ 0 (ρ) is the minimum distance between any two adjacent nuclei in a honeycomb structure with relative density ρ, ρ ij is the local relative density at the middle point between nuclei i and j, and h is a presupposed cell-wall thickness. Thus, any 2D cellular structure with continuous density variation can be constructed according to the given local relative density distribution.
In this section, we consider the case that the blast loading is P0 = 30 MPa and τ = 0.15 ms, the mass per unit area of the cover plates is m1 = m2 = 2.7 kg/m2 and the density-gradient parameter is set as γ = ±1. Thus, the critical thickness distribution of the PG-U cladding can be determined as L2 = 67.6 mm and L = 388.4 mm, while the critical thickness of the NG-U cladding is L2 = 110 mm and L = 391.7 mm. In the cell-based FE simulations, a 2D graded Voronoi honeycomb structure (γ = 1) with an area of 67.6 × 100 mm2, a 2D uniform Voronoi honeycomb structure with an area of 320.8 × 100 mm2 and two rigid plates, each with an additional mass of 0.27 g, are assembled as an FE model for the PG-U cladding. Similarly, a 2D graded Voronoi honeycomb structure (γ = −1) with an area of 110 × 100 mm2, a 2D uniform Voronoi honeycomb structure with an area of 281.7 × 100 mm2 and two rigid plates, each with an additional mass of 0.27 g, are assembled as an FE model for the NG-U cladding. The length of the specimens in the out-of-plane direction is 1 mm. ABAQUS/Explicit code was used to perform the FE simulations. Cell walls of the specimens were modeled with S4R shell elements. The element size was set to about 0.6 mm in-plane and 1 mm out-of-plane through a mesh sensitivity analysis. Thus, there are about 11,350 and 16,520 elements in the FE models of the PG-U cladding and the NG-U cladding, respectively. General contact with a slight friction coefficient of 0.02 was employed, as used by Zheng et al. [12]. To simulate a plane-strain state, all the nodes were constrained in the out-of-plane direction. A pressure with exponential function (P0 = 30 MPa and τ = 0.15 ms), which can be directly called in the ABAQUS/Explicit code, was used to simulate the blast load in the FE simulation.
A comparison between the cell-based FE results and the theoretical predictions based on the R-PH shock model is presented in Figure 10. The velocity curves of the cover plate obtained from the analytical predictions and the FE results are in good agreement, as shown in Figure 10a,c, which correspond to the PG-U cladding and the NG-U cladding, respectively. The cell-based FE results demonstrate that the proximal layer and the distal layer deform almost simultaneously once the blast load is applied, and the middle-plate velocities from the cell-based FE results also conform very well with the theoretical results, which directly illustrates the reasonableness of the assumption made in Section 3. The two layers also move together when the velocity of the proximal layer decreases to that of the distal layer, for both graded-uniform cellular sacrificial claddings.
The support stress is an important consideration related to the effectiveness of the sacrificial cladding. The results indicate that the time history of the support stress obtained from the FE simulations is consistent with the theoretical solutions, except that it is slightly higher than the theoretical predictions in the later period, as depicted in Figure 10b,d. This small difference may be due to the action of an elastic wave reflected from the stationary support end and is within acceptable error. If only the proximal graded cellular layer were used as the sacrificial cladding, the maximum support stress would show a rising trend, which is disadvantageous for the anti-blast design of sacrificial cladding.
Detailed deformation patterns of PG-U cladding and NG-U cladding under blast loading of P0 = 30 MPa and τ = 0.15 ms are shown in Figure 11a,b, respectively. Two shock fronts initiate simultaneously in PG-U cladding once the blast load is applied, and three shock fronts commence simultaneously in NG-U cladding. These phenomena are consistent with the assumptions of the theoretical analysis. However, the crushing behavior presents random deformation bands at the later stage of compression for both claddings because of the decrease in impact velocity. As the velocity of the attached mass becomes low at the later stage of compression (see Figure 10a,c), the deformation mode of the cellular sacrificial cladding changes into a transition or homogeneous mode [12,15,37,45], which is confirmed by the random shear collapse bands in the uniform cellular layer. Thus, the predictions for cellular sacrificial cladding would be more accurate if both the dynamic and the quasi-static constitutive relations were considered.
Conclusions
The dynamic responses of density graded-uniform sacrificial cellular claddings subjected to blast loading are investigated theoretically and numerically in this paper. Based on the rate-independent R-PH shock model of cellular materials and a blast loading with an exponential attenuation function, the differential equations governing the shock front propagation in density graded-uniform claddings were derived and solved numerically with a fourth-order Runge-Kutta scheme.
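The governing ODEs themselves are not reproduced in this excerpt, so the following is only a generic sketch of the fourth-order Runge-Kutta scheme the authors mention, applied to an arbitrary first-order system dy/dt = f(t, y); the right-hand side here is a placeholder driven by the decaying blast pulse, not the paper's shock-front equations:

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate(f, y0, t0, t1, n_steps):
    t, y = t0, np.asarray(y0, dtype=float)
    out = [(t, y.copy())]
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        y = rk4_step(f, t, y, dt)
        t += dt
        out.append((t, y.copy()))
    return out

# Placeholder right-hand side: state y = [position, velocity] of a plate
# driven by the decaying pressure pulse (illustrative only; units: SI,
# acceleration = pressure / areal mass).
def f(t, y):
    p = 30e6 * np.exp(-t / 0.15e-3)
    return np.array([y[1], p / 2.7])

trajectory = integrate(f, [0.0, 0.0], 0.0, 1e-3, 1000)
```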
Theoretical predictions reveal the characteristics of the motion of the cover plates and of the shock front propagation in the claddings. There exist two shock fronts and two stages in the anti-blast response of the PG-U cladding, while there are three shock fronts and three stages in the NG-U cladding. The influences of the cover plate mass and the density-gradient parameter γ on the critical length of the sacrificial cladding were analyzed. The results illustrate that the cover plate mass should be concentrated at the proximal end, which is beneficial for the anti-blast design of both PG-U and NG-U claddings. Moreover, the optimal critical thickness of the graded-uniform cladding can be achieved by choosing a reasonable gradient parameter once the mass ratio of the two cover plates is fixed. A critical-length design partition diagram for SU, PG-U and NG-U claddings against blast loads is then presented; it shows that each type of cladding is preferable for a particular design demand. The SU cladding is the superior choice over the PG-U and NG-U claddings when minimizing the total mass is the goal.
The numerical simulations of the anti-blast behavior of the sacrificial claddings were carried out using cell-based finite element models. The analytical predictions for the graded-uniform claddings were compared with the cell-based FE results. Generally, good agreement is achieved, which further confirms the correctness and effectiveness of the theoretical analysis. | 2020-12-10T09:02:37.245Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "d25bf325d7b23a3f44346fc12a72410600a83e69",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/13/24/5616/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8c09e6c42265f7079053c84f4b702c4e79048256",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
145062958 | pes2o/s2orc | v3-fos-license | Perspectives on Human Attachment (Pair Bonding): Eve's Unique Legacy of a Canine Analogue
The mother-child bond is undoubtedly homologous with that of other primates (and mammals). However, the man-woman pair bond and man(to)child pair bond are not paralleled by any terrestrial primate, nor by many other mammals. Hence, knowledge of primate behavior would not be predictive of the pan-human (i) social father and (ii) extended pair bond between a man and a woman (with the cultural overlay of marriage). It is suggested that female choice of mating partner shifted in the direction of a canid analogue, in which men's motivations to share resources with the female and to exhibit paternalistic behaviors were positively selected. Accordingly, it would be predicted that, compared to other terrestrial primates, the neuro-hormonal bases of the mother-child affiliative bond would be similar, but the bases of the man-woman affiliative bond and the man(to)child affiliative bond would be dissimilar.
Introduction
While "love" has been a favorite fodder for poets and playwrights, scientific efforts have been less prolific. Nonetheless, in a series of investigations, Fisher et al. (2002), inter alios, have attempted to isolate neural circuitry and brain chemistry which profiles three types of "love" (at least in adults): (i) lust or sexual the on-going social father. None of the great apes exhibits either of these two features. 3.
The man-woman pair bond
Romantic love seems to be a human universal and is found across highly variegated social structures and ecologies (Jankowiak and Fischer 1992; Jankowiak 1995). For example, Jankowiak's survey of 168 cultures found "romantic love/passion" in 148 (89%) of the cultures. Such an emotion seems a key component in the forging of an extended human pair-bond. Fisher (1983) suggests that archaic Homo had begun developing a reproductive strategy wherein females exchanged (relative) sexual exclusivity for (relatively) unique provisioning on the part of a male. This strategy would form the basis of the nascent pair-bond which has proved so successful in human bio-cultural evolution. 4. Such a shift in strategy would be aberrant for terrestrial primates in particular and mammals in general (Boesch 1994, Boesch and Boesch 1989, de Waal 1997, de Waal and Lanting 1997, Galdikas 1985, Goodall 1986, Mackinnon 1974, McGinnis 1979, Nishida and Hosaka 1996, Parish 1996, Smuts and Gubernick 1992, Stanford 1996, Taub 1984, Teleki 1973; see Kleiman [1977] for discussion of mammalian and avian monogamy). Typically, these adult males compete amongst themselves to achieve greater dominance within a male hierarchy; females then mate preferentially with the more successful males. The translation of increased physical dominance to increased reproductive success can range from slight to stark; see Ellis (1995) and Dewsbury (1982) for examples.
With the putative shift, females had to evaluate not just the physical dominance and assertiveness of the competing males (who won), but also the psychological profile of the competing males: i.e., trustworthiness in reciprocity over time. Framed a little differently, sperm is essentially infinite. Female-female competition over mating protocols has little pay-off for the victor; the winner would accrue no advantage. However, food is finite and is valuable. Incremental food (via the male) gained from any successful female-female competition would have survival value for the winning female. Whereas access to sperm may be a constant, access to food is a variable.
Thus, male-male competition for mating partners incorporated an additional psychological parameter (enhanced reliability or trustworthiness), and female-female competition for mating partners (who would reliably share food) arose to become important. 5.
As ethnographies on both historical and contemporary cultures illustrate, males -who had been selected over millennia by females -return to the domicile and willingly and systematically share resources with the woman in the pair bond, i.e. his wife (Hewlett 1992, Human Relations Area Files 1949, Lamb 1987, Mackey 1985, Murdock 1957, 1967). The man-to-woman sharing is found across subsistence and ecological parameters, viz. Amazonia (Chagnon 1977, Stearman 1989), China (Chance 1984), Tibet (Ekvall 1968), the Dani of New Guinea (Heider 1979), Eskimos (Chance 1966), Japan (Norbeck 1976), Australian aborigines (Hart and Pilling 1960, Tonkinson 1978), and the Dobe !Kung of the Kalahari desert (Lee 1984). This sharing of resources from man to woman is a universal; see Brown (1991) for additional human universals.
The provisioning is not totally exclusive. Systematic food sharing has been ritualized in many, if not all, societies. Rarely can a hunter claim a large kill for only his own family (Coon 1971, Lee 1982, Tonkinson 1978, Chance 1966). But, within these contexts, a man provides singular attention in terms of provisioning and protecting the legitimate children that he has fathered and his wife or wives (see HRAF, #22-26 [1949] for examples, and see Malinowski [1927] and Hendrix [1996] for theoretical discussions).
When resources are not forthcoming from a prospective groom, brides are difficult to acquire (Cashdan 1993) and wives are difficult to keep (Betzig 1989). For example, in a sample of 50 cultures which had economic deprivation as a sanctioned reason to divorce, the wife could divorce the husband in 49 of the cultures. In one, either of the spouses could initiate the divorce. In no culture could only the husband divorce the wife on the basis of her economic deficiencies (Betzig 1989). When the pattern of male provisioning does break down across the overall society, e.g. the Ik (Turnbull 1972), the breakdown signals an overall societal disintegration and is a focused topic of the ethnographer's analysis.
The monogamous, arboreal primates
The monogamous (pair-bonding), arboreal primates -e.g. the marmosets of the New World and the gibbons of Southeast Asia -also illustrate relatively high levels of paternalistic behaviors. For example, between suckling episodes, the marmoset's male partner (presumptively the biological sire) will carry the infant (Jolly 1985). If two offspring are still with their parents, the male gibbon will tend to the juvenile, while the female gibbon will tend to the infant (Carpenter 1940, Chiver 1977, Leighton 1986). In terms of this article, the difference between these primates and the canid and human fathers is that the canid and human fathers do leave the mother and young and travel widely to procure food. Once the food is procured, these males return to the mother-young dyad to relinquish the food for the consumption of the mother and the young. None of the arboreal primates has been reported to travel alone in this way and then return to actively relinquish and share food. Given the phylogenetic distance of humans and the gibbons from each other and of both from the New World primates, the behavioral profiles which are similar probably reflect behavioral convergences which, in turn, reflect ecological constrictions (analogues) rather than genetic continuity (homologues).
Of additional interest is the relationship between sexual dimorphism and the four categories: humans, canids, arboreal pair-bonding primates, and the terrestrial great apes.
Lessened sexual dimorphism in Homo
As data in the next section suggest, sexual dimorphism is lower in Homo sapiens than would be expected given our generally agreed upon ecological heritage as (i) a large, (ii) terrestrial primate which is (iii) non-obligate monogamous. Dominance displays by men which are based on their own physical/biological attributes also seem to be substantially restricted. This section seeks to address one facet by which expected dominance displays by men would have lost positive selectivity and thereby been reduced in degree, if not in kind.
Although dominance, as a construct, has a rich history with variegated definitions, this section has a narrow focus. A dominance display is defined here as a behavior or a physical characteristic on the part of one adult male which is directed at other adult males to allow differential access to breeding females. Successful dominance displays by an adult male would enhance that adult male's access to breeding females. Unsuccessful dominance displays or a lack of dominance displays would decrease the male's access to breeding females (see Ellis [1995] and Dewsbury [1982] for examples of the relationship -sometimes stark and sometimes slight -between increased male dominance and increased reproductive success).
Parameters of sexual dimorphism in primates
Although there are exceptions, sexual dimorphism (e.g. by weight [from Hall 1985]) tends to be greater in (semi)terrestrial primates than in arboreal primates. The argument is that, when the males exchanged the harder-to-scan world of the trees for the easier-to-scan world of the ground, they were better able to assert dominance and have multiple sexual partners. Indeed, terrestrial primates are more prone to be polygynous than are arboreal primates (Jolly 1985, Hrdy 1999). Accordingly, once freed from the problems of fissile tree limbs and incessant gravity, additional male size would be advantageous in creating dominance for the larger male and submission in the smaller male (see Fleagle [1988]; Martin, Willner, and Dettling [1994]; McHenry [1991]; Richard [1985] for examples and discussion). Hence, more effective male-to-male dominance displays/aggression could be translated into multiple partners, which would lead to a greater number of descendants who, in turn, would pass on the genetic material underpinning the physical attributes of the "successful" display. The same argument would apply to increased canine size and enhanced piloerection or other display items (e.g. manes) which could be used to gain dominance and, thereby, access to more sexual partners and, hence, to sire more descendants.
There are three givens that apply here: (a) Homo's predecessor Australopithecus did exhibit a large degree of sexual dimorphism by size (Hall 1985; Plavcan and van Schaik 1997), (b) Homo, compared to Australopithecus, gradually increased in size (Hall 1985; Aiello 1994), and (c) Homo became exclusively terrestrial. From these three givens, a not unreasonable inference would be that Homo would follow the basic trend of maintaining or increasing sexual dimorphism. However, sexual dimorphism decreased (Arsuaga et al. 1997; Economist 1994; Lewin 1987; Lockwood et al. 1996; McHenry 1991).
In terms of height, the sexual dimorphism of contemporary humans is 107 (s.d. 1.5; n = 93 [societies]); i.e., women are 94% the height of men (Alexander et al. 1979). The human canine is virtually (sexually) isomorphic, and piloerection is not a functional human trait. In terms of weight, the sexual dimorphism of (U.S.) humans is 130.0. Since the linear correlation between the weight of primate males and the sexual dimorphism of their species is significant (r_p = .569; p < .01; two-tailed; n = 47) (Hall 1985), the sexual dimorphism of human males can be predicted from their weight. When the "sexual dimorphism ratio" is predicted from the average man's weight, the predicted male-to-female ratio is 187.4. This predicted value far overestimates the actual value of 130.0. In other words, humans are far more sexually isomorphic than would be expected from the ecological circumstance of their phylogeny. It is argued here that there were lessened selective pressures for dominance displays in early Homo, and that an excellent candidate for one agent generating the negative selection is a shift in female preference toward provisioning in the definition of an appropriate mating partner. It is useful to reiterate that the canids mentioned above also tend toward minimal sexual dimorphism and facultative monogamy (Kleiman 1977). Complementary candidates to explain the reduced sexual dimorphism include the notions (from Hrdy 1999) that larger mothers -once terrestrial -were positively selected because of the sheer advantage that size has in manipulating the environment for herself and her children; (from Wrangham et al. 1999) that it derives from a dietary shift to cooked foods and a man-woman negotiation wherein men helped protect the food source; and (from Immerman and Mackey 1999) that an infestation of sexually transmitted diseases within a troop would tend to favor monogamy and penalize promiscuity as practiced by more dominant males, viz. due to increased sterility and higher morbidity and mortality for mates and any subsequent progeny.
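As a worked illustration of the prediction step: the text reports only the correlation (r_p = .569, n = 47) and the resulting prediction (187.4), not the fitted coefficients, so the data and the assumed 78 kg average male weight below are invented placeholders, not Hall's (1985) figures:

```python
import numpy as np

# Hypothetical stand-in for the 47 primate species: male body weight (kg)
# and dimorphism ratio (100 * male weight / female weight).
male_wt = np.array([0.4, 1.1, 3.5, 7.0, 10.0, 35.0, 50.0, 120.0])
ratio = np.array([102.0, 105.0, 115.0, 125.0, 135.0, 150.0, 160.0, 205.0])

slope, intercept = np.polyfit(male_wt, ratio, 1)   # simple linear fit
r = np.corrcoef(male_wt, ratio)[0, 1]

predicted_human = slope * 78.0 + intercept   # 78 kg: assumed average man
print(f"r = {r:.3f}, predicted ratio for humans = {predicted_human:.1f}")
# The paper's argument compares such a prediction (~187) with the observed
# human value of 130: humans are far more isomorphic than their weight and
# terrestrial habit would suggest.
```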
In sum, a shared ecological and phylogenetic heritage with the great apes would predict a large sexual dimorphism index. That is, a large, terrestrial primate is aligned with increased sexual dimorphism. An enhanced fathering index is aligned with reduced sexual dimorphism. The large, terrestrial primate -Homo sapiens -is aligned with an unexpectedly small sexual dimorphism.
The man(to)child affiliative bond
There is a cross-cultural tendency of men to associate with (their) children in public places -away from the domicile -during times when men are not precluded by work schedules or ritual (Mackey 1985, 2001). In a cross-cultural study of 23 cultures and of over 55,000 adult-child dyads, nearly a fifth of the surveyed children who were with adults were with men -no women present (Mackey 2001). This 20+% (sd = 5.9%) is difficult to conceptualize as mere error variance. 6. Another third (sd = 14.5%) of the children with adults were with men, and women were also present. The remaining children with adults were with women, but no man was present. See Table 1. Compared to those times when men tended to be precluded by cultural norms from being with children, the percentages of children with men -both in men-only and in men-and-women adult groups -increased for each of the surveyed cultures (n = 17), and increased in the aggregate, when those restrictive norms were absent (see Table 1).

Table 1. Mean percentage distribution of children in 23 cultures by adult group during times of availability of men to children at described loci (40,233 children) and during those times when cultural norms precluded availability of men to children (17 cultures, 18,637 children); each culture is weighted equally (adapted from Mackey [2001]).

There is no theory which would suggest that the men who are proximate to these children would be other than consanguine kin. Given the greater physical size and power of the man versus the child (and versus the wife/mother), it is suggested that the men were associating with the children because they -the men -chose to do so. The ethnographic literature is replete with examples of fathers being fond of their own children (Hewlett 1992, Lamb 1987, Mackey 1985). Framed differently, the data suggest an independent man(to)child affiliative bond which is part of Homo's bio-cultural heritage.
Again, these (man[to]child) behaviors would not be predicted from the primate homologues, but would be predicted by a canid analogue. The canid adult males do return from hunting/scavenging and share food with their young. Note that the canid adult males also systematically "play" with their pups, whereas the adult males of other social carnivores -lions (Guggisberg 1963, Rasa 1986, Rudnai 1973, Schaller 1972) and hyenas (Lawick and Lawick-Goodall 1971, Kruuk 1972) -neither provision nor "play" with their young, and are often a physical threat to the young of the social group. (For the context of social structure and ecology, see King [1980], Lovejoy [1981], Mackey [1976, 1985, 1986], Schaller and Lowther [1969], and Thompson [1978].)
As ancestral females selected men (1) with whom they could establish an extended affiliative bond (attachment) and (2) from whom reliable provisioning was predictable, they were simultaneously selecting for traits which would forge a social father: a man who would form attachments -bond -with his young and who would be psychologically willing to share resources with those young. See Tiger (1968) for discussion of the man-man bond. Accordingly, a number of hypotheses would be forthcoming. The neuro-hormonal basis of the mother-child attachment would be expected to be similar (homologues) to the templates of the female great apes. However, the neuro-hormonal basis of man(to)woman attachment would be expected to differ from any affiliation template between the genders in the great apes. Given the similarity of canid pair-bonding behaviors to man-woman attachment, an analogous profile with those canids would be expected.
Furthermore, the neuro-hormonal basis of man(to)child attachment would be expected to be different from any adult-male(to)young affiliation template in the great apes. Given the similarity of man(to)child attachment behaviors to canid adult-male(to)pup behaviors, an analogous profile with those canids would be expected.
Of course, the triangulation of the terrestrial great apes and the canids with the arboreal, monogamous primates (marmosets, tamarins, gibbons, siamangs), in which aspects of paternal behavior are also typical, would be interesting.
Conclusion
Across cultures, men develop extended pair-bonds with women (they marry women) and provision these women. The men also nurture their own children. Within the context of these two universals, the argument is presented that the affiliation which mediates these behaviors is, in part, neuro-hormonal in character and thus part of the phylogenetic heritage of our species. The drive-wheel for these behaviors, which would not be predicted by knowledge of terrestrial primates, is argued to be a successful reproductive strategy of our female ancestors, a strategy analogous to that of female canids -convergent evolution -that enabled them to exploit a novel resource for predictable sustenance for themselves and their offspring: men.
Notes
1. Elevated levels of the hormone prolactin have been aligned with male parenting behaviors in many birds, rodents, and the callitrichid monkeys Callithrix jacchus and Saguinus oedipus. In birds, prolactin may be elevated in both male and female breeders during various stages of nest building, egg laying, incubating, and feeding of young (Ziegler 2000). Accordingly, prolactin is probably involved at some stage in initiating or maintaining men's paternalistic behaviors. See Storey, Walsh, Quinton, and Wynne-Edwards (2000) for clinical supporting evidence. However, prolactin is a very old hormone and is found in fish and birds as well as across the mammalian domain (Hrdy 1999). Given that prolactin is available to great apes and to humans, but the paternalistic behaviors of the two groups are different, the surveyed behavioral differences are unlikely to be explained by the presence of prolactin per se. 2. In humans, mating preferences -marriage partners -are subject to a wide range of social pressures spanning the spectrum from nosy neighbors to attentive kin to the mandated cultural tradition of arranged marriages (Stephens 1963, Van den Berghe 1979; see Murdock's [1967] "Ethnographic Atlas," Column #12 [Modes of Marriage], for types and frequencies of wife procurement). 3. The arboreal primates, e.g. lesser apes and marmosets, are often monogamous and provide an additional source of context for human pair-bonding. However, this exercise will focus on the context of large, terrestrial primates. 4. This exercise is not intended to re-visit the nature-nurture debate. Let it suffice that the underlying assumption of this argument is that both socialization traditions and genetic information (and their interactions) affect the trajectory and manifestation of the development of human behavior. For theoretical discussions, see Barkow (1980, 1989), Boyd and Richerson (1976, 1978), Dunbar, Knight, and Power (1999), Durham (1982, 1991), Lumsden and Wilson (1982, 1985), and Ridley (2003). 5. Although most (approx. 85%) of known societies have allowed polygyny, most men in these societies have only one wife at any one time. With the exception of the rare polyandrous societies, virtually all women are in monogamous marriages. 6. Both age and gender of the child, plus gender of the adult, influenced the proportions of adult-child dyads; see Mackey (2001) for a more finely grained analysis. | 2018-05-08T17:40:30.333Z | 2003-01-01T00:00:00.000 | {
"year": 2003,
"sha1": "02d5773e2d7e58f8ab6ff76845c07c7cfec7f7d3",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/147470490300100110",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "02d5773e2d7e58f8ab6ff76845c07c7cfec7f7d3",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
16824855 | pes2o/s2orc | v3-fos-license | MOG antibody–positive, benign, unilateral, cerebral cortical encephalitis with epilepsy
Objective: To describe the features of adult patients with benign, unilateral cerebral cortical encephalitis positive for the myelin oligodendrocyte glycoprotein (MOG) antibody. Methods: In this retrospective, cross-sectional study, after we encountered an index case of MOG antibody–positive unilateral cortical encephalitis with epileptic seizure, we tested for MOG antibody using our in-house, cell-based assay in a cohort of 24 consecutive adult patients with steroid-responsive encephalitis of unknown etiology seen at Tohoku University Hospital (2008–2014). We then analyzed the findings in MOG antibody–positive cases. Results: Three more patients, as well as the index case, were MOG antibody–positive, and all were adult men (median age 37 years, range 23–39 years). The main symptom was generalized epileptic seizure with or without abnormal behavior or consciousness disturbance. Two patients also developed unilateral benign optic neuritis (before or after seizure). In all patients, brain MRI demonstrated unilateral cerebral cortical fluid-attenuated inversion recovery hyperintense lesions, which were swollen and corresponded to hyperperfusion on SPECT. CSF studies showed moderate mononuclear pleocytosis with some polymorphonuclear cells and mildly elevated total protein levels, but myelin basic protein was not elevated. A screening of encephalitis-associated autoantibodies, including aquaporin-4, glutamate receptor, and voltage-gated potassium channel antibodies, was negative. All patients received antiepilepsy drugs and fully recovered after high-dose methylprednisolone, and the unilateral cortical MRI lesions subsequently disappeared. No patient experienced relapse. Conclusions: These MOG antibody–positive cases represent unique benign unilateral cortical encephalitis with epileptic seizure. The pathology may be autoimmune, although the findings differ from MOG antibody–associated demyelination and Rasmussen and other known immune-mediated encephalitides.
antibodies play a direct pathogenetic role in the animal model of inflammatory demyelinating disease, although previous studies designed to detect MOG antibody with the ELISA or Western blotting in human inflammatory demyelinating diseases have failed to reveal any characteristic findings in patients. 3,6,7 However, recent studies have demonstrated that conformation-sensitive MOG antibody can be detected by cell-based assays (CBAs) in patients without multiple sclerosis (MS), such as those with pediatric acute disseminated encephalomyelitis (ADEM), aquaporin-4 (AQP4)-immunoglobulin G (IgG)-negative neuromyelitis optica spectrum disorders (NMOSD), optic neuritis (ON), and longitudinally extensive transverse myelitis (LETM). 2,3,[8][9][10][11][12] These findings suggest that the MOG antibody may serve as a biomarker to define a spectrum of inflammatory demyelinating diseases, and extensive studies of MOG antibody-positive cases may identify new clinical phenotypes directly or indirectly associated with this myelin antibody.
In the present study, we encountered an index case of MOG antibody-positive benign unilateral cerebral cortical encephalitis manifesting with generalized epileptic seizure and then investigated the presence of MOG antibody in an adult cohort of patients with steroid-responsive encephalitis of unknown etiology to identify any unique features of encephalitis in MOG antibody-positive cases.

Table 1. Clinical features of 4 patients with unilateral cortical encephalitis positive for the MOG antibody.
METHODS

Patients, sera, and CSF. We encountered an adult patient (index case, case 1) with unique benign unilateral cerebral cortical encephalitis manifesting with generalized epileptic seizure and seropositivity for MOG antibody in 2014. To explore any other cases with similar features, we identified 24 consecutive patients diagnosed with steroid-responsive encephalitis of unknown etiology seen at Tohoku University Hospital from 2008 to 2014. The patients were older than 20 years and were followed for more than 19 months. We defined steroid-responsive encephalitis of unknown etiology as cases with encephalopathy (epileptic seizure, abnormal behavior, disturbance of consciousness, or focal brain symptoms) that responded to corticosteroid therapy and could not be explained by fever, systemic illnesses, or postictal symptoms. Additional criteria included abnormal brain MRI and CSF findings during the acute phase that were compatible with encephalitis and not indicative of alternative CNS diseases. Sera and CSF were collected during the acute phases and were stored at −80°C. In some cases, sera obtained during remission phases were also stored.
Assays for autoantibodies. We conducted live CBA for MOG antibody based on our previous reports with modification (we used anti-human IgG1 as the secondary antibody to avoid nonspecific binding 8,10). Briefly, full-length MOG-expressing or MOG-nonexpressing stable cell lines were incubated with a 1:16 dilution of serum and then incubated with a 1:400 dilution of Alexa Fluor 488 mouse anti-human IgG1 antibody (A10631; Thermo Fisher Scientific, Rockford, IL). After cell immunostaining, 2 investigators (R.O. and T.T.), who were blinded to patients' data, judged MOG antibody positivity by comparing the staining results of MOG-expressing and MOG-nonexpressing cells. In MOG antibody-positive samples, the antibody titers were calculated by consecutive twofold dilutions to ascertain the maximum dilution with positive staining. Simultaneously, M23-AQP4 antibody in the serum was tested by live CBA using Alexa Fluor 488 goat anti-human IgG (A11008, Thermo Fisher Scientific) as the secondary antibody. Anti-NMDA receptor (NMDAR) antibody, anti-α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptor (AMPA) antibody, anti-leucine-rich glioma-inactivated protein 1 (LGI1) antibody, anti-contactin-associated protein 2 (CASPR2) antibody, and anti-γ-aminobutyric acid type B receptor (GABA-B) antibody in the CSF were tested by indirect immunofluorescence using commercially available kits (Euroimmun, Lübeck, Germany).
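A small sketch of the titer bookkeeping described above, assuming a twofold dilution series starting at 1:16 as in the assay; purely illustrative:

```python
def endpoint_titer(staining_positive, start_dilution=16):
    """Return the endpoint titer of a twofold serial dilution series.

    staining_positive: list of booleans, one per dilution step
    (1:16, 1:32, 1:64, ...), True if MOG-expressing cells stain
    clearly above the MOG-nonexpressing control.
    """
    titer = None
    dilution = start_dilution
    for positive in staining_positive:
        if positive:
            titer = dilution   # last dilution that still stains
        else:
            break              # series is assumed monotone; stop at first negative
        dilution *= 2
    return titer               # None means negative at 1:16

# Example: positive at 1:16 through 1:512, negative at 1:1024 -> titer 1:512
print(endpoint_titer([True] * 6 + [False]))   # 512
```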
Standard protocol approvals, registrations, and patient consents. This study was approved by the institutional ethics committee, and all patients provided written informed consent.
they were positive for MOG antibody in the CSF. On brain MRI examination, all 4 cases showed unilateral hemispheric cortical hyperintense lesions on fluid-attenuated inversion recovery (FLAIR) imaging (figure 2A and figure 3, A-C, F-H, J-R, and T). None of the MOG antibody-negative patients in our cohort of encephalitis patients showed such FLAIR-hyperintense cortical lesions. The 4 MOG antibody-positive patients were treated with high-dose IV corticosteroids and antiepilepsy drugs, and they fully recovered. We also screened for other encephalitis-related autoantibodies, including AQP4, NMDAR, AMPA, LGI1, CASPR2, and GABA-B antibodies, but negative results were obtained for all of the cases.

Case 1 (index case). The patient presented with a right relative afferent pupillary defect and a color vision defect in the right eye. Fundus examination revealed optic disc swelling in the right eye. Regarding visual evoked potentials (VEP), the amplitude of P100 was reduced with prolonged latency in the right eye (120.6 ms). This patient was diagnosed with idiopathic ON (figure 1, A and B) and treated with high-dose IV methylprednisolone (HIMP) (1,000 mg/d for 3 days) followed by an oral prednisolone (PSL) taper. His visual symptoms greatly improved soon after HIMP. Seven months later, the patient acutely developed loss of consciousness and generalized tonic seizure and was admitted to our hospital. On admission, he was alert, but his left hand was weak due to Todd palsy. Complete blood cell counts (CBC) and biochemistry were normal. A CSF study showed mild pleocytosis and elevated interleukin-6 (IL-6; 72.6 pg/mL, normal <4.0 pg/mL). Glutamic acid decarboxylase (GAD) antibody, thyroid peroxidase (TPO) antibody, and thyroglobulin (Tg) antibody results were negative in the serum, but the MOG antibody test was positive in the serum (1:512) and in the CSF (1:32) (table 1). EEG showed rhythmic slow waves in the right cerebral hemisphere but no epileptic discharge in the interictal stage. Brain MRI scanned on the day of epilepsy onset showed FLAIR hyperintensity in the right hemispheric cortical region, and the cortical layer was mildly swollen (figure 3, A-C) but did not show gadolinium enhancement on T1-weighted imaging (GdT1WI). Slight hyperintensity on diffusion-weighted imaging (DWI) and T2-weighted imaging (T2WI) was seen in the region but was less evident than in the FLAIR image (figure 2, A-F). Brain SPECT showed hyperperfusion in the region (figure 3D). Whole-body PET-CT scans showed no malignancy or inflammation. We started carbamazepine (400 mg/d) and lamotrigine (25 mg/d), but 1 week later, the patient developed delirium, paranoia, hallucination, and anorexia. His symptoms worsened despite risperidone administration. Subsequently, we made a presumptive diagnosis of autoimmune encephalitis and started HIMP therapy, and his symptoms disappeared within a few days. Eight weeks after admission, he was asymptomatic and therefore discharged. He continued oral prednisolone (15 mg/d, then gradually tapered to 4 mg/d over 18 months), carbamazepine, and lamotrigine and experienced no relapse thereafter. At 26 months after discharge, his serum MOG antibody titer had decreased substantially (1:16). Brain MRI showed no residual lesions 30 months after discharge (figure 3E).

Case 2. A 36-year-old man let out a strange noise and lost consciousness for several minutes while driving, resulting in a one-car accident. He was admitted to a local hospital that day and was treated with carbamazepine (400 mg/d).
Brain MRI taken on admission showed a FLAIR-hyperintense area in the right parietal cortex. After admission, he twice developed a generalized tonic seizure, right eye pain with visual loss, and dysuria. He was then transferred to our hospital. Neurologic examination showed impaired right visual acuity (VA) and dysuria but no signs of meningeal irritation. His VA was normal, but visual field testing showed a central scotoma. The critical flicker fusion frequency (CFF) was 25 Hz, and VEP revealed prolonged P100 latency (128.4 ms). CBC and blood biochemistry results were normal, while the MOG antibody test was positive in the serum (1:2,048) and in the CSF (1:4) (table 1). The GAD antibody, TPO antibody, and Tg antibody tests were negative in the serum. The CSF study showed mild pleocytosis and moderately elevated IL-6 (840 pg/mL). The EEG results were normal when the patient was in the interictal stage. On MRI examination 3 weeks after onset, the right optic nerve was swollen, short TI inversion recovery (STIR) hyperintense, and gadolinium-enhanced (figure 1, C and D). FLAIR hyperintensity in the right hemispheric cortex was seen, and the corresponding cortical layer was slightly swollen (figure 3, F-H) and partially gadolinium-enhanced. Meanwhile, hyperintensity was less evident on DWI and T2WI in the cortical region, but brain SPECT showed hyperperfusion in the region (figure 3I). Whole-body PET-CT showed no abnormal uptake suspicious of malignancy or inflammation. With a presumptive diagnosis of autoimmune encephalitis, HIMP was administered, and both the neurologic and ocular symptoms fully resolved. Four weeks after admission, the patient was discharged without any symptoms. Oral PSL (25 mg/d, gradually tapered off over 2 years) and carbamazepine (400 mg/d) were continued, and he did not experience any relapse. At 40 months after discharge, the MOG antibody titer was reduced (1:128), and brain MRI showed no residual lesions (figure 3J).
Case 3. A 23-year-old man with involuntary movement of the left hand was diagnosed with epilepsy and was treated with carbamazepine (400 mg/d) with no apparent effect. One month later, he developed a generalized tonic seizure that lasted for 1 hour. The following month, he was admitted to our hospital for a severe headache, but he did not complain of visual impairment. Upon neurologic examination, he was disoriented without neck stiffness. Although the CBC and blood biochemistry results were normal, the MOG antibody test was positive in the serum (1:256) and in the CSF (1:16) (table 1). The GAD antibody, TPO antibody, and Tg antibody tests were negative in the serum. The EEG examination revealed rhythmic slow waves in the right hemisphere, especially in the right parietal region, but no epileptic discharge was seen in the interictal state. Brain MRI scanned 1 month after the onset of epilepsy showed FLAIR hyperintensity in the right hemispheric cortical region (figure 3, K-N). Abnormalities in the region on DWI, T2WI, and GdT1WI were equivocal. Whole-body CT showed no malignancy or inflammation. VEP was not examined. Tests for cytomegalovirus antigen in the blood and Mycobacterium tuberculosis (QuantiFERON) were negative, and PCR for herpes simplex virus (HSV), gram stains, and culture results were negative in the CSF. However, because we could not rule out CNS infectious disease, we initially treated the patient with IV ceftriaxone, isoniazid, ethambutol, acyclovir, fluconazole, and dexamethasone (33 mg/d). His symptoms disappeared soon after the treatment, and we suspected autoimmune encephalitis rather than CNS infection. Four weeks after admission, he was discharged with no symptoms, but oral prednisolone (15 mg/d, gradually tapered off over a year) and carbamazepine (600 mg/d) were continued. Eighteen months later, he had not experienced a relapse, and the MOG antibody was undetectable in the serum. No brain MRI lesions were seen 23 months after discharge (figure 3O).
Case 4. A 38-year-old man was admitted to a local hospital with headache and abnormal behavior (he was unable to dress himself). After admission, he experienced a generalized tonic seizure, and he was transferred to our hospital. Because the cause of the generalized tonic seizure was unknown despite a diagnostic workup, he was discharged 1 week later (carbamazepine was continued). However, 35 months after the first admission, he developed a generalized tonic seizure, which initially involved the right hand. He also had aphasia and right hemiparesis and was readmitted to our hospital. Neurologic examination revealed delirium, emotional incontinence, aphasia, and mild right hemiparesis. He did not show ocular symptoms during the course of the disease. The CBC and blood biochemistry were normal, but the MOG antibody test was positive in the serum (1:1,024) (table 1). Serum GAD antibody, TPO antibody, and Tg antibody tests were negative. Brain MRI scanned 4 days after the second episode of epilepsy showed FLAIR hyperintensity in the left hemispheric cortical region (figure 3, P-R). Brain SPECT demonstrated hyperperfusion in the region (figure 3S). Whole-body PET-CT findings showed no malignancy or swollen lymph nodes. VEP was not done. PCR for HSV, gram stains, and culture results were negative in the CSF. After admission, his symptoms became worse (agitation and violent behavior) despite the administration of sedatives. We suspected autoimmune encephalitis and started HIMP, after which he became asymptomatic and was discharged. He continued carbamazepine (300 mg/d) and experienced no relapse. At 84 months after discharge, he was MOG antibody-negative. A brain MRI taken 72 months after discharge was normal (figure 3T).

DISCUSSION

In the index case (case 1), we tested for MOG antibody because the patient had unilateral benign ON, not because of the unilateral cortical encephalitis with epileptic seizure. Then, we tested for MOG antibody in our cohort of 24 consecutive adult cases of corticosteroid-responsive encephalitis of unknown etiology and identified 3 additional patients with MOG antibody positivity. Unexpectedly, these 3 MOG antibody-positive patients also had unilateral cortical encephalitis with epileptic seizure, as seen in the index case, and there were no cases of unilateral cortical encephalitis with epileptic seizure without MOG antibody positivity in our cohort. The unilateral cortical lesions, best depicted by FLAIR images, were unique and appeared distinct from brain lesions previously described in MOG antibody-positive diseases including ADEM. 13 The unilateral cortical lesions in our cases 1-4 needed to be differentiated from seizure-induced brain MRI abnormalities. 14 Such brain MRI abnormalities induced by epileptic seizure are localized in the cortical/subcortical regions, hippocampus, basal ganglia, white matter, or corpus callosum, and they are readily visible on DWI due to cytotoxic changes. 15,16 However, the MRI findings in our cases were much more clearly seen on FLAIR images than on DWI and ADC maps (figure 1, A-F). Moreover, pleocytosis in the CSF and a favorable response to HIMP suggested that the unique unilateral cortical lesions were inflammatory, and hyperperfusion on SPECT corresponding to the cortical FLAIR hyperintensity supported the inflammatory nature and epileptogenicity of the swollen cortical lesions in the acute phase.
We also ruled out a variety of autoantibody-mediated or immune-mediated encephalitides (table 2) before we concluded that the unilateral cortical encephalitis with epileptic seizure in our cases was unique. Rasmussen encephalitis (RE) is described as a unilateral cerebral cortical encephalitis, similar to that observed in our patients. However, RE is clinically characterized by focal epilepsy, progressive hemiplegia, and cognitive decline with unilateral hemispheric focal cortical atrophy in the chronic stage, and corticosteroid and other anti-inflammatory therapies are only partially effective. 17 Our cases did not share these features of RE or fulfill the diagnostic criteria. The lesion distribution in our 4 patients was also dissimilar to the brain MRI abnormalities in cases of encephalitis with seizure associated with NMDAR antibody, VGKC antibody, GAD antibody, and antithyroid antibodies, [18][19][20][21][22] and our patients were negative for those autoantibodies. Likewise, the clinical and neuroimaging features of our cases were distinct from limbic encephalitides with positivity for GAD, LGI1, GABA-B, or AMPA antibodies 18 and from the brain syndrome previously described in NMOSD. 10,23,24 FLAIR-hyperintense lesions localized at the cerebral cortex or sulcus, similar to the findings observed in the present cases, can develop in various CNS diseases including meningitis, subarachnoid hemorrhage, leptomeningeal metastasis, acute infarction, and moyamoya disease. 25 In a review of such MRI abnormalities, the left temporo-occipital cortical FLAIR-hyperintense lesions in a 23-year-old man with a diagnosis of meningitis appeared to be similar to the brain MRI findings in our cases. More recently, Numa et al. 26 reported the case of a 37-year-old woman who was diagnosed with ADEM when she was 4 years old and developed ON followed by recurrent ADEM 33 years later. She was MOG antibody-positive, and brain MRI showed unique cortical FLAIR-hyperintense lesions in the left temporal and frontal lobes. Thus, unilateral cortical encephalitis in MOG antibody-positive patients, as in the 4 cases in our study, may have previously gone unnoted as a distinct phenotype. The relationship between MOG antibody and the unilateral cerebral cortical encephalitis observed in our cases remains unclear. Two of our patients had benign unilateral ON, in which MOG antibody is often detected, while cases 3 and 4 lacked such characteristics of CNS diseases such as ON, LETM, NMOSD, or ADEM. 2,3,[7][8][9][10]12,13,[27][28][29] Thus, unilateral cerebral cortical encephalitis may be another characteristic manifestation in MOG antibody-positive patients. Although some cases of MOG antibody-associated diseases fulfill the diagnostic criteria of seronegative NMOSD, 23 the spectrum of MOG antibody-associated diseases is obviously wider than NMOSD. In the near future, MOG antibody-associated diseases may be recognized as a distinct clinical entity of inflammatory demyelinating diseases of the CNS. 30 There is some evidence to support the pathogenic potential of MOG antibody. Experimental studies have shown that MOG antibody can affect oligodendrocytes and myelin. 31,32 In addition, in a few brain-biopsied cases of tumefactive brain lesions with MOG antibody positivity, pathologic examinations revealed active inflammatory demyelination with deposition of immunoglobulins and complement 33,34 or MS type II pathology. 35,36
Moreover, we recently reported high CSF-MBP levels without elevated CSF glial fibrillary acidic protein levels, an astrocytic damage marker, in MOG antibody-positive patients. 37 These findings suggest that MOG antibody may directly contribute to inflammatory demyelination in anti-myelin antibody-associated CNS diseases. However, in 3 of our MOG antibody-positive cases whose CSF-MBP levels were measured during the acute phase, there was no elevation in CSF-MBP despite the extensive cortical involvement and CSF pleocytosis. Thus, it is also possible that MOG antibody itself may not be directly associated with the unilateral cerebral cortical encephalitis with epileptic seizure in our patients and that another autoimmune disorder coexisting with MOG antibody positivity might be responsible for the encephalitis. In fact, MOG antibody can be detected in some patients with other autoantibody-associated encephalitides such as NMDAR antibody-positive encephalitis. 38 In addition, a pathogenic autoantibody may be generated years before the clinical onset of disease, as seen in AQP4 antibody-positive NMOSD. 39 Accordingly, CNS lesions associated with MOG antibody could possibly develop later in the disease course of cases 3 and 4. Therefore, an unknown autoantibody might be associated with the unilateral cerebral cortical encephalitis with epileptic seizure in a fraction of MOG antibody-positive cases, although immunohistochemistry or immunofluorescence on rodent brain tissue slices with the patients' sera and CSF would be needed to determine whether there is any antibody reactivity to cerebral cortical tissues.
Our study is retrospective and has some limitations. Because our patient cohort was small and derived from a single university hospital, the results should be verified in prospective, larger-scale, multicenter studies. In addition, we analyzed only adult patients in the present study, and it is important to determine whether MOG antibody-positive unilateral cerebral cortical encephalitis with epileptic seizure also occurs in children. Therefore, at this point, it is premature to discuss the frequency of MOG antibody-positive unilateral cerebral cortical encephalitis in corticosteroid-responsive encephalitis of unknown etiology. However, since we experienced 6 cases of NMDAR antibody-associated encephalitis and 1 with VGKC antibody-associated encephalitis during the same period (2008-2014), unilateral encephalitis with MOG antibody may not be so uncommon.
Taken together, we report a form of benign unilateral cerebral cortical encephalitis with epileptic seizure in 4 adult patients with MOG antibody positivity. The pathogenesis of this condition appears to be immune-mediated or autoantibody-mediated, although the clinical, MRI, and laboratory features differ from those in previously described MOG antibody-associated CNS diseases 3,9,10,13,40 and known autoantibody-mediated encephalitides. 18 Another autoantibody that coexists with MOG antibody may be responsible for this type of encephalitis.
AUTHOR CONTRIBUTIONS
R.O. analyzed the data and wrote the paper, substantial contribution to the study conception, acquisition, analysis, and interpretation of data for the work, writing the manuscript, drafting and correction of all versions of the manuscript including figures, tables, and references, completion of the work to be submitted, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. I.N. substantial contribution to the conception and design of the work, as well as supervision of the acquisition, analysis, and interpretation of data for the work, revised several versions of the manuscript critically for important intellectual content, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. T.T. substantial contribution to the conception and design of the work, as well as supervision of the acquisition, analysis, and interpretation of data for the work, revised several versions of the manuscript critically for important intellectual content, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. K.K. contribution to the plan of the work, acquisition, analysis, interpretation of data for the work, and drafting the original manuscript related to the case, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. T.A. acquisition, analysis, and interpretation of data for the work, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. Y.T. acquisition, analysis, and interpretation of data for the work, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. D.K.S. acquisition, analysis, and interpretation of data for the work, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. S.N. acquisition, analysis, and interpretation of data for the work, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. T.M. acquisition, analysis, and interpretation of data for the work, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. H.K. substantial contribution to the conception and design of the work, as well as supervision of the acquisition, analysis, and interpretation of data for the work, revised several versions of manuscript critically for important intellectual content, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. M.A. substantial contribution to the conception and design of the work, as well as supervision of the acquisition, analysis, and interpretation of data for the work, provided final approval of the version to be published, agreed to be accountable for all aspects of the work. K.F. substantial contribution to the conception and design of the work, as well as supervision of the acquisition, analysis, and interpretation of data for the work, supervision of the manuscript preparation, revised several versions of the manuscript critically for important intellectual content, final responsibility and approval of the version to be published, agreed to be accountable for all aspects of the work. | 2018-04-03T05:47:31.532Z | 2017-01-16T00:00:00.000 | {
"year": 2017,
"sha1": "bf3dd9e344ee6ef2326b9318add038be21432ed0",
"oa_license": "CCBYNCND",
"oa_url": "https://nn.neurology.org/content/nnn/4/2/e322.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf3dd9e344ee6ef2326b9318add038be21432ed0",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253708028 | pes2o/s2orc | v3-fos-license | The Runner-up Solution for YouTube-VIS Long Video Challenge 2022
This technical report describes our 2nd-place solution for the ECCV 2022 YouTube-VIS Long Video Challenge. We adopt the previously proposed online video instance segmentation method IDOL for this challenge. In addition, we use pseudo labels to further help contrastive learning, so as to obtain more temporally consistent instance embedding to improve tracking performance between frames. The proposed method obtains 40.2 AP on the YouTube-VIS 2022 long video dataset and was ranked second place in this challenge. We hope our simple and effective method could benefit further research.
Introduction
Video instance segmentation (VIS) aims at detecting, segmenting, and tracking object instances simultaneously in a given video. It has attracted considerable attention since it was first defined [18] in 2019, due to the difficulty of the task and its wide applications in video understanding, video editing, autonomous driving, augmented reality, etc. Current VIS methods can be categorized as online or offline methods. Online methods [3,5,8,9,[17][18][19] take a video as input frame by frame, detecting and segmenting objects per frame while tracking instances and optimizing results across frames. Offline methods [1,2,7,10,15,16], in contrast, take the whole video as input and generate the instance sequences of the entire video in a single step.
In this challenge, the videos are longer and have a lower sampling rate, which makes algorithms easily lose target IDs due to the large movement of objects and accumulate errors during the longer tracking process. In addition, due to the extremely low sampling rate, the appearance similarity of objects between adjacent frames is lower. All these characteristics degrade the performance of previous algorithms significantly and make this dataset very challenging. * Work done during an internship at ByteDance.
To handle longer videos, as well as to perform more robust tracking on low-sampling-rate video frames, we use IDOL [17] as our baseline algorithm. IDOL is an online video instance segmentation method based on contrastive learning, which is able to ensure, in the embedding space, the similarity of the same instance across frames and the difference of different instances in all frames, even for instances that belong to the same category and have very similar appearances. It provides more discriminative instance embeddings with better temporal consistency, which guarantees more accurate association results. To improve the temporal consistency of instance embeddings between low-sampling-rate frames, we propose to pre-train the model on COCO [11] with pseudo-labels, which further improves the performance of IDOL.
Our method achieves second place in the ECCV 2022 YouTube-VIS Long Video Challenge, with a score of 53.6 AP on the public validation set and 40.2 AP on the private test set. We believe the simplicity and effectiveness of our method shall benefit further research.
Video Instance Segmentation
We take IDOL [17] as our baseline method. Following most previous online VIS models [18,19], which utilize an additional association head on top of instance segmentation models [6,14], IDOL takes Deformable DETR [20] with a dynamic mask head [14] as its instance segmentation pipeline.
Given an input frame x ∈ R^(3×H×W) of a video, a CNN backbone extracts multi-scale feature maps. The Deformable DETR takes the feature maps with additional fixed positional encodings [4] and N learnable object queries as input. The object queries are first transformed into output embeddings E ∈ R^(N×C) by the transformer decoder. After that, they are decoded into box coordinates, class labels, and instance masks following SeqFormer [16], as shown in Fig 1. A pair-wise matching cost is then calculated that takes into account both the class prediction and the similarity between predicted and ground-truth boxes.
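To make the data flow concrete, the following is a schematic sketch of the per-frame prediction heads; the shapes, class count, and module names are illustrative assumptions, not IDOL's actual implementation:

```python
# Schematic sketch of decoding the N output embeddings into classes, boxes,
# and dynamic-mask parameters (shapes/names are illustrative only).
import torch.nn as nn

N, C = 300, 256  # assumed number of object queries and embedding dimension

class PerFrameHeads(nn.Module):
    def __init__(self, num_classes=40):
        super().__init__()
        self.cls_head = nn.Linear(C, num_classes)  # class logits per query
        self.box_head = nn.Linear(C, 4)            # box coordinates (cx, cy, w, h)
        self.mask_head = nn.Linear(C, C)           # parameters for a dynamic mask head

    def forward(self, E):                          # E: (N, C) decoder output embeddings
        return self.cls_head(E), self.box_head(E).sigmoid(), self.mask_head(E)
```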
Figure 1. The training pipeline of IDOL. Given a key frame and a reference frame, the shared-weight backbone and transformer predict the instance embeddings on them respectively. The embeddings on the key frame are used to predict masks, boxes, and categories, while the embeddings on the reference frame are selected as positive and negative embeddings for contrastive learning.

An extra lightweight FFN is employed as a contrastive head to decode the contrastive embeddings from the output embeddings. Given a key frame for instance segmentation training, a reference frame from the temporal neighborhood is selected for contrastive learning. For each instance in the key frame, the output embedding with the lowest cost is decoded into the contrastive embedding v. If the same instance appears on the reference frame, we select positive and negative samples for it according to the cost with the predictions. The contrastive loss function for a positive pair of examples is defined as L_embed = log[1 + Σ_{k−} exp(v · k− − v · k+)], where k+ and k− are positive and negative feature embeddings from the reference frame, respectively. Finally, the whole model is optimized with a multi-task loss function. Given a test video, we initialize an empty memory bank for it and perform instance segmentation on each frame sequentially in an online scheme. Assume there are N instances predicted by the model with N contrastive embeddings, and M instances in the memory bank. We compute a similarity score f between predicted instance i and memory instance j, and search for the best assignment for instance i by ĵ = arg max_j f(i, j), j ∈ {1, 2, ..., M}.
If f(i, ĵ) > 0.5, we assign the instance i on the current frame to the memory instance ĵ. For a prediction without an assignment but with a high class score, we start a new instance ID in the memory bank.
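A rough sketch of this association step is given below, assuming a plain cosine similarity as the score f and illustrative thresholds; IDOL's actual score may be computed differently (e.g., via a bi-directional softmax), and the function and variable names are ours, not IDOL's API:

```python
import torch
import torch.nn.functional as F

def associate(pred_emb, pred_scores, memory_emb, memory_ids,
              sim_thresh=0.5, score_thresh=0.5, next_id=0):
    """Assign each prediction to a memory instance or start a new track.

    pred_emb    : (N, C) contrastive embeddings of the current frame
    pred_scores : (N,)   class confidence of each prediction
    memory_emb  : (M, C) embeddings stored in the memory bank
    memory_ids  : list of M instance IDs already in the memory bank
    """
    M = len(memory_ids)
    if M > 0:
        # Stand-in for the similarity score f(i, j): cosine similarity.
        sim = F.normalize(pred_emb, dim=1) @ F.normalize(memory_emb, dim=1).T
    assigned = []
    for i in range(pred_emb.shape[0]):
        if M > 0:
            j = int(sim[i].argmax())           # best assignment ĵ for instance i
            if sim[i, j] > sim_thresh:
                assigned.append(memory_ids[j])
                continue
        if pred_scores[i] > score_thresh:      # confident but unmatched: new ID
            assigned.append(next_id)
            next_id += 1
        else:
            assigned.append(-1)                # low confidence: no track started
    # A full implementation would also push new embeddings into the memory
    # bank and update existing entries; that bookkeeping is omitted here.
    return assigned, next_id
```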
Figure 2. A COCO image and two random crops of it, forming a pseudo key-reference frame pair.
Pseudo Key-reference Frame Pair
Since the videos of this challenge have a lower sampling rate, the gap in the appearance of objects between two frames is larger, which requires contrastive embeddings with higher temporal consistency. To achieve this, we randomly and independently crop each image from COCO twice to form a pseudo key-reference frame pair, which is used to pre-train the contrastive embedding of our models. As shown in Fig 2, if we randomly crop a COCO image twice, we obtain two sub-images on which the same object appears in different positions. They can be regarded as an adjacent key-reference frame pair with camera motion. Pre-training IDOL on these pseudo frames requires that the embeddings belonging to the same horse be as close as possible in the embedding space. This forces the model to learn a position-insensitive contrastive embedding that relies on the appearance of the object rather than its spatial position.
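A minimal sketch of this double-crop construction is shown below; the crop scales and helper names are illustrative assumptions, and the actual augmentation pipeline may differ:

```python
# Build a pseudo key-reference pair by cropping one image twice.
import random
from PIL import Image

def pseudo_pair(img: Image.Image, scale=(0.6, 0.9)):
    """Crop one image twice to simulate a key frame and a reference frame."""
    def rand_crop(im):
        w, h = im.size
        s = random.uniform(*scale)
        cw, ch = max(1, int(w * s)), max(1, int(h * s))
        x0 = random.randint(0, w - cw)
        y0 = random.randint(0, h - ch)
        # The (x0, y0) offset lets the original boxes/masks be shifted and
        # clipped into the crop, so the same object can be matched across
        # the pair during pre-training.
        return im.crop((x0, y0, x0 + cw, y0 + ch)), (x0, y0)

    key = rand_crop(img)  # pseudo "key frame"
    ref = rand_crop(img)  # pseudo "reference frame"
    return key, ref
```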
Implementation Details
We use the same settings for Deformable DETR and the dynamic mask head as IDOL [17]. The models are first pre-trained on the MS COCO 2017 [11] dataset for instance segmentation and then pre-trained on pseudo key-reference frames from COCO. Finally, the models are trained on the YouTube-VIS 2022 training set for 12,000 iterations, and the learning rate is decayed by a factor of 0. We evaluate the performance of our method by participating in the ECCV 2022 YouTube-VIS Long Video Challenge. As shown in Table 1, our method achieves 40.2 AP on the test set, taking second place.
Ablation Study
In this section, we study how we achieve the final results, as shown in Table 2. The baseline uses a ResNet-50 backbone with single-scale testing. Integrated with the Swin Transformer backbone [12], our method achieves a much higher AP of 48.4. Pre-training on pseudo key-reference frame pairs improves the AP from 48.4 to 50.7. After that, we utilize multi-scale testing to further boost performance. Different from image instance segmentation, the IoU computation is carried out in both the spatial and temporal domains. Multi-scale testing further improves the result from 50.7 to 52.0. Finally, by ensembling Swin-L and ConvNeXt-L [13] in the same way, we achieve the final 53.6 AP on the validation set.
Conclusions
In this work, we adopt IDOL for the ECCV 2022 YouTube-VIS Long Video Challenge. In addition, we use pseudo labels to further improve contrastive learning, so as to obtain more temporally consistent instance embeddings to improve tracking. We believe the simplicity and effectiveness of our method shall benefit further research. | 2022-11-21T06:42:36.147Z | 2022-11-18T00:00:00.000 | {
"year": 2022,
"sha1": "1a0b21625a7b970cf94cc00e4b626f433d2c653b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1a0b21625a7b970cf94cc00e4b626f433d2c653b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
54484845 | pes2o/s2orc | v3-fos-license | Patients with Acute Myeloid Leukemia Admitted to Intensive Care Units: Outcome Analysis and Risk Prediction
Background This retrospective, multicenter study aimed to reveal risk predictors for mortality in the intensive care unit (ICU) as well as survival after ICU discharge in patients with acute myeloid leukemia (AML) requiring treatment in the ICU. Methods and Results Multivariate analysis of data for 187 adults with AML treated in the ICU in one institution revealed the following as independent prognostic factors for death in the ICU: arterial oxygen partial pressure below 72 mmHg, active AML and systemic inflammatory response syndrome upon ICU admission, and need for hemodialysis and mechanical ventilation in the ICU. Based on these variables, we developed an ICU mortality score and validated the score in an independent cohort of 264 patients treated in the ICU in three additional tertiary hospitals. Compared with the Simplified Acute Physiology Score (SAPS) II, the Logistic Organ Dysfunction (LOD) score, and the Sequential Organ Failure Assessment (SOFA) score, our score yielded a better prediction of ICU mortality in the receiver operator characteristics (ROC) analysis (AUC = 0.913 vs. AUC = 0.710 [SAPS II], AUC = 0.708 [LOD], and 0.770 [SOFA] in the training cohort; AUC = 0.841 for the developed score vs. AUC = 0.730 [SAPSII], AUC = 0.773 [LOD], and 0.783 [SOFA] in the validation cohort). Factors predicting decreased survival after ICU discharge were as follows: relapse or refractory disease, previous allogeneic stem cell transplantation, time between hospital admission and ICU admission, time spent in ICU, impaired diuresis, Glasgow Coma Scale <8 and hematocrit of ≥25% at ICU admission. Based on these factors, an ICU survival score was created and used for risk stratification into three risk groups. This stratification discriminated distinct survival rates after ICU discharge. Conclusions Our data emphasize that although individual risks differ widely depending on the patient and disease status, a substantial portion of critically ill patients with AML benefit from intensive care.
Introduction
AML is the most common type of acute leukemia in adults, accounting for approximately 2.8% of all cancers worldwide [1]. Without treatment, AML is fatal, but therapy and prognosis have improved in recent decades [2]. Intensive chemotherapy, either alone or followed by autologous or allogeneic stem cell transplantation, has the potential to cure AML, and long-term survival of patients is increasing [3]. However, such intensive treatment is associated with potentially life-threatening toxicities, particularly in elderly patients. Although decision tools might aid decisions regarding which elderly patients are amenable to intensive treatment [4,5], approximately 13% of patients with AML require ICU treatment [6]. Several studies have described clinical outcomes and prognostic factors for patients with or without other hematological malignancies. The majority of these studies are of limited value because they were based on small cohorts, were not validated in an independent cohort, included unselected patients with solid cancer and hematological malignancies, did not distinguish between ICU and hospital mortality, did not analyze survival, and/or were solely focused on complications [6][7][8][9][10][11][12][13][14][15][16].
The decision to admit a patient to the ICU is often an ethical dilemma and rests on the individual clinician's judgment, loosely guided by established scores (e.g., SOFA, LOD, APACHE II, SAPS II) [17][18][19][20]. Because these scores are based on and designed for the analysis of unselected patients admitted to the ICU, patients with cancer, particularly those with AML, are underrepresented. For example, APACHE II and SAPS II consider malignancy as a risk factor without further differentiation of the type or disease status of the malignancy. APACHE II assigns only five points for immunocompromised, non-operative patients, while allowing a total range between 0 and 71.
The objective of this study was to establish and validate risk factors associated with mortality during and after ICU treatment based on a large and multicenter cohort and to establish potential risk scores.
Patients and treatment
The AML in ICU score was established in a cohort of 187 adults with AML admitted to the ICU at the University Hospital of Muenster, representing all patients with a diagnosis of AML admitted to the ICU between 11/2004 and 09/2011. Data were collected retrospectively from patient records and follow-up physicians. Prior to analysis, the data were anonymized and deidentified. Approval for this investigation was obtained from the Ethics Board of Westfalian Wilhelms-University Muenster and the Physicians Chamber of Westfalia-Lippe, Germany (2015-688-f-S).
Validation was performed on a cohort of 264 patients with AML admitted to the ICU at the University Hospital of Grosshadern in Munich, the University Hospital of Cologne and the Municipal Hospital of Augsburg (all located in Germany) between 01/2004 and 02/2010.
Intensive induction treatment consisted of any of the following: "7+3" (cytarabine 100 mg/m² once daily on days 1-7 as a 24-h intravenous infusion and daunorubicin 45 mg/m² once daily on days 3-5, intravenous infusion); "7+GO" (cytarabine 100 mg/m² once daily on days 1-7 as a 24-h intravenous infusion and gemtuzumab ozogamicin 6 mg/m² once on day 1 and 4 mg/m² once on day 8, intravenous infusion); "TAD" (tioguanine 100 mg/m² twice per day on days 3-9, orally; cytarabine 100 mg/m² on days 1-2 as an intravenous infusion and 100 mg/m² twice daily on days 3-8, intravenous infusion; and daunorubicin 60 mg/m² once daily on days 3-5, intravenous infusion); "HAM" (high-dose cytarabine 3 g/m² twice daily on days 1-3, intravenous infusion, and mitoxantrone 10 mg/m² once daily on days 3-5, intravenous infusion; patients >60 years of age received only 1 g/m² cytarabine); and "S-HAM" (high-dose cytarabine 3 g/m² twice daily on days 1-2 and 8-9, intravenous infusion, and mitoxantrone 10 mg/m² once daily on days 3-4 and 10-11, intravenous infusion; patients >60 years of age received 1 g/m² cytarabine). With the exception of dose-dense S-HAM induction, patients <60 years of age routinely received two induction courses, whereas patients aged 60+ received a second induction course only in case of persisting bone marrow blasts on day 15 after the start of treatment. Postremission treatment consisted of high-dose cytarabine in case of "7+3" or "7+GO" induction (patients <60 years received three courses of cytarabine 3 g/m² twice daily on days 1, 3 and 5, intravenous infusion; those 60 years or older received only two courses of cytarabine 3 g/m² twice daily on days 1, 3 and 5, intravenous infusion). Postremission treatment also consisted of TAD consolidation followed by prolonged monthly maintenance therapy in case of "TAD(-HAM)", "HAM(-HAM)" or "S-HAM" induction or was followed by autologous SCT in younger patients. Details of the treatment protocols have been published elsewhere [21][22][23]. According to the patient's risk of relapse, allogeneic stem cell transplantation was performed alternatively to the scheduled postremission treatment in patients in first remission or after relapse when possible. Conditioning protocols varied and depended on the remission status and age of the patients [24][25][26].
Only disease status was included in the analysis. Owing to the broad heterogeneity of the applied protocols, a detailed analysis of chemotherapy regimens in relation to death in the ICU and survival after ICU discharge could not reasonably be performed. (Re)induction protocols are applied in active disease or at primary diagnosis, and consolidation regimens are administered in cases of complete remission. Owing to this strong correlation, information about chemotherapy protocols is already captured by the disease status variable.
Endpoints
Complete remission was defined as hematological recovery with at least 1,000 neutrophils per μl, at least 100,000 platelets per μl, and < 5% bone marrow blasts. ICU mortality was defined as death at any time during the course of the ICU stay. Overall survival after ICU discharge was survival from the day of ICU discharge until death from any cause, and censoring of patients known to be alive at the time of last follow-up was performed. The term mortality, when given as a percentage, was defined as the number of deaths per number of ICU stays observed.
Definition of variables
Arterial oxygen partial pressure (paO2) was evaluated at the time of ICU admission, irrespective of oxygen supply. Active AML at the time of ICU admission included (1) patients with primarily diagnosed AML; (2) patients with relapsed AML before or during reinduction therapy or before evaluation of the remission status at the time of ICU admission; and (3) patients with persisting disease after induction or reinduction therapy at the last evaluation of disease status preceding ICU admission. Advanced AML at the time of ICU discharge was defined as refractory or relapsed AML status. Severe infection as the reason for ICU admission required at least two of the following criteria: temperature >38°C (100.4°F) or <36°C (96.8°F), tachycardia (>90 bpm), tachypnea (>20/min), or clinically proven infectious complications (such as a microbiologically documented infection or radiographic signs of an infection). To avoid confounding by the AML and/or associated therapy, the leukocyte count was not considered. Days until ICU admission were the days between hospital admission and ICU admission. The Glasgow Coma Scale (GCS) was applied as previously described [27]. The Simplified Acute Physiology Score (SAPS) II was routinely determined in all ICU patients [28]. Additionally, the LOD and SOFA scores were calculated in the training cohort to enhance comparability to other ICU scores [19,20,29]. Variables used for the calculation of all scores were determined at the time of admission. As prothrombin values were not available in most cases and to avoid potential bias, the LOD score was calculated without prothrombin values throughout. Cytogenetic and molecular genetic risk were classified according to the European LeukemiaNet (ELN) guidelines 2010 [30].
Statistical Analysis
Correlations with ICU mortality were evaluated using the Chi-square test for categorical data and the Mann-Whitney U test for continuous variables. Only variables with less than 10% missing values in the training cohort were considered. Missing values were replaced by the median value of the variable. Continuous variables were categorized where appropriate (S1 Table). Variables with p<0.1 in the univariate evaluation were selected for a multivariate binary logistic regression with stepwise backward selection and threshold values of 0.05 for inclusion and 0.1 for exclusion. The final model of the variable selection process was used as the scoring model. Survival analyses were performed with Kaplan-Meier estimates. Correlations with survival were evaluated by the log-rank test; parameters with p<0.1 in the log-rank test were evaluated in a multivariate Cox regression analysis with stepwise backward selection and threshold values of 0.05 for inclusion and 0.1 for exclusion. Unless otherwise stated, the significance level was alpha = 0.05 in all analyses.
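For illustration, a minimal sketch of the described univariate screening step is shown below, assuming a pandas DataFrame `df` with a binary outcome column `icu_death`; the column and function names are illustrative, not those of the original analysis:

```python
# Univariate screening: Chi-square for categorical variables,
# Mann-Whitney U for continuous variables (p < 0.1 passes to the
# multivariate stepwise backward logistic regression).
import pandas as pd
from scipy.stats import chi2_contingency, mannwhitneyu

def univariate_p(df: pd.DataFrame, var: str, outcome: str = "icu_death") -> float:
    x = df[var]
    if x.nunique() <= 2:                        # categorical: Chi-square test
        table = pd.crosstab(x, df[outcome])
        return chi2_contingency(table)[1]       # element [1] is the p-value
    dead = x[df[outcome] == 1]
    alive = x[df[outcome] == 0]
    return mannwhitneyu(dead, alive).pvalue     # continuous: Mann-Whitney U
```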
All statistical analyses were performed in cooperation with the Institute of Biostatistics and Clinical Research of the University Hospital of Muenster, Germany, and were computed with IBM SPSS Statistics for Windows, Version 22.0 (IBM Corp., Armonk, NY).
Patient characteristics
Patient characteristics are presented in Table 1. At the time of ICU admission, the median age of the patients was 59 years. Compared with the training cohort, the validation cohort included more patients with newly diagnosed AML and fewer patients in remission. The training cohort also had lower paO2 and hematocrit at the time of ICU admission. Among patients surviving the ICU, the median duration of treatment was three days in the validation cohort and four days in the training cohort. Age and sex, the combined cytogenetic and molecular risk profile according to the ELN2010 classification, the proportion of patients who had previously undergone allogeneic stem cell transplantation, the reason for ICU admission, and the proportion of patients requiring mechanical ventilation or dialysis in the ICU were distributed similarly between the training and validation cohorts (Table 1). Information about the type of AML (de novo versus secondary), the time interval between hospital admission and ICU admission, mean arterial pressure at ICU admission, and diuresis and GCS at the time of ICU admission was not available for the validation cohort.
Prognostic factors influencing ICU mortality
An overview of all parameters selected for analysis and their classification is presented in S1 Table. Independent prognostic factors for ICU mortality in a multivariate model, displayed in Fig 1, were as follows: paO2 <72 mmHg at ICU admission, active AML in the ICU (relapsed, refractory, newly diagnosed), severe infection at ICU admission, and need for hemodialysis and mechanical ventilation. On the basis of these five variables, a logistic model was generated for the prediction of mortality in the ICU. The fitted logistic model is described by formulas 1 and 2, which can be used to calculate the predicted ICU mortality of each patient; the predictor variables enter as binary indicators (e.g., mechanical ventilation: need for mechanical ventilation = 1; no need for mechanical ventilation = 0). Fig 2 shows the goodness of fit of the predicted ICU mortality as well as the SAPS II, LOD, and SOFA scores compared with the observed ICU mortality. Fig 2A presents the receiver operator characteristics (ROC) analysis, and Fig 2B depicts a plot of the predicted versus observed mortality rates. The predicted ICU mortality had an AUC value of 0.913 (95% CI: 0.873-0.954), compared with an AUC of 0.721 (95% CI: 0.646-0.796) for the SAPS II score in the ROC analysis of the training cohort (Fig 2A). The AUC analysis of the LOD and SOFA scores showed values similar to those of the SAPS II score (AUC = 0.708 [LOD] and 0.770 [SOFA]). In patients with a predicted ICU mortality of <50% (median 18%, range 2-48%), 19% (15 of 81) died in the ICU (Fig 2B). By contrast, an ICU mortality of 88% (93 of 106) was observed in the patients with a predicted ICU mortality of >50% (median 92%, range 51 to 99%) (Fig 2B).
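For orientation, such a model has the standard binary logistic structure sketched below; this is a sketch only: the indicator codings follow the text, while the coefficients β_k belong to the fitted model and are not reproduced here.

```latex
% Generic structure of the ICU mortality score (indicator variables coded 0/1;
% the fitted coefficients \beta_k are not reproduced here).
\begin{align*}
X &= \beta_0 + \beta_1\,[\mathrm{paO_2} < 72\ \mathrm{mmHg}]
             + \beta_2\,[\text{active AML}]
             + \beta_3\,[\text{severe infection}] \\
  &\quad     + \beta_4\,[\text{hemodialysis}]
             + \beta_5\,[\text{mechanical ventilation}],\\
\hat{p}_{\text{ICU death}} &= \frac{e^{X}}{1 + e^{X}}
\end{align*}
```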
Prognostic factors for survival after ICU discharge
Seventy-nine patients (42%) in the training cohort survived their ICU stay. Table 1 displays the characteristics of these ICU survivors. The projected 3-year survival of this cohort from the time of ICU discharge was 64% (95% CI: 51-77%) after a median follow-up of 1.6 years. The parameters selected for the analysis of association with prognosis after ICU discharge are also listed in S1 Table. In multivariate Cox regression analysis, the following parameters were identified as independent prognostic factors for decreased survival after ICU discharge (Fig 4): advanced disease (relapsed or refractory); previous allogeneic stem cell transplantation (alloSCT); fewer days between hospital admission and ICU admission; more days spent in the ICU; impaired diuresis (<1,000 ml/24 hours) at ICU admission; GCS <8 at ICU admission; and a hematocrit of ≥25% at ICU admission. Based on the Cox regression model, the risk score of each patient was calculated using formulas 2 and 3, with the variables coded as follows: days spent in the ICU: number of days between ICU admission and ICU discharge; decreased urine production: urine production <1,000 ml/24 hours at ICU admission = 1, urine production ≥1,000 ml/24 hours at ICU admission = 0; decreased GCS: GCS <8 at ICU admission = 1, GCS ≥8 at ICU admission = 0; decreased hematocrit: hematocrit of <25% at ICU admission = 1, hematocrit of ≥25% at ICU admission = 0. Stratifying the ICU survivors into three risk groups according to the risk score calculated using formula 3 revealed marked differences in survival after ICU discharge. Patients with the lowest risk (X values <0.23, n = 15) displayed one-year survival after ICU discharge of 100%. Patients with intermediate risk (X values between 0.23 and 2.33, n = 34) exhibited 1-year survival of 82% (95% CI: 68-97%), and patients with the highest risk (X values >2.34, n = 30) exhibited 1-year survival of 42% (95% CI: 22-63%) (Fig 5A).
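Again for orientation only, the risk score has the structure of a Cox linear predictor over the listed variables; this is a sketch: the codings follow the text, while the coefficients γ_k are those of the fitted model and are not reproduced here.

```latex
% Generic structure of the post-ICU survival risk score (Cox linear predictor;
% the fitted coefficients \gamma_k are not reproduced here).
\begin{align*}
X = {}& \gamma_1\,[\text{relapsed/refractory AML}]
      + \gamma_2\,[\text{previous alloSCT}]
      + \gamma_3\,(\text{days to ICU admission}) \\
      & + \gamma_4\,(\text{days in ICU})
      + \gamma_5\,[\text{urine} < 1000\ \mathrm{ml}/24\,\mathrm{h}]
      + \gamma_6\,[\mathrm{GCS} < 8]
      + \gamma_7\,[\text{hematocrit} < 25\%]
\end{align*}
```

The stated cutoffs then define the three risk groups: low risk X < 0.23, intermediate risk 0.23 ≤ X ≤ 2.33, and high risk X > 2.34.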
Applying the same stratification to the validation group (median follow-up of 1.4 years) revealed comparable differences in survival from the time of ICU discharge, although the survival rates were lower. One-year survival after ICU discharge was 69% (95% CI: 55-81%) in the 59 patients with the lowest risk, 51% (95% CI: 41-60%) in the group of 131 patients with intermediate risk, and 19% (95% CI: 4-33%) in the 42 patients with high risk (Fig 5B).
Discussion
Despite encouraging survival rates of ICU survivors compared with non-ICU patients [31], the assumed high mortality represents a major reason for the widespread hesitation to refer AML patients for treatment in the ICU. In addition, outcome prediction instruments are not valid in individual patients, and ICU scoring systems are only capable of describing the severity of illness of ICU cohorts. The two most commonly used scores, SAPS II and APACHE II, were established based on large numbers of unselected patients [18,28]. Because AML is rare in ICU patients, patients with AML were clearly underrepresented in the establishment of these scores, and both disease status and the impact of AML-specific procedures (such as allogeneic stem cell transplantation) were not considered in the design of global scoring systems, thus limiting their applicability to patients with AML.
Sculier et al. published a report stating that neither SAPS II nor APACHE II is sufficiently accurate to be used in the routine management of cancer patients requiring ICU treatment [32]. They evaluated the prognostic value of these two scores for mortality both during the hospital stay and after discharge in 261 cancer patients admitted to the ICU. No major difference was observed between the two scoring systems, but outcome could not be reliably predicted. Subgroup analyses of patients with hematological malignancies or patients with AML were not performed in this study.
Based on the data for 451 patients with AML receiving intensive care, the largest cohort of AML patients analyzed to date, we were able to specifically analyze prognosis in this defined patient population. Several risk predictors for ICU outcome as well as subsequent survival were identified, and we established a score predicting ICU mortality in patients with AML. This score outperformed the established SAPS II, LOD, and SOFA scores in both the training and the validation cohort with respect to the area under the curve in the ROC analysis, regardless of hospital, treating physician, or treatment. Although the potential to discriminate was higher in the training cohort, it was still evident in the validation cohort. The ability of our score to accurately predict ICU mortality in this independent cohort supports the reliability of the score. The results for patients with a low mortality risk may encourage clinicians to initiate or extend intensive care to AML patients. However, the decision to pursue ICU treatment for an AML patient requires an interdisciplinary approach that includes hematologists, intensive care physicians, and consideration of the patients' wishes and expectations. Thus, such a decision can never be based solely on the results of a score.

Figure caption: Patients were classified according to their individual predicted ICU mortality (below versus ≥50%; boxes represent the interquartile range (IQR); whiskers indicate the minimum and maximum values but are no longer than 1.5 times the length of the corresponding box; values outside this range are represented by separate dots), which is plotted against the actual mortality rate for the three groups.
Originally, all three scores (SAPS II, LOD, and SOFA) were generated on the basis of the "worst" values in the first 24-hour period after admission [20,28,29]. To analyze their applicability as a prognostic tool for the clinician, all scores in this manuscript are based on data collected at the time of admission. This is an important difference from the preexisting scores and underlines the relevance of the developed scores as prognostic tools in clinical use.
Previous studies have defined single parameters predicting ICU or hospital mortality in cohorts including or comprising patients with AML, such as use of mechanical ventilation, low fibrinogen [31], use of vasopressors [8,9], increased creatinine, number of failing organ systems [8], illness severity [9], mechanical ventilation [13], sepsis, and length of hospital stay prior to ICU admission [10]. Most of these factors were verified independently by our risk factor analysis.
The mortality rate in the ICU was 58% in the training cohort and 36% in the validation cohort. The reason for this discrepancy is unclear because the differences in the baseline characteristics (Table 1) were not sufficient to provide an explanation. Admission criteria for ICU patients vary from hospital to hospital. Nevertheless, these findings are comparable to recent studies reporting mortality rates of 28-84% [6][7][8][9]15,31,33]. However, direct comparison with published mortality rates is complicated by the use of different parameters: death in ICU, death in hospital, or death after 90 days and/or one year.
In addition to predictors of ICU mortality, we also identified prognostic factors for the survival of AML patients after the ICU stay. Not surprisingly, advanced disease status was a strong negative prognostic factor for survival; impaired immune responses to pathogens, particularly in the early phase after allogeneic SCT, and severe acute and extensive chronic graft-versus-host disease are clearly associated with infectious complications [34].
Days spent in the hospital before ICU admission negatively influenced outcomes after ICU discharge, whereas days spent in the ICU before ICU discharge had the opposite prognostic influence on future survival. Azoulay et al. reported that fewer days in the hospital before ICU admission were associated with improved hospital survival [7]. However, owing to the retrospective nature of this analysis and the possible presence of unknown confounding factors, a recommendation for early admission to the ICU cannot be based on the present data. Although a low hematocrit value does have a positive influence on the prognosis of ICU patients [35], we identified low hematocrit at ICU admission as an independent prognostic factor for survival after the ICU stay but not for ICU mortality. Patients admitted to the ICU for infectious complications benefit from a restrictive transfusion policy with tolerated hemoglobin values of 7.0 g/dl or below [36]. However, as stated above, our retrospective observation is insufficient to recommend a restrictive transfusion policy with a hematocrit goal of <25%.
ELN low risk was significantly associated with survival after ICU discharge, whereas ELN high risk exhibited only borderline significance. However, in contrast to the intention of ELN risk classification, these cohorts do not represent homogeneous populations of patients with untreated, newly diagnosed disease. The significant influences of disease status and previous allotransplant on prognosis after ICU discharge suggest that ELN risk is diluted by disease status.
Several limitations must be addressed. First, our survival index distinguished three separate prognostic groups in the training as well as in the validation cohort, but patient survival was inferior in the validation cohort compared with the training cohort in every risk category. Missing values in the validation cohort (days in hospital before ICU admission, diuresis, and GCS) and imbalances with respect to the proportion of remission status and lower paO2 (see Table 1) are possible explanations; nevertheless, we cannot rule out the possibility that this prognostic index performs worse in independent cohorts than in the training cohort, even with all available variables. Second, only crude paO2 values, and not the amount of oxygen support, were available at the time of ICU admission. Thus, the paO2:FiO2 ratio (ratio of paO2 to the fraction of inspired oxygen) was not incorporated into the scores. Finally, due to an inverse correlation between hyperleukocytosis and paO2, paO2 values in hyperleukocytic AML may be spuriously low [37]. Peripheral capillary oxygen saturation (SpO2) might reflect gas exchange more accurately in this constellation.
Conclusions
Our study indicates promising survival rates for patients with AML requiring intensive care treatment. Based on data from a large multicenter cohort, we identified and validated relevant risk predictors, which provided the basis for two scores distinguishing survival differences both in the ICU and after ICU discharge. However, while these scores might aid the prognostication of patients with AML treated in the ICU, decisions about initiating or pursuing intensive treatment must not rely solely on the results of these scores.
This study should encourage further prospective analyses.
Supporting Information
S1 Table. Overview of all parameters selected for analysis and their classifications. (DOCX)
Author Contributions
Conceptualization: UK.
Formal analysis: DG UK MP.
Project administration: UK WEB BH.
Resources: JB CS MK KAK PL CR CMT MS TB GS MH JW WH WEB BH UK TK.
Supervision: UK BH. | 2018-04-03T06:06:34.021Z | 2016-08-30T00:00:00.000 | {
"year": 2016,
"sha1": "6d6dd83c41cf371edb2eadedaed3f4d786395baf",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0160871&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d6dd83c41cf371edb2eadedaed3f4d786395baf",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15948081 | pes2o/s2orc | v3-fos-license | Biologically Based Therapy for the Intervertebral Disk: Who Is the Patient?
The intervertebral disk (IVD) is a fascinating and resilient tissue compartment given the myriad of functions that it performs as well as its unique anatomy. The IVD must tolerate immense loads, protect the spinal cord, and contribute considerable flexibility and strength to the spinal column. In addition, as a consequence of its anatomical and physiological configuration, a unique characteristic of the IVD is that it also provides a barrier to metastatic disease. However, when injured and/or the subject of significant degenerative change, the IVD can be the source of substantial pain and disability. Considerable efforts have been made over the past several decades with respect to regenerating or at least modulating degenerative changes affecting the IVD through the use of many biological agents such as growth factors, hydrogels, and the use of plant sterols and even spices common to Ayurvedic medicine. More recently stem/progenitor and autologous chondrocytes have been used mostly in animal models of disk disease but also a few trials involving humans. At the end of the day if biological therapies are to offer benefit to the patient, the outcomes must be improved function and/or less pain and also must be improvements upon measures that are already in clinical practice. Here some of the challenges posed by the degenerative IVD and a summary of some of the regenerative attempts both in vitro and in vivo are discussed within the context of the vital question: “Who is the patient?”
Over the past 20 years, there has been an explosion in the biotechnology sector concerning the use of recombinant proteins such as growth factors for the treatment of injury/disease (such as the use of bone morphogenetic protein in the management of complex fractures). Furthermore, the recent advances in the use of stem/progenitor and induced pluripotent stem cells have offered the possibility that true regenerative medicine could someday become more than a catchy phrase. Biological therapy has been postulated as a potential "game changer" for the management of disk disease since at least 1991, as presented in the seminal paper by Thompson et al. 1 However, despite over 700 published papers in the 22 years since the report by Thompson et al, the use of biological agents in the management of disk disease is, at best, in its infancy.
There is only one phase 1 clinical trial, involving the use of growth differentiation factor-5 (GDF-5), underway for the treatment of disk disease; however, several trials using human or porcine stem cells have been undertaken. [2][3][4] With respect to biological agents and disk disease, the important unanswered (perhaps "elephant in the room") question still remains: Who is the patient?
Intervertebral Disk Compartments
The intervertebral disk (IVD) is a unique organ that modulates complex, enormous applied loads to the spine, protects the spinal cord and exiting nerve roots, functions as a major axial support system for the body, and acts as a barrier to metastatic disease. These functions are fulfilled as a consequence of the IVD's central location within the spine and its anatomical configuration and biomechanical properties. The disk is composed of several subcompartments, notably the cartilaginous end plates, the annulus fibrosus, and the nucleus pulposus, with each compartment composed of cells that have differentiated to tolerate the unique requirements of the specific compartment. The cartilaginous end plates are composed of chondrocytic cells embedded within a hyaline-like extracellular matrix (ECM) integrated with the vertebral bodies. The functional linkage of disk and vertebral body creates a permissive though delicate portal whereby the diffusion of nutrients, gases, and waste products subserves IVD homeostasis. 5,6 It has been reported that the vertebral body capillary networks centered over the nucleus pulposus (NP) are much denser than those overlying the annulus, a feature of biological importance with respect to the metabolic demands of the cells and tissues within these compartments. [5][6][7][8] The cells of the annulus fibrosus are a combination of fibroblastic and chondrocytic cells embedded within an ECM that results in a structure that acts like a ligament, conferring strong compressive and concentric biomechanical resistance acting in concert with the inner nucleus pulposus and cartilage end plates. The nucleus pulposus represents what may be considered the lynchpin of IVD function due to its central, confined location within the center of the disk and its vital contribution to the biomechanical properties of load dispersion and contribution to neuromuscular reflexive activity. 9,10 Significant degradation of the essential cellular and structural aspects of any of the compartments of the disk contributes to breakdown of the entire organ, often leading to pain and disability.
Biology of Disk Degeneration
Degeneration of the IVD is a complex process, and although considerable progress has been made with respect to the mechanisms involved in the degenerative cascade, much remains to be learned. Several catabolic cytokines, such as interleukin (IL) 1-β, act in concert with other inflammatory mediators, such as IL-6, IL-8, prostaglandin E2, nitric oxide, a variety of matrix metalloproteinases, and ADAMTS (a disintegrin and metalloproteinase with thrombospondin motifs) 4/5 enzymes, as well as the death-inducing ligand Fas, and result in degradation of the ECM. [11][12][13][14][15][16][17][18][19][20][21][22][23][24] As the degenerative process continues, the viability of NP cells progressively declines, with both IL-1β and Fas ligand figuring prominently in these mechanisms, as well as increased degradation of the annulus fibrosus. [25][26][27] The expression of certain genetic anomalies, in particular ones involved with the ECM, may predispose the disk to accelerated/pathological degeneration, which, when coupled with environmental/occupational risk factors, may result in clinically significant signs and symptoms of spinal and related pain. Finally, although up to 30% of patients demonstrating signs of degenerative disk disease (DDD) on magnetic resonance imaging (MRI) will be asymptomatic, it has been reported that DDD is associated with back pain, and more so with increasing evidence of DDD and older age. 21 Although it is widely accepted that degenerative changes seen on MRI cannot determine the disk as a source of pain and that many people in midlife demonstrate degenerative changes on imaging such as MRI, it should be emphasized that not all degenerative changes are benign. There are several reports demonstrating the capacity for degenerative disks to be potent sources of pain; the problem is that contemporary imaging and diagnostic practices may not be sophisticated enough to determine the pain generators, leading to sometimes ill-defined therapeutic goals and methods. 25,28,29
Challenges and Obstacles to Disk Repair
It has been well characterized that pathological changes affecting the vertebral end plates may compromise the already precarious nutritional status of the IVD, leading to degenerative change. 5,6,8,[30][31][32] Impaired diffusion leads to decreased pH and oxygen concentration, progressive cellular death (such as via apoptotic mechanisms), and impaired cell-ECM interaction, resulting in a progressively biochemically and biomechanically impaired IVD. 8,33 The peripheral annulus is vascularized separately and receives barely any diffusible nutrition; therefore, impaired nutrient/gas diffusion may have less impact in this region. However, clefts and fissures occur within the annulus fibrosus, allowing the ingrowth of blood vessels and nociceptive-capable neurons into the NP simultaneously with degeneration of the nucleus. 28 In the case of degenerative disease, this process is gradual and requires years to occur, with symptoms that may or may not declare themselves until the process has become more advanced. However, when symptoms of spinal pain and/or radiculopathy become evident, contemporary diagnostic methods that can define the disk as a significant source of pain lack sufficient precision/sophistication. A common clinical vignette would be that of a patient exhibiting longstanding spinal pain who displays common characteristics of pain emanating from the IVD that has proven refractory to conservative care.
Another common presentation is the patient with a history of frequent acute low back pain episodes with antalgia, with or without leg pain, who develops an acutely herniated disk, commonly following relatively trivial or even no trauma. Both of these patients are often considered not to be good surgical candidates, and if/when the patient's condition does not respond to nonoperative measures, what should be done? Epidemiological evidence indicates that many cases of disk herniation resolve with time, as the disks' inherent healing properties allow some degree of repair, and the sciatic pain (if present) and the condition resolve. 34 However, many cases continue to be symptomatic long after the initial symptoms have improved, with many of these patients finding themselves labeled as chronic pain patients receiving ill-defined treatments for ill-defined reasons. [34][35][36] It has been demonstrated that many patients with chronic spinal pain suffer from cognitive-related aspects to their pain and do well with activity and cognitive behavior-based therapy. 37 However, there is very likely a subcategory of patients who are inappropriately diagnosed with chronic pain but who actually suffer from biologically mediated pain that may well emanate from the disk; these patients will not respond to exercise and cognitive behavior interventions if their pain is biological at its source. 14,28 Therefore, it may be more accurate to characterize patients suffering from an acute herniated disk whose symptoms settle as having experienced resolution of their acute symptoms rather than healing, as many if not most continue to suffer from relapses of axial pain/sciatic pain with variable intervening asymptomatic episodes for many years. It is a rarity for a patient suffering from such an injury or degenerative condition to ever achieve a long-lasting asymptomatic status.
Biological Attempts at Disk Repair: Results to Date
A recent PubMed search using "biological repair of the intervertebral disk" as a search term yielded 52 published manuscripts, and one using "growth factors and intervertebral disk" as search terms yielded 785 hits. The majority of the articles identified in these searches presented work that was based initially upon in vitro evidence of the anabolic/reparative effects of growth factors upon NP cells that led to subsequent in vivo testing in a variety of animal models. Reviews by Masuda et al and Yoon and Patel have summarized much of the work using growth factors for the treatment of disk disease that developed initially from the cartilage literature and was first published by Thompson et al using a canine disk model in 1991. 1,38,39 Since the early reports of the use of growth factors, several methods of enhancing the delivery of these molecules have been developed, ranging from direct injection of recombinant proteins to viral transfection methods whereby cells of the NP could be modified (such as by the use of viral vectors) to secrete increased amounts of a specific growth factor. 40,41 A more recent attempt to augment the effects of growth factor injection is through the use of platelet-rich plasma (PRP). PRP is a method of using autologously derived growth factors released from highly condensed blood plasma whereby the platelets release a host of (as yet incompletely characterized) growth factors and cytokines. A recent study has reported that PRP releasate has the capacity to induce a reparative response when injected into rabbit IVDs injured by scalpel stab. 42 This PRP study reported that the PRP releasate-treated disks demonstrated a significant restoration of disk height but no statistically significant change in disk NP hydration as evaluated by MRI; however, there were more chondrocyte-like cells within the PRP-treated disks compared with controls. 42 With respect to the use of growth factors in human disk disease, there is at least one ongoing phase 1 clinical trial using a specific growth factor (GDF-5) based primarily upon rabbit IVD models of disk injury, with results pending. There have been several other mediators of disk degeneration proposed as potential therapies in addition to direct injection of recombinant proteins (growth factors) or gene therapy, such as caspase inhibitors, IL-1 receptor antagonists, and plant sterols such as resveratrol. 43,44 Even curcumin, the Indian spice used in Ayurvedic medicine, has been reported to have beneficial activity when evaluated in vitro, with some of these interventions tested in in vivo studies. 43,45 Beyond the use of modulators of the degenerative process (growth factors and anti-cell death strategies), there have also been attempts at cellular replacement using autologous chondrocytes and a variety of stem cells. [46][47][48][49][50] Some of these attempts have met with early indications of success, such as the apparent restoration of hydration as evidenced by T2-weighted MRI as well as immunohistochemical and RNA evidence of anabolic repair. 47,49,50 These results have by and large been confined to in vitro studies as well as small-animal models of disk disease using rat tail and rabbit disk puncture models and some limited canine species. 12,50-53 One recent pilot study by Orozco et al using autologously derived human stromal mesenchymal stem cells (MSCs) claims to have demonstrated reduced pain and the restoration of some hydration in the treated disks. 4
It is curious that these investigators cite spinal fusion as the gold standard for the treatment of DDD, because there are several high-quality studies citing the controversy surrounding spinal fusion and other studies, including a recent systematic review, indicating that spinal fusion may not be better than appropriate nonoperative management. [54][55][56][57] Nonetheless, the Orozco study is an interesting preliminary step involving the use of MSCs for the treatment of human disk disease. However, several areas remain to be understood involving the use of stem cells. For example, MSCs are known to be growth factor "factories," which may be responsible for some initial benefits so long as the cells survive, a factor not addressed by these authors but noted in the review by English. 58 Prockop et al furthered this discussion by illustrating that some effects seen with stem cell transplants are in the form of immune modulation and anti-inflammatory effects conferred upon the transplant milieu by the transplanted MSCs. 59 Furthermore, transplanted stem cells such as MSCs may modulate repair by virtue of their effects upon the endogenous stem cells at the repair site, as conferred by the secretion of growth factors and antiapoptotic and immune-modifying effects, and in so doing act as a kind of repair "booster" rather than effecting repair themselves. 59 Also, the patients involved with the Orozco study averaged 35 years of age, considerably younger than expected for significant degenerative disease as opposed to disk injury. The inclusion criteria in this study do not provide any details to this effect or the history of the patients involved other than "degenerative disk disease" or what failure of conservative management of the disorder entailed. Interestingly, these patients all received diskography prior to stem cell transplantation, a technique reported by Carragee et al to result in accelerated degeneration. 60 With respect to biological therapies, it may be that the repair response induced by the intervention (growth factors, anti-cell death strategies, stem cells, or some combination) overwhelms any potentially injurious effects associated with the insertion of a needle into the disk as reported by Carragee et al. Numerous studies that purposely induce disk damage by the use of relatively large needles with respect to the size of the disk (up to 18-gauge needles for rat disk injuries) have demonstrated impressive recovery of T2-weighted signal on MRI, suggesting that biological agents may be able to induce profound healing even after relatively severe injury. 61,62 With this in mind, it should be noted that the Carragee study preceded publications concerning probable genetic influences upon disk degeneration.
Although perhaps not strictly classified as a biological therapy such as the use of growth factors or cellular replacement strategies such as stem cells, the use of hydrogels as a treatment that could enhance the biological properties of the IVD has been more recently studied, with several publications detailing their potential utility. 63,64 The publication by Reitmaier et al compared the reimplantation of a removed nucleus to several hydrogel constructs in an ovine ex vivo study and found that none of the interventions were able to restore the biomechanical properties of the intact disk.
The conclusions of this study suggest that the use of implantable hydrogels may not be able to restore nucleus functionality, particularly without anchorage to the surrounding disk structures such as the inner annulus and end plates. 64 Nonetheless, this area of investigation continues and may yet provide better utility with further research.
What Kind of Therapeutic Intervention?
The complex biochemical and cellular processes that lead to disk degeneration are driven by contributing factors such as genetic influences, age, metabolic status, history of injury, occupational activity, and lifestyle habits such as smoking. 23,65,66 The net result for persons affected by significant DDD is pain and disability and the need for therapeutic intervention. Therefore, if biological therapy is considered to be an option for disk disease, the essential questions are what kind of therapy, for which patient, and how best to deliver the therapy in question?
The degenerative disk that has reached what could be characterized as the point of no return, with extensive disk collapse, few remaining viable cells, and extensive annular tears and fissures, would likely not be a good candidate for the delivery of anabolic/ECM-protective factors (growth factors, anticatabolic factors) and/or stem/progenitor cells. More likely, the disk that is more upstream in the degenerative process would be an appropriate target for regenerative therapies. As disks degenerate, the cell viability within the nucleus decreases such that advanced degenerative disks have virtually no viable cells remaining. Therefore, it would stand to reason that anabolic/anticatabolic intervention(s) ought to occur before cell viability decreases beyond what may be required to stimulate an anabolic response. As the field of stem cell biology has evolved, such a strategy is probably more complex than originally anticipated. A cellular replacement strategy (perhaps originally thought simply to provide increased numbers of cells to assume the role of degenerated/dead cells) may actually perform far more complex functions, providing of course that the cells can survive transplantation. It has been reported that MSCs secrete significant amounts of growth factors, and it may be that at least part of the efficacy that such interventions might offer could be the result of not only cellular replacement, but also the longer-term release of necessary growth and other factors within the degenerative disk. 3 In any event, whatever the actual intervention may be (a cocktail of proteins, progenitor cells, or some combination of these), the active ingredients must be delivered into the disk; the question remains, what is the best method? Direct injection has obvious advantages but would require a needle puncture.
Models of Disk "Disease"
Several methods of inducing disk "disease" have been reported, all of which require damaging the disk to effect a secondary reparative response, which is interpreted experimentally as DDD. 51,52,[67][68][69] The degree to which the particular interventions (gene therapy, direct injection of growth factors/stem cells) are able to improve upon the natural history of the acute disk injury ostensibly validates the effectiveness of the therapeutic intervention. To this end, a suitable animal model of disk degeneration continues to be elusive in the field of disk biology research, although careful, specific hypothesis testing through the use of animal models has shed light on certain aspects of disk disease and the potential for biological therapies. It is therefore important to consider that most if not all reports concerning biological treatments of disk disease to date are based upon disk injuries that do not mimic the human condition. They are performed on acutely injured (mostly needle puncture or scalpel stab injuries), young and otherwise healthy animals with disks that resemble those of a human child. 61,62,69 The disks are highly gelatinous, and the cellular contents are largely notochordal cells as well as the ill-classified "nucleus pulposus cell" and a population of stem/progenitor cells reported to be about 1% of the total cellular volume of the NP. 70 The injurious stimulus, whether via direct relatively large-bore needle puncture (up to 40% of the disk height), scalpel stab, aspiration/denucleation of the disk, or enzymatic digestion, imposes an extremely violent injurious stimulus upon an otherwise healthy, youthful disk to challenge the inherent repair properties of the tissue as compared with the delivered therapeutic vehicle. Nonetheless, it should be recognized that most existing animal models and the conclusions drawn from these studies actually represent repair from "disk injury" as opposed to repair of disk degeneration that is representative of the human condition, and results of such studies need to be interpreted with this caveat.
As discussed earlier with respect to stem/progenitor cells, it may be that they confer their regenerative/reparative effects after transplantation in more ways than simply as a source of cellular replacement. Several studies involving animal models have been performed concerning the potential use of stem cell transplants to the IVD, with mixed results. 53,71,72 One study involving xenographic transplantation of human mesenchymal stem cells (MSCs) into minipig IVDs reported that all disks injured and treated in this study demonstrated signs of degeneration on MRI, but the disks treated with a gel carrier + human MSCs had less degeneration. 2 Furthermore, human MSCs were detectable up to 6 months posttransplant, with evidence of ECM protein secreted by these cells. The investigators in this study suggested that for optimal cell-based transplant therapies, it may be better to use differentiated cells rather than MSCs. On the other hand, another recent large-animal model using pigs reported that 12 months after the transplantation of MSCs, little if any proteoglycan matrix was established, and rather than evidence of implanted cells, there remained only scar tissue composed of a mixture of type I/II collagen. 73 A report using chondrodystrophic canines and stem cell transplants reported that 10^6 transplanted MSCs resulted in an astonishing 94% cell survival, suggesting that a Pfirrmann grade II to III might represent the best candidate level of pathology for such cellular transplant technology. 53 Very likely these impressive and demanding large-animal studies reach such divergent results with respect to cell survival and transplant effectiveness due to a host of technical, procedural, and biological reasons. Nonetheless, the results of these investigations need to be interpreted cautiously and carefully.
Who Is the Patient?
At the end of the day with respect to biological treatments, we are still left with the question of who the patient is. Would growth factors, with or without adjuvants, delivered via a suitable carrier be the best treatment for a more acute disk injury in a younger person? Should this treatment be delivered percutaneously, or would it be best performed at the time of diskectomy, perhaps within a slow-release formulation? Should this same patient have a combination of cell-based therapy plus growth factors? What would be the best treatment for the longer-term degenerative disk thought to be the source of pain? What kind of cells should be delivered, and at what cost? Should the patency of the vertebral end plate be a mitigating factor in the decision to apply biological therapy, and if so, how would this be assessed? Finally, and perhaps most importantly, would such biological treatments make any impact on the pain and disability suffered by the patient?
Although considerable progress has been made with respect to understanding the biology of disk degeneration and the generation of pain, all of the previous questions should be borne in mind as potential biological therapies are developed. Many proposed biological therapeutics have met with mixed results, and thus far these studies must be interpreted as small steps along a very long road in an as-yet somewhat ill-defined direction. It is likely that if and when biological therapy for the IVD does achieve meaningful clinical standards, one size will not fit all. Biological therapies will need to be optimized and matched to the specific pathology for their best possible effectiveness and efficacy. To realize this goal, biology, biomechanics, engineering, and epidemiology, although oftentimes strange bedfellows, will need to achieve an uneasy truce and together determine how, when, whether, and most importantly for whom these ministrations may be best suited. Until then, we are really in our infancy in the pursuit of this fascinating and challenging enterprise.
Disclosures
William Mark Erwin, Research Support: AOSpine and UHN Foundation; Institutional Support: 50% salary support.
"year": 2013,
"sha1": "404c85c5b4e5bcc8ab5667cc151e8f29e96e9b15",
"oa_license": null,
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1055/s-0033-1343074",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "48a9ddd16c47d5665f5760f8b3be8b72cb7e043d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Quercetin reduces oxidative stress and inhibits activation of c-Jun N-terminal kinase/activator protein-1 signaling in an experimental mouse model of abdominal aortic aneurysm
Oxidative stress is increasingly linked to the pathogenesis of abdominal aortic aneurysms (AAAs). The antioxidant activity of flavonoids has attracted attention for their possible role in the prevention of cardiovascular diseases. The purpose of this study was to determine whether an antioxidant mechanism is involved in the inhibitory effect of quercetin on aneurysm formation. Male C57BL/6 mice received quercetin continuously from 2 weeks prior to and 6 weeks following AAA induction with extraluminal CaCl2. Quercetin treatment decreased AAA incidence and inhibited reactive oxygen species generation, nitrotyrosine formation and lipid peroxidation in the aortic tissue during AAA development. In addition, quercetin-treated mice exhibited significantly lower expression of the p47phox subunit of nicotinamide adenine dinucleotide phosphate oxidase and of inducible nitric oxide synthase, as well as coordinated downregulation of manganese-superoxide dismutase activity and glutathione peroxidase (GPx)-1 and GPx-3 expression. Quercetin also blunted the expression of c-Jun N-terminal kinase (JNK) and phospho-JNK and, in addition, diminished activation of the activator protein (AP)-1 transcription factor. Gelatin zymography showed that quercetin markedly reduced matrix metalloproteinase (MMP)-2 and MMP-9 activation during AAA formation. In conclusion, the inhibitory effects of quercetin on oxidative stress and MMP activation, through modulation of JNK/AP-1 signaling, may partly account for its benefit in CaCl2-induced AAA.
Introduction
An abdominal aortic aneurysm (AAA) is a localized, permanent dilatation of the aorta that affects ~8% of males >65 years old (1). At present, elective surgery is the major therapeutic option for AAA; however, it is not applicable to small aneurysms, despite their reported growth rate of 1.5 to 3 mm per year, which carries a rising risk of rupture (2). With increased knowledge of aneurysm pathophysiology, it is possible that aneurysm growth may be slowed with medical therapy.
The role of inflammation in the pathogenesis of AAA is well established. Infiltrating inflammatory cells enter the aorta, release cytokines and proteases, induce apoptosis of vascular smooth muscle cells and, ultimately, lead to destruction of the vascular wall (1). Moreover, the inflammatory microenvironment generates a large quantity of oxidant species, largely through upregulation of nicotinamide adenine dinucleotide phosphate (NADPH) oxidase in vascular cells (3). Furthermore, emerging evidence indicates that oxidative stress within the aortic wall is closely involved in the pathogenesis of AAA. Oxidative stress facilitates leukocyte recruitment into the vasculature by modulating adhesion molecules and chemotactic cytokines (4). In addition, reactive oxygen species (ROS) may alter the balance between destruction and regeneration of the aortic wall by enhancing matrix proteolysis through upregulation of matrix metalloproteinases (MMPs) (5). MMPs are the predominant extracellular proteinases that participate in the degradation of structural proteins (1). Although human data remain limited, several studies indicate that antioxidant therapy may be effective in experimental AAA models (6)(7)(8).
Quercetin (3,5,7,3',4'-pentahydroxyflavone), a typical member of the flavonoid family, is one of the most widely recognized dietary polyphenolic compounds. It is ubiquitously present in foods and is claimed to exert beneficial effects on vascular disease (9), which have been largely associated with its antioxidant and anti-inflammatory properties. Within the flavonoid family, quercetin has proven to be the most potent scavenger of free radicals (10). There is evidence that quercetin reduces low-density lipoprotein oxidation (11) and prevents the development of atherosclerotic lesions (12), in which oxidative stress is assumed to have a pivotal role. Although atherosclerosis and AAA are separate diseases, they share certain pathological characteristics, including inflammation and proteolysis (13). It has also been reported that quercetin in vitro inhibits the production of O2•− in the rat aorta and decreases protein expression of the NADPH oxidase subunit, p47phox (14,15). A previous study from our research group indicated that quercetin treatment inhibits inflammation and prevents CaCl2-induced aneurysmal dilation in a mouse AAA model (16). The present study was designed to test the hypothesis that an antioxidative mechanism is also involved in the protection afforded by quercetin.
Materials and methods
Pharmacological treatments. Quercetin was purchased from Sigma-Aldrich (Q4951; Shanghai, China). Drug solutions were prepared by suspending the compound in 0.5% carboxymethyl cellulose sodium. Animals were gavaged daily with 0.1 ml of quercetin solution (60 mg/kg) or vehicle alone, beginning 2 weeks prior to AAA induction and continuing for 8 weeks in total. The dose regimen for quercetin was based on previous studies demonstrating beneficial effects of the drug in mouse models of aortic atherosclerosis (12).
Animal groups and the AAA model. A total of 60 male C57BL/6 wild-type mice (age, 6-7 weeks) were obtained from Vital River Laboratory Animal Technology (Beijing, China). All animals were treated and cared for in accordance with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health, Washington DC, 1996) and the experimental protocols were approved by the Animal Care and Use Committee (Nanjing University, Nanjing, China). The mice were randomly assigned to one of four groups (n=15 in each group): vehicle treatment plus sham operation control (VC), vehicle treatment plus AAA (VA), quercetin treatment plus AAA (QA) and quercetin treatment plus sham operation control (QC). AAA was induced in the infrarenal abdominal aorta of 8-week-old mice by periaortic application of CaCl2, as previously described (16). NaCl (0.9%) was substituted for CaCl2 in sham-operated animals. Six weeks later, the mice underwent laparotomy and the aortic diameters (ADs) were measured; the abdominal incision was extended upwards as a thoracoabdominal incision, the animals were then sacrificed by left-heart injection of potassium chloride and the aortic tissues were collected. An aneurysm was defined as an increase in the AD of >50% of the original AD.
In vivo hemodynamic measurements. A computerized, non-invasive tail-cuff system with a four-channel mouse platform (BP-2000; Visitech Systems, Inc., Apex, NC, USA) was used to measure blood pressure and heart rate. To acclimate the mice, daily measurements were performed for five consecutive days prior to the actual recorded measurements. Hemodynamic parameters were measured one day before and 6 weeks after AAA induction. The first 10 of 30 values recorded at each session were disregarded and the remaining 20 values were averaged and used for analysis, according to the manufacturer's instructions.
ROS analysis, lipid peroxidation determination and manganese-superoxide dismutase (Mn-SOD) activity assay. Dihydroethidium (DHE) oxidative fluorescence dye was used to evaluate in situ production of ROS (17). DHE stock solution was prepared by dissolving DHE (D7008; Sigma-Aldrich) in dimethylsulfoxide at a concentration of 5 mM. The stock solution was stored in the dark and diluted in phosphate-buffered saline (PBS) to a final concentration of 5 µM immediately prior to use. The abdominal aorta was harvested and the aortic segment (10 mm) was embedded in Tissue-Tek OCT compound (Sakura Finetek Japan, Tokyo, Japan) and snap-frozen. DHE working solution (200 µl) was applied topically to the aortic sections and the slides were subsequently incubated at 37˚C in the dark for 30 min. Excess DHE was rinsed off twice with PBS and the images were immediately captured with a fluorescence microscope (BX51; Olympus, Tokyo, Japan) at excitation and emission wavelengths of 520 and 610 nm, respectively.
A portion of the snap-frozen aortic tissue (n=5 per group) was crushed in a prechilled mortar and resuspended in PBS at a concentration of 50 mg/ml. The homogenate was centrifuged at 10,000 x g for 10 min at 4˚C to collect the supernatant. The lipid peroxidation product, malondialdehyde (MDA), was assessed using the thiobarbituric acid reactive substances (TBARS) assay kit (A003; Jiancheng Bioengineering, Shanghai, China). Briefly, 100 µl supernatant was added to 100 µl sodium dodecyl sulfate (SDS) lysis solution and mixed thoroughly. Following the addition of 250 µl thiobarbituric acid (TBA) reagent, samples were incubated at 95˚C for 1 h and centrifuged at 1,500 x g at room temperature for 15 min. The absorbance of each supernatant was measured at 532 nm using a spectrophotometer (BioPhotometer; Eppendorf, Hamburg, Germany). Values of TBARS are expressed as nmol equivalents of MDA per mg protein. Mn-SOD activity was measured using an assay kit (A001-2; Jiancheng Bioengineering) according to the manufacturer's instructions. Assay conditions were 65 µmol phosphate buffer (pH 7.8), 1 µmol hydroxylamine hydrochloride, 0.75 µmol xanthine and 2.3x10^-3 IU xanthine oxidase. The supernatant (50 µl) was incubated in the system for 40 min at 37˚C and the reaction was terminated with 2 ml 3.3 g/l p-aminobenzene sulfonic acid and 10 g/l naphthylamine. For inhibition of CuZn-SOD activity, the assay was conducted in the presence of 10 mM KCN following preincubation for 30 min. The supernatant was transferred to a microplate (Eppendorf) for determination of the absorbance at 550 nm, and 1 unit of SOD was defined as the quantity of enzyme required to inhibit the superoxide-driven reaction by 50%. Mn-SOD activity was calculated by subtracting CuZn-SOD activity from total SOD activity. The standard curves were created as described in the manufacturer's instructions. Images were assessed by Image J 1.44 software (National Institute of Health, Bethesda, MD, USA).
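To make the two derived quantities in this assay concrete, a minimal sketch follows; the function names and example numbers are illustrative assumptions and are not taken from the assay kits' documentation.

```python
# Minimal sketch of the derived quantities above; names and example
# numbers are illustrative assumptions, not from the kit documentation.

def tbars_per_mg_protein(mda_nmol: float, protein_mg: float) -> float:
    """TBARS values are expressed as nmol MDA equivalents per mg protein."""
    return mda_nmol / protein_mg

def mn_sod_activity(total_sod_units: float, cuzn_sod_units: float) -> float:
    """Mn-SOD activity = total SOD activity - CuZn-SOD activity,
    where CuZn-SOD is the KCN-sensitive fraction."""
    mn_sod = total_sod_units - cuzn_sod_units
    if mn_sod < 0:
        raise ValueError("CuZn-SOD activity cannot exceed total SOD activity")
    return mn_sod

# Made-up example: 12.4 nmol MDA in an aliquot containing 0.8 mg protein.
print(tbars_per_mg_protein(12.4, 0.8))  # 15.5 nmol MDA/mg protein
print(mn_sod_activity(30.0, 18.5))      # 11.5 U Mn-SOD
```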
Histological analysis. The infrarenal abdominal aorta (n=5 per group) was dissected and fixed in 10% neutral-buffered formalin. Specimens were dehydrated through graded ethanols, embedded in paraffin and sliced into 4-6-µm sections. Immunohistochemical staining with a rabbit polyclonal anti-nitrotyrosine antibody (1:500; 06-284; Millipore, Temecula, CA, USA) was used as an indicator of peroxynitrite formation (18). Briefly, the slides were incubated in 3% hydrogen peroxide for 5 min to quench endogenous peroxidase activity and were then incubated with primary antibody overnight at 4˚C. Subsequently, slides were washed with PBS and incubated (15 min; 37˚C) with peroxidase-conjugated goat anti-rabbit IgG (AP132P; Millipore). Finally, the slides were incubated with diaminobenzidine and counterstained with hematoxylin.
Reverse transcription-polymerase chain reaction (RT-PCR).
RT-PCR was used to define the expression of glutathione peroxidase (GPx)-1, GPx-3, inducible nitric oxide synthase (iNOS) and p47phox NADPH oxidase mRNA. Total RNA was prepared with the TRIzol total RNA extraction kit. GAPDH mRNA was also amplified to serve as an internal control. The resultant PCR products were detected using an MSF-300G Scanner (Microtek Lab, Carson, CA, USA) and expressed as the ratio to GAPDH.
Western blotting. Total protein was extracted from the supernatants of tissue homogenate with T-PER tissue protein extraction reagent (Pierce Biotechnology Inc., Rockford, IL, USA) and stored at -80˚C. Equal quantities (30 µg) of total protein were separated on 10% polyacrylamide gels and transferred to nitrocellulose membranes using a semidry transfer cell (#164-5052; Bio-Rad, Hercules, CA, USA) at 10 V for 40 min. The membranes were blocked for 60 min with 5% nonfat milk in Tris-buffered saline with Tween-20 (TBST) and subsequently washed. Primary antibodies for p47phox NADPH oxidase (sc-14015), c-Jun N-terminal kinase (JNK; sc-571) and phosphorylated JNK (sc-6254) (all Santa Cruz Biotechnology, Santa Cruz, CA, USA) were added at a 1:500 dilution and incubated overnight at 4˚C. Additionally, all blots were incubated with the anti-β-actin antibody (1:5,000; 4970; Cell Signaling Technology, Beverly, MA, USA) to confirm protein loading levels. Membranes were washed with TBST, incubated with horseradish peroxidase-conjugated species-appropriate secondary antibodies (Santa Cruz Biotechnology) for 1 h at room temperature and developed using an enhanced chemiluminescence kit (Pierce, Rockford, IL, USA). Quantification of images was performed by scanning densitometry with Image J 1.44.
Electrophoretic mobility shift assay (EMSA). Nuclear protein lysates were harvested using NE-PER nuclear and cytoplasmic extraction reagents (Pierce Biotechnology, Inc.) according to the manufacturer's instructions. Activator protein (AP)-1 DNA-binding activities were analyzed using Gel Shift Assay systems (Promega Corporation, Madison, WI, USA), according to the instructions previously described (16). Briefly, the AP-1 consensus oligonucleotide probe (5'-CGC TTG ATG AGT CAG CCG GAA-3') was end-labeled with [γ-32P]-ATP (Furui Biotech, Beijing, China). The extracted nuclear proteins (10 µg) were incubated for 20 min at 37˚C with the 32P-labeled oligonucleotide (0.30 pmol) in a binding buffer. Reaction products were then separated in a 4% polyacrylamide gel, followed by autoradiography. The reactive bands were quantified as described for the western blot analysis.
Gelatin zymography. Protein extracts (10 µg) were mixed with SDS buffer and separated by electrophoresis on 10% SDS-polyacrylamide gels containing 1.0% gelatin. Following electrophoresis, the gels were renatured in renaturing buffer (LC2670; Invitrogen Life Technologies, Carlsbad, CA, USA) and incubated with developing buffer (LC2671; Invitrogen Life Technologies) for 30 min at room temperature. Subsequently, the gel was incubated in fresh developing buffer overnight at 37˚C. The gel was stained with 0.5% Coomassie blue R-250 for 30 min and destained with destaining solution containing 10% acetic acid and 40% methanol. The relative molecular weight of each band was determined using protein standards (Pierce Biotechnology, Inc.). Areas of protease activity appeared as unstained bands against a blue background. Images were assessed by Image J 1.44.
Statistical Analysis. All values are expressed as the mean ± standard deviation. Statistical analyses were performed with SPSS for Windows version 17.0 (SPSS, Inc., Chicago, IL, USA). Within-group comparisons of hemodynamic parameters at various intervals were performed using paired Student's t tests. Between-group comparisons were performed using the Fisher's exact test or analysis of variance. P<0.05 was considered to indicate a statistically significant difference.
Results
AAA incidence. No significant difference was found in AD at the time of surgery among the groups (data not shown). Six weeks later, the VA mice showed a marked increase in AD following CaCl2 treatment, with 10/15 (66.7%) developing aneurysms. Only 3/15 (20%) of the aortas became aneurysmal in QA mice; this difference in aneurysm incidence was statistically significant (P<0.05). No aneurysm formation was observed in the VC or QC mice. Quercetin treatment had no effect on mean arterial pressure or heart rate when measured pre- and post-surgery (Table I).
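As a quick illustration, the reported incidence counts can be checked with Fisher's exact test (the between-group test named in the statistics section); this Python/SciPy sketch is an illustrative equivalent rather than the authors' actual SPSS analysis.

```python
# Reproducing the incidence comparison with Fisher's exact test; the
# 2x2 table comes directly from the reported counts (10/15 VA vs. 3/15 QA).
from scipy.stats import fisher_exact

table = [[10, 5],   # VA: aneurysm, no aneurysm
         [3, 12]]   # QA: aneurysm, no aneurysm

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, P = {p_value:.3f}")
# The two-sided P falls below 0.05, consistent with the reported result.
```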
ROS generation, nitrotyrosine formation and lipid peroxidation production.
To evaluate the effect of quercetin on ROS generation, aortic sections were exposed to DHE, which is transformed to the highly fluorescent molecule oxyethidium in the presence of superoxide (17). As shown in Fig. 1A, ROS production was extremely low in aortas from VC and QC mice. At 6 weeks post-AAA induction in VA mice, oxyethidium fluorescence was markedly enhanced throughout the vascular wall (Fig. 1A and B). However, it was attenuated in QA mice, indicating decreased ROS production due to quercetin treatment.
As increased production of ROS may lead to further peroxynitrite accumulation, which induces protein damage through the formation of nitrotyrosine (18), immunohistochemistry was performed with a polyclonal antibody against nitrotyrosine in aortic cross sections. Staining appeared only weakly in the aortas of VC and QC animals; however, VA mice revealed marked brown nitrotyrosine staining in the aortic wall. By contrast, decreased immunoreactivity was observed in QA mice (Fig. 1A and C). Similar observations were noted for lipid peroxidation, measured as MDA levels (TBARS), in aortic tissues. The TBARS concentration in QA mice was found to be significantly lower than that in the VA mice (Table II).
Endogenous vascular antioxidant defense systems.
Increased levels of Mn-SOD activity were observed at AAA regions of the VA mice, while quercetin significantly decreased its activity in QA mice (Table II). In addition, quercetin caused a relative decrease in mRNA expression of GPx-1 and GPx-3, which are involved in antioxidative status (Fig. 2).
Expression of iNOS and p47phox NADPH oxidase. Expression of iNOS and the NADPH oxidase subunit, p47phox, was also examined. Compared with the VA group, QA mice showed relative decreases in iNOS and p47phox mRNA levels (Fig. 2) and this result was confirmed by western blot analysis of p47phox ( Fig. 3A and B).
JNK/AP-1 signaling pathway and enzymatic activities of MMPs.
Since quercetin reduced oxidative stress, the specific contribution of quercetin to the regulation of AP-1 activation in experimental AAA was examined. Levels of AP-1 DNA-binding activity in QA mice, determined by EMSA, were significantly inhibited by quercetin when compared with controls in the VA group (Fig. 4A and B). Western blotting showed that phosphorylated JNK was significantly upregulated in VA mice and downregulated following treatment with quercetin. In addition, QA animals had significantly less total JNK than VA controls (Fig. 3A and B).
Gelatin zymography revealed that MMP-2 and -9 activities were elevated in VA mice, however, quercetin treatment resulted in a marked decrease in MMP-2 and -9 activities (Fig. 5).
Discussion
The CaCl2-induced AAA model has been widely employed to gain further understanding of the mechanisms involved in aneurysm development, in order to identify potential novel medical treatments (19). As the results of this study showed, the development of CaCl2-induced AAA in mice was accompanied by elevated aortic ROS levels and increased nitrotyrosine formation and lipid peroxidation products, indicating an enhancement in overall oxidative stress. Previous studies have demonstrated that markers of oxidative damage are present in human (20) and animal (21) aneurysmal lesions. However, these oxidative stress markers were significantly inhibited by supplementation with quercetin, a dietary antioxidant with a polyphenolic structure. The antioxidant activity of polyphenols has attracted much attention in relation to their possible role in the prevention of chronic diseases (22). In particular, it was previously reported that resveratrol, another polyphenolic compound, counteracts systemic (23) and local (24) oxidative stress and limits experimental AAA progression. Moreover, a variety of medications and interventions have been proven to successfully suppress experimental aneurysm formation through a ROS-based mechanism (6-8). Thus, it was hypothesized that the aneurysm-inhibitory effect of quercetin in the present study may, in part, be associated with its low oxidation-reduction potential.
Table I. Heart rate and mean arterial pressure prior to and following NaCl/CaCl2 treatment.
Oxidative stress is the result of a redox imbalance between the generation of ROS and the secondary response of the endogenous antioxidant network. Results from the present study indicate a local upregulation of the endogenous antioxidant system, including Mn-SOD and GPxs, during CaCl2-induced AAA formation. Mn-SOD and GPxs are key scavengers of ROS, for example, H2O2 and lipid hydroperoxides (25,26). Therefore, the increases in Mn-SOD and GPxs may be a compensatory response to an increase in ROS in the mouse aorta following exposure to CaCl2. Quercetin, by restraining ROS levels, prevents the elevation of these antioxidant enzymes, coinciding with other studies (27)(28)(29) that reported a protective effect of quercetin on organ injury. The enhanced expression of NADPH oxidase, an enzyme that catalyzes the production of O2•− from oxygen and NADPH, is a major pathway of ROS formation in the vascular wall (3). Inhibition of ROS production by oral administration of apocynin, a specific inhibitor of NADPH oxidases, attenuates AAA formation in a murine model (8). It has also been reported that quercetin prevented the increase in aortic O2•− production through downregulation of p47phox expression in vivo and in vitro (14,15). Furthermore, Thomas et al (30) have shown that p47phox deficiency reduced oxidative stress and markedly attenuated AAA formation. The present study found that quercetin treatment significantly reduced gene and protein expression of p47phox NADPH oxidase; together, these data demonstrate that quercetin is able to reduce ROS formation via modulation of the p47phox subunit during AAA development.
It is noteworthy that, in a previous study, iNOS-deficient mice were partly resistant to aneurysm induction by CaCl2 (8). Nitric oxide synthase (NOS) is also a source of ROS, and increased cellular expression of iNOS is specifically associated with the large quantities of nitric oxide produced during chronic inflammation (31). The present study found that expression of iNOS in the aortic wall was inhibited following quercetin treatment. It has been reported that, under inflammation-mimicking conditions, quercetin may inhibit iNOS expression in cultured monocytes (32,33). This result indicates another mechanism through which quercetin may impact ROS generation.
More importantly, oxidative stress has been reported to activate MMPs (34,35), a family of enzymes with the capacity to cleave several components of the extracellular matrix, including elastin and collagen. It is generally hypothesized that MMPs are putative therapeutic targets in the prevention of AAA (1). Our study group has previously reported that treatment of mice with quercetin prevents aortic wall destruction in the CaCl2-induced AAA model, which is associated with a reduction in the expression of MMP-2 and -9 (16). In the current study, similar results were observed when determining the enzymatic activities of MMPs in vitro by gelatin zymography. MMPs are primarily regulated at the gene transcriptional level by various factors, including cytokines, growth factors, ROS and reactive nitrogen species (RNS). AP-1, a major downstream target of JNK, is an essential transcription factor for MMP expression (36)(37)(38). The present study shows that quercetin treatment significantly inhibited AP-1 activation, accompanied by decreased phosphorylation of JNK, in AAA tissues. JNK, also known as stress-activated protein kinase, is hypothesized to be involved in a number of cellular stress responses. It is well established that ROS produced from NADPH oxidase, as well as RNS, are potent inducers of JNK (39). Moreover, the existing evidence indicates that JNK has an important role in AAA. Yoshimura et al (40) have demonstrated that pharmacological inhibition of JNK reduces MMP levels and prevents the development of AAA. Furthermore, JNK inhibition caused regression of established aneurysms in CaCl2- and angiotensin II-infusion-induced AAA models. Activation of JNK leads to the modulation of other kinases, their nuclear translocation and the subsequent phosphorylation of a number of transcription factors, including AP-1 (41). Thus, data from the present study indicate that quercetin reduces oxidative stress and blocks aneurysm formation, possibly via mediation of the JNK/AP-1 pathway and MMP modulation.
In conclusion, the present study demonstrated that an antioxidative mechanism is involved in the preventive action of quercetin on CaCl2-induced AAA. This is notable as AAA is a chronic and serious condition for which no medical treatment currently exists. In addition, the compound has been observed to be effective in reducing the risk factors of cardiovascular disease that often occur simultaneously with AAA (1,9). Although it is unclear whether these experimental observations extend to aneurysmal degeneration as it occurs in humans, it is likely to be a point of interest to explore in future investigations.
"year": 2013,
"sha1": "c590a328eff8a131150bff8077338efd25a1c8f4",
"oa_license": "CCBY",
"oa_url": "https://www.spandidos-publications.com/mmr/9/2/435/download",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c590a328eff8a131150bff8077338efd25a1c8f4",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Clinical Implication of DNA Damage Response Genes in Advanced Gastric Cancer Stage IV and Recurrent Gastric Cancer Patients After Gastrectomy Treated with Palliative Chemotherapy
Purpose: This study aimed to investigate the relationship between DNA damage response (DDR)-related protein expression and the clinical outcomes of patients with stage IV gastric cancer and patients with recurrent advanced gastric cancer after gastrectomy treated with palliative first-line chemotherapy. Materials and Methods: A total of 611 gastric cancer patients underwent D2 radical gastrectomy at Chung-Ang University Hospital between January 2005 and December 2017, of whom 72 patients who received gastrectomy and treatment with palliative chemotherapy were enrolled in this study. We performed immunohistochemical assessment of MutL homolog 1 (MLH1), MutS homolog 2 (MSH2), AT-rich interaction domain 1A (ARID1A), poly adenosine diphosphate-ribose polymerase 1 (PARP-1), breast cancer susceptibility gene 1 (BRCA1) and ataxia-telangiectasia mutated (ATM) using formalin-fixed paraffin-embedded samples. In addition, Kaplan-Meier survival analysis and Cox regression models were used to evaluate independent predictors of overall survival (OS) and progression-free survival (PFS). Results: Among the 72 patients studied, immunohistochemical staining analysis indicated deficient DNA mismatch repair (dMMR) in 19.4% of patients (n = 14). The most common DDR gene with suppressed expression was PARP-1 (n = 41, 56.9%), followed by ATM (n = 26, 36.1%), MLH1 (n = 12, 16.7%), BRCA1 (n = 11, 15.3%), ARID1A (n = 10, 13.9%) and MSH2 (n = 3, 4.2%). HER2 and PD-L1 were expressed in 6 (8.3%) and 3 (4.2%) of the 72 patients, respectively. The dMMR group exhibited a significantly longer median OS than the MMR-proficient (pMMR) group (19.9 months vs. 11.0 months; hazard ratio [HR] 0.474, 95% confidence interval [CI] = 0.239-0.937, P = 0.032). The dMMR group also exhibited a significantly longer median PFS than the pMMR group (7.0 months vs. 5.1 months; HR = 0.498, 95% CI = 0.267-0.928, P = 0.028). Conclusions: Among stage IV gastric cancer patients and recurrent gastric cancer patients who underwent gastrectomy, the dMMR group had better survival than the pMMR group. Although dMMR is a predictive factor for immunotherapy in advanced gastric cancer, further studies are needed to determine whether it is a prognostic factor for gastric cancer patients treated with palliative cytotoxic chemotherapy.
Introduction
Gastric cancer is the fifth most common cancer and has the fourth highest cancer-related mortality rate worldwide [1]. In patients with human epidermal growth factor receptor 2 (HER2)-negative stage IV gastric cancer, the standard first-line treatment is fluoropyrimidine plus platinum-based chemotherapy [2]. These patients have a poor prognosis, with a survival period of less than one year. The recent CHECKMATE-649 study reported that a programmed death (PD)-1 inhibitor, nivolumab, combined with chemotherapy provided superior overall survival (OS) compared with chemotherapy alone in previously untreated advanced gastric cancer patients [3]. However, despite its potential relevance, no clinically relevant prognostic marker for survival in gastric cancer has yet been identified.
The DNA damage response (DDR) activates cell cycle checkpoints when DNA damage occurs [4]. Cancer cells with a deficient DDR exhibit continuous growth and survival. In a previous study, targeting the DDR pathway in gastric cancer conferred a survival benefit [5]. DDR-related proteins, such as ataxia-telangiectasia mutated protein (ATM), breast cancer susceptibility gene (BRCA1) [6], poly [ADP-ribose] polymerase 1 (PARP-1) [7], AT-rich interaction domain 1A (ARID1A) [8], and MutL homolog 1 (MLH1) and MutS homolog 2 (MSH2) [9], facilitate cancer cell survival and proliferation and the evasion of physiological cell cycle checkpoints.
PARP-1 is a polymerase that functions in the DNA repair response by conjugating ADP-ribose from NAD+ to target proteins such as p53 and histones [10]. In particular, when there are defects in DNA repair caused by mutations of BRCA1/2, the inhibition of PARP-1 results in unrepairable DNA and the apoptosis of cancer cells [11]. ATM is a member of the phosphatidylinositol 3-kinase-like kinase family. Together with the related kinase ataxia telangiectasia and Rad3-related (ATR), it acts as a central regulator of the cellular response to DNA damage. ARID1A regulates cellular responses through interactions with ATM [12].
DDR expression is involved in the progression of cancers and in resistance to anti-cancer drugs [13]. It has been suggested that DDR expression could be a therapeutic target for the treatment of malignant tumors. In the phase 3 GOLD study, ATM expression was assessed by an ATM immunohistochemical (IHC) assay using formalin-fixed, paraffin-embedded gastric carcinoma tissues. However, this trial did not show a survival advantage for olaparib in gastric cancer patients previously treated with first-line chemotherapy [14].
Altered expression of DDR genes has been correlated with an improved response to cisplatin-based chemotherapy in other cancers [15]. Genomic alterations in DNA response and repair-associated genes predicted responses and clinical benefits after cisplatin-based chemotherapy. MLH1, MSH2, PMS2 and MSH6 are defined as mismatch repair (MMR) proteins. In a recent meta-analysis, MSI-H/dMMR gastric cancers were associated with a better OS compared with MMR-proficient (pMMR) tumors [16,17].
In this study, we investigated the relationship between the expression of DDR proteins and survival, in order to determine their prognostic potential in stage IV gastric cancer patients and recurrent advanced gastric cancer patients after gastrectomy treated with palliative first-line chemotherapy.
Patients
A total of 611 gastric cancer patients underwent D2 radical gastrectomy at Chung-Ang University Hospital between January 2005 and December 2017, of whom 72 who received gastrectomy and were treated with palliative chemotherapy were enrolled. The inclusion criteria were as follows: pathologically confirmed gastric adenocarcinoma; recurrent or metastatic gastric cancer; and treatment with at least one cycle of first-line palliative chemotherapy. The exclusion criterion was the inability to perform immunohistochemical staining. The clinical data of patients were collected from their medical records, including sex, age, chemotherapy regimen and survival. Cancer staging was performed according to the 7th edition of the American Joint Committee on Cancer. This study was approved by the Institutional Review Board of Chung-Ang University Hospital (IRB number: 1981-005-382).
Immunohistochemistry
We performed the immunohistochemical assessment of MLH1, MSH2, ARID1A, PARP-1, BRCA1, and ATM using formalin-fixed paraffin-embedded samples. The mismatch repair proteins MLH1 and MSH2 were scored based on the following threshold: positive when staining was detected in 10% or more of tumor cell nuclei; negative when staining was detected in less than 10% of tumor cell nuclei.
PARP-1 staining was scored based on the staining intensity as follows: 0 (negative), 1 (weak), 2 (moderate) and 3 (strong). The percentage of staining distribution of each marker within the tumor cells was recorded. A histochemical (H) score was then calculated as follows: (1 x percentage weak) + (2 x percentage moderate) + (3 x percentage strong). The H-score is representative of the overall staining intensity, with a range from 0 to 300. PARP-1 staining was scored as follows: positive or high expression, H-scores of more than 175; negative or low expression, H-scores of less than 175. Receiver operating characteristic (ROC) curve analysis in another study that included gastric cancer determined the optimal cutoff H-score of 175 for PARP-1 expression [18].
ARID1A staining was scored as follows: negative, undetectable; positive, no loss or focal loss. BRCA1 staining was scored as follows: negative, staining in less than 5% of tumor cell nuclei; positive, staining in more than 5% of tumor cell nuclei. The ATM assay was evaluated based on the nuclear signal, with an H-score ranging from 0 to 300 derived from the percentage of stained cells. A dichotomous classification system was devised whereby the cases were classified as follows: negative, staining in ≤10% of cancer cells (H-score ≤ 10); or positive, staining in more than 10% of cancer cells.
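A minimal sketch of the H-score computation and dichotomization described above may help; the helper names and example percentages are illustrative assumptions, not part of the study's scoring procedure.

```python
# Sketch of the H-score and its dichotomization; names and example
# percentages are illustrative, not from the study.

def h_score(pct_weak: float, pct_moderate: float, pct_strong: float) -> float:
    """H = (1 x % weak) + (2 x % moderate) + (3 x % strong), range 0-300."""
    assert 0 <= pct_weak + pct_moderate + pct_strong <= 100
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

def parp1_status(score: float, cutoff: float = 175) -> str:
    """PARP-1 is called positive/high above the ROC-derived cutoff of 175."""
    return "positive" if score > cutoff else "negative"

def atm_status(score: float) -> str:
    """ATM is called negative at H-score <= 10, positive otherwise."""
    return "negative" if score <= 10 else "positive"

# Example: a tumor with 20% weak, 30% moderate and 40% strong nuclei.
s = h_score(20, 30, 40)      # 20 + 60 + 120 = 200
print(s, parp1_status(s))    # 200 positive
```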
Statistical analyses
All statistical analyses were performed using the Statistical Package for Social Sciences (SPSS) version 26.0 (IBM Corp., Armonk, NY, USA), with which the patients' clinicopathological features and prognoses were analyzed. Kaplan-Meier survival analysis and Cox regression models were used to evaluate independent predictors of OS and progression-free survival (PFS). Hazard ratios (HRs) and their corresponding 95% confidence intervals (CIs) were estimated using a Cox proportional hazards regression model. P-values <0.05 were considered statistically significant. GraphPad Prism 9.0 was used to generate the survival curves.
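For readers who want to reproduce this workflow outside SPSS, the sketch below shows an illustrative Python equivalent using the lifelines library; the column names and toy survival data are assumptions for the example, not the study's dataset.

```python
# Illustrative Kaplan-Meier and Cox PH workflow with lifelines; the
# column names and toy survival times are assumptions, not study data.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "os_months": [19.9, 11.0, 7.2, 25.3, 14.1, 9.8],
    "death":     [1, 1, 1, 0, 1, 1],   # 1 = event observed, 0 = censored
    "dMMR":      [1, 0, 0, 1, 0, 1],   # 1 = MMR-deficient tumor
})

# Kaplan-Meier estimate per MMR group.
for group, sub in df.groupby("dMMR"):
    name = "dMMR" if group else "pMMR"
    kmf = KaplanMeierFitter()
    kmf.fit(sub["os_months"], event_observed=sub["death"], label=name)
    print(name, kmf.median_survival_time_)

# Cox proportional hazards model; the HR for dMMR is exp(coef).
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()  # reports HR with 95% CI, as in Tables 2 and 3
```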
Patient characteristics
The baseline characteristics of the patients are shown in Table 1.
Correlation between DDR-related protein expression and survival
Univariate analysis of the potential prognostic impact of the clinicopathological parameters on OS identified dMMR (HR = 0.474, P = 0.032) as a significant predictor of OS (Table 2). In the multivariate OS analysis, dMMR (HR = 0.395, P = 0.029) was the only significant prognostic factor. There was no significant correlation between OS and age, sex, ARID1A, PARP-1, BRCA1, ATM, adjuvant chemotherapy, or palliative chemotherapy regimen (FOLFOX (fluorouracil, leucovorin and oxaliplatin)/CAPOX (capecitabine and oxaliplatin) vs. S-1 (an oral prodrug of 5-fluorouracil) vs. other regimens). Similar to the OS analysis, univariate PFS analysis identified dMMR as a possible prognostic factor among the clinicopathological parameters (HR = 0.498, P = 0.028) (Table 3). In the multivariate PFS analysis, dMMR (HR = 0.365, P = 0.012) was the only significant prognostic factor. There was no significant correlation between PFS and age, sex, ARID1A, PARP-1, BRCA1, ATM, adjuvant chemotherapy, or palliative chemotherapy regimen.
We conducted an additional analysis in the dMMR group in order to identify predictors of response to immune checkpoint inhibitors. However, no conclusions could be drawn because the number of patients was too small.
Discussion
This study presents the results of the immunohistochemical assessment of DDR protein expression in 72 patients with stage IV gastric cancer or recurrent gastric cancer after gastrectomy. We analyzed DDR gene expression in surgical samples from patients who underwent gastric cancer surgery and examined the relationship between the expression of each gene and clinical outcomes. We also analyzed the relationship between survival and the chemotherapy regimen. The results showed that the dMMR group was associated with a good prognosis.
According to statistics from The Cancer Genome Atlas project, the rate of MSI-H stomach cancer is 22%, which is comparable to the rate of dMMR expression (19.4%) in our study [19]. MSI-H gastric cancer patients had a good prognosis. Other studies reported that approximately 5% of patients had dMMR expression [20]; however, in that setting patients with metastasis at the time of surgery were excluded, so the reported rate of dMMR in stage IV gastric cancer was low, whereas for stages I to III the rate of dMMR expression was 22%-43%. In our study, of the MMR proteins we intended to measure (MLH1, MSH2, MSH6 and PMS2), MSH6 and PMS2 could not be tested. Thus, the estimated dMMR rate in this study might be lower than expected. However, a study of other cancer types, such as colon cancer, measured only MLH1 and MSH2 [21]; losses of MSH6 and PMS2 occur at low frequency, suggesting that this omission makes no significant difference to the estimate of dMMR protein expression.
Our research indicated that patients with dMMR expression had superior OS and PFS compared with patients with pMMR expression. In the era of cancer immunotherapy, the dMMR group is known to respond well to immune checkpoint inhibitors. In the KEYNOTE-061 study, the 12-month OS rate for pembrolizumab in MSI-H tumors was 73% (95% CI = 44%-89%) compared with 25% (95% CI = 6%-50%) for chemotherapy alone. Likewise, the median PFS in MSI-H tumors was 17.8 months (95% CI = 2.7 months to not reached) for the pembrolizumab group vs. 6.6 months (95% CI = 4.4-8.3 months) for the cytotoxic chemotherapy group [22,23]. In the CHECKMATE-649 study, the objective response rate for nivolumab combined with chemotherapy was higher (58%) in patients with a combined positive score (CPS) of 5 or higher than the 48% observed in the chemotherapy-only group [3].
Cohen et al emphasized the importance of platinum-based adjuvant chemotherapy in stage III colon cancer patients with MSI-H, as an oxaliplatin-fluoropyrimidine combination regimen significantly improved OS after adjuvant chemotherapy [15]. A meta-analysis of gastric cancer showed that adjuvant chemotherapy in patients with dMMR/MSI-H gastric cancer significantly improved disease-free survival (DFS) and OS [17]. Consistent with this, our study also confirmed that dMMR patients had significantly improved OS and PFS compared with the pMMR group, regardless of the palliative first-line cytotoxic chemotherapy regimen.
This study found a significant improvement in OS and PFS in the dMMR group compared with the pMMR group, including among patients receiving oxaliplatin-based chemotherapy. However, there was no difference in OS and PFS according to the chemotherapy regimen in the multivariate analysis. This suggests that gastric cancer patients with dMMR have a better prognosis than patients with pMMR, regardless of the chemotherapy regimen.
Conclusions
Of stage IV gastric cancer and recurrent gastric cancer patients who underwent gastrectomy, the dMMR group had a better survival rate than the pMMR group. Although dMMR is a predictive factor for immunotherapy in advanced gastric cancer patients, further studies are needed to determine whether it is a prognostic factor for gastric cancer patients treated with palliative cytotoxic chemotherapy.
"year": 2023,
"sha1": "5163139fc20ed405d87a85fc71fe606bc87d9ef2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7150/jca.81632",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "80565dfee5a8fb3824a49e7fb58611a12fe61cb8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
IDENTIFYING THE CONGRUENCE AMONG THE PRESUMED, COMMUNICATED AND PERCEIVED BRAND POSITIONING STRATEGIES OF INDIAN AUTOMOBILE BRANDS
The incidence of congruence among the presumed, communicated and perceived positioning strategies of four Indian car brands (WagonR, Santro, Spark and Figo) is examined. A triangulation research methodology (pilot survey, secondary data and content analysis) is applied to examine the three populations (expert views, advertising strategies and consumer perceptions) considered in the study. The results reveal the popularity of two positioning strategies, "Visual Artistic" and "Basic Features", in the case of three car brands, whereas Figo stands on the positioning of "Cost and Finance." Regarding the congruence between experts' presumed strategies and the communication tactics employed by the selected car brands, it is observed that not all the communicated strategies are presumed, and thus ambiguities prevail in the process. This paper also calls corporate attention to the fact that even positioning strategies perceived by customers are not presumed by experts, and hence there is a need to re-assess the brand positioning strategies to curtail these ambiguities. However, the positioning activities highlighted in communications are, to some extent, successfully recognized by the target customers. The paper concludes with a discussion of managerial implications and limitations.
Introduction and background of the Study
The development of positioning strategy is of utmost importance in marketing strategy formulation, as it makes a product clear, distinctive and desirable in customers' minds and leverages strategic advantage in the market. Arnott (1992) describes positioning as 'the deliberate, proactive, iterative process of defining, measuring, modifying, and monitoring consumer perceptions of a marketable object…' The literature reveals the significant contribution made by numerous authors (Lawler-Wilson & Fenwick, 1978; Aaker & Shansby, 1982; Park et al., 1986; Ries & Trout, 1986; Kotler, 1988; Dovel, 1990; Hooley & Saunders, 1993; Blankson & Kalafatis, 2004; Janiszewska & Insch, 2012; Chowdhury & Marketer, 2013; Kachersky & Carnevale, 2015; Shams, 2015; Jun & Park, 2017; Park et al., 2017; Wang, 2017; Fayvishenko, 2018) in this direction. They attempt to highlight the impact of positioning in modern marketing management. The essence of positioning is not what you do to the product; it is what you do to the mind of the customer. It is not about creating something different, but about changing or manipulating what you already have (Ries and Trout, 1986). While formulating positioning strategies, marketers decide the appropriate position of their brands within the category and relative to the competition. Once the positioning strategy is decided, it is communicated through various media. Positioning and advertising are interwoven to create value and gain competitive advantage, as is also evidenced by the writings of Ries and Trout (1981); Aaker and Shansby (1982); Batra et al. (1996); Alden et al. (1999); Fill (1999); and Blankson (2004). According to these authors, advertisements today have an explicit objective, i.e. the establishment, reinforcement or modification of the positioning of an offering in the consumer's mind. Whether a brand succeeds or fails in the marketplace depends to a great extent on its positioning in the eyes of customers. Therefore, the task of evaluating the effectiveness of this positioning exercise is of paramount significance. This is also supported by Seggev (1982); Rossiter and Percy (1997); Alden et al. (1999); Blankson (2004); and Blankson et al. (2014; 2017), who further indicate that since the ultimate objective of the advertising of products/services/brands (offerings) emanates from positioning activities, the main concern of marketers and researchers must be the assessment of the degree to which the desired positioning has been accomplished. Accordingly, an attempt is made to evaluate the offerings' market positions in the case of four popular Indian car brands. This evaluation is useful for managers in knowing the degree to which the desired position has been accomplished.
The rationale of this study is that, in the Indian scenario, the increasing rate of per capita income, the changing social needs of people, technological advancements and innovations, the availability of car loans at affordable rates of interest, and the discounts offered to customers by retailers have paved the way for tough competition in the passenger car segment. Today, attracting and retaining a customer is just like winning a battle, and thus the positioning of car brands and the measurement of its effectiveness become an important concern of research. Further, the determination of congruence in positioning activities is important in knowing the effectiveness of companies' efforts (Rossiter & Percy, 1997; Diwan & Bodla, 2014-15; Lee et al., 2018). Without evaluating the programs, the gaps and ambiguities cannot be estimated and corrected. So, in the light of this, the present article attempts to investigate the positioning activities of four Indian car brands (WagonR, Santro, Spark and Figo). These competitive brands are chosen purposively as they belong to a similar price range and chase a similar target segment.
Research Objectives
The present study aims to achieve the following objectives:
1. To determine the customers' perceptions of brand positioning strategies of selected car brands (perceived positioning);
2. To study the brand positioning strategies employed in the marketing communications of car brands (communicated positioning);
3. To understand the brand positioning strategies presumed to be followed by experts of car brands (presumed positioning); and
4. To test the congruence among perceived, communicated and presumed positioning strategies of selected car brands.
Research Design
To accomplish the above objectives, a comprehensive research design was formulated, which is discussed as under: As a first step, to know the customers' perceptions of brand positioning strategies of selected car brands, a questionnaire was designed. The newly developed positioning typology (Diwan & Bodla, 2011) was incorporated in the questionnaire (Appendix 1) and respondents were asked to respond on a 7-point Likert scale where 1 stood for very irrelevant and 7 for very relevant. Further, a sample of 400 respondents was taken using convenience and snowball sampling techniques. These respondents were customers who own the selected car brands. The justification of the selection of this sample unit is in line with Fletcher and Bowers (1991), who stated that in positioning research, the important requirement is to conduct the research among people who already use the product or who are likely to use it. In all, 400 respondents were sent questionnaires, out of which 209 filled questionnaires (a 52 percent response rate) were received back after a second reminder to non-respondents. Further, these filled questionnaires were checked for completeness and 16 incomplete questionnaires were discarded. In all, 193 complete questionnaires were considered, yielding an effective 48 percent response rate. The customers' responses were assigned codes and the data was analyzed with ANOVA.
The research design for the second objective (communicated positioning strategies) constitutes the collection of advertisements of the selected car brands. For this purpose, 145 advertisements from recent years were collected in all, from newspapers (102), pamphlets (20), magazines (4) and T.V. (19).
Further, to study the presumed positioning strategies by experts of selected car brands, the secondary data was collected from web-sites of the respective companies and articles in car magazines such as Overdrive, Top Gear, and Autocar. This data consisted of the information about experts' presumptions regarding their respective car brands. Finally, the congruence or link among perceived, communicated and presumed positioning strategies was examined.
Measurements
As mentioned above, this paper adopts a newly developed positioning typology to measure the employment of the positioning strategies of the selected car brands (Appendix 1). The justification for adopting this typology is that it is designed for the automobile industry and is consumer-generated. Another strong factor is the criticism leveled against existing positioning typologies on the basis of their difficult operationalization (Crawford, 1985; Easingwood and Mahajan, 1989), whereas the authors of the selected typology claim that it is easy to operationalize. The adopted positioning typology has eight dimensions, which are collectively measured as summated scales of 52 items. Each step in this typology relates to consumers' perceptions and is practically grounded. The decision to select this typology is also supported by Park et al. (1986) and Hooley and Saunders (1993), who say that if positioning strategies are based on an examination of consumers' perceptions, then these are successful strategies and could last longer. Moreover, marketers may face difficulties in applying positioning strategies if these are not consumer-generated (Piercy, 1991; de Chernatony, 1994; Piercy, 2005).
The data on the target group's perceptions of brand positioning strategies in the case of each car brand is analyzed with analysis of variance (ANOVA) at a 5% level of significance throughout the study. The applied null hypothesis is that the responses follow a uniform pattern (Childers et al., 1980; Hornik, 1982; Hawes et al., 1987; Jobber, 1989; Blankson & Kalafatis, 2007). In cases where the null hypothesis is rejected, the data is analyzed with two multiple comparison tests, i.e. Scheffe as well as Tukey (Malhotra & Dash, 2009). The results from both tests are the same, with no variation, but following good practice in the literature, Scheffe's multiple comparison test is adopted in the present study (Blankson, 2004).
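To illustrate the testing procedure, a small sketch follows; the rating lists are made-up 7-point Likert responses for three hypothetical strategy constructs, and SciPy's Tukey HSD is used on the grounds that, as noted above, Scheffe and Tukey produced identical results in this study.

```python
# Illustrative one-way ANOVA with a Tukey HSD follow-up; the rating
# lists are made-up Likert responses, not the study's data.
from scipy.stats import f_oneway, tukey_hsd

visual_artistic = [6, 7, 5, 6, 7, 6, 5]
basic_features  = [6, 6, 7, 5, 6, 7, 6]
cost_finance    = [3, 4, 2, 3, 4, 3, 2]

f_stat, p = f_oneway(visual_artistic, basic_features, cost_finance)
print(f"F = {f_stat:.2f}, P = {p:.4f}")

if p < 0.05:  # the 5% significance level adopted throughout the study
    res = tukey_hsd(visual_artistic, basic_features, cost_finance)
    print(res)  # pairwise comparisons reveal the highest-rated subset
```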
Similarly, to know the communicated positioning strategies, the data of advertisements from each medium is analyzed with content analysis. This technique is used for summarizing any form of content by counting its various aspects and interpreting meanings from the contents of the text data. In this study, each ad copy is analyzed in the light of the positioning typology discussed above. A frequency system is adopted as the coding procedure. Code 1 is given in cases where a specific strategy from the designed typology occurs, as suggested by Frankfort-Nachmias & Nachmias (1996). As highlighted in the literature (Martenson, 1987; Stern and Resnik, 1991), each ad copy is coded using the scale items of all eight positioning strategies designed for the study. If any scale item of a construct is present, the presence of the particular positioning strategy is recorded; this follows from the fundamental principle of summated scales, so a single strategy is never counted more than once per advertisement. The data collected with content analysis is subjected to ANOVA, which examines the overall variance in means for two or more populations. A 5% level of significance is adopted in the whole study. A separate ANOVA test is carried out for each medium and the results of positioning strategies are presented in the respective tables. In cases where the null hypothesis is rejected, i.e. significant differences between positioning strategies in different ads are detected, the data is analyzed with the help of Scheffe's multiple comparison test, which is used to make pair-wise comparisons of all treatment means (Malhotra & Dash, 2009). Good reasons for using this technique in the case of communications are found in Cochran (1950), Hsu and Feldt (1969) and Blankson and Kalafatis (2004, 2007).
Results and Discussion
The data of brand positioning strategies of each car brand is analyzed with ANOVA and Scheffe's multiple comparison test. "x" indicates the strategies in the subset associated with the highest values. A 5% level of significance is adopted throughout the study. In this section, only the summarized results are presented. The complete ANOVA results are presented in Appendix 2.
Positioning Strategies: WagonR
It is the world's first tall-boy, van-styled passenger car from Maruti. As far as the presumed positioning strategies of this car are concerned, Table I reveals two strategies presumed to be pursued by experts, i.e. "Visual Artistic" and "Basic Features". This is also supported by one official saying, "The product gels with people's lifestyle, reflecting their confidence and multifaceted personality. By sheer excellence of engineering it enables them to be whatever they choose to be - and that is what makes the buyers interested" (W1).
In its communications, the company is trying to employ multiple positioning strategies, as evidenced by the presence of five strategies (Table I) in pamphlets. T.V. advertisements, besides highlighting "Basic Features" and "Contemporary Features", also work on the "Visual Artistic" element and try to build brand personality. Newspapers project "Promotional Campaigns". This is also supported by the caption of one of the print advertisements: "It defied convention and changed the way people looked at a car. It is simply unmatched in performance and comfort. Test-drive one and you'll never settle for anything ordinary again". Another reads: "There are some people who tower over the others. They follow their hearts, create their own rules and lead much fuller lives. The WagonR is for such people. The result of inspired engineering, it defied convention and changed the way people looked at a car... " (W1).
In the target group's perceptions, there is a presence of five positioning strategies (Table I), of which "Visual Artistic" and "Basic Features" are both presumed and communicated. This car is from the house of Maruti, and the brand image of this company is very good, as it edges out the competition on the basis of quality products in the medium price range and a strong dealer network and services. Customers have a strong bent of mind in favor of Maruti and hence also perceive other strategies such as "Brand Image", "Dealer Network and Services" and "Cost and Finance"; however, these strategies are not communicated by the company.
Positioning Strategies: Santro
Table II indicates the positioning strategies of Santro, an entry-level compact car by Hyundai. The presumed strategy of Santro is only "Visual Artistic". This is because Santro has been repositioned from 'family car' to 'sunshine car' (a smart car for young people). This repositioning is meant to give the car a younger look and to keep the excitement about the Santro alive amid tough competition. It is also substantiated by one expert: "the average age of a first-time car buyer in India has declined from around 30-35 three years ago to 25-30, primarily because of changing lifestyles and cheap and easily available finance. And the company wants to catch the young" (W2).
As the communicated strategies point out, the company attempts to differentiate its messages beyond the presumed strategy, and as a result five strategies in all are communicated. As far as the target group's perceptions are concerned, there is evidence of multiple strategies, but significant matching among presumed, communicated and perceived strategies lies only in the case of "Visual Artistic". For the rest of the strategies, the ad campaigns and customers' perceptions are somewhat similar.
Positioning Strategies: Spark
Experts express that Spark, being a compact car, is positioned on style and features. As one expert states: "its design was meant to appeal to young car buyers in urban markets, with an emphasis on fuel economy and value" (W3). Most ads for this car are found in newspapers and on television. A lot of "Promotional Campaigns" take place to help this car penetrate the market. The newspaper ads not only focus on its "Basic Features" but also promote its low price. The perceived strategy of "Cost and Finance" is found to be in line with the communicated but not the presumed practice. Congruence can be seen in Table III in two positioning strategies of Spark, namely "Visual Artistic" and "Basic Features".
Positioning Strategies: Figo
The positioning strategies of this small hatchback are highlighted in Table IV. Experts identified strong price positioning for this small car. Ford's portfolio for India consisted entirely of sedans at higher price points, but since the hatchback segment makes up 70% of the Indian market, Ford planned to introduce the Figo in this segment with strong price positioning. The price positioning is used to survive the competition with the other major players in this segment, such as Maruti, Hyundai and Tata.
The communicated strategies of Figo carry multiple messages. Price positioning is communicated, but other copy points are also touched on in advertisements. Along with price, "Visual Artistic", "Basic Features" and "Brand Image" are conveyed to the target segment. "Security Measures" appear only in pamphlets, as these carry detailed specifications and have limited reach. Target group perceptions reveal that the communicated strategies are also perceived by the target customers. Customers appreciate having a stunning car at a price of just 3.5 lakhs, embedded with all the features, and that too from the house of Ford. But as far as congruence among the three practices in this study is concerned, it is found only in the case of "Cost and Finance".
Congruence Among Positioning Activities
The results on the congruence (agreement or resemblance) among the brand positioning strategies of the selected car brands are presented in Table V. These results reveal 59 percent congruence between experts' presumed positioning strategies and the actual communicated strategies, while 46 percent of cases reflect disagreement. On the other hand, agreement between the actual communicated strategies and the perceived strategies appears in 62 percent of cases, with 37 percent disagreement. "Visual Artistic" and "Basic Features" are clearly the most popular positioning strategies, employed by the three car brands other than Figo. However, ambiguity lies between the communicated and perceived strategies of WagonR, Santro and Spark, as the "Brand Image" and "Dealer Network and Services" strategies are well perceived by the target group even though they are neither presumed by experts nor communicated in advertisements. This finding clearly highlights the trustworthiness and reliability of these brands, which is well accepted without any management effort in this direction. It is therefore suggested that marketers employ these strategies to gain competitive advantage. The findings also reveal that not all the communicated strategies are perceived by the customers, meaning that these efforts are in vain and there is no return on such advertising expense.
In this study, the part that most needs managers' attention is the investigation of congruence among positioning activities. As the results depict, "CP" (communicated strategies do not reflect the presumptions/perceptions) is of foremost concern, followed by "NE" (strategy is not employed) and "PWC" (strategies present in presumptions/perceptions despite or without communication). This calls for managers to be more practical on the ground and to reconsider their positioning activities to curtail these ambiguities.
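The bookkeeping behind these labels can be expressed as a small decision rule. The sketch below encodes one reading of the codes quoted above; the label used for fully matching patterns is our own placeholder, since the paper does not name it here.

```python
# Map a strategy's (presumed, communicated, perceived) pattern to the
# congruence codes discussed above. "MATCH" is a hypothetical label for
# the remaining, congruent patterns.
def congruence(presumed: bool, communicated: bool, perceived: bool) -> str:
    if not (presumed or communicated or perceived):
        return "NE"   # strategy is not employed at all
    if communicated and not (presumed or perceived):
        return "CP"   # communication reflects neither presumption nor perception
    if (presumed or perceived) and not communicated:
        return "PWC"  # present in presumptions/perceptions without communication
    return "MATCH"

print(congruence(True, True, True))    # MATCH
print(congruence(False, True, False))  # CP
```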
Conclusion, Managerial Implications and Limitations
Positioning is exercised by every company, but the more important aspect is to evaluate the degree to which the desired position is accomplished. In this light, the present paper has identified the positioning strategies employed by four well-known Indian brands in the passenger car segment and the presence of congruence in their positioning activities. The results reveal the presence of the two most popular positioning strategies, "Visual Artistic" and "Basic Features", in the case of three brands, whereas Figo clearly stands on "Cost and Finance". The results for Figo are the more alarming, as customers perceive three other strategies that are not even presumed by experts; managers need to reassess their activities in this direction to fill the gap. Where brands share the same positioning, this theoretically appears to be a strategic mistake, as positioning is concerned with the point of difference in the customer's mind. The reason may be that the differences among these cars are cosmetically created, as they all target the middle-class segment and sit in the low price range. Despite this, when the cosmetically created differences are measured on the typology, the comprehensive strategies turn out to be "Visual Artistic" and "Basic Features", as these dimensions consist of 15 and 12 items respectively. This finding highlights the extent of these critical brand tactics in the positioning of the brands.
As far as managerial implications are concerned, this study serves as an insight for marketing managers, brand managers and advertising executives involved with car brands. They may consider the positioning strategies identified in this study and consult its suggestions to reassess their positioning activities. This study is perhaps the first to operationalize the concept of positioning in the automobile industry. There is evidence in the literature that managers face difficulties in applying the positioning concept in the absence of guidelines (Pollay, 1985; de Chernatony, 1994; Piercy, 1991, 2005). This study thus provides managers with strategies and approaches for assessing the congruence among the positioning activities followed by companies. Further, this research contributes to the positioning literature in general and also tests a newly developed, customer-derived positioning typology in the automobile sector. Hence, to some extent, it responds to the call by Hooley et al. (2001) for methods to assess brands' competitive positions and their implementation.
Like every research effort, this study is not free from limitations. The use of convenience and snowball sampling, the subjectivity inherent in content analysis, disparities in the dates of advertisement data collection, and a low response rate are some of the drawbacks of this research. Moreover, the study is focused on a single automobile sector and specific car brands, and hence the findings may not be generalized. Further, the interpretations may be influenced by the writings of the various authors cited in this study (e.g. Aaker & Shansby, 1982; Kirk & Miller, 1986; Johar & Sirgy, 1989; Batra et al., 1996; Frankfort-Nachmias & Nachmias, 1996; and Blankson, 2004), as this study follows these directions.
Brand Image: Brand name, Reputation, Name of company, Popularity, Leader in market, Reliability and Image consistency
Dealer Network and Services: After sale services, Dealer network and Spare parts availability
Promotional Campaign: Trustworthy ads, Promotional activities and Celebrity in advertisements
Cost and Finance: Financing period, Interest for finance and Price
Source: Diwan and Bodla, 2011 | 2019-11-14T17:07:56.709Z | 2019-11-13T00:00:00.000 | {
"year": 2019,
"sha1": "b82bde0cbdd4fac4ba5ee0682f65044da89158c5",
"oa_license": "CCBY",
"oa_url": "http://jbs.sljol.info/articles/10.4038/jbs.v6i1.40/galley/69/download/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e2cf8a44eb57f77792cb56085bef4e7bd595d1f3",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
391998 | pes2o/s2orc | v3-fos-license | Intracranial subdural hematoma as a cause of postoperative delirium and headache in cervical laminoplasty: A case report and review of the literature
Objectives: To describe a rare case of acute intracranial subdural hematoma as a cause of postoperative delirium and headache following cervical spine surgery. Summary of Background Data: Headache is uncommon following spinal surgery but can be observed in cases of accidental tearing of the dura during surgery. The causes of headache after surgery are thought to include dural tear and CSF leakage. On the other hand, intracranial subdural hematoma can be a cause of headache and cognitive dysfunction; however, only 4 cases as a postoperative complication of spinal surgery have been reported in the literature. Methods: A 55-year-old man underwent re-explorative surgery due to postoperative hematoma causing hemiplegia following cervical laminoplasty. During this operation, an accidental dural tear occurred and induced CSF leakage. On the following day, headache and delirium were noted. CSF leakage continued despite intraoperative repair of the dural laceration. Cranial CT at that time clearly demonstrated subdural hematoma. Results: We reexplored the surgical site and attempted to stop the CSF leakage with meticulous suturing of the dural sac under microscopic observation. The intracranial subdural hematoma was carefully observed under consultation with a specialist neurosurgeon. Following this reexploration, the headache and delirium gradually improved, with spontaneous resolution of the intracranial hematoma over a two-month period of observation. Conclusions: We have reported a rare case of acute intracranial subdural hematoma caused by CSF leakage following cervical spine surgery. This report demonstrates the possibility of intracranial hematoma as a cause of postoperative cognitive dysfunction or headache, especially when accidental tearing of the dura has occurred in spinal surgery.
Postoperative cognitive dysfunction affects a significant number of patients and may significantly hamper postoperative rehabilitation. It tends to be associated with advanced age, time of operation, method of general anesthesia used, and type of surgery.1-3 Headache after spinal surgery is uncommon but can be observed in patients with an accidental tear of the dura. Thus, in patients complaining of headache after spinal surgery, dural tear and cerebrospinal fluid (CSF) leakage must be considered.
Intracranial subdural hematoma can be a cause of headache and cognitive dysfunction. However, it is seldom experienced as a cause of postoperative symptoms after spinal surgery. We encountered a case of intracranial subdural hematoma as a cause of postoperative headache and delirium after cervical laminoplasty. A decrease in intracranial pressure induced by CSF leakage appeared to be related to the formation of the intracranial hematoma in this case.
The purpose of this article is to describe the details of this rare case and focus on the importance of screening for intracranial hemorrhage in patients with headache and cognitive dysfunction after spinal surgery.
Case report
Dysfunction of the hand and disturbance of gait developed in a 55-year-old man after initial numbness and pain in his right upper extremity. On his visit to our clinic, cervical magnetic resonance imaging showed multiple compressions of the cervical spine (Fig. 1). Physical examination at the time of admission showed spastic gait with exaggerated deep tendon reflexes and pathologic reflexes in both upper and lower extremities. In addition, mild motor weakness and sensory disturbance in both upper extremities were observed.
The patient did not abuse alcohol and was taking no medications. Preoperative laboratory examination showed nearly normal findings, including bleeding time.
Cervical open-door laminoplasty was performed because of the progression of neurologic findings. The surgery was performed without problems. The time of operation was 200 minutes, and the amount of blood loss was 500 mL. Two hours after the conclusion of surgery, the patient complained of severe pain from the cervical region to the left upper extremity, which progressed to right-sided hemiplegia. On the basis of the diagnosis of epidural hematoma based on computed tomography (CT) findings, re-exploration of the cervical spine was performed; it revealed a wide hematoma from C4 to C7 and deformation of the dural sac. A point of active bleeding was recognized and stopped on the left side around the C3 lateral mass. However, a tear of the dura was unintentionally produced when the C6 lamina was being lifted and resected. The tear was sutured with No. 6-0 nylon, and the wound was closed after insertion of suction drainage.
Despite improvement of neurologic findings in the patient's upper and lower extremities, he began to complain of severe headache. Cognitive dysfunction was subsequently observed. During this time period, CSF leakage continued on suction drainage. Because of progression of headache and cognitive dysfunction, cranial CT was performed and clearly showed intracranial subdural hematoma (Fig. 2). After consultation with a specialist neurosurgeon, we reexplored the surgical site and attempted to stop the CSF leakage with meticulous suturing under microscopic observation. The CSF leakage discontinued after this procedure.
Under careful conservative observation of this subdural intracranial hematoma, the patient's symptoms gradually improved. CT at 2 months after re-exploration showed resorption of the hematoma (Fig. 3).
At 2 years after cervical surgery, the patient was asymptomatic and did not complain of difficulty in activities of daily living.
Discussion
Postoperative delirium has been reported in elderly patients as a side effect of general anesthesia. Bruce et al.4 reported that delirium occurs more commonly after hip fracture surgery than elective surgery. For spine surgery, however, there have been few reports of postoperative delirium. Kawaguchi et al.1 pointed out that postoperative delirium occurred in 12.5% of patients aged 70 years or older undergoing surgery.
On the other hand, headache sometimes occurs after spine surgery, especially when CSF leakage occurs accidentally or is purposely induced during surgery. Headache can persist while CSF leakage continues.5 Intracranial subdural hematoma can be a cause of cognitive dysfunction and headache.6-9 However, this lesion has not been considered a sequela of spinal surgery. Thus far, to our knowledge, only 4 cases of intracranial subdural hematoma after spinal surgery have been reported.
The age at the time of onset ranged from 25 to 59 years, with a mean of 46 years. Headache was the most common clinical symptom. CSF leakage occurred in the lumbar spine in 3 cases and in the thoracic spine in 1 case. Although no cases of cervical spine surgery have previously been reported, the effects of CSF leakage in the cervical spine on intracranial structures would be greater.
The mechanism of formation of intracranial subdural hematoma is still unknown. However, the possibility has been suggested that bridging veins can be lacerated by the decrease in CSF pressure caused by CSF leakage.10-12 Many structural features detected mainly on electron microscopic examination suggest that bridging veins are more fragile in their subdural portion than in the subarachnoid space. Bridging veins are likely to rupture at their weakest point in the subdural space.13 When enlargement of a subdural hematoma occurs, symptoms gradually progress from headache to paralysis and disturbance of consciousness with development of cerebral herniation. Early detection of hematoma is thus important.
In our patient, careful observation followed by early detection of the intracranial subdural hematoma yielded a satisfactory clinical outcome. The usual course of treatment for a hematoma more than 1 cm thick, or for a case in which a mass effect such as a midline shift has taken place, is hematoma evacuation with craniotomy; however, we chose conservative treatment because neither was found in this case. The possibility of intracranial hematoma should be considered in patients with postoperative cognitive dysfunction or headache, especially when accidental tearing of the dura has occurred in spinal surgery.
"year": 2011,
"sha1": "41b40e7bba5ff97d2df6f5c310c09d07b0ebd554",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc4365617?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "fbcc378df78ce4a722bce95e2ffcccdd5dd13b71",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4918679 | pes2o/s2orc | v3-fos-license | Coenzyme M biosynthesis in bacteria involves phosphate elimination by a functionally distinct member of the aspartase/fumarase superfamily
For nearly 30 years, coenzyme M (CoM) was assumed to be present solely in methanogenic archaea. In the late 1990s, CoM was reported to play a role in bacterial propene metabolism, but no biosynthetic pathway for CoM has yet been identified in bacteria. Here, using bioinformatics and proteomic approaches in the metabolically versatile bacterium Xanthobacter autotrophicus Py2, we identified four putative CoM biosynthetic enzymes encoded by the xcbB1, C1, D1, and E1 genes. Only XcbB1 was homologous to a known CoM biosynthetic enzyme (ComA), indicating that CoM biosynthesis in bacteria involves enzymes different from those in archaea. We verified that the ComA homolog produces phosphosulfolactate from phosphoenolpyruvate (PEP), demonstrating that bacterial CoM biosynthesis is initiated similarly to the phosphoenolpyruvate-dependent methanogenic archaeal pathway. The bioinformatics analysis revealed that XcbC1 and D1 are members of the aspartase/fumarase superfamily (AFS) and that XcbE1 is a pyridoxal 5′-phosphate–containing enzyme with homology to D-cysteine desulfhydrases. Known AFS members catalyze β-elimination reactions of succinyl-containing substrates, yielding fumarate as the common unsaturated elimination product. Unexpectedly, we found that XcbC1 catalyzes β-elimination on phosphosulfolactate, yielding inorganic phosphate and a novel metabolite, sulfoacrylic acid. Phosphate-releasing β-elimination reactions are unprecedented among the AFS, indicating that XcbC1 is an unusual phosphatase. Direct demonstration of phosphosulfolactate synthase activity for XcbB1 and phosphate β-elimination activity for XcbC1 strengthened their hypothetical assignment to a CoM biosynthetic pathway and suggested functions also for XcbD1 and E1. Our results represent a critical first step toward elucidating the CoM pathway in bacteria.
Coenzyme M (2-mercaptoethanesulfonate, CoM) was once thought to be exclusive to methanogenesis in archaea, where it functions as a C1 carrier and plays a key role in the biosynthesis of methane gas (1–5). In the late 1990s, CoM was discovered to serve as a C3 carrier in a bacterial pathway for alkene metabolism in the proteobacterium Xanthobacter autotrophicus Py2 (6). Since then, other bacterial species from the Actinobacteria phylum have also been shown to use CoM in the metabolism of alkenes such as ethylene (6–11). In X. autotrophicus Py2, propylene is converted to acetoacetate, which is subsequently funneled into the tricarboxylic acid cycle in the form of two molecules of acetyl-CoA, where it serves as a carbon source for growth. The pathway begins with the epoxidation of propylene by an NADH-dependent alkene monooxygenase, forming propylene oxide. This is followed by nucleophilic attack by CoM to break the epoxide ring and form an enantiomeric mixture of R- and S-hydroxypropyl-CoM, catalyzed by epoxyalkane:CoM transferase (12–17). CoM then functions as a carrier, orienting R- and S-hydroxypropyl groups as oxidation substrates for a pair of stereoselective short-chain dehydrogenases, yielding 2-ketopropyl-CoM (16–18). In the final step of the pathway, CoM again serves to orient the 2-ketopropyl group for reductive cleavage and carboxylation, forming acetoacetate and free CoM (16, 17, 19–22). CoM, in essence, serves an analogous role in propylene metabolism as it does in methanogenesis, in promoting the proper orientation of these small organic substrates (16).
Methanogens have two known pathways for CoM synthesis, in which the carbon backbone for CoM is derived either from phosphoenolpyruvate (PEP) or L-phosphoserine (Scheme 1). The PEP-dependent pathway is initiated by a phosphosulfolactate synthase (ComA), which catalyzes the nucleophilic addition of sulfite to PEP (23, 24). The phosphosulfolactate product subsequently undergoes oxidative dephosphorylation to yield sulfopyruvate (25–28). Decarboxylation of sulfopyruvate, yielding sulfoacetaldehyde, is presumably followed by reduction and thiol addition to generate CoM (29, 30). The L-phosphoserine-dependent pathway begins with the concerted elimination of phosphate and addition of sulfite to generate L-cysteate. L-Cysteate is subsequently transaminated to form sulfopyruvate, an intermediate common to the PEP-dependent pathway (31), and after that stage the pathways are presumed to follow the same chemical steps to CoM.
Although it is essential for alkene metabolism, a bacterial pathway for CoM synthesis has never been described. The genes responsible for alkene metabolism in the bacterium X. autotrophicus Py2 are located on a 320-kb linear megaplasmid (pXAUT01) (Fig. 1) (32,33). Proteomic experiments showed that proteins encoded in a gene cluster adjacent to the alkene-metabolizing genes were co-expressed with enzymes involved in propylene metabolism under conditions of propylene-dependent growth (14,32). These experiments also revealed additional copies of the same genes. Only the cluster directly adjacent to the alkene-metabolizing genes is considered in this work, and these are designated xcbB1, C1, D1, and E1 (XAUT_RS24680, RS24685, RS24690, and RS24695, respectively).
One of the genes in the cluster (xcbB1) is a homolog of comA, suggesting that the bacterial pathway for CoM biosynthesis is PEP-dependent. Additional genes within the cluster, however, are not similar to those from either methanoarchaeal pathway, suggesting that bacterial CoM is synthesized via a chemically distinct series of reactions. The adjacent xcbC1 gene encodes a member of the aspartase/fumarase superfamily (AFS). Enzyme families within this superfamily include the argininosuccinate lyases/δ2-crystallins, with which the XcbC1 gene product has the closest similarity, as well as class II fumarases, aspartases, and adenylosuccinate lyases. All of these AFS members catalyze β-elimination reactions that result in the formation of an unsaturated organic product (34).
In this work, we demonstrate that the ComA homolog XcbB1 catalyzes the conversion of PEP to phosphosulfolactate (Scheme 2). We also show that XcbC1 then catalyzes β-elimination using phosphosulfolactate as the substrate, releasing phosphate and sulfoacrylic acid as the analogous unsaturated product. This is a new activity for a member of the AFS and a highly unusual mechanism for biological dephosphorylation. Based on the confirmation of these two activities, bioinformatic analyses of the sequences for XcbD1 and E1, and partial biochemical characterization of XcbE1, we can now propose a complete biosynthetic pathway for CoM biosynthesis in bacteria.
Sequence analyses identify gene families and suggest possible roles for putative CoM biosynthetic genes
The four putative CoM biosynthetic genes under study have deduced amino acid sequence similarities to characterized enzymes, providing clues regarding the likely chemical steps associated with bacterial CoM biosynthesis. The xcbB1 gene is homologous to comA, which encodes the phosphosulfolactate synthase that initiates the PEP-dependent CoM biosynthetic pathway in methanogens. The pathway in X. autotrophicus Py2 is consequently presumed to be PEP-dependent. The adjacent xcbC1 and xcbD1 genes encode members of the AFS. Enzyme families within this superfamily catalyze reversible β-elimination reactions.
[Scheme 1. The PEP-dependent and L-phosphoserine-dependent pathways to CoM in methanoarchaea. Pathway I and pathway II are depicted. Both pathways culminate in production of sulfopyruvate, which undergoes decarboxylation and reduction plus thiol addition to yield CoM.]
Using these preliminary sequence analyses, hypothetical roles for the enzymes in CoM biosynthesis have been proposed (Scheme 2). Homology between XcbB1 and ComA suggested that the pathway may begin with the addition of sulfite to PEP to form phosphosulfolactate. Conversion of this pathway intermediate to CoM requires net dephosphorylation, decarboxylation, and thiolation steps. The annotation of XcbC1 and XcbD1 as members of the AFS provides clear clues about the most logical trajectory by which these steps might occur. Within the AFS, XcbC1 has the strongest homology to the argininosuccinate lyases (Fig. 2). Enzymes in this family catalyze the reversible β-elimination of argininosuccinate through general base proton abstraction from the Cβ of the succinate moiety, yielding arginine and fumarate (35). Phosphosulfolactate is a loose structural analog of argininosuccinate, containing a potential proton abstraction site at the Cβ position relative to the phosphoryl group. A proton abstraction analogous to that catalyzed by argininosuccinate lyases would lead to elimination of phosphate and formation of a carbon–carbon double bond, yielding sulfoacrylic acid. This product, in turn, is a structural analog of fumarate, the coproduct of the β-elimination of arginine.
The subsequent decarboxylation and thiolation steps are less clear, although the annotations again suggest at least likely elements of the next steps. XcbD1 groups most closely with members of the adenylosuccinate lyase family within the AFS (Fig. 2), which catalyze the reversible elimination of AMP from adenylosuccinate to form fumarate (52). We propose that the most likely role for XcbD1 is therefore in catalyzing an analogous reaction in the addition direction, in which a substrate is added across the double bond. Attempts to use AMP as a substrate have not resulted in formation of adenylated product (data not shown). Addition of H+ and an as yet undetermined co-substrate across the sulfoacrylic acid double bond (Scheme 2) would yield the putative substrate for XcbE1, the final enzyme in the pathway.
XcbE1 is homologous to pyridoxal phosphate (PLP)-dependent D-cysteine desulfhydrases that catalyze the α,β-elimination of D-Cys to yield H2S, pyruvate, and ammonia (54). We tested XcbE1 for desulfhydrase activity via H2S formation assays and found that both D- and L-Cys isomers were effective substrates, where XcbE1-specific activity was 27 ± 2 nmol H2S min−1 mg XcbE1−1 for L-Cys and 5 ± 2 nmol H2S min−1 mg XcbE1−1 for D-Cys (Fig. S1). It is attractive to propose, therefore, that XcbE1 supplies cysteine-derived sulfur that is ultimately incorporated into CoM. However, without knowing the product of the XcbD1 reaction and the cosubstrate for XcbE1, the mechanism of thiolation cannot yet be determined. In addition, the substrate of XcbE1 will also require decarboxylation to arrive at the CoM product. PLP-dependent enzymes catalyze a number of reactions, including decarboxylation, transamination, racemization, and elimination/replacement reactions of a variety of predominantly amino acid substrates (55, 56). Many PLP-dependent enzyme reactions, moreover, share common intermediates, and several enzymes have been shown to be bifunctional, catalyzing combinations of reactions that utilize the basic PLP reaction chemistry (55). Examples include the decarboxylation and transamination catalyzed by dialkylglycine decarboxylase and the γ-elimination/β-replacement catalyzed by threonine synthase (57, 58). It is therefore conceivable that XcbE1 could be a bifunctional PLP enzyme, catalyzing a more complicated final step coupling, for example, thiolation and decarboxylation in the production of CoM (Scheme 2).
[Figure 1. The 320-kb linear megaplasmid of X. autotrophicus Py2 contains the genes for the putative CoM biosynthetic pathway (purple) immediately downstream of the genes that encode the enzymes responsible for propylene metabolism. Alkene monooxygenase subunits are shown in yellow, and the remaining four enzymes involved in transforming propylene oxide to acetoacetate are shown in green. Alkene-related functions for the open reading frames shown in gray have not been assigned. The locus tags shown are truncated for clarity and contain the prefix "xaut_RS2" in the pXAUT01 plasmid.]
[Scheme 2. Proposed bacterial pathway for CoM biosynthesis. Steps shown in blue are supported by data reported in this study. Steps shown in red are proposed based on bioinformatics analyses. Cysteine desulfhydrase activity has additionally been demonstrated for XcbE1 in the presence of either L- or D-Cys.]
XcbB1 catalyzes the conversion of phosphoenolpyruvate to phosphosulfolactate
To begin to test these proposed steps of the pathway, we first examined the ComA homolog XcbB1. ComA catalyzes the addition of sulfite to PEP to yield phosphosulfolactate, an unusual metabolite that, according to currently annotated Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway assignments, is unique to the methanogen PEP-dependent CoM biosynthetic pathway. An active-site Mg2+ in ComA coordinates the enol form of PEP, facilitating nucleophilic addition of sulfite and protonation of the adduct by a conserved active-site lysine to form phosphosulfolactate (24). The predicted phosphosulfolactate-producing activity of the ComA homolog XcbB1 was initially examined via sulfite consumption assays using monobromobimane (mBBr) as a fluorescent label for free sulfite (Fig. 3A). Sulfite consumption increased specifically when XcbB1 was incubated with the substrates PEP and sulfite, with a specific activity of 487 ± 45 nmol sulfite min−1 mg−1. This specific activity is ~20% of the activity reported for ComA from Methanocaldococcus jannaschii, supporting the conclusion that XcbB1 and ComA share the same physiological function (23). Q-TOF MS was subsequently used to identify phosphosulfolactate as the product (Fig. 3B). The predicted m/z for phosphosulfolactate in negative ion mode ((M−H)− species) was 248.94, which matched the m/z of the emergent signal in XcbB1-catalyzed reactions. Additionally, the isotope distribution matched the predicted isotope distributions within 5% of their predicted values. Time-resolved 1H NMR was also performed on reactions containing PEP, sulfite (HSO3−) and XcbB1 (Fig. S4A). PEP consumption was temporally coupled to phosphosulfolactate production, detected via the increase of a characteristic doublet signal (3.36/3.35 ppm) upfield. The phosphosulfolactate triplet signal was not visible, likely because of the suppression of the water peak, plus the additional Tris and glycerol peaks in the 3.5–4.8 ppm region. The reaction went to completion within 30 min, consistent with the general time course for sulfite consumption expected from the specific activity measured in Fig. 3. We therefore assign XcbB1 as a PEP-dependent phosphosulfolactate synthase (EC 4.4.1.19).
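As a consistency check, the predicted (M−H)− values quoted in this section can be recomputed from monoisotopic atomic masses. The short Python sketch below does so for phosphosulfolactate (C3H7O9PS) and, anticipating the next section, for sulfoacrylic acid (C3H4O5S); the elemental formulas are our reading of the structures, not values stated in the text.

```python
# Recompute the predicted (M-H)- m/z values from monoisotopic masses (u).
MONO = {"C": 12.0, "H": 1.007825, "O": 15.994915,
        "P": 30.973762, "S": 31.972071}
E_MASS = 0.000549  # electron mass, u

def mz_deprotonated(formula):
    m = sum(MONO[el] * count for el, count in formula.items())
    return m - MONO["H"] + E_MASS  # lose H+, keep the extra electron

print(round(mz_deprotonated({"C": 3, "H": 7, "O": 9, "P": 1, "S": 1}), 3))
# 248.948 -> phosphosulfolactate, cf. the predicted m/z of 248.94
print(round(mz_deprotonated({"C": 3, "H": 4, "O": 5, "S": 1}), 3))
# 150.971 -> sulfoacrylic acid, cf. the predicted m/z of 150.97
```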
XcbC1 catalyzes the β-elimination of phosphate from phosphosulfolactate to form sulfoacrylic acid
The proposed substrate for XcbC1 is the phosphosulfolactate generated by the upstream enzyme XcbB1. In keeping with its sequence-based assignment to the argininosuccinate lyase family, the proposed β-elimination to yield phosphate was monitored using a coupled assay involving XcbB1, XcbC1, and molybdenum blue as a phosphate indicator (Fig. 4A). When equimolar amounts of XcbB1 and XcbC1 were present in solution with a large excess of PEP and sulfite, a robust level of phosphate production was observed (specific activity = 30 ± 8 nmol PO43− min−1 mg XcbC1−1). We therefore concluded that XcbC1 catalyzes the dephosphorylation of phosphosulfolactate, which was in turn generated by the upstream catalyst XcbB1.
Canonical alkaline/acid phosphatases are typically metal-dependent enzymes that catalyze the hydrolysis of phosphomonoesters using metal-activated water, resulting in inorganic phosphate and alcohol products (59–65). A classic phosphohydrolase-type reaction would be expected to yield sulfolactate from phosphosulfolactate (Fig. S3A). However, Q-TOF MS analysis of the reaction products indicated that sulfoacrylic acid was the organic product (Fig. 4B), based on its measured m/z (150.967, (M−H)− species, negative ion mode) and isotope pattern. These matched predicted values for sulfoacrylic acid (m/z = 150.97) rather than sulfolactate (m/z = 168.98). This suggested that XcbC1 does not catalyze a simple hydrolytic reaction on the phosphosulfolactate substrate; rather, the reaction requires concomitant release of phosphate and a proton to form an unsaturated product, consistent with β-elimination (see below).
Time-resolved 1H NMR of the XcbC1 reaction directly demonstrated that disappearance of the doublet signal attributed to phosphosulfolactate was coupled with the appearance of a new pair of doublets further downfield (6.96/6.93 and 6.53/6.50 ppm, integrated to 1) (Fig. S4B). These doublets match the predicted 1H NMR spectrum for sulfoacrylic acid (Fig. 4C and Fig. S3B). The measured spectrum furthermore bears no overlap with the 1H NMR spectrum observed for a sulfolactate standard (Fig. S2B), confirming the production of a new double bond. The large J-values for the pair of doublets (16.08) indicate significant separation of the olefinic hydrogens relative to each other, meaning that the XcbC1 product exists specifically in the trans conformation. The conversion of phosphosulfolactate to sulfoacrylic acid proceeded to completion within ~12–14 min, again consistent with the reaction time scale indicated by the phosphate release data in Fig. 4A. We therefore conclude that XcbC1 is an unusual AFS-type phosphatase that catalyzes a β-elimination reaction using phosphosulfolactate as the substrate.
[Figure 2. Bioinformatics analysis of XcbC1 (WP_011992986) and XcbD1 (WP_011992987) sequences identifies the AFS families to which each belongs. The maximum likelihood tree (100 bootstrap reps) was constructed for XcbC1 and appears to show two subbranches for argininosuccinate lyase-type enzymes. Bootstrap values are shown only for nodes of relevance to this work, as indicated by circles (≥90, black; 80–89, gray). The canonical argininosuccinate lyase/δ2-crystallin enzymes appear to occupy a separate clade from the XcbC-type enzymes that may be involved in a pathway for CoM biosynthesis, and XcbD appears to form an additional subgroup in the adenylosuccinate lyase clade.]
Modeling XcbC1 active site reactivity
The sequence of XcbC1 was analyzed in light of canonical AFS structures and mechanisms to understand the features that permit its unique reactivity. AFS members share a signature GSSXXPXKXN sequence, tertiary/quaternary fold, active-site structure, and a general acid-base catalytic strategy, even while their sequence identities may be comparatively low (20–30%) (Fig. S5) (40). Canonical AFS enzymes have been proposed to use a general base for proton abstraction from the Cβ atom of the substrate, followed by collapse of the carbanion intermediate and cleavage of the substrate. Product release may be facilitated by donation of a proton from an active-site acid to the leaving group (34). Mutagenesis and structural studies point toward the strictly conserved first serine of the GSSXXPXKXN motif as the base (35, 50, 66). Interactions with backbone amides or the β-carboxylate group of the substrate may stabilize the catalytic Ser in its oxyanion form (41, 48). The catalytic acid has been proposed to be an incompletely conserved histidine residue in argininosuccinate lyase/δ2-crystallin, adenylosuccinate lyases, and fumarate lyases, with various replacements in aspartases and members of the pCMLE (3-carboxy-cis,cis-muconate lactonizing enzyme) family (35, 40, 50). Finally, a strictly conserved lysine has been shown to interact with and position the α-carboxylate group of the substrate (41, 66). The substrate-binding cavity otherwise appears to be variable, with various residues stabilizing substrates through an extensive hydrogen bonding network (50, 66).
A homology-modeled structure for XcbC1 was generated using the crystal structure of a closely related δ2-crystallin in its argininosuccinate-bound form (Fig. 5A) (40). Residues from the crystal structure that were proposed to interact with the substrate are superimposed with the corresponding residues from the XcbC1 homology model. XcbC1 appears to retain the catalytic base (Ser-285B, XcbC1 numbering) and the strictly conserved lysine (Lys-291B) but very little else. Apart from the residues important for catalysis, the other residues highlighted for δ2-crystallin form a stabilizing hydrogen bonding network around argininosuccinate (40, 67). Notably, the catalytic His is absent in the XcbC1 model, with Tyr-164A-Ala-298B replacing His-162A-Glu-296B. Previous mutagenesis studies of aspartase from Bacillus sp. YM55-1 showed that the histidine is not absolutely required for activity and, when considered with the incomplete conservation of the residue, may indicate that the protonation of the substrate leaving group varies among the superfamily (34, 43). Using the homology model and known AFS chemistry, it is possible to propose a catalytic mechanism for XcbC1 (Fig. 5B). Like the canonical AFS enzymes, the conserved Ser-285B could feasibly initiate the general base-catalyzed reaction through abstraction of the phosphosulfolactate Cβ proton, which we expect to have a relatively high pKa, with stabilization from Lys-291B and putative H-bond donors present in the binding site. If the phosphoryl group of phosphosulfolactate is already singly protonated, it is possible that acid catalysis is not needed to facilitate phosphate release (phosphate pKa values = 2.12 and 7.21). Hence, it is plausible that, distinct from the proposed aspartase/fumarase mechanism (34), proton abstraction and phosphate release from phosphosulfolactate occur in a concerted step. Interestingly, a similar β-elimination resulting in the release of phosphate was recently described for the OspF family of bacterial enzymes (68, 69). These catalyze the removal of phosphate from phosphothreonine, generating dehydrobutyrine (an alkene) using a conserved Lys and His as a base and acid, respectively. This so-called "eliminylation" reaction bears intriguing similarities to the proposed reaction catalyzed by XcbC1, although the OspF family does not appear to be evolutionarily related to the aspartate/fumarate lyases, and XcbC1/OspF enzymes share little homology.
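The protonation argument above is easy to quantify with a Henderson–Hasselbalch estimate at the assay pH, using the free-phosphate pKa values quoted in the text; whether the bound phosphoryl group titrates the same way in the active site is, of course, an assumption.

```python
# Fraction of free phosphate that is singly protonated (H2PO4-) at a
# given pH, from the HPO4(2-)/H2PO4(-) equilibrium with pKa2 = 7.21.
def frac_monoprotonated(pH, pKa2=7.21):
    return 1.0 / (1.0 + 10 ** (pH - pKa2))

print(round(frac_monoprotonated(8.0), 2))  # ~0.14 in the pH 8 assay buffer
```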
Conclusions
Investigating the putative gene products of xcbB1 and xcbC1 by informatics, biochemical, and spectroscopic means provided a critical first step toward elucidating a PEP-dependent pathway for bacterial CoM biosynthesis that is distinct from the PEP-dependent pathway in methanogens. Of the four enzymes possibly involved in CoM biosynthesis, only XcbB1 is homologous to the pathway enzyme ComA, which is known to encode the first step in PEP-dependent CoM biosynthesis in methanogens. XcbB1 catalyzes the addition of sulfite to PEP to yield phosphosulfolactate, which is delivered as the substrate for the subsequent XcbC1 reaction. The β-elimination of phosphate catalyzed by XcbC1 yields sulfoacrylic acid and inorganic phosphate. To our knowledge, this reaction has not been observed before in AFS enzymes, marking a novel activity for an enzyme from a large, well-characterized family as well as a novel pathway intermediate. Although the activity of XcbD1 remains unidentified, we have implicated an important hypothetical role for PLP-dependent XcbE1 in providing the source of the CoM thiol derived from Cys. This work will serve as the framework for future studies aimed at uncovering the final stages of the biosynthetic pathway. By elucidating the XcbB1 and XcbC1 reactions, we have made significant strides toward understanding bacterial CoM biosynthesis, which has evaded characterization in previous years.
Growth of X. autotrophicus Py2
Cells were grown as described previously in phosphate buffer, assorted nutrients, and trace minerals at 30 °C with shaking (180 rpm) in Erlenmeyer flasks sealed with rubber septum stoppers (70, 71). The cells in liquid medium were sparged with compressed air for ~15 min, and propene gas was injected into the headspace (10% volume) every 12 h. The cultures were allowed to reach an optical density at 600 nm (A600) of ~1–1.5 before harvesting by centrifugation at 6000 × g for 10 min. Aliquots of 5–10 ml were reserved for genomic DNA extraction using the DNeasy blood and tissue protocol for Gram-negative bacteria.
[Figure 4. The XcbC1 reaction was established through biochemical and spectroscopic means. A, the production of phosphate was monitored from a coupled assay containing 1 mM PEP, 1 mM sulfite, and one, both, or neither enzyme (XcbB1 and XcbC1). Inorganic phosphate, measured via absorbance using the molybdenum blue assay, was produced only in the presence of both XcbB1 and C1, suggesting that the latter removes phosphate from the XcbB1 product, phosphosulfolactate. B, the products of the coupled assay initiated by 5 mM PEP and 5 mM sulfite were analyzed by MS. The inset shows the isotope distribution of sulfoacrylic acid along with the corresponding ratios. The predicted m/z 150.970 and the respective predicted isotope distribution are in agreement with the results. C, time-resolved 1H NMR showed conversion of PSL to sulfoacrylic acid (SAA) over time by the XcbB1/C1-coupled reaction, initiated by 2 mM PEP and 1 mM sulfite. For clarity, only three time points are shown (0–2 min, black; 8–10 min, red; 28–30 min, blue). XcbC1 appears to fully convert PSL before the 30-min end point. For all experiments, samples included 0.1 mg of XcbB1 in pH 8 buffer and were incubated at 30 °C.]
Amplification of genes for putative CoM biosynthesis
Sequences for the putative biosynthetic operon were obtained from the NCBI database file for the pXAUT01 megaplasmid. Primers (Integrated DNA Technologies, San Diego, CA) were designed for xcbB1 (XAUT_RS24680), xcbC1 (XAUT_RS24685), and xcbE1 (XAUT_RS24695) using the respective locus tags given in parentheses (Table S1). Restriction sites were added to clone each gene with an added N-terminal His tag into a Duet expression system (Novagen). The forward and reverse restriction sites for each ORF were as follows: XcbB1, SacI/NdeI; XcbC1, BamHI/BglII; XcbE1, PstI/EcoRV. Each amplicon was initially cloned into a pGEM-T vector and transformed into JM109-competent cells for propagation before being cloned into a Duet vector. XcbB1 and XcbE1 were inserted into multiple cloning site 1 (MCS1) of individual pETDuet-1 (AmpR) vectors, whereas XcbC1 was cloned under MCS1 of pACYCDuet-1 (CmR). Sequences were verified via Davis Sequencing (Davis, CA).
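A routine sanity check implied by this primer design is to confirm that each insert lacks internal cut sites for its cloning enzymes. The Biopython sketch below illustrates the check on a placeholder sequence; the ORF shown is hypothetical, not the actual gene.

```python
# Screen a candidate insert for internal sites of the cloning enzymes.
from Bio.Seq import Seq
from Bio.Restriction import SacI, NdeI, BamHI, BglII, PstI, EcoRV

insert = Seq("ATGGCTAAGCTTGACCTGTATTAA")  # hypothetical placeholder ORF
for enzyme in (SacI, NdeI, BamHI, BglII, PstI, EcoRV):
    hits = enzyme.search(insert)  # 1-based cut positions, empty if none
    if hits:
        print(f"{enzyme} cuts internally at {hits}; pick different sites")
```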
Expression and purification of putative CoM biosynthesis gene products
BL21(DE3)-competent cells were transformed with each construct (XcbB1-pETDuet-1, XcbC1-pACYCDuet-1, and XcbE1-pETDuet-1), and cells were grown on lysogeny broth agar plates supplemented with ampicillin (Amp, 0.1 mg/ml) or chloramphenicol (Cm, 0.034 mg/ml), as specified for each construct. A single colony was used to inoculate an overnight culture in liquid lysogeny broth + Amp or Cm medium. Expression cultures were started with a 10% volume of the overnight culture (XcbE1 expression included 40 µM pyridoxine) and incubated at 37 °C with shaking at 250 rpm until the culture reached an A600 of 0.6–0.8. Protein expression was induced with 1 mM isopropyl 1-thio-β-D-galactopyranoside, and the cells were moved to a 30 °C incubator with shaking at 180 rpm. Protein expression proceeded for 3 h before the cells were harvested by centrifugation (7500 × g, 10 min), followed by flash-freezing of the pellets in liquid nitrogen.
For purification, frozen cell pellets were resuspended in lysis buffer (20 mM Tris, 500 mM NaCl, 5 mM imidazole (pH 8)) and homogenized with the addition of lysozyme, DNase, and phenylmethylsulfonyl fluoride. Cell pellets from cultures larger than 1 liter were lysed using a microfluidizer (M-110L, Microfluidics Corp., Newton, MA). Lysates were cleared by centrifugation: 45,000 × g for 30 min when using a gravity flow column, or 105,000 × g for 1 h prior to FPLC (Bio-Rad). The nickel-nitrilotriacetic acid columns were equilibrated with the lysis buffer prior to loading the cleared lysate. For FPLC elution, a 75-ml gradient from 0–100% elution buffer was used. The elution buffer contained 20 mM Tris, 500 mM NaCl, 300 mM imidazole, and 20% glycerol. SDS-PAGE and Western blotting with anti-His tag antibodies (alkaline phosphatase-conjugated monoclonal immunoglobulin from hybridoma clone His-1, Sigma, catalog no. A5588, lot no. 096M4841V) were used to determine the purity of the protein as well as the integrity of the His tag. Buffer exchange of the pure protein into imidazole-free pH 8 buffer (20 mM Tris, 100 mM NaCl) was carried out via centrifuge filtration as a final step prior to storage at −80 °C in 10% glycerol.
Determining sulfite uptake by the XcbB1-catalyzed reaction
An assay using a fluorescent sulfite indicator was adapted from prior methods (23). Reaction mixtures contained 100 mM Tris (pH 8), 100 mM NaCl, 5 mM MgCl2, 1 mM NaHSO3, and 1 mM PEP in 50 µl. The assay mixture was incubated at 30 °C for 5 min before addition of 0.1 mg XcbB1 enzyme in 50 mM Tris, 50 mM NaCl, and 20% glycerol (pH 8). The reaction was incubated at 30 °C for a further 5 min prior to addition of 5 µl of terminating solution (0.5 M arginine, 0.1 M EDTA adjusted to pH 12.8 with NaOH). Post-termination, 3 µl of 50 mM mBBr dissolved in acetonitrile was added to the assay. The reaction was then incubated in the dark for 15 min at room temperature and diluted to 1 ml using 50 mM glycine and 10 mM EDTA (pH 10). Fluorescence of the sulfite-mBBr adducts was measured on a Cary Eclipse fluorescence spectrophotometer: excitation wavelength 410 nm, emission 480 nm, 350 V photomultiplier tube. Standard curves were generated using a gradient from 0–1 mM NaHSO3 in the reaction buffers. Data fit to linear equations (Kaleidagraph) were used to calculate concentrations of sulfite in enzymatic reactions.
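The arithmetic from standard curve to specific activity can be sketched as below. The calibration fluorescence values and the assumed 100-µl final reaction volume are hypothetical; the 5-min incubation and 0.1 mg of enzyme follow the protocol above.

```python
# From mBBr fluorescence to sulfite concentration to specific activity.
import numpy as np

std_mM = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])              # NaHSO3 standards
std_fluor = np.array([3.0, 96.0, 190.0, 281.0, 377.0, 469.0])  # invented a.u.
slope, intercept = np.polyfit(std_mM, std_fluor, 1)            # linear fit

def sulfite_mM(signal):
    return (signal - intercept) / slope

consumed_mM = 1.0 - sulfite_mM(355.0)    # reaction started at 1 mM sulfite
consumed_nmol = consumed_mM * 100.0      # 1 mM in an assumed 100 ul = 100 nmol
specific_activity = consumed_nmol / (5.0 * 0.1)  # nmol min^-1 mg^-1
```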
Measuring inorganic phosphate production by the XcbC1-catalyzed reaction
Phosphosulfolactate produced by XcbB1 was used as the substrate for XcbC1 in a coupled reaction, and the resulting inorganic phosphate was quantified. Samples containing 50 mM Tris (pH 8), 50 mM NaCl, 5 mM MgCl2, 0.1 mg XcbB1 enzyme, 1 mM sulfite, 1 mM PEP, and distilled H2O in a final volume of 500 µl were incubated at 30 °C for 30 min, followed by incubation at 95 °C for 10 min to stop XcbB1 activity. The samples were centrifuged for 10 min at 14,000 × g, and the supernatant was used for the subsequent reaction. To the supernatant, 0.1 mg XcbC1 was added to begin consumption of phosphosulfolactate. The reaction was allowed to continue for an additional 30 min before termination as with XcbB1. The supernatant was then used for phosphate determination.
Formation of inorganic orthophosphate can be detected using ammonium molybdate to form colored molybdenum blue complexes (25, 72). A procedure was adapted from prior methods (73, 74). Briefly, to a 500 µl sample, 100 µl of 2.5 M H2SO4 was added, followed by 100 µl of 2.5% ammonium molybdate. 10 µl of reducing solution (0.2 g of 1-amino-2-naphthol-4-sulfonic acid, 1.2 g of sodium bisulfite, and 1.2 g of sodium sulfite in 100 ml) was added, followed by distilled H2O to bring the volume to 1 ml. The samples were thoroughly mixed and then incubated at 50 °C for 15 min. Absorbance was monitored at 700 nm using a Thermo Spectronic Biomate 3. Concentrations of phosphate were calculated from a standard curve generated using KH2PO4 (0–1 mM).
Determination of XcbE1 activity with an assay for H2S formation
The production of H2S can be detected using a method that converts H2S to methylene blue with the addition of N′,N′-dimethyl-p-phenylenediamine dihydrochloride (20 mM) in 7.2 M HCl and 30 mM FeCl3 in 1.2 M HCl (75). Enzymatic assays were conducted in sealed crimp vials containing 250 µl of reaction buffer (50 mM Tris and 50 mM NaCl (pH 8)), 0.1 mg of XcbE1, and distilled H2O to 450 µl. 50 µl of 10 mM L-Cys, D-Cys, or L-Ala was injected through the septum to initiate the reaction. Reactions were incubated for 1 h at 30 °C, followed by quenching and derivatization with 100 µl each of N′,N′-dimethyl-p-phenylenediamine dihydrochloride and acidified FeCl3. Color development proceeded for 30 min at room temperature. Absorption was measured at 670 nm and referenced to a Na2S standard (0–150 µM) generated under identical conditions.
Mass spectrometric analysis of reaction products
Sample buffers were prepared at 1 mM Tris and 1 mM MgCl2 (pH 8) to avoid ion suppression. XcbB1 (0.1 mg) was incubated with 5 mM PEP and 5 mM sulfite for 45 min at 30 °C prior to molecular weight cutoff filtration to remove enzyme. To determine whether phosphosulfolactate was consumed by XcbC1, 0.1 mg XcbC1 was added to the quenched XcbB1 reactions and allowed to react for 20 or 45 min. Samples were quenched by molecular weight cutoff filtration and analyzed by Q-TOF MS without additional derivatization or extractions.
Q-TOF MS
Analytes were detected with a 6530 series Q-TOF MS equipped with an electrospray ionization source (operated in negative polarity mode), and data were analyzed with MassHunter workstation software version B.03.01 (Agilent Technologies). Samples were introduced via direct injection.
Initial spectra were acquired to determine baseline peaks. The substrates PEP and bisulfite were then added to the reaction at varying concentrations (minimum 2 mM PEP for ease of detection) and monitored over the course of 30 min with NMR experiments every 2 min. 1D 1H NMR spectra were acquired using the Bruker-supplied noesygppr1d pulse sequence with 32 scans and a spectral width of 12 ppm, and data were collected into 32,000 data points. Spectral processing and analysis were performed using the TopSpin software. Approximately 0.1 mg of XcbC1 was added to the reaction upon completion of the XcbB1 time course. The reaction was again monitored over 30 min with scans every 2 min as above.
Phylogeny and homology modeling
27 diverse members of the AFS, including six argininosuccinate lyase family members, were used to construct the maximum likelihood tree in MEGA 6.06 with 100 bootstrap replications. The resulting tree was prepared for publication using FigTree v1.4.2. Homology models for XcbC1 were constructed with SWISS-MODEL, using the crystal structure of the T161D variant of duck δ2-crystallin with bound argininosuccinate (PDB code 1TJW, 76% query coverage, 28% identity). | 2018-04-03T02:55:03.612Z | 2018-02-06T00:00:00.000 | {
"year": 2018,
"sha1": "09d6e497b7a475859fba5f14196a0235461ff4df",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/293/14/5236.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "a452cb5bb5554e0fefb782c5dfac83bc91b9874a",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
118821908 | pes2o/s2orc | v3-fos-license | In-medium minijet dissipation in Au+Au collisions at $\sqrt{s_{NN}}$ = 130 and 200 GeV studied with charge-independent two-particle number fluctuations and correlations
Medium effects on charged-particle production are studied using three analysis techniques. We find angular collinearity and number correlations on $p_t$ at moderate $p_t<3$ GeV/$c$. In this $p_t$ range abundant multiplicities enable precision measurements of correlations of non-identified hadrons for kinematic variables ($p_t$,$\eta$,$\phi$). Methods include (1) direct construction of two-particle correlation functions, (2) inversion of the bin-size dependence of non-statistical multiplicity fluctuations and (3) two-dimensional discrete wavelet analysis. Two-particle correlations on $p_t$ exceed expectations from a model of equilibrated events with fluctuating global temperature. A correlation excess at higher $p_t$ is interpreted as final-state remnants of initial-state semi-hard collisions. Lower-$p_t$ correlations exhibit a saddle structure varying with centrality. Variations in the forms and strengths of low and high $p_t$ correlations with centrality suggest transport of semi-hard collision products into the lower $p_t$ region as a manifestation of in-medium dissipation of minijets. Correlations on $p_t$ can be associated with angular correlations on ($\eta$,$\phi$), using analysis methods (1), (2) or (3). In particular, wavelet analysis (3) is performed in the ($\eta$,$\phi$) space in bins of $p_t$ ($<2$ GeV/$c$). Observed angular correlation structures include those attributed to quantum correlations and elliptic flow, as well as a localized structure, increasing in amplitude with $p_t$, and presumed to originate with minijets. That structure evolves with increasing centrality in a way which also suggests dissipation, including an increased correlation length on $\eta$ which may be related to the influence of a longitudinally expanding medium on minijet fragmentation.
In-medium minijet dissipation in Au+Au collisions at $\sqrt{s_{NN}}$ = 130 and 200 GeV studied with charge-independent two-particle number fluctuations and correlations. Medium effects on charged-particle production from minijets are studied using three complementary analysis techniques. We find significant angular collinearity and number correlations on $p_t$ even at moderate $p_t < 3$ GeV/$c$. In this $p_t$ range abundant particle multiplicities enable precision measurements of number correlations of non-identified hadrons for kinematic variables ($p_t$, $\eta$, $\phi$). Methods include (1) direct construction of two-particle correlation functions, (2) inversion of the bin-size dependence of non-statistical multiplicity fluctuations and (3) two-dimensional discrete wavelet analysis.
Mikhail Kopytine (Kent State University), for the STAR Collaboration
Two-particle correlations on pt exceed expectations from a model of equilibrated events with fluctuating global temperature. A correlation excess at higher pt is interpreted as final-state remnants of initial-state semi-hard collisions. Lower-pt correlations exhibit a saddle structure varying strongly with centrality. Variations in the forms and relative strengths of low and high pt correlations with increasing centrality suggest transport of semi-hard collision products into the lower pt region as a manifestation of in-medium dissipation of minijets.
Correlations on pt can be associated with angular correlations on (η,φ), using analysis methods (1), (2) or (3). In particular, wavelet analysis (3) is performed in the (η,φ) space in bins of pt (< 2 GeV/c). Observed angular correlation structures include those attributed to quantum correlations and elliptic flow, as well as a localized structure, increasing in amplitude with pt, and presumed to originate with minijets. That structure evolves with increasing centrality in a way which also suggests dissipation, including an increased correlation length on η which may be related to the influence of a longitudinally expanding medium on minijet fragmentation. This note documents experimental observations of medium-modified minijet correlation structures in AuAu collisions at RHIC, presented at a poster session of the Quark Matter 2004 conference. The data come from three distinct analyses which address the same physics topic -quantitative diagnostic of the strong interaction medium created at RHIC. The intrinsic properties of this medium in equilibrium are connected with its observable response to the excitations [1], experienced in the course of heavy ion collision events due to minijet propagation. In this work we analyze the response on the basis of dynamical information contained in fluctuations and correlations in the number density of non-identified hadrons in the space of kinematic variables η, φ and p t .
A direct construction of two-particle correlations in p t has been performed on √ s N N = 130 GeV data. The basic object of this analysis is the density of pairs ρ in the two-dimensional space spanned by the p t of the two particles. To make statistical errors more uniform along the kinematic variable of choice, we replace p t by X(p t ) ≡ 1 − exp[−(m t − m π )/0.4 GeV] ∈ [0, 1[. The density of sibling pairs (coming from the same event) ρ sib is compared with a mixed pair reference ρ mix .
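As a minimal numerical sketch of this transform (assuming, as is conventional for non-identified hadrons, the charged-pion mass in m t = sqrt(p t ^2 + m π ^2); the function and constant names are illustrative, not STAR code):

import numpy as np

M_PION = 0.1396  # GeV/c^2; pion mass assumed for non-identified hadrons

def X(pt, scale=0.4):
    """X(pt) = 1 - exp[-(mt - m_pi)/0.4 GeV], mapping pt onto [0, 1)."""
    mt = np.sqrt(pt**2 + M_PION**2)   # transverse mass
    return 1.0 - np.exp(-(mt - M_PION) / scale)

print(np.round(X(np.array([0.2, 0.5, 1.0, 2.0, 3.0])), 3))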
The physics of small momentum differences is known to be dominated by quantum statistical and Coulomb effects. To focus on the physics of large momentum differences (also referred to as large scale), we eliminate sibling and mixed pairs which simultaneously satisfy the conditions |η 1 − η 2 | < 0.3, |φ 1 − φ 2 | < π/6, |p t1 − p t2 | < 0.15 GeV/c while having a particle with p t < 0.8 GeV/c. The result, in the form of a differential density ratio, is shown in Fig. 1. The amplitudes of ρ sib /ρ mix − 1 vary with multiplicity N as 1/N, but there are more subtle shape changes among the panels of that figure. To quantify those, we form a fitting model based on a Lévy distribution 1/p t dN/dp t ∝ [1 + β 0 (m t − m π )] −n . Here the exponent n provides a parameter responsible for "equilibration", n → ∞ being the Boltzmann limit. [Fig. 2 caption: Left: shown as a solid curve is the "hard" fit based on a hypothesis of a gaussian transverse rapidity distribution; this hypothesis is consistent with pp data. Right: centrality trends in the curvature measures; curves indicate linear trends on mean path length ν ≈ (N part /2) 1/3 .] It is informative to use the sum and difference variables m tΣ ≡ m t1 + m t2 − 2m π and m t∆ ≡ m t1 − m t2 ; the ratios in Fig. 1 are concave along m tΣ and convex along m t∆ . For the mixed pairs, the assumed factorization of two Lévy distributions results in the fitting form of Eq. (1). For the sibling pairs, the fitting model is as in Eq. (1), except that n in the first and second term is replaced by, respectively, n Σ and n ∆ . Curvatures of ρ sib /ρ mix at the origin are β 0 2 (1/n ∆ − 1/n)/2 along m t∆ and β 0 2 (1/n Σ − 1/n)/2 along m tΣ , and have different signs.
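For readability, the fitting model and the quoted curvature measures can be collected in display form (identifying the "curvatures" with second derivatives of the ratio at the origin is an interpretation of the wording above):

\[
\frac{1}{p_t}\frac{dN}{dp_t}\;\propto\;\bigl[1+\beta_0\,(m_t-m_\pi)\bigr]^{-n},
\qquad n\to\infty\ \text{(Boltzmann limit)},
\]
\[
\left.\frac{\partial^2}{\partial m_{t\Delta}^2}\frac{\rho_{\mathrm{sib}}}{\rho_{\mathrm{mix}}}\right|_{0}
=\frac{\beta_0^2}{2}\Bigl(\frac{1}{n_\Delta}-\frac{1}{n}\Bigr),
\qquad
\left.\frac{\partial^2}{\partial m_{t\Sigma}^2}\frac{\rho_{\mathrm{sib}}}{\rho_{\mathrm{mix}}}\right|_{0}
=\frac{\beta_0^2}{2}\Bigl(\frac{1}{n_\Sigma}-\frac{1}{n}\Bigr).
\]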
The strength of the saddle shape is quantified in terms of the two curvatures (Eq. 2). Thus, the correlation structure is decomposed into a saddle shape (dissipation) and a hard component (high X(p t ) peak). With centrality, the "dissipation" grows with a linear trend on the mean path length, as seen in the right panel of Fig. 2.
Constructing number correlations directly becomes inefficient when the number of events and the multiplicities are large, since the number of computations scales with the number of particles as O(N 2 ). To analyze the √ s N N = 200 GeV data sample, we use fluctuation analyses of computational complexity O(N ).
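A toy sketch of the complexity difference (all names hypothetical; per-event cost only):

import numpy as np

def pair_histogram(pts, dx=0.05):
    """Method (1): direct pair construction, O(N^2) per event."""
    hist = {}
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):            # N(N-1)/2 pairs
            b = int(abs(pts[i] - pts[j]) / dx)
            hist[b] = hist.get(b, 0) + 1
    return hist

def bin_fluctuation(pts, edges):
    """Method (2): bin once, O(N) per event; the bin-size dependence
    of such variance measures is inverted afterwards."""
    counts, _ = np.histogram(pts, bins=edges)
    return counts.var(ddof=0), counts.mean()

event = np.random.default_rng(0).exponential(0.5, size=500)  # toy pt values
print(len(pair_histogram(event)), bin_fluctuation(event, np.linspace(0, 3, 31)))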
In the basis of Fourier harmonics, the Wiener-Khinchin theorem relates the autocorrelation with the local fluctuation power spectrum via a Fourier transform. Relations between quantities characterizing fluctuations and correlations can also be obtained in the basis of box functions (bins), or in a discrete wavelet basis.
The inversion of the bin-size dependence of the number fluctuations is performed to express the results in the form of an integral equation, where A is the autocorrelation, K is a kernel which depends on the binning used, δη and δφ are acceptance ranges, and η ∆ , φ ∆ are difference variables. This integral equation is solved using standard techniques for inverse problems to yield the normalized net autocorrelation shown in Fig. 3. A flow v 2 structure underlies a same-side minijet peak broadened on η ∆ with increasing centrality.
Application of the DWT power spectrum analysis technique in STAR is described in detail in [2]. The measure of local point-to-point fluctuation is the fluctuation power P λ (m), constructed out of the DWT expansion coefficients [3] in the (η, φ) space, a λ m,i,j , where λ indexes the three possible directional sensitivity modes, pseudorapidity η, azimuth φ, and diagonal ηφ.
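A rough sketch of how a per-scale, per-mode Haar-DWT power could be obtained from a binned (η, φ) number density using the PyWavelets package; the inputs, the normalization and the mapping of the (horizontal, vertical, diagonal) detail coefficients onto the η, φ and ηφ modes are assumptions, not the STAR implementation:

import numpy as np
import pywt  # PyWavelets

def dynamic_texture(true_hist, mix_hist, n_particles, levels=3):
    """Normalized dynamic texture (P_true - P_mix)/P_mix/N per scale/mode
    from 2D (eta, phi) number histograms of real and mixed events."""
    out = {}
    dec_t = pywt.wavedec2(true_hist, 'haar', level=levels)
    dec_m = pywt.wavedec2(mix_hist, 'haar', level=levels)
    for lev, (dt, dm) in enumerate(zip(dec_t[1:], dec_m[1:]), start=1):
        # detail tuples are (horizontal, vertical, diagonal); their
        # assignment to eta/phi/etaphi depends on the axis ordering
        for lam, ct, cm in zip(('eta', 'phi', 'etaphi'), dt, dm):
            p_true, p_mix = np.mean(ct**2), np.mean(cm**2)
            out[(lev, lam)] = (p_true - p_mix) / p_mix / n_particles
    return out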
The basic measure of correlation structure is the so-called normalized dynamic texture, (P λ true − P λ mix )/P λ mix /N , which incorporates the mixed event reference. Here N is the number of particles in an event or subevent being analyzed. In the case of a one-dimensional random field X(t) such as a stationary time series, the relationship between this P (m) and an autocorrelation involves a kernel function W (τ, m), where τ = t 2 − t 1 ; W (τ, m) is localized and symmetric around 0 and depends only on the wavelet. For the Haar wavelet which we use, it is positive around 0 and turns negative away from 0; therefore one can think of P (m) as a derivative of an autocorrelation over scale (or τ ), averaged over an event sample. Centrality in the analysis is characterized by the accepted number of quality tracks in the TPC, N , relative to N 0 , where N 0 is such that 99% of minimum bias events have N < N 0 . Fig. 4 shows the difference between dynamic texture data for central AuAu events and various expectations. In the experiment, we see a change in the p t trend above p t = 0.6 GeV in the η mode. Instead of rising with p t (as in the peripheral events), the STAR data points become negative. This contradicts HIJING [4], including the "quenching" mode. In HIJING, the rise of the signal with p t is obtained by "turning on" jets. To make a comparison between central and peripheral data, bypassing the model, we formulate a "null hypothesis": the correlation structure (P λ true − P λ mix )/P λ mix in Au+Au collisions is invariant of centrality. Then, the difference in (P λ true − P λ mix )/P λ mix /N between central and peripheral events (including the p t trends) is due to the difference in 1/N (and in dN/ dp t ). Also shown in Fig. 4 is the peripheral data from STAR, rescaled under the assumption of the "null hypothesis", taking the difference in dN/ dp t into account. In the η mode it is visible that the actual and rescaled data sets differ in both magnitude and p t trend. This underscores the difference of correlation structures between central and peripheral events and invalidates the "null hypothesis" of the 1/N scaling. The left panel shows that jet-like behavior persists almost unaffected by centrality in the ηφ mode.
We hypothesize that we are observing a modification of the minijet structure predominantly in the longitudinal, η direction. Longitudinal expansion of the hot and dense medium formed early in the collision makes this direction special and is likely to be part of the modification mechanism. If so, we may be observing an effect of the longitudinally expanding medium on the minijet fragmentation and/or hadronization at "soft" p t . Fig. 5 shows the scale dependence of the correlation structure in a p t range where the centrality effect is most pronounced. We see that the modification of the structure does not imply disappearance of the correlations in the central events, even on the fine scales. Thermalization, while affecting the correlations (more than the popular model [4] predicts), does not result in a correlation-free system in central events, at least as seen by measuring final state hadrons.
We conclude that large multiplicities of hadrons in the STAR acceptance enable precision studies of the event correlation structure. Our observations are consistent with an emerging picture: minijets from initial state scattering are modified by a longitudinally expanding colored medium. These measurements of the effect of the medium on parton fragmentation and hadronization provide quantitative information about the medium and nonperturbative QCD. | 2019-04-14T03:06:45.252Z | 2004-03-10T00:00:00.000 | {
"year": 2004,
"sha1": "3e3b4c97591eb6a83da29d9c88f514c693fbb2b0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fda9a26ee08fd4da2d8a23484b415a432a805299",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
195577310 | pes2o/s2orc | v3-fos-license | Evaluation of stress state in rock mass surrounding underground structures of waterworks
Using the data of the numerical modeling of stress state in rock mass surrounding deep-level structures of waterworks, the author finds regular patterns and specific features of stress distribution under stage-wise underground construction. It is shown in the paper that as cross-section of an underground structure is enlarged, inelastic strain zones arise nearby the structures, and parameters of these zones are determined.
Introduction
When waterworks structures are arranged under the ground, the governing factor of stability for such structures is the effective stress state, and the role of the latter considerably grows with increasing depth. As a rule, stability of underground structures is evaluated using the estimates of stress state in surrounding rock mass. The classifications based on the analysis of mechanical properties of rocks may be inefficient in this case [1,2]. Numerical modeling of stresses in integration with the full-scale observation of deformation processes in the vicinity of underground structures provides reliable estimation and prediction of stress-strain state during both construction and operation of underground waterworks [3,4].
The sequence of formation of a mined-out void in surrounding rock mass for placement of waterworks structures is another determinant of their stability and a factor influencing stress state of rocks.
Numerical studies of the stress state and the results
This paper presents the analysis of changes in the stress state of rock mass in the course of stage-wise construction of underground structures at the Rogun Hydropower Plant (a rock-fill embankment dam), namely, the powerhouse hall and transformer chamber, on the left-hand side of the Vakhsh River at a depth of 350 m. It is assumed that the transformer chamber (200 m long, 20 m wide and 40 m high) is completely formed, and the cross-section of the powerhouse hall (length of 220 m, width of 22 m and maximum height of 78 m) is made in three stages: the first stage is formation of the arch and cutting of 1/3 of the cross section; the second stage is cutting of 2/3 of the cross section; the third stage is full opening of the cross section.
The powerhouse hall and transformer chamber are arranged in hard rock mass composed of intercalating sandstone and siltstone. According to the data in [6], the rock layers are inclined at an angle of 70-75° relative to the horizon. Sandstone and siltstone have uniaxial compression strength σ c ranging from 100 to 200 MPa and from 60 to 80 MPa, respectively [7].
The model of the linearly deformable medium assumes that the stratified rock mass is quasi-isotropic; the interfaces of the layers do not slide but are in rigid cohesion. The calculations are performed using the approach of [5]. The ratio of dimensions of the powerhouse hall and transformer chamber allows formulating a plane stress problem.
For the numerical calculation of stresses in underground waterworks structures of hydroelectric power plants, in compliance with the research data on elastic, deformation and strength characteristics of the enclosing rock mass [7], it is recommended to use: Poisson's ratio ν = 0.26, elasticity modulus E = (3.5-4.5)·10^4 MPa, cohesion C = 0.41-0.7 MPa, internal friction angle φ = 35°. The in situ stress state is assumed as given in [6]. The calculated results are analyzed using the numerical values of the stress tensor components σ x and σ y (Figures 1 and 2b, Table).
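As an illustrative check of the quoted strength parameters, the Mohr-Coulomb criterion (a standard assumption; the paper does not state its exact strength model) gives the shear strength for a few hypothetical normal stresses:

import numpy as np

PHI_DEG = 35.0                      # internal friction angle from the text

def shear_strength(sigma_n, c, phi_deg=PHI_DEG):
    """Mohr-Coulomb: tau = C + sigma_n * tan(phi), all stresses in MPa."""
    return c + sigma_n * np.tan(np.radians(phi_deg))

for c in (0.41, 0.70):              # cohesion range from the text, MPa
    print(c, [round(shear_strength(s, c), 2) for s in (5.0, 10.0, 20.0)])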
The elastic convergence of the sidewalls in the powerhouse hall (Figure 4) grows from 280 mm at stage I to 543 mm at the final stage (Figure 3a). The elastic floor-roof displacements are not higher than 50 mm. At stage I of the powerhouse hall cross-section cutting, the inelastic strain zones extend to 1-2.5 m in hard rocks and to 5-6 m in damaged rocks. When the full cross section is opened (Figure 5c), the zones of inelastic strains in the sidewalls extend to 1-3 m in hard rocks. In the pillar between the powerhouse hall and transformer chamber, the inelastic strain zones almost merge (Figure 5c). In the roof of the powerhouse void, the stresses σ s alter insignificantly during its stage-wise cutting.
Conclusions
The study has revealed the stress state behavior in rock mass during construction of underground waterworks structures at the Rogun Hydropower Plant. In the powerhouse hall void cut to the full cross section: - the sidewall rock mass is completely free from the action of in situ stresses; - in the concentration zones, σ x and σ y exceed the ultimate compression strength of rocks by 30-50%; - the inelastic strain zones embrace the overlying rock mass from the side of the transformer chamber; - the convergence of sidewalls in the powerhouse hall reaches 543 mm. The comparison of the calculations using the boundary integral equations and the data of the transversely isotropic model proves good agreement of the results and applicability of the boundary integral equation method. | 2019-06-26T14:46:09.625Z | 2019-06-03T00:00:00.000 | {
"year": 2019,
"sha1": "14d9f4e8048881c7fc586cb930226f55343b64d6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/262/1/012017",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "55404742164b3be5e15129f8585bfd86f0abaa25",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
258572081 | pes2o/s2orc | v3-fos-license | Upadacitinib Is a Better Choice than Abrocitinib for Patients with Moderate-to-Severe Atopic Dermatitis: An Updated Meta-Analysis
Introduction
Atopic dermatitis (AD) is a chronic recurrent inflammatory skin disease characterized by severe pruritus and affects up to 10% of adults and 25% of children and adolescents [1,2]. AD has complex pathophysiology involving environmental factors, impaired skin barrier function, genetic susceptibility, and immune imbalance [1]. These pathophysiological changes result in the loss of transepidermal water, xerosis, and eczema, leading to the destruction of skin barriers in about 80% of AD patients [3][4][5]. Individuals with AD are also at high risk of asthma, allergic rhinitis, and food allergy, which could also increase the hazards of relevant health and psychosocial outcomes [6].
Although topical anti-inflammatory agents and emollients are the primary treatment for AD, they are not effective enough for patients with moderate-to-severe AD [7]. Topical corticosteroids (TCS) are widely regarded as the first-line choice for moderate-to-severe AD, but the side effects of TCS can hardly be ignored, including skin atrophy, hypothalamic-pituitary-adrenal axis suppression, and acneiform or rosacea-like eruptions [7]. Charman et al. reported a phenomenon called TCS phobia: they included 200 patients with atopic eczema for the questionnaire, and the outcome demonstrated that 145 patients (72.5%) worried about TCS treatment, while 48 patients (24%) admitted noncompliance because of safety concerns [8]. As for other options, systemic corticosteroids are not recommended due to side effects and rebound [6,9]. Phototherapy is also constrained by adverse events like actinic damage, local erythema, burning, and stinging [9]. Therefore, a new efficient therapeutic agent for AD that avoids most of the side effects mentioned above is needed.
Recently, biological blocking agents for immune cytokine pathways were reported as an optional treatment for moderate-to-severe AD [6]. In recent years, US Food and Drug Administration approval has been given to two oral JAK inhibitors (abrocitinib and upadacitinib) [10]. Eight randomized clinical trials (RCTs) of JAK1 inhibitors in AD have been published in the past three years [11][12][13][14][15][16][17]. JAK1 inhibitors regulate signal transduction and relieve pruritus through IL-4, IL-13, and other cytokines such as IL-31, IL-22, and thymic stromal lymphopoietin [18]. Meanwhile, JAK1 inhibitors could avoid potential risks of neutropenia and anemia caused by JAK2 inhibition [19]. Therefore, we conducted this meta-analysis to evaluate the efficacy and safety of JAK1 inhibitors, especially upadacitinib versus abrocitinib, for the treatment of moderate-to-severe AD. Both upadacitinib and abrocitinib are oral selective JAK inhibitors with greater inhibitory potency for JAK1 than JAK2, JAK3, or tyrosine kinase 2 (TYK2).
Search Strategy.
Our review was registered in PROSPERO (registration number CRD42021244435) before the literature search. We followed the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (Table S1). Two independent reviewers (RL and PH) searched the Web of Science, PubMed, Embase, and Cochrane databases, updated on Mar 21st, 2021, for RCTs (we processed another search at the end of the study on Apr 11th, 2023). The search strategy used for the PubMed database is available as supplementary material (Table S2). To expand the search range, the keywords were "atopic dermatitis," "Janus kinase inhibitor," or "JAK inhibitor." The link https://clinicaltrials.gov was searched for completed but unpublished RCTs. Two researchers (YZ and SR) independently screened the titles and abstracts. They only reviewed full-text articles which met the inclusion criteria. Reference lists of eligible reviews and trials were searched for additional citations.
Selection Criteria.
There was no restriction on sex, age, nationality, and race. Oral placebo treatment with an identical appearance was regarded as a comparison. There was no restriction on the dosage of JAK1 inhibitors. We only discussed data from the phase two and three RCTs. We only included data from patients with moderate-to-severe AD in the meta-analysis. Patients were permitted to use oral antihistamines and nonmedicated emollient. Patients with acute or chronic medical or psychiatric conditions, laboratory abnormalities, infectious diseases, coagulation disorders, receiving other therapies (individual explanation in different trials) before randomization, or having prior exposure to any JAK1 inhibitor were excluded. Concomitant use of topical (corticosteroids, calcineurin inhibitors, phosphodiesterase inhibitors, tars, antibiotic creams, or topical antihistamines) or other systemic therapies for AD or rescue medication was also prohibited.
Data Extraction.
Two researchers (YZ and PH) independently extracted data from eligible articles. The extracted data included characteristics of the study, characteristics of the patient, baseline, and outcome data. Decisions were made by consulting another reviewer (SR) when YZ and PH disagreed and could not reach a consensus. They would also contact the corresponding author by email to request additional information when data were incomplete. Outcomes were classified as primary outcomes and secondary outcomes. Primary outcomes included the proportion of IGA responders (IGA ≤ 1 or achieving a ≥2-point improvement from baseline) and the proportion of EASI-75 responders (improvement ≥75% in EASI from baseline). All assessment tools in the included RCTs are shown in Table S3.
Quality Assessment.
Two researchers (YZ and RL) used the Cochrane risk of bias assessment tool (CROBAT) to assess the quality of the included studies independently. CROBAT included "allocation concealment," "random sequence generation," "blinding of participants and personnel," "blinding of outcome assessment," "incomplete outcome data," "selective reporting," and "other bias" (Table S4). Each question had 3 answers: "low risk," "moderate risk," and "high risk." According to the published information, researchers would assess the risk level of RCTs. Another reviewer (BL) would make the decision when YZ and RL disagreed and could not reach a consensus. Small-study effects that lead to potential reporting or publication bias could be detected by Egger's test [20]. Publication bias was evaluated by funnel plots, and P ≤ 0.05 was considered a statistically significant risk of bias. We used the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) tool to evaluate the quality of evidence for each outcome. The GRADE tool classified evidence of outcomes into "high," "moderate," "low," and "very low." Each assessment could reduce or promote the level of quality. Specific rules are explained in Table S4.
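For orientation, a minimal sketch of Egger's regression test (the standard normal deviate of each study regressed on its precision; a nonzero intercept indicates small-study effects). The study-level inputs below are hypothetical, not data from this review:

import numpy as np
from scipy import stats

def eggers_test(log_or, se):
    """Regress effect/SE on 1/SE; return the intercept and its SE."""
    log_or, se = np.asarray(log_or, float), np.asarray(se, float)
    res = stats.linregress(1.0 / se, log_or / se)
    return res.intercept, res.intercept_stderr

b0, b0_se = eggers_test([1.8, 1.5, 2.1, 1.2, 1.9],
                        [0.20, 0.25, 0.40, 0.15, 0.30])
print(round(b0, 3), round(b0_se, 3), round(b0 / b0_se, 2))  # t, df = k - 2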
Statistical Analysis.
STATA 16.0 and Review Manager 5.3 were used in our study. All forest plots were produced by Review Manager 5.3 with inverse variance as the statistical method. Continuous data measured on different scales were summarized by the standard mean difference (SMD) with 95% confidence intervals (CI), while data on the same scale were summarized by the weighted mean difference (WMD) with 95% CI. Dichotomous data were analyzed by the odds ratio (OR) with 95% CIs [21]. Heterogeneity in the results of the meta-analysis was assessed using Cochrane Q and I² statistics with appropriate analysis models. All statistical tests were two-tailed, and P ≤ 0.05 was regarded as a statistically significant difference. Egger's tests were performed by STATA 16.0.
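A minimal sketch of the dichotomous-outcome calculation described here — a single-study odds ratio with a Wald 95% CI on the log scale (the 2x2 counts below are hypothetical):

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: responders/non-responders on drug; c/d: same on placebo."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
    return or_, math.exp(math.log(or_) - z*se), math.exp(math.log(or_) + z*se)

print([round(x, 2) for x in odds_ratio_ci(120, 80, 30, 170)])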
Subgroup analysis would be carried out when detailed data were available. It would be based on the type, dosage, and treatment time of JAK1 inhibitors. Different doses of the same drug were compared directly by the forest plot, where the P value of the test for overall effect would demonstrate the significance. Besides, the P value of the test for subgroup difference demonstrated the difference between different drugs, because we regarded the two drugs as two subgroups. Sensitivity analysis was performed in the meta-analysis by excluding each study one at a time to check whether the effectiveness of the outcome was determined by individual studies. Figure 1 shows the detailed steps of the literature search, in which 404 studies were reviewed: 314 studies were excluded after screening titles and abstracts, and the remaining 90 studies were reviewed in full text. After excluding 82 studies according to the selection criteria, eight RCTs with 4634 moderate-to-severe AD patients were included in our meta-analysis [11][12][13][14][15][16][17]. As shown in Table 1, all RCTs were multicentre trials: 6 (75%) were phase III clinical trials, and 2 (25%) were phase II clinical trials. The sample size of the included studies ranged from 167 to 901. There was no significant difference between the JAK1 inhibitor group and the placebo group in general information (age, sex, disease duration, IGA point, and EASI). Figure S1 demonstrates the risk of bias summary. All included RCTs met compliance with the selection criteria. All RCTs were blinded and followed the rules of randomization and allocation concealment. Three RCTs without results posted on https://clinicaltrials.gov had risks of incomplete outcome data. Detailed assessment outcomes of CROBAT are shown in Table S5.
Moreover, the JAK1 inhibitor group resulted in more pruritus numerical rating scale (NRS) responders (defined as a 4-point or greater improvement from baseline in NRS score) than the placebo group (OR = 5.74, 95% CI 4.18-7.88), and the upadacitinib group displayed significant differences compared with the abrocitinib group in Table S6.2 (OR = 9.21, 95% CI 7.74-10.95; subgroup difference: P = 0.0001). JAK1 inhibitors also showed significant differences in scoring atopic dermatitis (SCORAD) and decreased lesion area, but the levels of GRADE assessment were not high due to high heterogeneity and risk of publication bias.
In our supplemental tables, we have presented various outcomes: PRISMA checklist (Table S1), detailed search strategy (Table S2), different assessment tools in the included RCTs (Table S3) and in our meta-analysis (Table S4), and the outcome of CROBAT (Table S5). Table S6 shows the secondary outcomes including EASI-50/90/100 responders and other outcomes such as pruritus NRS responders, SCORAD responders, and BSA change. We also provide a recommendation on the treatment of inflammatory diseases with JAK inhibitors in Table S7.
Discussion
Our meta-analysis included eight high-quality multicentre RCTs with 4634 patients. We confirmed the effects of JAK1 inhibitors on moderate-to-severe AD both at EOT and after 2 weeks of treatment (Table 2). However, both the upadacitinib and abrocitinib groups demonstrated significant differences in acne and headache. Besides, upadacitinib had significantly higher risks of upper respiratory tract infection and nasopharyngitis, and abrocitinib had significantly higher risks of nausea (Table 3). As for subgroup analysis, we observed that both upadacitinib and abrocitinib demonstrated significant therapeutic effects and dose-dependent relationships (the 30 mg upadacitinib group and the 200 mg abrocitinib group showed better outcomes than the lower-dose groups). A rapid response was displayed by JAK1 inhibitors after 2 weeks of treatment: both upadacitinib and abrocitinib demonstrated significant differences in EASI-75 responders compared with the placebo. Based on the current understanding of immunological mechanisms, various inhibitors targeting cytokines and interfering with signaling pathways have been developed. JAK1, 2, 3, and TYK2 signal pathways modulate the inflammatory processes by activating intracytoplasmic transcription factors, namely signal transducer and activator of transcription (STAT) [22]. Activated proteins form dimers and translocate into the nucleus to modulate the expression of genes and finally regulate type-2 differentiation, T-cell activation, innate immunity, and epidermal differentiation complexes [6,23]. JAK1 inhibitors regulate signal transduction and relieve pruritus through IL-4, IL-13, and other cytokines such as IL-31, IL-22, and thymic stromal lymphopoietin [24]. Moreover, JAK2 participates in signal transduction by erythropoietin and other colony-stimulating factors, which results in frequent neutropenia and anemia events with JAK2 inhibitors [25]. Therefore, compared with other JAK inhibitors, JAK1 inhibitors had the potential advantage of improving pruritus and avoiding hematologic adverse events [19]. Apart from the general mechanism of the JAK1 inhibitor mentioned before, upadacitinib also decreases the production of proinflammatory mediators induced by IL-6, IL-15, IFN-α, and IFN-γ [26]. As for pharmacology, upadacitinib demonstrated an oral bioavailability of 76% with rapid absorption (plasma concentrations peak at around 1 to 2 hours after administration) and a 4-hour half-life, while abrocitinib was also rapidly absorbed after oral administration (reaching peak plasma concentrations within 1 h) and had a 5-hour half-life [27,28].
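The quoted half-lives translate into single-dose elimination fractions under a simple one-compartment, first-order model (this ignores absorption, formulation and accumulation and is only a back-of-the-envelope illustration):

import math

def fraction_remaining(t_hours, t_half):
    """C(t)/C0 = exp(-ln 2 * t / t_half) after absorption is complete."""
    return math.exp(-math.log(2.0) * t_hours / t_half)

for name, t_half in (("upadacitinib", 4.0), ("abrocitinib", 5.0)):
    print(name, [round(fraction_remaining(t, t_half), 3) for t in (4, 8, 12, 24)])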
A new recommendation for immune-mediated inflammatory diseases treated with JAK inhibitors was formulated by an expert committee comprising 29 multinational and experienced clinicians [29]. High levels of agreement were voted on for every point (10-point scale). It recommended that severe infections, severe organ dysfunction, pregnancy, and lactation were contraindications of JAK inhibitors (vote 100%, 9.9 points), and that dosage should be adjusted in patients with higher age (>70 years) and significantly impaired renal or hepatic function (vote 100%, 9.9 points). As for the adverse events, serious infections, particularly opportunistic infections, including herpes zoster, received general consent. The risk of infection could be lowered by reducing or eliminating concomitant glucocorticoid use (vote 100%, 9.9 points). Lymphopenia, thrombocytopenia, neutropenia, and anemia may also occur (vote 100%, 9.8 points). More detailed recommendations are displayed in Table S7.
Various adverse events were reported in the included RCTs. Both abrocitinib and upadacitinib demonstrated risks of acne and headache in our study. Although most of these symptoms were not serious, they might exacerbate the manifestation of AD, resulting in lower quality of life. Besides, 28 (1.5%) patients in the upadacitinib group and 13 (1.0%) patients in the abrocitinib group had reported herpes zoster, but the difference in incidence rate was insignificant; all cases of herpes zoster infection were nonserious, and none led to discontinuation of RCTs. Furthermore, JAK1 inhibitors might suppress platelet production by inhibiting Ashwell-Morell receptors and downstream inhibition of thrombopoietin production [30]. Dose-related decreases in the median platelet count were reported in all RCTs of abrocitinib, but the minimum level of platelets occurred in the 4th week and gradually returned to the baseline value with ongoing administration of the drug. Besides, no platelet-related event was reported in RCTs of upadacitinib. Therefore, it seems the platelet count decrease is reversible in most patients, and no clinically important event was observed (such as hemorrhage associated with decreased platelet counts). A recent meta-analysis including 42 studies discussed the venous thromboembolism (VTE) risk with JAK inhibitors; no evidence could support the current warnings of VTE risk for JAK inhibitors [31]. Another meta-analysis including 82 studies had also concluded that JAK inhibitors demonstrated no increased risk of malignancy or serious infections across multiple immune-mediated diseases [32]. Other side effects, including acne, nasopharyngitis, headache, and nausea, were frequently reported in the included RCTs, but most side effects were not severe and resolved after drug withdrawal.
Several previous meta-analyses discussed JAK inhibitor treatment of AD. Among these studies, only one network meta-analysis showed a similar design and conclusion to our study [33]. However, it only included one RCT of abrocitinib and one RCT of upadacitinib, and adverse events were not included in the data analysis. Our study focused on evaluating JAK1 inhibitors and comparing the efficacy and safety of upadacitinib and abrocitinib for moderate-to-severe AD. In summary, our study demonstrated a rapid response and a dose-dependent response for both JAK1 inhibitors and corroborated the advantages of upadacitinib. Still, our study has certain limitations. Firstly, three RCTs of upadacitinib had no 12-week data, and three RCTs of abrocitinib had no 16-week data. Therefore, we chose to analyse the results at EOT rather than at a single common time point. Because both the 12-week and 16-week RCTs showed significant effects in their endpoint outcomes, and the differences in our analysis were also significant, the heterogeneity introduced by different timing was acceptable without compromising our conclusion. Besides, adverse events with complicated causes demonstrated negative significant differences for JAK1 inhibitors, but it was difficult to explain the mechanism and provide recommendations. Close monitoring is required for high-dose JAK1 inhibitor treatment, and a low-dosage regimen might be an alternative in case of adverse events. Moreover, different sample sizes between phase two and phase three RCTs might potentially affect the accuracy.
Conclusion
JAK1 inhibitors demonstrate promising efficacy in AD, with a rapid response and a dose-dependent response, but significantly higher risks of acne and headache. Based on existing data, oral 30 mg upadacitinib QD has a better outcome than oral 200 mg abrocitinib QD and is a recommended dosage regimen for moderate-to-severe AD patients. Oral 15 mg upadacitinib QD might be an alternative dosage regimen in case of treatment-emergent adverse events.
JAK: Janus kinase
AD: Atopic dermatitis
TCS: Topical corticosteroids
TYK: Tyrosine kinase
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
IGA: Investigator's Global Assessment
EASI: Eczema Area and Severity Index
BSA: Body surface area
SCORAD: Scoring atopic dermatitis
NRS: Numerical rating scale
DLQI: Dermatology life quality index
POEM: Patient-oriented eczema measure
CROBAT: Cochrane risk of bias assessment tool
GRADE: Grading of Recommendations, Assessment, Development, and Evaluation
RCTs: Randomized controlled trials
WMD: Weighted mean difference
SMD: Standard mean difference
CIs: Confidence intervals
OR: Odds ratio
EOT: End of treatment.
Data Availability
All data generated or analysed during this study are extracted from RCTs searched from online databases (Medline, Embase, Web of Science, Cochrane database, and https://clinicaltrial.gov).
Ethical Approval
Ethical approval was not required, given that analyses were conducted on deidentified, secondary data derived from published studies. Our review followed the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), and the protocol was registered in PROSPERO (registration number CRD42021244435). Included studies must be in accordance with the Declaration of Helsinki and International Council for Harmonization Good Clinical Practice Guidelines and approved by respective ethics committees. Written informed consent of patients was also required.
Disclosure
Ying Zhang is the first author of the manuscript.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
BL was in charge of the main idea and was the guarantor of integrity of the entire clinical study; RL and YZ were in charge of the study concepts, design, manuscript preparation, and editing; RL and PH searched databases independently. YZ and SR screened the titles and abstracts of articles meeting the inclusion criteria; YZ and PH independently extracted data from eligible articles and conducted data analysis. YZ and RL independently assessed the quality of included studies; PH and SR were in charge of language polishing and grammar revision. Figure S1: risk of bias summary. Table S1: PRISMA checklist. Table S2: search strategy in PubMed. Table S3: assessment standards in RCTs. Table S4: assessment tools in meta-analysis. Table S5: assessment result of CROBAT. Table S6: result of secondary outcomes. | 2023-05-10T15:04:25.739Z | 2023-05-08T00:00:00.000 | {
"year": 2023,
"sha1": "3bb5ea7ccbd27b7ebf910d5757d7aad03c3a103c",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jcpt/2023/9067797.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6273e3fcaa1773d9231350b871c2b419cb1988fa",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
119128140 | pes2o/s2orc | v3-fos-license | Stationary distributions for a class of generalized Fleming-Viot processes
We identify stationary distributions of generalized Fleming-Viot processes with jump mechanisms specified by certain beta laws together with a parameter measure. Each of these distributions is obtained from normalized stable random measures after a suitable biased transformation followed by mixing by the law of a Dirichlet random measure with the same parameter measure. The calculations are based primarily on the well-known relationship to measure-valued branching processes with immigration.
1. Introduction. In the study of population genetics models, it is of great importance to identify their stationary distributions. Such identifications provide us with basic information on possible equilibria of the models and are needed prior to quantitative discussions on statistical inference. Since [5,14] and [1], the theory of generalized Fleming-Viot processes has served as a new area to be cultivated and has been developed considerably. (See [2] for an exposition.) In view of such progress, it seems that we are in a position to explore the aforementioned problems for some appropriate subclass of those models. In this respect, it would be natural to think of the one-dimensional Wright-Fisher diffusion with mutation as a prototype. This celebrated process is prescribed by its generator A, given in (1.1), acting on smooth functions of x ∈ [0, 1], where c 1 and c 2 are positive constants interpreted as mutation rates. The stationary distribution is the beta distribution (1.2), where Γ(·) is the gamma function. In addition, the process associated with (1.1) admits an infinite-dimensional generalization known as the Fleming-Viot process with parent-independent mutation, whose stationary distribution is identified with the law of a Dirichlet random measure.
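For orientation, a textbook form of this generator and of its beta stationary law reads as follows; since the displays (1.1)-(1.2) are not reproduced above, the normalization of the constants here is an assumed standard convention and may differ from the paper's:

\[
AG(x)\;=\;\tfrac12\,x(1-x)\,G''(x)\;+\;\bigl(c_1(1-x)-c_2x\bigr)G'(x),
\qquad x\in[0,1],
\]
\[
\pi(dx)\;=\;\frac{\Gamma(2c_1+2c_2)}{\Gamma(2c_1)\,\Gamma(2c_2)}\;
x^{2c_1-1}(1-x)^{2c_2-1}\,dx .
\]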
In the present paper, we consider a problem of finding a class of generalized Fleming-Viot processes whose stationary distributions can be identified. As far as the first term on the right-hand side of (1.1) is concerned, its jump-type version has been discussed in population genetics as the generator of a model with "occasional extreme reproduction". (See Section 1.2 of [2] for a comprehensive account.) We additionally need to look for an appropriate modification of the second term, which should correspond to a generalization of the mutation mechanism. With these situations in mind, our problems can be described as follows.
(I) By modifying both mechanisms of reproduction and mutation, find a jump process on [0, 1] whose generator extends (1.1) and whose stationary distribution can be identified.
(II) Establish an analogous generalization for the Fleming-Viot process with parent-independent mutation.
Since these problems are rather vague, it may be worth showing now the generator we believe to give an "answer" to (I). For each α ∈ (0, 1), define an operator A α by (1.3), where G are smooth functions on [0, 1]. Observe that A α G(x) → AG(x) as α ↑ 1. It should be noted that A α is a one-dimensional version of the generator of the process studied in [3] if c 1 = c 2 = 0. See also [12] and [13]. The reader, however, is cautioned that our notation α is in conflict with that of these papers, in which α plays the same role as α + 1 in our notation. (We adopt such notation in order for the formulae below to be simpler.) The constant c 1 (resp., c 2 ) in (1.3) can be interpreted as the rate of "simultaneous mutation" from one type to the other type: a proportion u of the individuals with that type, which are supposed to have the frequency 1 − x (resp., x) in the population, are involved in this "mutation" event with the intensity appearing in (1.3). As will be seen in Proposition 3.1 below for a more general case, the closure of (1.3) with a suitable domain generates a Feller semigroup on C([0, 1]), and our main concern is the equilibrium state of the associated Markov process. It will be shown in the forthcoming section that a unique stationary distribution of the process governed by (1.3) is identified with (1.4), where E α,y denotes the expectation with respect to (Y 1 , Y 2 ) with law determined below. One might think that (1.3) is one of many possible generalizations of (1.1). In fact it arises naturally in the following manner. It is well-known [20] that the Fleming-Viot process with parent-independent mutation can be obtained by way of a normalization and a random time change from a measure-valued branching diffusion with immigration. (See also [6] and [18].) An extension of this significant result was shown in [3] for a class of generalized Fleming-Viot processes, which in the one-dimensional setting corresponds to (1.3) with c 1 = c 2 = 0. Moreover, [3] proved that such a jump mechanism is necessary for a generalized Fleming-Viot process to have the above mentioned link to a measure-valued branching process with immigration (henceforth MBI-process). Recently, [13] showed essentially that the second term of (1.3) is required when we additionally take a generalization of the mutation mechanism into account. Our argument will be crucially based on this kind of relationship between the generalized Fleming-Viot process associated with a natural generalization of (1.3) and a certain ergodic MBI-process. That relationship can be reformulated as a factorization result on the level of generators and hence is expected to yield also an explicit connection between stationary distributions. In principle, the problems (I) and (II) can be considered in a unified way. Nevertheless, we shall discuss (I) and (II) separately. This is mainly because the factorization identity will turn out to yield a correct answer only for certain restricted cases and in one dimension one can avoid its use by taking an analytic approach instead (although this does not reveal clearly the mathematical structure underlying).
The organization of this paper is as follows. Section 2 is devoted to derivation of (1.4) by purely analytic argument. Exploiting the relationship to MBI-processes, we show in Section 3 that the above mentioned answer to (I) has a natural generalization which settles (II). The irreversibility of the processes we consider is discussed in Section 4.
The equivalence of (2.1) and (2.2) is a consequence of uniform estimates, which can be shown by observing elementary bounds on the relevant moments. Indeed, these bounds ensure that the corresponding function of t is real analytic at least for −1/2 < t < 1/2. We prepare a simple lemma in order to calculate A α G t .
Lemma 2.1. (i) It holds for any θ 1 > 0 and θ 2 > 0 that the identity (2.4) is valid. (ii) In addition, suppose that a ′ = a and a ′ + b > 0. Then (2.5) holds. Equation (2.4) is a one-dimensional version of the formula due to [4], which is sometimes referred to as the Markov-Krein identity. (See, e.g., [22] or (3.6) below.) We will give a self-contained proof based essentially on the well-known relationship between beta and gamma laws.
Proof of Lemma 2.1. The proof of (2.4) is done simply by noting a gamma-integral representation of the left-hand side and then changing variables to u := z 1 /(z 1 + z 2 ), v := z 1 + z 2 . The proof of (2.5) can be deduced from (2.4) with θ 1 = 1 − α and θ 2 = α. We proceed to calculate A α G t .
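The beta-gamma computation invoked here can be written out explicitly; the following display is an illustration of a Markov-Krein-type identity of the kind the text attributes to (2.4), under the assumption that (2.4) has this classical form:

\begin{align*}
\frac{1}{\Gamma(\theta_1)\Gamma(\theta_2)}
\int_0^\infty\!\!\int_0^\infty e^{-(1+t)z_1-z_2}\,
z_1^{\theta_1-1}z_2^{\theta_2-1}\,dz_1\,dz_2
&=(1+t)^{-\theta_1},
\intertext{while the substitution $u=z_1/(z_1+z_2)$, $v=z_1+z_2$ (Jacobian $v$) turns the left-hand side into}
\int_0^1\frac{u^{\theta_1-1}(1-u)^{\theta_2-1}}{B(\theta_1,\theta_2)}\,
(1+tu)^{-(\theta_1+\theta_2)}\,du
&=\mathbb{E}\bigl[(1+tY)^{-(\theta_1+\theta_2)}\bigr],
\qquad Y\sim\mathrm{Beta}(\theta_1,\theta_2),
\end{align*}

so that $\mathbb{E}[(1+tY)^{-(\theta_1+\theta_2)}]=(1+t)^{-\theta_1}$ for $t>-1$.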
Lemma 2.2. For any t > 0 and x ∈ [0, 1], the identity (2.6) holds. Proof. By straightforward calculations, and replacing c 1 and c 2 by x and 1 − x, respectively, we obtain the intermediate expressions needed below.
Plugging these equalities into (1.3) with G = G t and then applying Lemma 2.1 yields an expression which equals the right-hand side of (2.6).
Next, we are going to characterize stationary distributions P in terms of the transform S α defined in (2.7), which is a variant of the generalized Stieltjes transform of order α.
Proof. By virtue of Theorem 9.17 in Chapter 4 of [9], P is a stationary distribution of the process associated with A α if and only if (2.1) [or (2.2)] holds. By Lemma 2.2, (2.2) now reads as an identity valid for all t > 0. This equation becomes (2.8) by substituting equalities all of which are verified easily.
We now derive (1.4) as the unique stationary distribution we are looking for. Recall that for each y ∈ (0, 1) we denote by E α,y the expectation with respect to the two-dimensional random variable (Y 1 , Y 2 ) with joint law determined as above; formula (1.4) then defines a probability measure on [0, 1]. Although for each y ∈ (0, 1) an expression of the distribution function is given by the formula (3.2) in [23], we do not have any explicit form concerning P α,(c 1 ,c 2 ) except for a special case of c 1 + c 2 . The main result of this section is the following.
Theorem 2.4. The process associated with (1.3) has a unique stationary distribution, which coincides with P α,(c 1 ,c 2 ) .
Proof. Notice that the existence of a stationary distribution follows from compactness of the state space [0, 1]. (See, e.g., Remark 9.4 in Chapter 4 of [9].) Let P be an arbitrary stationary distribution of the process associated with (1.3) and S α be defined by (2.7). Introduce the auxiliary function T α through (2.9); then (2.8) can be rewritten as in (2.10). From these preliminary observations, it is direct to see that the equation (2.8) is transformed into a hypergeometric equation of the form (2.11). Clearly, T α (0) = S α (0) = 1. In addition, T α ′ (0) = −c 1 /(c 1 + c 2 ), where the last equality follows from (2.1) with n = 1. These facts together determine T α uniquely. (See, e.g., Sections 7.2 and 9.1 in [16].) Combining this with an identity which is immediate from (2.9), we arrive at P = P α,(c 1 ,c 2 ) in view of (2.10). Therefore, the proof of Theorem 2.4 is complete.
Remarks. (i) In the case where c 1 + c 2 > 1, an alternative expression for P α,(c 1 ,c 2 ) exists, in which Z 1 and Z 2 are independent random variables with explicit Laplace transforms. This reflects the fact that the solution to (2.11) with the same initial conditions T α (0) = 1 and T ′ α (0) = −c 1 /(c 1 + c 2 ) admits another integral expression, and accordingly so does S α by (2.12). On the other hand, it is not difficult to show that (2.15), with this alternative expression in place of P α,(c 1 ,c 2 ), holds too. In fact, we prove in Lemma 3.5 below a generalization of the coincidence (2.13) in the setting of random measures. Also, the role of Z 1 and Z 2 will be made clear in connection with branching processes with immigration related closely to the process generated by (1.3).
(iii) In contrast with the case of the Wright-Fisher diffusion mentioned in the Introduction, P α,(c 1 ,c 2 ) with 0 < α < 1 is not a reversible distribution for the generator (1.3) at least in case c 1 = c 2 . This will be seen in Section 4.
3. The measure-valued process case. The main subject of this section is an extension of Theorem 2.4 to a class of generalized Fleming-Viot processes. But the strategy will be different from that in the previous section, and so an alternative proof of Theorem 2.4 will be given as a by-product. To discuss in the setting of measure-valued processes, we need new notation. Let E be a compact metric space having at least two distinct points and C(E) [resp., B + (E)] the set of continuous (resp., nonnegative, bounded Borel) functions on E. Define M(E) to be the totality of finite Borel measures on E, and we equip M(E) with the weak topology. Denote by M(E) • the set of nonnull elements of M(E). The set M 1 (E) of Borel probability measures on E is regarded as a subspace of M(E). We also use the notation η, f to stand for the integral of a function f with respect a measure η. For each r ∈ E, let δ r denote the delta distribution at r. Given a probability measure Q, we write also E Q [·] for the expectation with respect to Q.
Let 0 < α < 1 and m ∈ M(E) be given. We shall discuss in this section an M 1 (E)-valued Markov process associated with the generator (3.1), where Φ belongs to the class F 1 of functions of the form Φ f (µ) := µ ⊗n , f for some positive integer n and f ∈ C(E n ). Define (a) b := Γ(a + b)/Γ(a) for a > 0 and b ≥ 0, and let | · | stand for the cardinality. It holds that for any θ ≥ 0 and ν ∈ M 1 (E) the expansion (3.2) is valid, where the I are nonempty subsets of {1, . . . , n}. As for the Fleming-Viot process with parent-independent mutation, the result corresponding to the next proposition is a special case of Theorem 3.4 in [10]. Proof of Proposition 3.1. Let θ ≥ 0 and ν ∈ M 1 (E) be such that m = θν. We simply mimic the proof of Theorem 3.4 in [10]. In particular, the Hille-Yosida theorem (Theorem 2.2 in Chapter 4 of [9]) will be applied. Let n be an arbitrary positive integer. Rewrite (3.2) as an equation in which Θ (n) , Ξ (n) ν : C(E n ) → C(E n ) and c n (α, θ) are, respectively, the nonnegative operators and the positive constant defined implicitly by that equation combined with (3.2). Let λ > 0 be arbitrary. Given g ∈ C(E n ), define h by the corresponding resolvent series. Then h ∈ C(E n ) since the operator norm of Θ (n) + θΞ (n) ν equals c n (α, θ). Moreover, (λ + c n (α, θ))h − (Θ (n) + θΞ (n) ν )h = g, so (λ − A α,θν )Φ h = Φ g . This implies that the range of λ − A α,θν contains F 1 , which is dense in C(M 1 (E)). The rest of the proof is the same as that of Theorem 3.4 in [10].
For simplicity, we call the Markov process governed by A α,m in the sense of Proposition 3.1 the A α,m -process. This process is a natural generalization of the process generated by (1.3) in the following sense. Suppose that E consists of two points, say r 1 and r 2 , set m = c 1 δ r 1 + c 2 δ r 2 , and let {X(t) : t ≥ 0} be the process generated by (1.3). Then, verifying the identity A α,m Φ(µ) = A α G(x) for µ = xδ r 1 + (1 − x)δ r 2 and Φ(µ) = G(x), we see that the process {X(t)δ r 1 + (1 − X(t))δ r 2 : t ≥ 0} defines an A α,m -process. We note that [13] discusses the case where E = [0, 1] and m = cδ 0 for some c > 0.
We could also establish the well-posedness of the martingale problem for A α,m by modifying some existing arguments. More precisely, the existence could be shown through a limit theorem for suitably generalized Moran particle systems by modifying those considered in the proof of Theorem 2.1 [especially (2.2)] of [14], which took account of the jump mechanism describing simultaneous reproduction (sampling) only, so that simultaneous movement (mutation) of particles to a random location (type) distributed according to m(dr)/m(E) is allowed. The uniqueness would follow by the duality argument employing a function-valued process as in the proof of Theorem 2.1 of [14]. Its possible transitions and the associated transition rates are found in (3.2). The duality would be useful in discussing (weak) ergodicity of the A α,m -process. (See, e.g., Theorem 5.2 in [10] for such a result in the Fleming-Viot process case.) The following argument is based primarily on the relationship between the A α,m -process and a suitable MBI-process, which takes values in M(E). More precisely, the generator, say L α,m , of the latter will be chosen so that the factorization (3.3) holds for some constant C > 0, where Ψ(η) = Φ(η(E) −1 η) and Φ is in the linear span F 0 of functions of the form µ → µ, f 1 · · · µ, f n with f i ∈ C(E), i = 1, . . . , n and n being a positive integer. In the case of the Fleming-Viot process (which corresponds to α = 1 formally), such a relation is well known. For instance, it played a key role in [20]. As for the generalized Fleming-Viot process, factorizations of the form (3.3) have been shown in [3] for m = 0 (the null measure) and in [13] for degenerate measures m. From now on, suppose that m ∈ M(E) • . To exploit (3.3) in the study of stationary distributions, we further require the MBI-process associated with L α,m to be ergodic, that is, to have a unique stationary distribution, say Q α,m , supported on M(E) • . Once these requirements are fulfilled, (3.3) suggests that (3.4) would give a stationary distribution of the A α,m -process provided that η(E) −α is integrable with respect to Q α,m . This conditional answer may be modified to be a general one, which must be consistent with the one-dimensional result (1.4).
To describe the answer, we need both the α-stable random measure with parameter measure m and the Dirichlet random measure with parameter measure m, whose laws on M(E) • and M 1 (E) are denoted by Q α,m and D m , respectively. These infinite-dimensional laws are determined uniquely by the identities (3.5) and (3.6), where f ∈ B + (E) is arbitrary. A random measure with law Q α,m is constructed from a Poisson random measure on (0, ∞) × E. (See also Definition 6 in [22].) Observe from (3.5) that E Qα,m [η(E) −α ] = 1/(m(E)Γ(α + 1)). As in [11], D m is defined originally to be the law of a random measure whose arbitrary finite-dimensional distributions are Dirichlet distributions with parameters specified by m. The useful identity (3.6) is due to [4] and reduces to (2.4) in one dimension. We now state the main result of this paper.
Theorem 3.2. For any m ∈ M(E) • , the A α,m -process has a unique stationary distribution, which is identified with the probability measure P α,m obtained from the normalized stable random measures by biasing with Γ(α + 1)η(E) −α and mixing the parameter measure by D m . To illustrate, consider the trivial case where m = θδ r for some θ > 0 and r ∈ E. Then it is verified easily that P α,m concentrates at δ r ∈ M 1 (E), and this is consistent with the equality A α,m Φ(δ r ) = 0 in that case. Also, for every m ∈ M(E) • , we note that P α,m → D m as α ↑ 1 since by (3.5) Q α,µ converges weakly to the delta distribution at µ for each µ ∈ M 1 (E).
The proof of Theorem 3.2 will be divided into three steps. As mentioned earlier, we first find an ergodic MBI-process whose generator satisfies (3.3) and show, under a necessary integrability condition, that P α,m in (3.4) gives a stationary distribution of the A α,m -process. [In fact, the condition will turn out to be that m(E) > 1. This motivates us to make a reparametrization m =: θν with θ > 0 and ν ∈ M 1 (E).] Second, for each ν ∈ M 1 (E), we prove that the measure of (3.4) coincides with P α,θν for any θ > 1. As the last step, we extend stationarity of P α,θν with respect to A α,θν to all θ > 0 by interpreting the condition of stationarity as certain recursion equations among moment measures which are seen to be real analytic in θ > 0. Also, the recursion equations will be shown to yield uniqueness of the stationary distribution.
For the first step, we prove in the next proposition that the MBI-process with the generator L α,m of (3.8) is the desired one. Here Ψ is in the class F of functions of the form η → F ( η, f 1 , . . . , η, f n ) for some F ∈ C 2 b (R n ), f i ∈ C(E) and a positive integer n, and δΨ/δη(r) = (d/dε)Ψ(η + εδ r )| ε=0 . Up to this first-order differential term, the operator (3.8) for E = [0, 1] and m = cδ 0 with c > 0 is the same as the one discussed in Lemma 5.5 of [13], in which the factorization (3.3) has been proved. Thus, our main observation in the next proposition is that, keeping the validity of (3.3), such an extra term yields the ergodicity. Note that the generator (3.8) is a special case of the one discussed in Chapter 9 of [17]. [See (9.25) combined with (7.12) there for an expression of the generator.] In particular, a unique solution to the martingale problem for L α,m defines an M(E)-valued Markov process, which henceforth we call the L α,m -process. Intuitively, because of the absence of the "motion process", the law of this process is considered as a continuum convolution of the continuous-state branching processes with immigration (CBI-processes) studied in [15]. [See (3.11) below.] In addition, Example 1.1 and Theorem 2.3 in [15] concern the one-dimensional version of the L α,m -process without the drift. The latter proved that the offspring distribution and the distribution associated with immigration of the approximating branching processes may have probability generating functions of the form s + c(1 − s) α+1 and 1 − d(1 − s) α , respectively. A random measure with law Q α,m may be called a Linnik random measure since it is an infinite-dimensional analogue of a random variable with the (nonsymmetric) Linnik distribution, whose Laplace transform appeared already in (2.14). It is obtained by subordinating an α-stable subordinator by a gamma process. (See, e.g., Example 30.8 in [19].) Namely, letting {Y α (t) : t ≥ 0} and {γ(t) : t ≥ 0} be independent Lévy processes given, respectively, by an α-stable subordinator and a gamma subordinator, one obtains the Linnik law by subordination, and (3.9) follows. Equation (3.9) clearly shows an analogous structure underlying, with G m the law of the standard gamma process on (E, m). (See Definition 5 in [22].) It is also obvious from (3.9) that, as α ↑ 1, Q α,m converges to G m . In addition, one can see that for "nice" functions Ψ the generator takes a second-order form L m , where δ 2 Ψ/δη 2 (r) = (d 2 /dε 2 )Ψ(η + εδ r )| ε=0 . This is a special case of the generator of MBI-processes discussed in Section 3 of [21]. It has been proved there that G m is a reversible stationary distribution of the process associated with L m .
Proof of Proposition 3.3. As already remarked, if the term −α −1 η, δΨ/δη in (3.8) were absent, (3.3) could be shown by essentially the same calculations as in the proof of Lemma 17 in [13]. [In fact, the change of variable z =: η(E)u/(1 − u) in the integrals with respect to dz in (3.8) almost suffices for our purpose.] So, for the proof of (3.3), we only need to observe that η, δΨ/δη = 0 for Ψ of the form Ψ(η) = Φ(η(E) −1 η) with Φ ∈ F 0 . But this is readily done by giving a specific form of Φ. Indeed, for Φ(µ) = µ, f 1 · · · µ, f n the function Ψ takes the form Ψ(η) = η, f 1 · · · η, f n η, 1 −n , and the claim follows since, after integrating with respect to η(dr), the numerator on the right-hand side vanishes. The argument regarding ergodicity is based on a well-known formula for Laplace functionals of transition functions. (See (9.18) in [17] for a much more general case than ours.) To write it down, we need only auxiliary functions called the Ψ-semigroup [15] because there is no "motion process". These functions form a one-parameter family {ψ(t, ·)} t≥0 of nonnegative functions on [0, ∞) and are determined by the defining equation of the Ψ-semigroup, with λ ≥ 0 being arbitrary. An explicit expression is found in Example 3.1 of [17]. Let {η t : t ≥ 0} be an L α,m -process, and for each η ∈ M(E) denote by E η the expectation with respect to {η t : t ≥ 0} starting at η. Then for any f ∈ B + (E) and t ≥ 0 the Laplace functional is expressed through V t f (r) = ψ(t, f (r)). As t → ∞ the right-hand side converges to a limit identified, via (3.9), with the Laplace functional of Q α,m . This shows the ergodicity required and completes the proof.
Moreover, the distribution defined in (3.12) is a stationary distribution of the $A_{\alpha,m}$-process.
Proof. The first assertion is shown by using $t^{-\alpha} = \Gamma(\alpha)^{-1} \int_0^\infty dv\, v^{\alpha-1} e^{-vt}$ ($t > 0$) and (3.9) with $f \equiv v$. Indeed, these equalities together with Fubini's theorem yield the claim. As in the one-dimensional case, Theorem 9.17 in Chapter 4 of [9] reduces the proof of stationarity of (3.12) with respect to $A_{\alpha,m}$ to showing (3.13) for any $\Phi$ of the form $\Phi(\mu) = \langle\mu, f_1\rangle \cdots \langle\mu, f_n\rangle$ with $f_i \in C(E)$ and $n$ being a positive integer. Without any loss of generality, we can assume that $0 \le f_i(x) \le 1$ for any $x \in E$ and $i = 1, \dots, n$. Furthermore, we only have to consider the case where $f_1 = \cdots = f_n =: f$, because the coefficient of the monomial $t_1 \cdots t_n$ in $\langle\mu, t_1 f_1 + \cdots + t_n f_n\rangle^n$ equals $n! \langle\mu, f_1\rangle \cdots \langle\mu, f_n\rangle$. Thus, we let $\Phi(\mu) = \langle\mu, f\rangle^n$ with $0 \le f(x) \le 1$ for any $x \in E$. Because of the basic relation (3.3) and (3.12) together, (3.13) can be rewritten as (3.14), where $\Psi(\eta) = \langle\eta, f\rangle^n \langle\eta, 1\rangle^{-n}$. The main difficulty comes from the fact that $\Psi$ does not belong to $\mathcal{F}$. For each $\varepsilon > 0$, introduce $\Psi_\varepsilon(\eta) := \langle\eta, f\rangle^n (\langle\eta, 1\rangle + \varepsilon)^{-n}$ and observe that $\Psi_\varepsilon \in \mathcal{F}$. Thanks to Proposition 3.3, we then have (3.14) with $\Psi_\varepsilon$ in place of $\Psi$, provided that $L_{\alpha,m}\Psi_\varepsilon$ is bounded. Thus, the proof of (3.14) reduces to showing two assertions, (i) and (ii), involving the operators $L^{(k)}_{\alpha,m}$ ($k \in \{1, 2, 3\}$), which correspond, respectively, to the first, second and last term on the right-hand side of (3.8).
Therefore, calculations analogous to those in (3.18) make it possible to argue as in the case of $L_{\alpha,m}\Psi_\varepsilon$ to verify (i) and (ii) for the corresponding operator. (See, e.g., Section 5 of [22].) We will make use of the identity (3.20). This is a special case of Theorem 4 in [22] and can be shown by a direct computation, from which the two expressions for $I(f)$ coincide, as desired.
Remark. The "semi-explicit" form (3.19) can be explicit if m is a probability measure. More precisely, we have P α,ν = D αν for any ν ∈ M 1 (E). Indeed, observe that by (3.21) with m = ν | 2019-04-11T20:27:05.855Z | 2012-05-01T00:00:00.000 | {
"year": 2012,
"sha1": "13bdc3102ff332bbc40c89205f6e26d1f54e4f8b",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.1214/12-aop829",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "7b3ad0f20e231e4b971bc4af3c439c38ca597c98",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
265074327 | pes2o/s2orc | v3-fos-license | Combinatorial Maps, a New Framework to Model Agroforestry Systems
Agroforestry systems are complex due to the diverse interactions between their elements, and they develop over several decades. Existing numerical models focus either on the structure or on the functions of agroforestry systems. However, both of these aspects are necessary, as function influences structure and vice versa. Here, we present a representation of agroforestry systems based on combinatorial maps (a type of multidimensional graph) that allows conceptualizing the structure-function relationship at the agroecosystem scale. We show that such a model can represent the structure of agroforestry systems at multiple scales and its evolution through time. We propose an implementation of this framework, coded in Python, which is available on GitHub. In the future, this framework could be coupled with knowledge-based or biophysical simulation models to predict the production of ecosystem services. The code can also be integrated into visualization tools. Combinatorial maps seem promising to provide a unifying and generic description of agroforestry systems, including their structure, functions, and dynamics, with the possibility to translate to and from other representations.
Introduction

Agroforestry
Agroforestry systems (AFS) are composed of a mixture of trees, crops, and/or animals [1] providing many ecosystem services beneficial to humans [2]. These systems are complex due to the high level of biodiversity that they contain (both planted and spontaneous) and the interactions between the different species. Since interactions between species act locally, AFS are characterized by a high level of spatial heterogeneity, and the spatial arrangement of the components of the system is a crucial driver of its functioning [3]. Therefore, the spatial design of these systems determines the production of ecosystem services [4]. Furthermore, AFS develop over a long period due to the slow growth of trees. AFS evolve through time due to
• internal dynamic processes (tree growth is an example, which results in increased shade)
• farmers' management, which can have an impact on the system's structure (for instance tree thinning).
Some choices made at plantation time (such as tree row distance and orientation) have consequences over the whole lifetime of the system. Conversely, some aspects of the management of these systems must be adaptive over time. In fact, as the trees grow and the shade becomes more pronounced, the cultivated species and management change.
Models: From crop models to FSPM
Due to the diversity of possible combinations between tree, crop, and animal species and the associated management, and to the long development time of AFS, experimental evidence is scarce to understand the functioning, measure the performance, and optimize the design of AFS. Simulation models would be of great help in this respect. Plant modeling has a long history, starting in the 1960s [5,6], aiming to predict crop yields in relation to environmental conditions. Such models, designed at the crop level (and thus so-called crop models), became mature in the late 1990s [7] and have since been used to evaluate cropping systems, including systems based on diversified rotations [8]. However, these models fail to explain within-field variability, cannot be employed when secondary growth must be considered (trees), and are not adapted to crop mixtures beyond simple 2-species mixtures [9,10].
In these cases, structural aspects should be considered, downscaling the model to the individual plant level. This can be effective using functional-structural plant models (FSPM), whose development started in the late 1990s [11]. FSPM rely on the strong mutual links between the plant architecture (related to structure) and the ecophysiological processes (related to functions and specifically production) [12]. FSPM are often used to model and visualize plants [13] but also to predict the production of ecosystem services, either one by one [14] or several at a time [15]. FSPM can be used as a decision support tool [16] to help farmers make decisions to optimize crop performance. However, FSPM often face the drawback of complexity in their calibration and validation, thus limiting their use in agroecology, which relies on species diversification. Subject to specific assumptions, it is possible to use an approach based on cohorts to overcome this limitation, as was done in the Greenlab model [17], thus bridging the scale from the individual plant to the crop scale [18]. Nevertheless, FSPM fail to give a conceptual modeling framework for AFS, since traditionally they were not able to address different species simultaneously. Recently, they have been used to optimize mixtures of several plants [19], but these works remain limited in scope and genericity. Agroforestry models have rarely used an FSPM approach [20]. When they did, they focused on individual plant architecture (for instance, to define tree structural plasticity from local environmental conditions [21,22]) and not on the spatial organization of the system itself.
Limits of existing models of the structure of AFS
Existing agroforestry models (both simulation models and models aiming at visualization) have used a variety of ways to represent the structure of AFS. In the most abstract representations, tree positions are not explicitly represented, but only the topology, i.e., the adjacency relationships between the elements composing the agroforestry system. This results in graph representations of AFS [23]. Graphs are built from nodes, representing the elements composing the system, and directional edges linking them, representing node relations such as adjacency. Graphs offer a structural description of composition and adjacency, as presented in Fig. 1, and can also be mobilized to represent the dynamics of the system. However, contrary to FSPM, they do not emphasize nor take advantage of the strong link between structure and function. An intermediate approach could be to associate interactions between elements (including ecosystem services) with the edges between the nodes. Then, simple algorithms can be applied to model the evolution of the plot [24] or the ecosystem services [25]. However, this representation is not sufficient to make the spatial arrangement of an agroforestry system explicit. For example, Fig. 1A to C show that 3 different plots may share the same underlying graph (Fig. 1D).
One possible solution to this problem is to make explicit the distances between the elements of the system and their relative positions, for a representative part of the system, i.e., to identify the Ecosystem Services Spatial Unit. The Ecosystem Services Spatial Unit is the smallest spatial unit encompassing all the interacting species and other functional components that together provide a specified set of ecosystem services in a farming landscape [26]. This is the option chosen in WaNulCas [27] and Hi-sAFe [28]. They simulate the functioning of a simple pattern (3 trees maximum for WaNulCas; any number, but the smallest possible for computational costs, for Hi-sAFe), which can then be reproduced by defining periodic boundary conditions to simulate an infinite space and avoid edge effects. Then, the spatial heterogeneity of AFS is modeled thanks to a discretization of space. WaNulCas operates in 2D, using the distances from the tree locations and the depth in soil, while Hi-sAFe operates in 3D, allowing representation of soil layers as well as distance and orientation to the tree. The advantage of these methods is that they reduce the computation cost: instead of simulating the whole plot, only the pattern is simulated. The disadvantage is that these models cannot represent constraints at the plot level, such as the width of the headlands needed to allow farm machinery to turn.
Other agroforestry models focus only on the structure and do not have the capacity to represent the functioning. This is the case with EcoAf [29] or RegenWorks [30], which use a rule-based description. In these cases, the agroforestry system is described by the combination of a pattern (e.g., repetition of lines with a certain orientation and distance between lines, with a succession of trees along the lines) and rules (e.g., "the first line must be at least 20 m from the plot boundary" to allow for machine maneuvers, or "no line should be shorter than 10 m" to avoid having trees in the corners of the plot). The pattern and rules can then be applied to any georeferenced plot to generate an instantiation of an agroforestry system, obtaining the geographical coordinates of trees to be used when planting in the field. The advantage of this method is that it takes into account some of the constraints at the plot level, but it does not simulate the links between structure and function. For example, in EcoAF, tree growth is simulated by simple look-up tables that predict the size of a tree as a function of its age, without taking its environment into account. Furthermore, rule-based models are not able to represent irregular systems such as the scattered trees found in traditional AFS, or even in modern AFS, when tree mortality creates more or less random gaps in the initial pattern after a few years.
To overcome this problem and to represent an agroforestry system accurately, models can use the coordinates of each individual tree, either with local coordinates [31] or with a geographic coordinate system. To date, there are few examples applying formal approaches from the Geographic Information System (GIS) world to agroforestry system design. A formalism can be derived from a dedicated language, as proposed by Degenne et al. [32], and can also be implemented as graphs in the generic Ocelet platform [33]. This way of representing AFS is close to field reality, but simulating the functioning of AFS can become very expensive in terms of computational time in the case of large plots. Moreover, models using this representation of space often rely on basic relationships to simulate functions, such as a linear relationship between tree size and age, as shown in Agroforestryx [34] or ShadeMotion [35].
Table 1 compares these different spatial representations in terms of complexity, spatial, and time criteria. Not surprisingly, there is a trade-off between realism in the spatial representation of the agroforestry system and versatility, i.e., the ability to be applied to different plots/AFS with minimal additional effort. There seems to be an opposition between the approaches focusing on the structure of AFS (GIS-based, rule-based) and the ones focusing on the functioning of AFS: realistic representations lack many features that would be desirable for the simulation of agronomic performance and ecosystem service production, such as the representation of interactions between species. Technical performance, such as computation cost, is also considered. Another gap is the interaction with the outside world: in spatial-unit-based representations, the outside world does not exist because the pattern is replicated indefinitely. In rule-based representations, the outside is just represented by constraints linked with the shape of the plot, and in GIS-based representations, the outside world is not explicitly represented or not considered.
Finally, AFS and the ecosystem services they provide evolve over the seasons and years. Thus, it is important that the spatial representation of AFS allows representing the evolution of the structure through time, a feature at which the existing representations of AFS are not very good.
In summary, the use of graphs shows wider benefits than other approaches, and our hypothesis was that the identified drawbacks can be overcome:
• The ability to take into account constraints at plot scale may be achieved using a hierarchical multiscale graph definition able to consider the system as a whole in interaction with the outside.
• The ability to represent understory vegetation strips can be addressed using a classical multilayer approach, or even better, using a 3-dimensional graph approach representing system components as solids and thus offering surface adjacencies as exchange areas.
• The realism of the representation can be improved by refining the geometry of the components, not only on the nodes but also on the edges, allowing representations of fluxes and interfaces.
Among the various graph class formalisms, combinatorial maps [36] meet these requirements.
In this paper, we introduce a new framework, inspired by FSPM approaches, to model AFS. This framework combines the versatility and abstracting power of combinatorial maps with the spatial realism of coordinate-based representations.
Materials and Methods
In this section, we first introduce the combinatorial map concepts required to describe the systems, and then introduce the dual of a combinatorial map, which is of particular interest for modeling exchanges between the system components.
Combinatorial maps [37,38] are a theoretical framework that owes its origin to the modeling of the evolution of surfaces, applied to leaf growth simulation [39], and which has been mainly exploited in computer-aided design, mostly for efficient adaptive meshing. We propose to explore this framework to model AFS. The main motivation for choosing this framework is that it relies on a strong theoretical background based on graph theory. A graph is composed of nodes linked by directional edges. This background ensures the possibility to add or delete nodes, which entails changes to the edges and faces, in a clean and robust manner, mechanically propagating to the attributes. This is important for the representation of AFS, to capture both the dynamics of ecosystem service provision within the year and the dynamics of the system's structure across years.
Combinatorial maps
A combinatorial map of dimension n can be seen as a generalization of a graph defined in n dimensions, adapted to model spatial subdivisions. Here, we will work in 2 dimensions.
A combinatorial map of dimension 2 is composed of a limited set of darts (Fig. 2A) and 2 operators defined on the set of darts. The following paragraphs explain in more detail the terms and operations relevant to combinatorial maps. A dart links 2 nodes and is oriented. A dart can also carry the relationship between its origin and its end (Fig. 2A).
The nodes, usually connected to spatial points, are defined as the starting (or ending) points of the darts. In our application, a node will represent a plant, a tree, or a tight group of plants. The mathematical definition will be given below, because we first need to define other elements.
The faces are the areas surrounded by a closed list of consecutive darts (therefore, dart orientation matters).
The consecutive darts are defined with the permutation operator β1. It finds, around the starting node of a given dart, the dart that has the smallest angle (given an orientation convention). For example, in Fig. 2C, β1(1) gives 2.
To create a conventional edge that is not oriented (Fig. 2B), we need 2 darts linking the same nodes but with opposite directions. To do this, we need to define an involution, written β2. It outputs the dart opposite a given dart. Therefore, for any dart d, β2(β2(d)) = d and, in Fig. 2D, β2(2) = 4 (and β2(4) = 2).
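To make these two operators concrete, here is a minimal Python sketch (not the authors' GitHub code; the dictionary-based storage and the dart ordering chosen for the second face of Fig. 2D are our assumptions):

```python
# Minimal 2D combinatorial map: darts are integer ids,
# beta1 is a permutation (next dart along a face border),
# beta2 is an involution (opposite dart of the same edge).
class CombinatorialMap2D:
    def __init__(self):
        self.beta1 = {}  # dart -> next dart along the face border
        self.beta2 = {}  # dart -> opposite dart

    def add_edge(self, d, opp):
        """Declare two opposite darts forming one undirected edge."""
        self.beta2[d] = opp
        self.beta2[opp] = d

    def is_involution(self):
        """beta2(beta2(d)) == d must hold for every registered dart."""
        return all(self.beta2[self.beta2[d]] == d for d in self.beta2)

# The two triangles of Fig. 2D (the ordering of F2's darts is assumed):
m = CombinatorialMap2D()
m.beta1 = {1: 2, 2: 3, 3: 1, 4: 5, 5: 6, 6: 4}
m.add_edge(2, 4)           # beta2(2) = 4 and beta2(4) = 2, as in the text
assert m.is_involution()
```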
A compound sequence of permutation(s) and involution(s), which allows one to traverse the map or a portion of the map from a specific dart d, defines an orbit of d. Orbits therefore define generic functions as a list of k composed operators. Mathematically, an orbit of a given dart d can be written as (βi1 ∘ βi2 ∘ … ∘ βik)(d), with ik = 1 or 2. For example, in Fig. 2C, the orbit drawing a face from dart 2 is the sequence of repeated β1 permutations, noted β1*, until reaching dart 2 again; the sequence is thus 2, β1(2) = 3, β1(3) = 1, β1(1) = 2, and its compound function is (β1 ∘ β1 ∘ β1). This specific orbit, applying β1 until reaching the initial dart d, allows us to obtain the face issued from dart d; we call it "orbitFace". This function can then also be used in orbit sequences to visit several faces and, in particular, the full map. For example, a way to explore the full map drawn in Fig. 2D starting from dart 2 can be the sequence: orbitFace(2), then β2(2) and orbitFace(4). It may also be written as orbitFace(β2(orbitFace(2))).
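The two orbits used throughout the paper can be sketched as follows (a hedged illustration building on the class above; the (β1 ∘ β2)* composition for orbitNode follows Table 2):

```python
def orbit_face(cmap, d):
    """'orbitFace': follow beta1 from d until d is reached again."""
    cycle = [d]
    cur = cmap.beta1[d]
    while cur != d:
        cycle.append(cur)
        cur = cmap.beta1[cur]
    return cycle

def orbit_node(cmap, d):
    """'orbitNode': iterate (beta1 o beta2) until back at d.
    Requires beta2 to be total, i.e., the external border is explicit."""
    cycle = [d]
    cur = cmap.beta1[cmap.beta2[d]]
    while cur != d:
        cycle.append(cur)
        cur = cmap.beta1[cmap.beta2[cur]]
    return cycle

print(orbit_face(m, 2))   # [2, 3, 1]: the triangular face containing dart 2
```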
In this example, we see that some darts (such as dart 5) may have no image under the involution: this is the case for each dart lying on the external border of the system. We found it clearer to constrain the combinatorial map to make this external border explicit. Indeed, we consider that the adjacency of an element with the outside of the system is of interest in the case of AFS. In Fig. 2D, this leads to defining the outside face by introducing 4 new darts: 7, 8, 9, and 10 (see Fig. 3). Mathematically, for each dart d, β2(d) belongs to the set of darts of the map, and the orbitNode function applies to any dart until reaching it again. With this constraint, orbitFace and orbitNode define cyclic operators.
Using combinatorial maps allows us to add or delete darts within a formal framework. If we delete an edge to merge the 2 triangle faces from Fig. 2D, all darts must be updated (orientation, link to the next dart) to maintain the map's consistency. This possibility is very important for scene design and will be detailed in the discussion section.
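As a hedged illustration of such an update (a sketch, not the authors' implementation; it assumes the two darts border two distinct, non-degenerate faces), deleting an edge amounts to relinking the β1 predecessors of the two opposite darts:

```python
def remove_edge(cmap, d):
    """Remove the edge {d, beta2(d)} and merge its two incident faces."""
    e = cmap.beta2[d]
    # beta1 is a permutation, so it has a well-defined inverse.
    pred = {nxt: cur for cur, nxt in cmap.beta1.items()}
    pd, pe = pred[d], pred[e]
    cmap.beta1[pd] = cmap.beta1[e]   # jump from face 1 into face 2's cycle
    cmap.beta1[pe] = cmap.beta1[d]   # and back, closing one merged cycle
    for dart in (d, e):
        del cmap.beta1[dart]
        del cmap.beta2[dart]

remove_edge(m, 2)                    # merge the two triangles of Fig. 2D
print(orbit_face(m, 1))              # one quadrilateral face: [1, 5, 6, 3]
```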
Dual representation
Graph theory shows that one can construct a dual graph [40] from the original graph, called the primal. This applies to a combinatorial map and its dual representation, which is also a combinatorial map. It is interesting since it allows us to make higher-level adjacency relations explicit automatically. Briefly, to construct the dual, each face becomes a node and each dart links the node corresponding to its face in the primal to the node of its β2's face. An example is given in Fig. 4A, showing a combinatorial map composed of 3 faces (F1, F2, and F3), and its dual (Fig. 4B). In the dual, F1, F2, and F3 become nodes and, for example, edges 3 and 5 between F1 and F2 in Fig. 4A become edges connecting nodes F1 and F2. Note that the β1 and β2 operators are still the same in the dual.
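The construction just described can be sketched in a few lines (reusing orbit_face from above; labeling each face by its first-visited dart is our own convention, and the sketch only builds the dual's adjacency, not its β1 ordering):

```python
def build_dual_arcs(cmap):
    """Label each dart with its face, then map every primal dart d to
    an oriented dual arc face(d) -> face(beta2(d))."""
    face_of = {}
    for d in cmap.beta1:
        if d not in face_of:
            for x in orbit_face(cmap, d):
                face_of[x] = d       # the first-visited dart names the face
    return {d: (face_of[d], face_of[cmap.beta2[d]]) for d in cmap.beta1}
```

Each pair of opposite darts thus yields two opposite dual arcs, which is why β2 keeps its meaning in the dual.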
To illustrate the contribution of combinatorial maps, we apply them to an agroforestry context in the following Results section, and we detail our implementation.
Application of combinatorial maps to AFS: A simple example
Starting from a simple example, we illustrate here the underlying concepts and operators of its combinatorial map representation and introduce their respective conceptual meaning in our application to agroforestry system modeling. Let us consider a simple agroforestry system composed of 2 tree lines, each planted on an understory vegetation strip, and 1 crop alley between them, as presented in Fig. 5. Let us also suppose that each tree line is composed of 3 trees. Its combinatorial map representation is shown in Fig. 5B in red. Each area (the 2 strips, the trees inside the strips, and the crop alley) is modeled by a simple rectangular face. Note that a face (such as a tree line) may contain other faces (trees). This is represented by a hole, a concept natively present in combinatorial map theory that implies a hierarchical relationship between faces. In a tree line, we represent a tree as a face inside the strip face. At the highest level, the outside face (in red in Fig. 5B) encapsulates the plot composed of 3 faces (2 tree lines and 1 crop alley). An edge is an interface between areas and corresponds to the border either between the crop and a tree line or between the plot and the outside of the plot.
In its dual map (the blue arrows in Fig. 5B), the agroforestry areas (the faces, including the outside face) become nodes, and the edges now represent the interactions between areas. Thus, in this dual representation, the permutation β1 can be used to model exchange between different areas. The meaning (the interpretation in our application domain) of the different theoretical elements composing a combinatorial map, its dual, and the key operators is presented in Table 2: the first 3 lines concern the combinatorial map's conceptual components (darts and operators), while the following lines consider orbits, that is, compound operators.
Usefulness of combinatorial maps to represent the functioning of AFS
In complex systems, making sure that a model is consistent (in particular, that modifications in one part of the model do not induce undesired consequences in another part) is challenging due to the numerous elements and all their possible interactions. The combinatorial map framework brings a straightforward answer to this question: the list of all aspects that must be considered is simply the list of all cycles in the map and its dual. Indeed, the main interest of using the formalism of combinatorial maps is the proper identification of adjacency relations at the n-dimensional levels. This is trivial on the direct map but is also true on the dual. Depending on the hierarchical level of a given cycle, different data exchange procedures or functions (in the programming sense) can be applied to compute different functions (in the ecological sense) of the element.
The lowest cycle is the β2 operation, since β2 represents a local relationship between 2 components. This relationship may carry local interactions between these components: specific modeling and data sharing can be handled at this level without the need to query information from the rest of the map. The orbitFace function defines a cycle bordering the element. This function can then carry the endogenous evolution of the element; it may include the evolution of embedded faces if there are any. In this context, modeling approaches can be defined on the full face, and no specific data management has to be defined regarding other components of the system.
Conversely, the orbitNode function defines a local cycle around all elements sharing a local adjacency. While the previous functions are mainly tools to move within maps, this function is of key interest for data sharing and exchange. Therefore, when modeling this aspect, it is necessary to use models of higher complexity, involving all the local interactions, but their relevance must be considered. We can carry out a similar analysis on the dual map. The dual graph explains which components interact mutually in the system and with the outside, and which are considered as simply embedded. The major interest arises from the fact that, on the dual, the adjacencies relate to a global instead of a local scale, since faces (i.e., elements) are now considered as nodes. At the lower scale, a dart represents an oriented relation between 2 elements, and the involution β2 applies to its inverse relation. Thus, a dart indicates what comes out of a node, while β2 indicates what comes into the node. At the dart level, we can instantiate global information exchanges/models in a nonsymmetric manner, since darts are oriented. The orbitFace now defines a cycle between the different nodes (elements) and thus defines a framework to address the specific ecosystem services supported by these elements. The orbitNode of an element is also of interest. Indeed, we can consider the global output and input of the agroforestry element, and their balance. In an agroforestry context, it groups all inputs and outputs of an area.
Combinatorial map transformation

AFS and ecosystem services change over seasons and years in 2 possible ways:
• The structure of the system does not change, but the internal mechanisms evolve (i.e., ecosystem services such as microclimate buffering start to be produced as trees grow).
• The structure of the system is modified (addition or removal of plants, e.g., through tree thinning to keep only the best individuals after a few years of growth).
We can simulate the first type of change by modifying the attributes of darts. This operation is trivial in terms of code implementation and poses no risk of unforeseeable repercussions on the system's structure. The second corresponds to the addition (or deletion) of a node. These operations are well known in combinatorial maps [37], respectively called sewing and cutting functions, and are defined on all the dimensions of the map simultaneously. Concretely, such an operation updates all the darts and their respective permutation and involution. Surprisingly, in the example presented in "Application of combinatorial maps to AFS: A simple example", the trees and the crop are not connected: tree-crop interaction can only be induced through the strip component. Indeed, the trees are still small in this case, and their representation in the strips is a small face. As trees grow and the projection of their crown overlaps with the cropped alley, the faces representing the trees will also expand and infringe on the crop face. Thus, the crop face subdivides into smaller faces corresponding to a mixing zone composed of crops overlapped by the canopy (in red in Fig. 6). When updating the corresponding map, new local relationships will appear, with dart adjacencies between the concerned tree borders and the crop border. Conversely, new global relationships will appear in the dual map between each concerned tree and the crop (in blue in Fig. 6).
Higher-level operations: Pattern matching
Numerous operators that have been developed in graph theory can be very useful in an agroforestry context. Here, we detail pattern matching.
During the agroforestry design process, actors may want to know whether their system already exists somewhere else, or whether a specific ecosystem service is present inside the system. To answer these questions, we must compare 2 (sub)systems, a task for which many pattern matching methods exist in graph theory [36], both for exact and approximate matching requests. In the example in Fig. 7, we search for the structure that supports a biotic interaction (e.g., regulation of pests by natural enemies) between an alley cropping and 2 tree lines serving as overwintering habitats for the natural enemies (Fig. 7A). In this case, we search for an exact match between the substructure and the whole plot map. As can be seen in Fig. 7B, this pattern is present once. In addition, the pattern matching can use the darts' information, so we can instantiate constraint-based attributes like "the crop area must lie between two tree lines". Note that the pattern matching can also apply to properties such as distances computed from the faces' coordinates; "the width of the alley cropping must be a multiple of the width of the farm machinery" is such a constraint.
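A naive exact matcher over combinatorial maps can be sketched as below (our own illustration, not the paper's algorithm; dart attributes are assumed to be stored in plain dictionaries types_p/types_t, and a match must preserve β1, β2, and attributes):

```python
def matches_at(pattern, target, p0, t0, types_p, types_t):
    """Try to map the pattern map, rooted at dart p0, onto the target map
    rooted at t0, preserving beta1/beta2 links and dart attributes."""
    corr = {p0: t0}                      # pattern dart -> target dart
    stack = [p0]
    while stack:
        p = stack.pop()
        t = corr[p]
        if types_p.get(p) != types_t.get(t):
            return False                 # attribute mismatch (e.g. crop vs tree)
        for beta_p, beta_t in ((pattern.beta1, target.beta1),
                               (pattern.beta2, target.beta2)):
            if p in beta_p:
                q, u = beta_p[p], beta_t.get(t)
                if u is None:
                    return False         # link missing in the target
                if q in corr and corr[q] != u:
                    return False         # inconsistent correspondence
                if q not in corr:
                    corr[q] = u
                    stack.append(q)
    return True

def find_pattern(pattern, target, p0, types_p, types_t):
    """Return every target dart at which the pattern matches."""
    return [t0 for t0 in target.beta1
            if matches_at(pattern, target, p0, t0, types_p, types_t)]
```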
Implementation
Our implementation of the model (available at https://github.com/agroforestar/carte_combinatoire) is still under development. The prototype rests on 2 key ideas. First, our map is a 2D map based on 2D coordinates, which makes sense in the case of mapping the location of trees in an agricultural plot. Second, our implementation is incremental, meaning that the map is built by sequentially merging more and more complex faces. The system starts from a list of sets of coordinates describing the areas covered by the trees, the crops, and the understory vegetation strips, makes a face for each area, and then merges these individual faces until the whole system is included in the final map. The algorithms for creating and manipulating the maps come from the reference book by Damiand and Lienhardt [37], and we implemented them in Python 3.9. An important point to note is that all objects in the agroforestry system are considered as faces on our map, even trees, which are usually considered as point objects.
We implemented most of the algorithms described by Damiand and Lienhardt [37]. In particular, we wrote the constructor (algorithm 22), the iterators (algorithms 28 to 30 and 34), and the functions to manipulate attributes of combinatorial map classes (algorithms 23 to 27). Then, we wrote the algorithms to create and remove darts (algorithms 35 and 36). We also implemented the algorithms to copy a map (algorithm 37) and to sew darts (algorithm 44). This set of methods allows incrementally constructing the map from the different faces representing the elements in the system. In fact, this implementation was adapted to our context. We construct our class Face as a class inherited from the class nMap (Fig. 8). Faces can have several holes. In the case of a tree line face, a tree is represented by a hole in the strip face, filled with a face for the tree. We described the Face class as a class inheriting from nMap (Fig. 8, left), with its holes. A hole is composed of darts and delimits a space filled with another face. The face definition can thus be considered as a recursive definition, and each level can be related to a specific scale. For instance, the area of a tree line can be considered as an entity at a given level, and its holes, corresponding to the individual tree areas, describe the area in more detail.

Fig. 6. Combinatorial map (in red) and its dual (in blue) of the system represented in Fig. 1, several years later: the trees have grown and their canopy overlaps the crop area. The grown tree areas are now split in 2 parts, on the USV and on the crop, on both the graph and the dual graph (in blue), generated by our application and postprocessed for improved readability.
We introduce the Properties attribute in myDart, a class inherited from Dart (Fig. 8, right). It contains the 2D coordinates, the type of the start node (tree, crop, …), and may hold other properties. With the list of properties on darts, we can simulate exchanges between different areas within the plot and with the outside.
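The class layout described by the UML diagram could look as follows (a hedged sketch: the names Face, nMap, Dart, and myDart come from the paper, but the constructors and attribute details are our assumptions, not the repository code):

```python
class Dart:
    """Base dart, as in the reference book: id plus beta links."""
    def __init__(self, idx):
        self.idx = idx
        self.betas = {}                        # {1: next dart, 2: opposite dart}

class myDart(Dart):
    """Dart extended with the Properties attribute described in the text."""
    def __init__(self, idx, xy, node_type):
        super().__init__(idx)
        self.properties = {"coords": xy,       # 2D coordinates of the start node
                           "type": node_type}  # e.g. "tree", "crop", "strip"

class nMap:
    """A combinatorial map holding its darts."""
    def __init__(self):
        self.darts = {}                        # idx -> Dart

class Face(nMap):
    """A face is itself a map; holes are recursively filled with faces."""
    def __init__(self):
        super().__init__()
        self.holes = []                        # each hole is another Face
```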
Advantages of the proposed framework compared to existing models
Despite the fact that the structure of AFS is of crucial importance to understand, predict, or optimize the production of target ecosystem services [26], to our knowledge, no agroforestry model has used the concept of FSPM to represent the spatial organization at the system scale. In this article, we propose a new framework to describe AFS based on combinatorial maps.
This framework shows a good balance between the important criteria (complexity, capability to capture time and space dynamics) for an efficient spatial representation targeting agroforestry system design (Table 1). Three features of the framework are of particular interest and show improvement over existing representations of AFS.
First, the 2 dual representations inherent to combinatorial maps allow representing not only the structure of the agroforestry system (as existing models already do) but also its functions. Indeed, our framework allows the user to focus alternatively on the areas, which represent the structure of the agroforestry system, and on the interfaces between areas, thus enabling the simulation of the interactions between the elements of the system, which are the basis of ecosystem services. The model automatically ensures the consistency between both representations. This is consistent with the concept of FSPM, where the structure of a system (the plant) is directly linked to its functions. However, further research is needed to fully exploit this dual representation to build a complete Functional-Structural Agroforestry Model approach.
Secondly, combinatorial maps allow a recursive description of the structure of the agroforestry system, so it becomes possible to describe the system in a hierarchical way. At the highest level, the system is described within an outside face, in which the different components are embedded. In the example presented above, at the second level, the system is composed of the areas of the tree lines and of the crop. Each tree line area, in turn, embeds the different tree areas composing the tree line. However, the way the orbits are defined is the same at all levels of the description. This permits a multiscale computation of ecosystem services, which is deemed particularly important to model complex AFS [41]. In particular, the fact that this framework can manage multiple scales allows aggregating individual plants at lower scales and adding a positive or negative impact of the system structure at higher scales. The combinatorial map and its dual define a framework that imposes that the interactions between the system elements must be modeled differently according to the local adjacencies and local relations. It also defines the cases where we need to mobilize models describing endogenous mechanisms or models describing interactions. For example, the fact that adjacencies between elements of the system are explicitly represented in the dual representation facilitates the simulation of local interactions between species. This recursive structure, as well as the possibility of pattern matching, is an asset for predicting ecosystem services, either by reducing computation time (running the relevant models' implementations only on subsets of the whole agroforestry system) or by allowing the use of a database of structure-service relationships.
The third advantage is that the consistency of the agroforestry model is enforced by the combinatorial map structure even when the system is modified. Thus, the description is robust and remains valid when adding or removing elements. Therefore, it can be used to design the representation of a new agroforestry system from scratch, as well as to modify an existing system represented either by geographical coordinates of plants or by a graph where nodes are the plants. This framework is also particularly well suited to represent the intrinsic dynamics of an agroforestry system, in which elements can appear (such as tree planting), disappear (such as tree thinning or tree death), change their properties, or even impact the adjacencies, as is the case when tree growth causes the tree canopy to project beyond the understory vegetation strip.
Limits and perspectives
Although the proposed model fulfills all the criteria that we identified initially, some limits remain. For instance, global relations, such as groups of nonadjacent elements (e.g., tree lines), are not represented directly in the combinatorial map. However, these relationships can be computed from the minimal topological graph [23]; this operation corresponds to the extraction of an agroforestry pattern [26]. Further work will allow full integration of the algorithms to do this into the prototype. There is a need for further research in agroforestry, in particular to better understand and predict the link between species interactions and the production of ecosystem services. Although this link has been highlighted for some time (e.g., [42]), work has only just started to automate the mining of agronomical knowledge in relation to plant traits to predict the production of ecosystem services of different species associations (e.g., [43]), and the tree-service relationships are currently only able to relate services to single tree species, not species associations (e.g., [44]). Furthermore, to our knowledge, no trait-service database or tool takes into account the spatial layout of associated species, despite the fact that its importance has been recognized [26]. Our framework could combine the notions of ecosystem service spatial unit and the relationship between species associations and biotic interactions in order to propose a modeling approach extending the principles of FSPM to the agroecosystem scale. Once we have built a catalog of patterns (i.e., tuples of agroforestry elements) that provide ecosystem services (potentially based on different biophysical models), our framework will allow us to analyze both the structural and functional aspects of an agroforestry system.
Future applications of our framework will be useful to help farmers and advisors design AFS and estimate their evolution through time. Integrating 2D or 3D visualization tools will facilitate the design process through better communication [45] and help farmers to better project themselves and understand the impact of their choices on the system's functioning. For example, we intend to use this framework to develop augmented reality tools that could be used either during agroforestry design workshops, for user-friendly, quick, robust, and interactive ex ante evaluation of an agroforestry system, or after the design phase, to visualize the future aspect of an agroforestry plot in the field.
Conclusion
Modeling AFS is challenging due to the diversity of systems and the diversity of modeling objectives, from visualizing AFS during the design phase to running simulations to predict the functioning of the system. Our framework, inspired by FSPM, uses combinatorial maps to represent AFS. This framework allows focusing either on the agroforestry system structure (with the map representation) or on the system functioning (with its dual representation). The proposed approach ensures the consistency between both representations when the system evolves. Compared with existing representations of AFS, this framework combines the versatility of a graph-based representation with a spatial realism sufficient to represent irregularities in the pattern and the possibility to record geographical coordinates. Thus, our framework might serve as a unifying representation, with the possibility to export to and import from all other representations.
Fig. 1. Three different agroforestry plots (A to C) represented by the same graph (D).
Fig. 2. Examples of combinatorial map objects. (A) A dart, (B) an edge, (C) a face, and (D) 2 faces (F1 and F2) with β2, which sets the adjacency between F1 and F2 and allows one to move from one face to another.
Fig. 5. Example of an agroforestry plot as a combinatorial map. (A) Example of an agroforestry plot with 1 alley of maize crop between 2 lines of trees (credit: National Agroforestry Center). (B) The corresponding combinatorial map in red and its dual in blue, reduced to strips of 4 trees each, generated by our application and postprocessed for improved readability.
Table 2. Combinatorial map elements and their corresponding meaning in an agroforestry context.

| Combinatorial map object | Map interpretation | Example (red, Fig. 5B) | Interpretation in dual | Example (blue, Fig. 5B) |
| Dart | A side of the current element | 1: external right border of the full system | A link from the current element to another | 1: interface from the understory vegetation strip to the crop |
| β1 (permutation) | Pass to the next dart: follow the element border | β1(1) = 2: links to the lower right external border | Go to the next element issued from the current element | β1(1) = 2: move from the strip-crop link to the crop-outside link |
| β2 (involution) | Go to the inverse dart to change face: move to the neighbor face | β2(1) = 9: links the external right border to the line-strip right border | Identify the element that the current one is connected from | β2(11) = 12: the upper left tree is connected to the left tree-line strip |
| Compound operations | | | | |
| OrbitFace | Path through all darts of the same face | (β1 ∘ β1 ∘ β1 ∘ β1)(9) = β1*(9): follows the right tree-line strip starting from its right border (9) | | |
| OrbitNode | Intersection | OrbitNode(16) = (β1 ∘ β2)*(16): defines node B from darts 16, 13, 11, 12, 2, 3 | Area: identifies all relations issuing from an element | Orbit(Crop): relations with the top tree-line strip, the outside, and the bottom tree-line strip |
Fig. 7. Example of pattern search. (A) A combinatorial map of an ecosystem service that helps to regulate insect pests. (B) A combinatorial map of a plot that contains this ecosystem service twice.
Fig. 8. UML class diagram of our implementation of combinatorial maps. In black, attributes already present in the reference book; in red, attributes we added in our application.
Table 1. Qualitative comparison of different types of spatial representations of AFS. | 2023-11-10T16:26:06.688Z | 2023-11-07T00:00:00.000 | {
"year": 2023,
"sha1": "7ad0bd06c5a8cbfa4460a1d880ed24cdeafd16ab",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.34133/plantphenomics.0120",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "57f084433daf0c8276be40abf227b92e39552ad1",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9997174 | pes2o/s2orc | v3-fos-license | Subclavian artery stenosis as a cause of acute coronary syndrome in a patient after coronary artery bypass grafting
We describe the case of a 74-year-old man who suffered from acute coronary syndrome 7 years after coronary artery bypass grafting. The patient underwent angioplasty of the obtuse marginal branch of the left coronary artery from venous graft access, which did not relieve his symptoms. Only angioplasty of the narrowed subclavian artery brought an improvement in the patient's condition. We discuss the clinical significance of narrowing of the subclavian artery in patients after procedures implanting the left internal mammary artery into the coronary artery system.
Introduction
Stenosis of the subclavian artery occurs in 0.5-7.1% of patients in the course of advanced atherosclerosis [1,2]. In most cases it is asymptomatic; however, it may manifest as subclavian steal syndrome. In patients after coronary artery bypass graft (CABG) surgery using the left internal mammary artery (LIMA), proximal narrowing of the left subclavian artery may cause an increase in stenocardial pain and, in extreme cases, may result in acute coronary syndrome. This is exactly the case we would like to describe.
Case report
A 74-year-old male with generalized atherosclerosis, the cause of his ischaemic heart disease, an abdominal aortic and right common iliac artery aneurysm, subacute ischaemia of the left leg with dry necrosis of the 5th toe of the left foot, and also renal failure, arterial hypertension, and permanent atrial fibrillation, was hospitalised due to acute coronary syndrome with elevated troponin values. So far, the coronary disease in this case had manifested as two myocardial infarctions, which took place 25 and 23 years ago and which reduced the left ventricular ejection fraction to about 30%, and as the need to perform CABG 7 years ago. The procedure was performed by implantation of two venous grafts, to the right coronary artery (RCA) and the obtuse marginal branch of the left coronary artery (OM), together with the LIMA to the left anterior descending artery (LAD). On admission, the patient suffered from recurring stenocardial pain. The ECG record did not reveal any diagnostic features due to permanent atrial fibrillation and left bundle branch block.
In view of the whole clinical picture, including troponin values exceeding the normal range 2.5-fold, the decision was made to perform coronary angiography. Due to significant atherosclerotic and ischaemic lesions within the lower limbs, the abdominal aortic aneurysm, and a very weakly palpable pulse in the left radial artery (the difference in systolic pressure between the two limbs was about 20 mmHg, in favor of the right one), the procedure was performed from a puncture of the right radial artery. Advanced lesions within the coronary arteries were observed, with occlusion of the RCA and circumflex artery, next to numerous lesions within the remaining coronary vessels.
Angiography of the venous grafts to the RCA and OM revealed their proper functioning; nevertheless, the native circumflex arteries, the OM in particular, revealed advanced atherosclerotic changes. In light of the angiographic image described above, the patient underwent angioplasty with stent implantation of the OM, using a retrograde technique through the venous bridge (Figure 1). Despite the successful course of the procedure, it was impossible to obtain relief of the recurring stenocardial ailments or normalization of the troponin level. In relation to the above, despite relative contraindications, the decision was made to perform LIMA angiography from a right femoral artery puncture, as the LIMA cannot be evaluated from a right radial artery puncture. The examination revealed tight stenosis of the left subclavian artery within its initial segment. Angioplasty was performed and a stent was implanted, which led to widening of the vessel (Figure 2). The
Discussion
Despite the occasionally observed long lifespan of venous grafts [3], the LIMA is the best bridge used in CABG due to its long lifespan and lack of atherosclerotic lesions. Among other things, this is related to increased nitric oxide production by the LIMA in comparison with other grafts. This activity protects against spasm and limits the development of arteriosclerosis, not only within the thoracic artery but also in the peripheral segment of the coronary artery into which the graft is implanted. Unfortunately, in the case of significant stenosis or even proximal closure of the subclavian artery upstream of the LIMA origin, which is a typical location of the lesion, flow within the LIMA may be impaired or even reversed [4]. This leads to intensified symptoms of subclavian steal syndrome or acute coronary syndrome [5]. Diagnostics of possible stenosis within the subclavian artery are based on arterial pressure measurements, Doppler testing, and classical angiography, or are performed by means of computed tomography or magnetic resonance imaging. A difference in systolic pressure between the upper limbs of at least 15 mmHg is treated as a factor indicating a significant probability of haemodynamically significant stenosis within the subclavian artery [1]. In the case of a classical angiographic examination, it is impossible to perform the procedure from a puncture of the radial artery contralateral to the artery being assessed. The evaluation of the subclavian artery should be performed before the CABG procedure and, in the case of intensified stenocardial ailments or the occurrence of acute coronary syndrome, after a procedure performed with the left and/or right thoracic artery. Angioplasty of the subclavian artery combined with stent implantation is characterized by high efficiency and good long-term results [6-8].
There are no unambiguously accepted criteria concerning the type and dimension of a subclavian artery lesion that requires angioplasty. It seems that the coexistence of ischaemia in the upper limb or in the brain, and, in the case of a LIMA implanted into the coronary circulation, also the symptoms of ischaemic heart disease, constitutes an indication for the procedure. Such clinical symptoms are usually related to stenosis of the subclavian artery of at least 50%. It has been stated that stenosis not exceeding 50% is related to a difference in blood pressure in the brachial arteries that does not exceed 20%. Angioplasty is considered successful when the stenosis is decreased to less than 30%, which in the majority of cases restores the balance of pressure in both upper limbs. Further pressure monitoring is an efficient method of clinical observation for the occurrence of restenosis [9].
The described case was one of the most difficult ones because of the advanced atherosclerotic lesions limiting vascular access, the renal failure forcing us to limit the amount of contrast agent, and the necessity of immediate action related to increasing troponin values. The patient first underwent a coronary angioplasty procedure of the narrowed circumflex artery, performed by means of a technically quite difficult retrograde method through the venous bridge [10], which did not lead to relief of the acute coronary syndrome but probably led to an improvement in cardiac muscle perfusion.
In conclusion, patients should be assessed for narrowing of the subclavian artery before elective CABG and, in the case of an increase in coronary ailments, after a procedure performed with the internal thoracic artery.
Angioplasty of a narrowed subclavian artery is a relatively safe procedure, and it can result in an improvement of the clinical condition of patients after CABG with the thoracic artery who present intensified coronary symptoms.
| 2014-10-01T00:00:00.000Z | 2011-10-01T00:00:00.000 | {
"year": 2011,
"sha1": "647b44061178a6f7ab9222e7c7c0b115462983c6",
"oa_license": "CCBYNCND",
"oa_url": "https://www.termedia.pl/Journal/-19/pdf-17629-10?filename=Subclavian.doc.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "647b44061178a6f7ab9222e7c7c0b115462983c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218755265 | pes2o/s2orc | v3-fos-license | A Possible Antioxidant Role for Vitamin D in Soccer Players: A Retrospective Analysis of Psychophysical Stress Markers in a Professional Team
The health benefits of physical activity are recognized; however, high levels of exercise may lead to metabolic pathway imbalances that could evolve into pathological conditions, like the increased risk of neurological disease observed in professional athletes. We analyzed the plasma/serum levels of 29 athletes from a professional soccer team playing in the Italian first league and tested the levels of psychophysical stress markers (vitamin D, creatine kinase, reactive oxygen species (ROS) and testosterone/cortisol ratio) during a period of 13 months. The testosterone/cortisol ratio was consistent with an appropriate training program. However, most of the athletes showed high levels of creatine kinase and ROS. Despite the large amount of outdoor activity, vitamin D values were often below the sufficiency level and, during the "vitamin D winter", comparable with those of the general population. Interestingly, high vitamin D values seemed to be associated with low levels of ROS. Based on the results of our study, we propose vitamin D supplementation as a general practice for people who perform high levels of physical exercise. Besides the known effect on calcium and phosphate homeostasis, vitamin D supplementation should mitigate the high reactivity of ROS, which might be correlated with the higher risk of neurodegenerative diseases observed in professional athletes.
Introduction
Physical activity and exercise training are recognized to provide a range of significant benefits associated with both physical and mental health [1,2]. Nevertheless, excessive exercise (EE) may lead to an increased risk of heart dysfunctions as well as altered biological, neurochemical, and hormonal regulation mechanisms [3]. This is common in professional athletes whose training involves an overload period which is often not complemented by an adequate recovery. As a consequence, depending on the time needed to restore performance capacity, athletes may experience overtraining/overreaching syndrome [4], which is frequently associated with catabolic and anabolic imbalance involving skeletal muscle proteins, the neuroendocrine system, and the autonomic nervous system [5,6]. Furthermore, EE has been associated with sport-related skeletal muscle injuries due to repetitive sarcolemma micro-damage and altered calcium homeostasis, bony stress fractures due to aberrant loads and accelerated bone turnover, and acute macro-trauma, especially in contact sports [7]. While the connection between contact sports and musculoskeletal injuries might seem obvious, a less evident correlation has lately been observed between sports and neurodegenerative diseases [8]. Recent studies showed that former professional athletes who participated in contact sports had an increased risk of impaired cognitive function and dementia, Parkinson's disease, Alzheimer's disease, and amyotrophic lateral sclerosis (ALS) [8-10]. However, beyond the statistical significance, no clear evidence explaining the association between head trauma and neurodegenerative diseases has been proposed. Furthermore, conflicting results have been published showing an association between high-level chronic physical activity and ALS regardless of any traumatic events [11]. ALS, a fatal adult-onset neurodegenerative disorder often associated with professional sports and soccer in particular [12,13], is characterized by the progressive loss of motor neurons in the brain, brainstem, and spinal cord, which leads to paralysis and death within a few years of diagnosis [11,14]. Recent studies showed that the motor neuron impairment in ALS is often associated with the misfolding of the protein superoxide dismutase 1 (SOD1) and its deposition into insoluble aggregates, likely caused by a structural destabilization induced by gene mutations and/or oxidative damage [15].
Thus, although many studies have shown that "a certain level of exercise is good", more seems not to be necessarily better and too much might become deleterious [2].
During a season, soccer players face several matches in a congested schedule, with as little as 3 days of recovery in between. In this situation, a complete recovery is not reached due to the added biochemical stress. Fatigue, indeed, may persist for days after a single match, impairing physical performance and neuromuscular functions, increasing perceptual discomfort (e.g., muscle soreness) and inducing biochemical perturbations (e.g., in muscle damage, inflammatory, and immunological markers) [16]. For instance, blood creatine kinase (CK) activity remained significantly higher during a 72-h recovery period [17,18], and the immune function remained altered 48 h post-match [19]. Accordingly, the accumulation of muscle damage and of inflammatory and immune perturbations triggered by consecutive matches and the daily training sessions may hinder recovery and, consequently, limit the athletes' readiness and increase the risk of injury [20-22].
We retrospectively analyzed the plasma/serum levels of different psychophysical stress markers, such as the testosterone (T) to cortisol (C) ratio [23], vitamin D (vitD) [24,25], creatine kinase (CK) [24], and reactive oxygen species (ROS) [26], in an elite soccer team of the Italian first league during an entire season, with the aim of determining the seasonal changes and their possible association with pathological conditions or injuries that emerged during the observation period.
Study Cohort
The entire squad of soccer players, made up of 29 male athletes aged 18-40 years (25.9 ± 5.0 years) and belonging to the A.C. Milan football team of the Italian "Serie A", was included in this retrospective observational study. The average heights and weights were, respectively, 183.7 ± 5.9 cm and 77.5 ± 7.7 kg. The team regularly trained and competed at latitudes with middle/high sun exposure (between 45° and 46° N of latitude), even during autumn and winter. The competitive season started on the 20th of August 2017 and ended on the 20th of May 2018; thus, the off-season typically takes place in June and the pre-season starts in July. All individuals involved in the study gave informed consent to the use of their anonymously collected data for retrospective observational studies (with reference to article 9.2.j of the EU general data protection regulation 2016/679 (GDPR)), according to the San Raffaele Hospital internal policy (IOG075/2016).
The team followed no regular vitD supplementation; however, sporadic intake of vitamin D by individual athletes cannot be excluded.
Clinical Data
VitD, CK activity, ROS, and T/C were evaluated in all samples. Measurements were performed, within 4 h from withdrawal, on a Roche COBAS 8000 analyzer [28] (Roche, Basel, Switzerland) using electrochemiluminescence immunoassays (25(OH)D, T, and C) and spectrophotometric assays (CK activity and ROS). VitD was measured as serum total 25-hydroxyvitamin D (25(OH)D), which is considered the best indicator of vitD status [29]. ROS concentrations were expressed in Carratelli units (Car/U), where one Car/U corresponds to an H2O2 concentration of 0.08 mg/100 mL. The T/C ratio was calculated by dividing the two hormone levels, both expressed in nmol/L. All instrumentation was routinely checked each month by averaging approximately 25-28 measurements (one each working day) of standard solutions at low and high concentrations.
Statistical Analysis
Statistical analyses and graphs were produced with SigmaPlot (Systat Software, Inc., San Jose, CA, USA) and GraphPad Prism v6.01 (GraphPad Software Inc., La Jolla, CA, USA). A linear regression analysis was performed for the whole dataset and for the dataset without outliers (values at least 3 standard deviations away from the mean) to investigate the relationship between each pair of psychophysical markers. Average values and their corresponding standard deviation intervals (STD) were also calculated at each withdrawal date.
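As an illustration of this procedure, the following is a minimal Python sketch of the outlier screening (±3 SD) and pairwise linear regression described above. It is not the authors' code (the analyses were run in SigmaPlot and GraphPad Prism), and the example marker values are hypothetical.

```python
# Illustrative sketch of the outlier screening and pairwise regression;
# the actual analyses used SigmaPlot/GraphPad Prism, and the example
# values below are hypothetical.
import numpy as np
from scipy import stats

def drop_outliers(x, y, n_sd=3.0):
    """Keep paired observations within n_sd standard deviations of each mean."""
    keep = (np.abs(x - x.mean()) < n_sd * x.std()) & \
           (np.abs(y - y.mean()) < n_sd * y.std())
    return x[keep], y[keep]

# Hypothetical paired measurements for two markers, e.g. 25(OH)D vs. ROS
vitd = np.array([22.0, 31.5, 18.2, 27.9, 40.1, 15.3, 29.8, 35.6])
ros = np.array([310.0, 280.5, 345.2, 295.0, 260.3, 360.9, 300.1, 275.4])

for label, (x, y) in {"all data": (vitd, ros),
                      "without outliers": drop_outliers(vitd, ros)}.items():
    res = stats.linregress(x, y)
    print(f"{label}: slope={res.slope:.3f}, R^2={res.rvalue**2:.3f}, p={res.pvalue:.3f}")
```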
Vitamin D
The averaged circannual 25(OH)D variations for the 29 players were compared with those of the general population living at the same latitude [30] (Table S1). Of the 195 samples collected, only 41% were above the 30 ng/mL sufficiency threshold [30], whereas 44.6% had insufficient levels of 25(OH)D (20-30 ng/mL) and 14.4% were 25(OH)D deficient (<20 ng/mL) [27]. Most of the 25(OH)D-deficient measurements (18 out of 28; 64.3%) came from the four players of African origin, who showed levels between 5.7 and 20.4 ng/mL (Table S1).
As expected, the majority of the 25(OH)D insufficient/deficient levels (<30 ng/mL) were in the "vitamin D winter" (November to March) [30]. During this 5-month interval, 74.6% of the collected blood samples were below the suggested 30 ng/mL limit. In contrast, during the warm season only 41.1% of the samples were 25(OH)D insufficient/deficient.
CK
The averaged CK activities were above the normal clinical range (upper limit: 195 U/L) during the entire period of observation (Figure 1, panel B). Table S2 shows that 74.9% of the samples analyzed exceeded the upper limit. The highest percentages of samples above the 195 U/L limit were observed both in the pre-season (between July and August) and at the beginning of the season (September to November), whereas in the central and final parts of the season the percentage of samples above the 195 U/L limit was between 58% and 67% (Figure 1, panel B). Of the 195 samples, 22 were collected while the corresponding athletes were injured (Table S2); yet, 14 of them (63.6%) still had CK activity levels above the limit (Table S2).
Reactive Oxygen Species
ROS exceeded the suggested upper limit of 300 Car/U in 42.6% of the measurements (Table S3). The highest percentages of athletes above the limit were recorded in the central part of the season: November 29th, 2017 (59.2%), January 15th, 2018 (72.4%), and March 6th, 2018 (63.0%) (Figure 1, panel C). Of the 22 blood samples taken from soccer players during an injury period, only 3 (13.6%) exceeded the normal range.
T/C Ratio
The averaged T/C ratios were above the 0.76 level, considered the threshold below which there is a risk of overtraining [31], for the whole season (Figure 1, panel D). Of the 195 collected samples, 183 (93.8%) were in the normal range, whereas only one athlete, at the beginning of the season (July 5th, 2017), showed a T/C ratio < 0.50, consistent with a high risk of overtraining (Table S4). Ten samples showed T/C levels between 0.58 and 0.75, consistent with a moderate risk of overtraining, and only one sample was between 0.50 and 0.57, thus consistent with a substantial overtraining risk. Of the 22 blood samples taken from soccer players during an injury period, only one (4.5%) was below the 0.76 threshold. Table 1 shows the parameters obtained from the analysis of the linear correlations between each pair of psychophysical markers (Table 1). The same analysis was performed without outliers: the results confirmed the previous observations, with the exception of the correlation between 25(OH)D and T/C, which showed no significant deviation from horizontal (P = 0.105, data not shown). Among the non-significant pairs reported in Table 1 were CK vs. T/C (y = −0.000x + 1.317, R² = 0.017, P = 0.071) and T/C vs. ROS (y = 0.000x + 1.213, R² = 0.000, P = 0.874).
Figure 2. Linear correlation between vitD and free radicals (A), vitD and T/C (B), vitD and CK (C).
For each regression the corresponding equation, the R 2 and the P value are shown.
Discussion
We retrospectively studied psychophysical markers in elite soccer players during a competitive season in order to determine whether the large loads of physical activity carried out by professional athletes could raise concerns about their physical health. The T/C ratio has been used as a marker of overtraining [31], based on the assumption that free testosterone is a marker of anabolism while cortisol is indicative of catabolism. Our results showed that, although the averaged T/C ratio values (above the 0.76 threshold during the whole season) were consistent with an appropriate training program, most of the athletes experienced high levels of CK and ROS. Moreover, despite significant outdoor activity, vitamin D values during the "vitamin D winter" were often below the sufficiency level, as in the general population.
The CK and ROS values were above the normal clinical range limit in 74.9% and 42.6% of the cases, respectively. These percentages became even higher (76.4% and 46.2% for CK and ROS, respectively) when samples taken from athletes during an injury period were omitted.
CK serum activity is a physical stress marker widely used in sport [32], but it is also a marker of pathological conditions such as acute myocardial infarction, myositis and myocarditis, hypothyroidism, and myopathies [33]. None of the athletes were affected by any of these conditions; however, CK activity can be above the normal clinical range in healthy subjects as well [34], because CK leaks into the bloodstream upon muscular injury and after strenuous physical activity. CK serum activity transiently rises to as much as 30 times the upper limit within 24 to 48 h and then slowly decreases over the next 7 days [35]. Thus, relatively high CK activity levels in professional athletes performing sports involving physical contact, like soccer, can be considered normal and are not necessarily associated with an overtraining condition. The highest percentages of samples above the 195 U/L limit, observed in the pre-season (between July and August) and at the beginning of the season (September to November), might reflect the impact of the newly started training program after the rest period and the consequent adaptation. While CK can be considered a "pure" physical stress marker that does not induce any "side effects" in the athletes, the same is not true for ROS. Reactive oxygen species, also known as free radicals, are formed during mitochondrial respiration, whose rate is enhanced during exercise, as mitochondrial superoxide, or as a consequence of reperfusion after exercise-induced transient skeletal muscle ischemia [36]. ROS participate in a variety of chemical reactions and are also essential in adaptation to exercise (mitochondrial biogenesis, myofiber regeneration); however, when produced in excess, they can oxidize, and thus damage, a range of biological molecules, including lipids (e.g., membrane phospholipids), nucleic acids (DNA), carbohydrates, and proteins [37,38]. High ROS concentrations are associated with a decline in cognitive function, as observed in some neurodegenerative disorders and in the age-dependent decay of neuroplasticity [38]. Thus, long-term exposure to these highly reactive species might lead to pathological consequences [39]. Interestingly, one of the most common neurodegenerative diseases among former professional soccer players is ALS [8], whose etiology is associated with the deposition of SOD1, a metalloenzyme responsible for scavenging free radicals [40], into insoluble aggregates in motor neurons. This amyloid-like formation is probably due to structural destabilization and/or oxidative damage induced by gene mutations [15]. Because a high level of ROS results in higher transcription and translation of the SOD1 gene [41], we might speculate that an increased ROS level, constantly perpetuated along an athlete's career, would induce an almost constant SOD1 overexpression and thus promote the concentration-dependent formation of SOD1 amyloid-like aggregates [42], possibly associated with the pathogenesis of this neurodegenerative disease. In our study, almost 50% of the measured ROS levels were above the normal clinical limit, posing concerns for the long-term health of the players.
VitD, mainly synthesized by the skin when exposed to ultraviolet B radiation (UVB) [27], refers to a group of related steroid hormones involved in several physiological processes centered on the maintenance of calcium and phosphate homeostasis, as well as of iron and zinc [27]. Since UVB is necessary to synthesize the vitD precursor cholecalciferol, vitD deficiency is common in populations living at high latitudes, especially during winter [30]. Although an optimal 25(OH)D level helps to maintain the efficiency of the musculoskeletal system [43,44], studies on athletes have highlighted a surprisingly high prevalence of vitD insufficiency, in both outdoor and indoor disciplines [45,46].
The average 25(OH)D of the 29 athletes was, in the cold season, similar to that of the general population living at the same latitude. The four players of African origin did not significantly lower the averaged vitD levels; moreover, from recent population statistics [47], we expect a similar percentage of people of African origin in the general population living in the Milan area. Thus, the comparison between the whole group and the general population can be considered pertinent. As previously observed in other studies [48], we noticed a scenario of insufficiency, whereas deficiency was observed mainly in the four athletes of African origin, primarily due to skin pigmentation [49]. This was quite surprising because professional soccer players spend most of their time outdoors and, at this latitude, even in the cold seasons, 2 h of sun exposure with 10% of the body exposed at solar noon are sufficient for an optimal vitamin D dose [50]. This might be tentatively explained by the training outfit of the athletes, which, in the cold season, usually covers a large part of the body. In contrast, during the warm season, when training is usually performed in short sleeves and shorts, the athletes showed averaged vitD levels much higher than the general population living at the same latitude.
Interestingly, we found a significant correlation between 25(OH)D and ROS (Table 1), which might confirm the recently discovered antioxidant role attributed to this hormone [51]. A weak yet significant correlation was also found between 25(OH)D and the T/C ratio; higher levels of vitD were thus associated with a slightly increased risk of overtraining. The correlation might indicate a possible biological interaction between the hormones; however, because of the rather high P value, as well as the lack of correlation when outliers were removed, further studies involving larger datasets are needed to investigate this issue.
Although no correlation between vitD status and athletic performance has been shown previously [45], and lower rates of osteoporotic fractures have been recorded in African-Americans with insufficient vitD levels [49], there is now evidence that vitD protects against other chronic conditions, including cardiovascular disease, diabetes, and some cancers [49]. The possible antioxidant effect observed in this study might be a further reason to suggest vitD supplementation to professional athletes.
Conclusions
Although modern training programs seem to avoid the risk of overtraining in professional soccer players, other psychophysical stress markers, such as free radicals, are often above the normal clinical limit, posing a long-term risk for several pathological conditions, including neurodegenerative diseases. Our data seem to associate an antioxidant effect with normal/high vitD levels. Thus, in light of the general vitD insufficiency observed in the 29 professional soccer players, we suggest that vitD supplementation should become a general practice for professional soccer players and other athletes who have to cope with high ROS levels over long periods of their lives. Supplementation would protect athletes from the harmful skeletal effects of low vitD levels, as well as mitigate the detrimental reactivity of ROS, which can damage nucleic acids and protein conformation.
Supplementary Materials: The following are available online at http://www.mdpi.com/1660-4601/17/10/3484/s1, Table S1: VitD levels (ng/mL) measured during the soccer season 2017/2018. Table S2: CK levels (U/L) measured during the soccer season 2017/2018. Table S3: ROS levels (Car/U) measured during the soccer season 2017/2018. Table S4: T/C ratios measured during the soccer season 2017/2018.

Author Contributions: D.F. wrote the paper and contributed to data analysis. G.L. contributed to data analysis and writing of the paper. M.S., M.P. and A.M. contributed to data collection. M.L. contributed to data analysis and paper supervision. All authors have read and agreed to the published version of the manuscript.
Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Conflicts of Interest: The authors declare no conflict of interest.
"year": 2020,
"sha1": "c21f225fb1d270921300337fe4f6fa785a84c798",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/10/3484/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8a119263892a5d89117a213de230be5685231e3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Slow oscillation-spindle coupling is negatively associated with emotional memory formation following stress
Both stress and sleep enhance emotional memory. They also interact, with the largest effect of sleep on emotional memory being seen when stress occurs shortly before or after encoding. Slow wave sleep (SWS) is critical for long-term episodic memory, facilitated by the temporal coupling of slow oscillations and sleep spindles. Prior work in humans has shown these associations for neutral information in non-stressed participants. Whether coupling interacts with stress to facilitate emotional memory formation is unknown. Here, we addressed this question by reanalyzing an existing dataset of 64 individuals. Participants underwent a psychosocial stressor (32) or comparable control (32) prior to the encoding of 150 line drawings of neutral, positive, and negative images. All participants slept overnight with polysomnography, before being given a surprise memory test the following day. In the stress group, time spent in SWS was positively correlated with memory for images of all valences. Results were driven by those who showed a high cortisol response to the stressor, compared to low responders. The amount of slow oscillation-spindle coupling during SWS was negatively associated with neutral and emotional memory in the stress group only. The association with emotional memory was significantly stronger than for neutral memory within the stress group. These results suggest that stress around the time of initial memory formation impacts the relationship between slow wave sleep and memory.
Introduction
Sleep aids in the consolidation of episodic memory (Stickgold, 2005; Payne, Ellenbogen, et al., 2008; Diekelmann & Born, 2010; Rasch & Born, 2013). One of the main theoretical accounts of this process, the active systems consolidation theory, posits that across sleep, memories become less dependent on the hippocampus and more dependent on neocortical areas (Takashima et al., 2006; Klinzing et al., 2019). According to this theory, memory consolidation occurs primarily during periods of slow wave sleep (SWS) and is facilitated by the precise triple phase-locking of neocortical slow oscillations, thalamocortical sleep spindles, and hippocampal sharp-wave ripples (Rasch & Born, 2013; Klinzing et al., 2019). Evidence for this triple phase-locking, and its importance for memory, has emerged from rodent studies (Latchoumane et al., 2017). In humans, this coordination has been demonstrated in epilepsy patients with intracranial hippocampal recordings (Staresina et al., 2015). Although hippocampal ripples cannot be detected noninvasively, slow oscillation-spindle coupling as detected via scalp EEG has been associated with memory consolidation in healthy humans (e.g. Niknazar et al., 2015; Mikutta et al., 2019; Denis, Mylonas, et al., 2020; Zhang et al., 2020).
In fact, several studies have found relationships between emotional memory and SWS but not REM sleep, raising the question of whether SWS and REM sleep differentially contribute to emotional memory consolidation (Benedict et al., 2009; Payne, 2011, 2014; Payne et al., 2015; Wagner et al., 2002). The positive effects of SWS have often been shown in daytime naps rather than overnight designs (e.g. Alger et al., 2018; Payne et al., 2015), suggesting that sleep stages act differently on emotional memories depending on time of day (Alger et al., 2018). Others suggest a complementary role for the two stages in emotional memory consolidation, whereby the SWS-REM cycles that naturally occur in overnight sleep serve to strengthen and integrate newly acquired emotional memory traces into preexisting memory networks (Cairney et al., 2015).
In spite of these findings implicating SWS in emotional memory consolidation, the role of SWS-based oscillatory activity, including slow oscillation-spindle coupling, remains underexplored with regard to emotional memories. Research on slow oscillation-spindle coupling has focused almost exclusively on episodic memories for neutral information (though Latchoumane et al. (2017) employed a fear conditioning paradigm in rodents). A key distinction between neutral and emotional memory formation is the involvement of the amygdala.
While theories of memory consolidation highlight dialogue between the hippocampus and neocortex during SWS as being involved in memory consolidation, interactions between the hippocampus and amygdala, and their relevance for behavior, are currently underexplored (though see Cox et al., 2020). Furthermore, many studies examining SWS in relation to memory focus exclusively on sleep stage correlations. Although broad sleep stage information provides insight into sleep's role in memory consolidation, it fails to capture the neural activity that may more directly underlie consolidation processes during sleep. It is therefore important to consider both broad sleep stage macroarchitecture and the specific neurophysiological "micro" events when investigating the impact of sleep on memory.
Although recent theories (Hutchison & Rathore, 2015) and empirical studies (Kim et al., 2019; Nishida et al., 2009; Sopp et al., 2017) of REM sleep and emotional memory consolidation have examined EEG characteristics during REM sleep (e.g. REM theta (4-7Hz) activity), to our knowledge no studies in humans have assessed the role of slow oscillation-spindle coupling during SWS in emotional memory. Despite some research finding positive associations between sleep spindle activity and emotional memory consolidation (Kaestner et al., 2013; Cairney et al., 2014; Cellini et al., 2016; Alger et al., 2018), many other studies report no association (Prehn-Kristensen et al., 2011; Baran et al., 2012; Bennion et al., 2015; Göder et al., 2015; Payne et al., 2015; Bolinger et al., 2018). Given the likelihood that it is the coupling between spindles and slow oscillations that is important for consolidation processes, this may explain some of the mixed findings when assessing sleep spindles in isolation. Mechanistically, the coupling of spindles to slow oscillations has been suggested to be crucial to the induction of synaptic plasticity that underlies the formation of long-term memory representations in cortical networks (see Klinzing et al., 2019 for a recent review). Because research on the relationship between slow oscillation-spindle coupling and emotional memory does not yet exist, exploratory work assessing these associations is important if we are to more fully understand how emotional memories are consolidated during sleep. Exploring these relationships is one of our main goals here, as we attempt to better understand the relationship between features of SWS and emotional memory.
A second goal involves better understanding how stress interacts with sleep to benefit emotional memory (Bennion et al., 2015; Kim & Payne, 2020). When it comes to the selective processing of emotional memories, sleep is not the only state important for consolidation. Exposure to stress, and to stress-related neuromodulators such as norepinephrine and cortisol, has been linked to better subsequent memory for emotional compared to neutral stimuli (Payne et al., 2007; Shields et al., 2017; Cunningham et al., 2018). When these neuromodulators are present around the time of the initial encoding event, changes are triggered in brain regions relevant for emotional memory, including enhanced activity in, and connectivity between, the hippocampus, amygdala, and prefrontal cortex (PFC) (Veer et al., 2011, 2012; Ghosh et al., 2013; Vaisvaser et al., 2013). Critically, these stress-induced changes in neural activity and connectivity are associated with selective enhancement of emotional memory (Schwabe, 2017; Shields et al., 2019).
Psychosocial stress has been shown to lead to alterations in subsequent sleep. In particular, one review highlighted a number of changes in sleep architecture following stress, including reductions in SWS and REM sleep duration (Kim & Dimsdale, 2007). More recently, it has been shown that a post-stress nap shows reduced delta (0.5-4.5Hz) power compared to a control nap (Ackermann et al., 2019). A small but growing body of work has shown that sleep and stress interact to facilitate emotional memory formation (see Kim & Payne, 2020 for review). Levels of cortisol during encoding have been positively correlated with subsequent memory after nocturnal sleep, but not after an equivalent period of daytime wake (Bennion et al., 2015). This relationship is stronger for emotional memory compared to neutral memory (Bennion et al., 2015). In a different analysis of the dataset reported on here, where stress prior to encoding was induced in half of the participants, theta oscillations (4-7Hz) during REM sleep predicted emotional memory in stressed participants, particularly those showing a high cortisol response following the stressor (Kim et al., 2019). These results suggest that elevations in stress-related neuromodulators during learning aid in the tagging of emotional memories, potentially via enhancement of amygdala-hippocampus-PFC connectivity, for preferential processing during sleep (Kim & Payne, 2020).
An important next step for understanding stress-sleep interactions in emotional memory is examining the potential impact of stress on oscillatory activity during SWS. Again, by investigating slow oscillation-spindle coupling, we can further understand how sleep physiology subserves not only memory for different types of encoding materials (neutral vs emotional memories), but also different types of encoding conditions (stressed or not). Here, we utilized a rich dataset containing a stress manipulation, overnight sleep with polysomnography, fMRI recordings (not utilized in the present report), and an emotional memory task. This dataset has previously been analyzed in studies of encoding-related changes in resting state functional connectivity in non-stressed control subjects (Kark & Kensinger, 2019a), effects of physiological arousal on emotional memory vividness (Kark & Kensinger, 2019b), repetition suppression effects in non-stressed control subjects (Kark et al., 2020), and interactions between stress and theta activity during REM sleep (Kim et al., 2019). To answer our specific questions about interactions between stress and slow wave sleep, we conducted exploratory analyses on this dataset with the goal of answering the following questions: 1. Does time spent in SWS correlate with memory for emotional items, neutral items (Groch et al., 2015), or both emotional and neutral items? 2. Does slow oscillation-spindle coupling correlate with memory for neutral items, as in previous research (e.g. Mikutta et al., 2019), and does coupling also correlate with emotional memory?
3. Does stress exposure at the time of learning alter these relationships, perhaps by making it even more likely that emotional memories will be consolidated over neutral ones?
Participants
We analyzed data from a multi-day experiment designed to examine the effects of stress and sleep on emotional memory (see Kim et al., 2019; Kark & Kensinger, 2019 for other analyses of this dataset). In total, 65 participants (ages 18-31, M = 21.86, SD = 2.75) took part in the study.
One participant was excluded due to a recording error. Participants were assigned to either the stress (n = 32, 19 female; Mage = 21.5 years, SD = 2.8) or control (n = 32, 15 female; Mage = 22.3 years, SD = 2.7) group. One participant from the control group was excluded from analysis of slow oscillation-spindle coupling due to an error with the PSG recording file. Participants reported no history of neurological, psychiatric, or sleep-related disorders, and were free of any other chronic medical conditions or medication affecting the central nervous system. Participants were compensated for their time. The study received ethical approval from the Boston College Institutional Review Board.
Trier Social Stress Task (TSST)
This task is a reliable, well-validated inducer of psychosocial stress. Participants in the stress group were given 10 minutes to prepare a five-minute speech on a topic (e.g. "Why are you the best candidate for a job?") using information about themselves. They were told that they would give the speech to two judges and were given writing materials to prepare notes. At the end of the 10 minutes, participants were escorted to a separate room with two seated judges (confederates). Participants then had their notes taken from them and were asked to give the speech from memory. The confederate judges were instructed to keep neutral expressions throughout the speech. If participants completed the speech before the five minutes were up, they were told to continue. Following the speech, participants performed an arithmetic task aloud for five minutes (e.g. "Continuously subtract 13 from the number 1022 as quickly and accurately as possible"). If they made a mistake, they were told to restart from the beginning.
Participants in the control group performed a similar set of tasks, but designed to minimize psychosocial stress. They had 10 minutes to prepare for the speech task as well, but were informed they were in the control group to mitigate anticipatory stress. They were then escorted into the same room, but without recording equipment (though physiological measures were still taken). They then read their speech aloud in the empty room. After five minutes, they completed an arithmetic task in the empty room.
Emotional memory task
Stimuli were 300 International Affective Picture System (IAPS) images and their corresponding line drawings (Kark & Kensinger, 2015). The use of line drawings was motivated by several aims: 1) to minimize confounds due to re-encoding during the recognition phase; 2) to allow cueing of specific memories of the full-color image without re-presenting the images themselves; 3) to minimize arousal effects during recognition by using retrieval cues that were less emotionally arousing than the images presented at encoding; and 4) to create a challenging recognition task with sufficient hit and false alarm rates for calculating memory scores.
Negative and positive images were preselected using the IAPS normative database for arousal and valence. Critically, negative and positive images were matched on arousal and absolute valence, and were both more arousing and more strongly negatively/positively valenced than neutral images (Kark & Kensinger, 2015).
Encoding
During incidental encoding, participants viewed 150 images: 50 negative, 50 neutral, and 50 positive. The images that were viewed, versus those held out as foils for the recognition task, varied across participants. Each line drawing was presented for 1.5s, followed by the full-color image for 3s. After viewing each image, participants indicated whether they would approach or back away from the scene if they encountered it in real life. This incidental encoding task facilitates deep encoding. Between each trial, a fixation cross was displayed for 6-12s (Figure 1B).
Recognition
Participants viewed only the line drawings during the recognition task. All 300 line drawings (100 negative, 100 neutral, 100 positive) were shown to every participant. One hundred fifty were the previously studied items, and 150 were new line drawings, with the encoding lists varying which items were to be classified into each of these categories. Each line drawing was presented for 3s. For each item, participants judged whether an item was new or old. If participants indicated that the item was old, they rated the vividness of their recollection on a scale of 0 = New, 1 = Old-not vivid, 2 = Old-somewhat vivid, 3 = Old-vivid, 4 = Old-Extremely vivid. Between trials, a fixation cross was displayed for 1.5 -9s ( Figure 1B).
Procedure
For up to seven days prior to the experimental period, participants wore a wrist actigraphy monitor and kept a sleep log to monitor their sleep schedule. In the 24 hours before participation, participants were asked to refrain from caffeine, alcohol, and tobacco. To avoid contamination of cortisol samples, participants were told to refrain from physical activity, eating, drinking liquids aside from water, smoking, and brushing their teeth in the two hours prior to the study, and refrain from drinking water for at least 15 minutes prior to the study.
Upon arrival to the laboratory, participants provided a saliva sample for baseline cortisol levels (see Kim et al., 2019 for full details on saliva sampling or cortisol measurement). Participants were assigned to a group and completed either the stress (TSST) or control task. This was then followed by two further saliva samples taken approximately 15 minutes apart. (Additional saliva samples were taken later in the day to monitor circadian rhythms but will not be discussed further). Then, approximately 30 minutes after the TSST or control, they completed the encoding portion of the incidental emotional memory task during an fMRI scan. Participants then left the lab and went about their day before returning for overnight sleep monitoring that evening (approximately 6 hours after the end of the MRI session). Participants were instructed not to sleep in this intervening period. Two more saliva samples were taken prior to bedtime. All participants slept in the lab with polysomnographic recording. The following day (approximately 24 hours following encoding), participants completed a surprise recognition task, again in the MRI scanner (The study protocol as relevant to the current study is shown in Figure 1A. The full protocol is shown in Supplementary Figure 1).
Figure 1. Study timeline and stimuli.
A - Protocol details of relevance to the current analyses. On day 1, participants underwent the Trier Social Stress Task or control, then completed the encoding task. All participants then slept overnight in the lab with PSG recording. On day 2, participants completed a surprise recognition test. B - Encoding/retrieval task. During encoding (top), each line drawing was shown for 1.5s followed by the full IAPS image for 3s. Participants made an incidental encoding judgement (approach or back away) after each image pair. During recognition (bottom), each line drawing was shown for 3s, followed by an old-new judgement and vividness rating. IAPS = International Affective Picture System.

Polysomnography

Polysomnography (PSG) was acquired for a full night of sleep for all participants. PSG recordings included electrooculography recordings above the right eye and below the left eye, electromyography from two chin electrodes (referenced to each other), and electroencephalography (EEG) recordings from six scalp electrodes (F3, F4, C3, C4, O1, O2), referenced to the contralateral mastoids. Data were collected using a Grass Aura amplifier and TWin software at a sampling rate of 200Hz. Following acquisition, data were sleep scored in accordance with American Academy of Sleep Medicine (2007) guidelines. Data were artifact rejected using automated procedures in the Luna toolbox. Artifact-free segments of data were subjected to further spectral analyses (see below). One participant had their sleep data removed due to a corrupted file.
Sleep spindles
Spindles were detected at each electrode during SWS using a wavelet-based detector. The raw EEG signal was convolved with a 7-cycle complex Morlet wavelet with a peak frequency of 13.5Hz (full-width half-max bandwidth 12-15Hz) using the continuous wavelet transform implemented in MATLAB (cwt function). Spindle detection was performed on the squared wavelet coefficients after smoothing with a 100ms moving average. A spindle was detected whenever the wavelet signal exceeded a threshold of nine times the median signal amplitude of artifact-free epochs for at least 400ms (Mylonas et al., 2019). With these parameters, the average spindle frequency was 13.18Hz.
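To make the pipeline concrete, the following is a minimal Python sketch of a detector of this kind. It is not the authors' code (which used MATLAB's cwt); the wavelet construction, the application of the threshold to the smoothed power signal, and the edge handling are assumptions made here for illustration.

```python
# Minimal sketch of a wavelet-based spindle detector (illustrative only; details
# such as wavelet normalization, and whether the threshold applies to power or
# amplitude, are assumptions rather than the authors' implementation).
import numpy as np

FS = 200      # Hz, PSG sampling rate reported above
F0 = 13.5     # Hz, wavelet peak frequency
N_CYCLES = 7  # cycles in the Morlet wavelet

def morlet(f0, n_cycles, fs):
    """Complex Morlet wavelet with n_cycles at peak frequency f0."""
    sigma_t = n_cycles / (2 * np.pi * f0)        # std of the Gaussian envelope
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))

def detect_spindles(eeg, fs=FS, thresh_factor=9.0, min_dur=0.4):
    """Return (start, end) sample indices of candidate spindles."""
    power = np.abs(np.convolve(eeg, morlet(F0, N_CYCLES, fs), mode="same")) ** 2
    win = int(0.1 * fs)                          # 100 ms moving average
    smoothed = np.convolve(power, np.ones(win) / win, mode="same")
    above = smoothed > thresh_factor * np.median(smoothed)
    # Pair rising/falling edges; assumes the record starts and ends below threshold
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2] + 1, edges[1::2] + 1
    return [(s, e) for s, e in zip(starts, ends) if (e - s) / fs >= min_dur]
```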
Slow oscillations
Slow oscillations were detected at each electrode during SWS using an automated algorithm. The data were band-pass filtered between 0.5-4Hz, and all positive-to-negative zero crossings were identified. Candidate slow oscillations were marked if two consecutive crossings fell 0.5-2 seconds apart. Peak-to-peak amplitudes for all candidate slow oscillations were determined, and oscillations in the highest 25th percentile (i.e. the 25% with the highest amplitudes) were retained and considered slow oscillations (Staresina et al., 2015; Helfrich et al., 2018).
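A corresponding sketch of this detector follows, under the same caveats: the Butterworth filter design and zero-phase filtering are assumptions, while the crossing, duration, and amplitude criteria follow the description above.

```python
# Sketch of the slow oscillation detector described above (illustrative; the
# band-pass filter design is an assumption, not taken from the paper).
import numpy as np
from scipy.signal import butter, filtfilt

def detect_slow_oscillations(eeg, fs=200):
    """Return (start, end) indices of the largest-amplitude 25% of slow oscillations."""
    b, a = butter(3, [0.5 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    x = filtfilt(b, a, eeg)
    # Indices of positive-to-negative zero crossings
    pn = np.flatnonzero((x[:-1] > 0) & (x[1:] <= 0))
    candidates, amplitudes = [], []
    for s, e in zip(pn[:-1], pn[1:]):
        if 0.5 <= (e - s) / fs <= 2.0:                      # duration criterion
            candidates.append((s, e))
            amplitudes.append(x[s:e].max() - x[s:e].min())  # peak-to-peak amplitude
    cutoff = np.percentile(amplitudes, 75)                  # keep the top 25%
    return [c for c, amp in zip(candidates, amplitudes) if amp >= cutoff]
```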
Slow oscillation-spindle coupling
Coupling was calculated at every electrode in SWS. The full, artifact-free SWS signal was band-pass filtered in the delta (0.5-4Hz) and sigma (12-15Hz) bands. The Hilbert transform was applied to the whole artifact-free SWS signal to extract the instantaneous phase of the delta-filtered signal and the instantaneous amplitude of the sigma-filtered signal. For each detected spindle, the peak amplitude of that spindle was determined. It was then determined whether the spindle peak occurred at any point during a detected slow oscillation (i.e., whether the spindle peak fell between the two consecutive positive-to-negative zero crossings that defined the start and end points of the slow oscillation). If a spindle was found to co-occur with a slow oscillation, the phase angle of the slow oscillation at the peak of the spindle was calculated. This approach to slow oscillation-spindle coupling has been widely used elsewhere (e.g. Staresina et al., 2015; Helfrich et al., 2018; Mylonas et al., 2020). We extracted the percentage of all spindles coupled with a slow oscillation at each electrode, the average phase of the slow oscillation at the peak of each coupled spindle, and the overall coupling strength (mean vector length). Coupling phase at each electrode was measured in degrees, with 0° indicating that the spindle coupled at the positive peak of the slow oscillation. Coupling strength at each electrode was assessed using mean vector length, measured on a scale of 0-1, where 0 indicates that each coupled spindle occurred at a different phase of the slow oscillation, and 1 indicates that each coupled spindle occurred at the exact same slow oscillation phase. For all statistical analyses, coupling values from frontal (F3, F4) and central (C3, C4) electrodes were averaged together.
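Putting the pieces together, here is a hedged sketch of the coupling computation, built on the hypothetical detectors sketched above; the band-pass filter choices are again assumptions.

```python
# Sketch of the coupling analysis described above, assuming the hypothetical
# spindle and slow oscillation detectors sketched earlier.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs=200, order=3):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def coupling(eeg, spindles, slow_oscs, fs=200):
    """Return % of spindles coupled to a slow oscillation, preferred coupling
    phase (degrees; 0 = slow oscillation positive peak), and mean vector length."""
    delta_phase = np.angle(hilbert(bandpass(eeg, 0.5, 4, fs)))   # 0 rad at positive peak
    sigma_amp = np.abs(hilbert(bandpass(eeg, 12, 15, fs)))
    phases = []
    for s, e in spindles:
        peak = s + np.argmax(sigma_amp[s:e])                     # spindle amplitude peak
        if any(so_s <= peak < so_e for so_s, so_e in slow_oscs):
            phases.append(delta_phase[peak])
    pct_coupled = 100 * len(phases) / max(len(spindles), 1)
    vec = np.mean(np.exp(1j * np.array(phases)))                 # resultant vector
    return pct_coupled, np.degrees(np.angle(vec)), np.abs(vec)
```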
Statistical analysis
Behavioral results have been reported in detail elsewhere (Kim et al., 2019). Briefly, memory was measured as corrected recognition (hit rate − false alarm rate) for each valence type. Positive and negative stimuli were averaged together to create a combined emotional memory corrected recognition score. ANOVAs and follow-up t-tests were used as appropriate. High and low cortisol responders were then identified via a median split (see Kim et al., 2019 for details), resulting in n = 16 high responders and n = 16 low responders.
Relationships between sleep oscillatory measures and memory were assessed via multiple linear regression models. For all analyses, we looked both at positive and negative items averaged together (hereafter referred to as emotional memory) as well as at positive and negative items separately. For all regression models, we were critically interested in the interaction between sleep measure (SWS time or coupling) and experimental group (stress or control). To control for multiple comparisons, interaction terms were evaluated at a false-discovery rate (FDR) adjusted significance level of p < .05. FDR adjustment was based on the four valence categories assessed (neutral, emotional (combined), positive, negative), and applied separately to analyses of SWS time and SWS coupling. Between-group comparisons of correlation coefficients were performed using Fisher's r-to-z transformation (Fisher, 1925). Within-group comparisons were performed using Meng's z test (Meng et al., 1992). Both were carried out using the cocor package for R (Diedenhofen & Musch, 2015). For analyses of coupling, one participant from the stress group had their data excluded on the basis of being an outlier (>3SD from the mean). Appropriate circular statistics were used in all analyses involving coupling phase. Specifically, a Hotelling paired-samples test was used to assess differences in coupling phase between frontal and central electrode sites (van den Brink et al., 2014; van den Brink, 2020). Differences in coupling phase between the stress and control groups were tested using a Watson-Williams test (Berens, 2009), and correlations between coupling phase and memory were performed using circular-linear correlations (Berens, 2009).
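For the between-group comparison of independent correlations, the Fisher r-to-z test reduces to a few lines. The sketch below is a standalone Python equivalent of what the paper did with the cocor R package; the r values in the example are hypothetical.

```python
# Fisher r-to-z comparison of two independent correlations (a sketch; the
# paper used the cocor R package, and the example r values are hypothetical).
import numpy as np
from scipy import stats

def fisher_z_compare(r1, n1, r2, n2):
    """Two-sided test of the difference between two independent correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)          # Fisher z transforms
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))        # standard error of z1 - z2
    z = (z1 - z2) / se
    return z, 2 * stats.norm.sf(abs(z))              # z statistic, two-sided p

z, p = fisher_z_compare(r1=-0.40, n1=31, r2=0.05, n2=31)
print(f"z = {z:.2f}, p = {p:.3f}")
```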
Behavior
Although behavioral results, including efficacy of the TSST in evoking stress, are reported in our previous publication (Kim et al., 2019), we summarize them briefly here (Table 1; Figure 2).
When examining the effect of stress exposure on memory, we found a significant effect of valence (F(2, 124) = 4.69, p = .011, ηp² = .07). Memory at the recognition test was significantly better for both positive (t(63) = 2.46, p = .017, d = 0.31) and negative (t(63) = 2.77, p = .007, d = 0.35) images compared to neutral images. There was no difference between positive and negative items (t(63) = 0.38, p = .71, d = 0.05). A similar analysis was run for high and low cortisol responders within the stress condition (Figure 2B). There was a significant main effect of valence (F(2, 60) = 4.94, p = .010, ηp² = .14), with neutral items being more poorly remembered than both positive (t(31) = 2.34, p = .026, d = 0.41) and negative (t(31) = 2.83, p = .008, d = 0.50) items. For both analyses, there surprisingly was no effect of stress condition/reactivity, nor were there any significant interactions.

Figure 2 (caption, partial): ... cortisol reactivity within the stress group, as determined by a median split. Error bars represent the within-participant standard errors. Asterisks denote significance of exploratory within-group analyses of differences compared to neutral items: ** = p < .01, * = p = .05.
Group differences in sleep
Our previous report indicated no group differences in any measures of sleep macroarchitecture between the stress and control groups (Kim et al., 2019). As a first step, we examined whether there were any group differences in SWS spectral measures. This tested whether the stressor (or stress reactivity) itself impacted SWS oscillatory activity during the sleep period. Results are displayed in Table 2. There were no significant differences between the stress and control groups.
Spindle and slow oscillation counts were significantly higher in low cortisol responders compared to high responders within the stress group, but these differences disappeared when stage time was controlled for using density measures (spindles/slow oscillations per minute of SWS).

Table 2 note: M = mean, SD = standard deviation, sig = p value (uncorrected) from unpaired samples t-test. Differences in coupling phase were assessed using a Watson-Williams two-sample test for circular data. so = slow oscillation. For coupling phase angle, 0° indicates that the spindle peak occurred at the positive peak of the slow oscillation.
SWS time
There was a significant interaction between group (stress or control) and percentage of time spent in SWS, such that in the stress group SWS time was positively correlated with memory (Figure 3). However, this was not the case in the control group (all r > -.16, all p > .40). It should be noted that the overall regression models did not explain a significant proportion of the variance at the p < 0.05 level (Supplementary Table 1). We again underscore the preliminary and exploratory nature of our analyses: when adjusting for multiple comparisons using the false-discovery rate, all interaction terms approached, but did not meet, statistical significance (all padj > .053).
SWS time in high and low cortisol responders
We next explored whether the relationship between SWS time and memory in the stress group was driven by those who showed a high cortisol response following the stressor. There was no significant interaction between cortisol response and SWS time for any valence type. Despite this, we note that correlations between SWS time and memory were only significant in the high responder group (neutral: r = .66, p = .006; emotional (combined): r = .72, p = .002; positive: r = .74, p = .001; negative: r = .66, p = .005). Correlations in the low responder group were all non-significant (all r < .31, p > .25). For all valence types, the between-group difference in correlation magnitude was not significant (all z < 1.63, p > .10).
Slow oscillation-spindle coupling
Slow oscillation-spindle coupling dynamics during SWS are shown in Figure 5. When coupling phase was averaged across all coupled spindles within an individual, and across frontal and central electrodes, a Rayleigh test for non-uniformity was highly significant (z = 63, p < .001).
We investigated whether slow oscillation-spindle coupling during SWS was related to emotional memory following stress exposure. We focused specifically on coupling, due to emerging evidence that these events are specifically involved in memory processes (Klinzing et al., 2019).
First, we examined the overall amount of coupling during SWS. As such, the percentage of all SWS spindles that co-occurred with a slow oscillation was used as the dependent variable. We took the average of frontal (F3, F4) and central (C3, C4) electrodes. Full regression model results are shown in Supplementary Table 3.
There was a significant interaction between the amount of slow oscillation-spindle coupling and group, with more coupling associated with worse memory performance following stress (Figure 6). Importantly, the correlation with emotional memory was significantly larger than for neutral memory (z = 2.02, p = .04), suggesting that the amount of coupling was more strongly related to impairment of memory for emotional items than for neutral items. There were no significant relationships in the control group (neutral: r = .19, p = .30; emotional (combined): r = -.08, p = .69).
Coupling in high and low cortisol responders
Having found that the amount of slow oscillation-spindle coupling following stress is related to worse memory, we next sought to determine whether the effect was driven by those who showed a high cortisol response to the stressor (Figure 7). Regression models did not indicate a significant interaction between amount of coupling and cortisol response for any valence type.
Coupling phase and strength
Having shown that the amount of coupling is associated with impaired memory following stress, we next evaluated whether the timing and/or consistency of coupling events were also implicated in this effect. With regards to coupling strength, regression models for both neutral and emotional memory were non-significant (all R² < .01, all p > .34). For coupling phase, circular-linear correlations between coupling phase and memory were all non-significant (all r < .32, all p > .19). Together, these analyses suggest that it is the overall amount of slow oscillation-spindle coupling that is associated with worse memory following stress, and that this effect is not related to the consistency or phase timing of the coupling events themselves.
Sleep spindles and slow oscillations in isolation
Given the existing literature suggesting a critical role of slow oscillation-spindle coupling in memory processes (e.g. Klinzing et al., 2019), we chose to focus primarily on coupling in this analysis. Nevertheless, we conducted supplemental analyses examining sleep spindle and slow oscillation densities in isolation. With regards to sleep spindles, all regression models were non-significant (all R² < .04, all p > .18). With regards to slow oscillations, we did observe a pattern of results similar to coupling, in that a higher slow oscillation density was associated with worse memory in the stress group (see supplemental results for details). To ask whether coupling explained significantly more of the variance in emotional memory than slow oscillation density alone, we ran a multiple linear regression model predicting emotional memory from slow oscillation-spindle coupling and slow oscillation density. Coupling predicted emotional memory independently of slow oscillation density.

Additional post-hoc tests

There were no significant interactions with memory when coupling during stage 2 sleep was considered (all p > .44). Our primary metric of SWS coupling was the percentage of spindles coupled to slow oscillations. The reverse (percentage of slow oscillations coupled to a spindle) showed a similar pattern of results, with a negative correlation between the amount of coupling and emotional memory in the stress group (r = -.39) but not the control group (r = -.04). When the number of coupled spindles (rather than the percentage) was considered, a similar pattern of results was again found, though results were generally weaker than when using the percentage. We note that the number of coupled spindles is confounded with the total number of spindles, the two being extremely highly correlated (r = .90, p < .001). By comparison, the percentage of spindles coupled was not significantly correlated with the total number of spindles (r = -.16, p = .20).
Discussion
We set out to explore the role of slow wave sleep (SWS) and slow oscillation-spindle coupling during SWS in facilitating sleep-stress interactions on long-term emotional memory formation.
Although recent theoretical and experimental evidence suggests a key role for SWS and slow oscillation-spindle coupling in memory consolidation (Klinzing et al., 2019), their role in emotional memory consolidation has yet to be explored, and possible interactions with stress are unknown.
First, we queried whether time spent in SWS correlates with memory for emotional items, neutral items, or both. A large body of research has linked consolidation of neutral episodic memories with time spent in SWS (Plihal & Born, 1997; Ackermann & Rasch, 2014), and more recently this has been extended to the consolidation of emotional memory as well (Alger et al., 2018). Studies that directly compare the benefit of SWS for neutral and emotional memory have found mixed results, with some reporting correlations with just emotional memory, or just neutral memory (Groch et al., 2015). In the present study, we found that time spent in SWS was positively associated with memory for both neutral and emotional items, but only in stressed participants. This suggests a more general memory function for SWS time such that, following stress at least, more time spent in SWS provides a memory benefit, but this benefit occurs for all items equally, independent of their emotional valence. SWS time may facilitate a sleep-stress interaction on memory, but this interaction is not specific to emotionally valenced material.
Next, we looked at slow oscillation-spindle coupling events, which are believed to mediate sleep-based memory processing (Rasch & Born, 2013; Klinzing et al., 2019). Contrary to our SWS time finding, the amount of slow oscillation-spindle coupling was negatively correlated with memory in the stress group. This suggests that coupling impairs memory following stress, rather than promoting it as is typically found for non-stressed participants and neutral memories (Niknazar et al., 2015; Mikutta et al., 2019). These effects were stronger for emotional items compared to neutral ones, suggesting that stress and SWS coupling may interact to impede emotional memory to a greater extent than neutral memory. This inverse relationship underscores the importance of considering both broad sleep stages and specific neurophysiological events when investigating sleep's effect on memory. Following stress, it appears that sleep stage time and oscillatory events within that stage impact memory differently.
We did not find any associations between coupling phase and memory performance, contrary to other work suggesting that fast spindles (~12-16Hz) coupled to the excitable slow oscillation upstate peak are particularly important for memory (Helfrich et al., 2018; Mikutta et al., 2019; Muehlroth et al., 2019). In our dataset, we found an average coupling phase of ~35°, suggesting preferential coupling just after the slow oscillation peak. Although this is similar to prior work using the same spindle detection parameters (Demanuele et al., 2016), it is slightly later than other work in similar samples, which shows preferential phase coupling just before or at the slow oscillation peak (Cox et al., 2018; Helfrich et al., 2018; Muehlroth et al., 2019; Denis, Mylonas, et al., 2020). Differences between sleep stages, differences in spindle definitions, and differences in algorithms may all in part explain these between-study differences. Recent work has also suggested that fast spindles may be further subdivided into "early" fast (occurring primarily on the rising phase of the slow oscillation) and "late" fast (occurring primarily at or shortly after the slow oscillation peak) spindles (McConnell et al., 2020). It is plausible that late fast spindles were primarily detected in this sample, especially given recent evidence that late fast spindles are more prevalent in SWS (McConnell et al., 2020). The functional significance of these potential subtypes should be explored in future research.
Future research is needed to fully understand why slow oscillation-spindle coupling negatively impacts emotional memory following stress. One possibility is that the nature of memory consolidation during SWS is not immediately suitable for highly salient memories formed under stress. Slow oscillation-spindle coupling is believed to facilitate systems consolidation during SWS, whereby hippocampal-dependent memories become more dependent on neocortical sites and are integrated into existing knowledge networks (Rasch & Born, 2013; Klinzing et al., 2019). This may not be appropriate for memories formed under stress, where emotional tone is high. Over time, the affective response associated with a memory is diminished, leaving the experience itself with reduced emotional tone (Dolcos et al., 2005). It has been suggested that REM sleep could be involved in this process (Walker & van der Helm, 2009), and it is possible that more REM-based emotional processing is needed before a memory becomes fully integrated in the cortex via SWS-based consolidation. Under this hypothesis, consolidation of emotional memories in the stress group during SWS may be damaging because they are undergoing systems consolidation before other necessary offline processing is achieved. This could be tested in future studies in which both sleep and memory are measured over longer time periods. In sleep immediately following learning, coupling would be expected to be negatively associated with emotional memory, as found here. However, over longer time periods this may be reversed as the memory becomes ready to be integrated more fully into cortical networks.
The role of the amygdala in emotional memory formation and its activity during REM sleep have been well documented (Murty et al., 2010; Genzel et al., 2015). Far less is known, however, about amygdala activity during SWS in humans. Recent research has shown that sharp-wave ripples occur in the human amygdala during SWS, and these are temporally linked with both ripples in the hippocampus and sleep spindles (though notably not slow oscillations).
This provides a potential physiological basis for emotional memory consolidation during SWS, although an association with memory is yet to be reported. How amygdala-hippocampal interactions during SWS differ after stress is also unknown. As such, it remains possible that the negative association between coupling and emotional memory in the stress group could in part be driven by some currently uncharacterized process occurring between the hippocampus and the amygdala during SWS.
Why SWS time and SWS coupling show opposite associations with memory is difficult to reconcile and requires future work to fully understand. It is important to consider that coupling events are just one type of neurophysiological event occurring during SWS, and a particularly rare one. Only ~15% of spindles couple to slow oscillations, and coupled spindle density has been reported to be around 0.5 per minute (Denis, Mylonas, et al., 2020; Mylonas et al., 2020). As such, coupling is a rare event within the broader SWS state. It is possible, then, that while coupling events themselves impair memories formed under stress, other aspects of SWS may still be beneficial, and these are captured in a broad sense when considering SWS time alone. For example, the neurochemical milieu of SWS differs greatly from wakefulness and promotes memory consolidation (Feld & Born, 2020).
We found that stress alters the association between SWS and memory. While the stressor significantly increased cortisol levels from baseline, and to a greater extent than the control task, these effects did not persist into the evening. When cortisol was measured before bedtime, there were no differences between the groups, and cortisol had returned to baseline levels. This suggests that the results were due to stress-related neuromodulatory activity at encoding, and not to stressor-related cortisol levels at bedtime.
It is notable that no associations between SWS activity and memory were found in the control group, given other research showing SWS-related benefits for both neutral and emotional episodic memory in non-stressed participants (Plihal & Born, 1997; Cox et al., 2012; Payne, Chambers, et al., 2012; Groch et al., 2013; Ackermann & Rasch, 2014; Niknazar et al., 2015; Payne et al., 2015; Mikutta et al., 2019; Denis, Mylonas, et al., 2020). While the reason for this finding is unclear, one possible explanation may be the delay interval used in this study design. In non-stressed participants, several studies have shown that the benefit of sleep is largest if learning occurs shortly before sleep, with no sleep effect detected when there is a long wake period between learning and sleep. This has been demonstrated for both neutral (Gais et al., 2006; Talamini et al., 2008; Payne, Tucker, et al., 2012; Denis, Schapiro, et al., 2020) and emotional memories (Payne, Chambers, et al., 2012). In the present study, participants left the lab and went about their day before returning to sleep.
During that interval, memory strength may have deteriorated to the point to which sleep was unable to act on them. On the other hand, in the stress group, heightened cortisol during encoding could have led to stronger initial representations that were tagged for further offline processing (Shields et al., 2019), allowing later sleep to further strengthen and consolidate these memories.
Prior work has found mixed results with regards to sleep spindles and emotional memory. While some studies have found that SWS spindles do correlate with emotional memory consolidation (Cairney et al., 2014;Alger et al., 2018), other work has not found any association (Morgenthaler et al., 2014;Sopp et al., 2017;Bolinger et al., 2018). Pharmacologically enhancing stage 2 spindles has also been documented to improve emotional memory (Kaestner et al., 2013), although other correlational studies have shown no association between stage 2 spindles and emotional memory (Prehn-Kristensen et al., 2011;Baran et al., 2012;Göder et al., 2015;Bolinger et al., 2018). Similarly, we did not find stage 2 spindle coupling to relate to emotional memory in the present study. Clearly, more work is needed to untangle the exact role of sleep spindles on emotional memory. It is notable that no prior work (to our knowledge) has specifically examined slow oscillation-spindle coupling, which should be assessed in future research.
The exploratory nature of this work means that it is important for future studies to replicate these findings with a priori hypotheses. Additional limitations should also be considered. Although emotional memory recognition was numerically higher in the stress group compared to the control group, we did not detect a significant interaction effect between stress and memory. This may suggest the study was underpowered to detect this group level interaction, and future research should work to employ larger sample sizes. In addition, memory was only tested once.
As such, it is not clear from this study whether sleep helped to facilitate a change in memory performance from pre- to post-sleep. This study also utilized a between-subjects design, making it more difficult, and the study less well powered, to examine differences between stress and a non-stress control. The current design also utilized a different task to previous studies of sleep-stress interactions. Bennion and colleagues used the emotional memory trade-off task, which presents complex scenes and then uses a recognition memory test to separately assess memory for salient objects and the background scene on which they were presented (Bennion et al., 2015). This differs from the current task, which used line drawings to more holistically cue memory for complex scenes; the use of line drawings as cues is likely to lead to a harder retrieval task, and it also minimizes the presence of emotion in the retrieval cues; these design differences may have affected the pattern of results when compared to past designs (Bennion et al., 2015). Finally, this study examined the impact of sleep and stress on both positive and negative stimuli, as opposed to just negative stimuli as in Bennion et al. (2015). The present work also lacked a wake control group. As such, it is not possible to say that sleep led to a significant emotional memory benefit compared to time spent awake. This sleep-specific benefit, however, has been frequently documented in prior research (e.g., Payne & Kensinger, 2010; though see Lipinska et al., 2019; Schäfer et al., 2020). Additionally, this does not preclude correlational analyses between memory and sleep neurophysiology, the primary focus of this investigation.
Notably, we did not find any overall differences in SWS oscillatory events between the stress and control groups. One exception was that the total number of spindles and slow oscillations was higher in low cortisol responders within the stress group than in high cortisol responders.
However, this difference disappeared when sleep stage time was controlled for by using density measures (spindles/slow oscillations per minute of SWS). This differs from previous research, which has found that psychosocial stress impacts the architecture of subsequent sleep, in particular reducing slow wave (0.5Hz-4.5Hz) oscillatory power (Ackermann et al., 2019). There are several possible reasons for this discrepancy. First, the previous work showing changes in slow wave power following stress examined an afternoon nap that occurred in close proximity to the stressor, rather than overnight sleep (Ackermann et al., 2019). Cortisol levels around the time of sleep onset and during the sleep period itself may have been higher in the Ackermann et al. study than in ours, which may have led to the alterations in spectral activity. The natural circadian rhythm of cortisol means that cortisol levels around sleep onset in our overnight design (~11pm) would most likely be lower than at sleep onset before an afternoon nap. In addition, approximately 6 hours elapsed in our study between the stressor and sleep onset, as opposed to ~3 hours in the Ackermann et al. study. Second, the previous work examined spectral power across a wide frequency range of 0.5-4.5Hz, which includes both slow oscillation and delta frequency activity. It is possible that stress leads to an elevation in overall spectral power in this band without impacting the individually detected slow oscillatory events. Finally, Ackermann et al. noted that changes in sleep following stress were not seen across the whole nap period, but only in the first 15 minutes. How this time-dependent effect in a nap would map onto a full night of sleep is unclear but is worthy of future investigation.
More generally, our findings demonstrate the importance of not relying solely on sleep stage correlations when probing sleep and memory associations (Lim et al., 2020; Muehlroth & Werkle-Bergner, 2020; Simor et al., 2020). Simply assessing time spent in SWS gives no consideration to specific neurophysiological events; it is equally important to examine the oscillatory events that occur during different periods of sleep. Recent experimental work has emphasized a mechanistic role for discrete events occurring during SWS that underpin memory consolidation processes (Latchoumane et al., 2017; Klinzing et al., 2019).
To this end, coupling between slow oscillations, sleep spindles, and sharp-wave ripples mediates the consolidation of memories during sleep (Latchoumane et al., 2017). Furthermore, as we show here, these macro- and micro-level features of sleep can in fact have opposite effects on memory, and show selective effects based on memory valence and stress around the time of initial encoding.
In summary, this study adds to a small but growing body of research examining interactions between sleep and stress in the processing of emotional memories. We examined, for the first time, the role of key SWS oscillations in this process. To our surprise, we found that increased coupling between slow oscillations and sleep spindles impaired emotional memory following stress. Future research should seek to replicate this finding, and further probe the possibility that stress leads to a change in the function of slow oscillations and their coupling with spindles. If | 2020-06-11T09:01:42.438Z | 2020-06-09T00:00:00.000 | {
"year": 2021,
"sha1": "ab5f60e679b40067777c1b6e5c2a3ec11044fc6f",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/01/22/2020.06.08.140350.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "e0a32fff00aa4491a6594b02ef5f0cb6043d10f9",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Psychology"
]
} |
119284787 | pes2o/s2orc | v3-fos-license | PyGFit: A Tool for Extracting PSF Matched Photometry
We present PyGFit, a program designed to measure PSF-matched photometry from images with disparate pixel scales and PSF sizes. While PyGFit has a number of uses, its primary purpose is to extract robust spectral energy distributions (SEDs) from crowded images. It does this by fitting blended sources in crowded, low resolution images with models generated from a higher resolution image. This approach minimizes the impact of crowding and also yields consistently measured fluxes in different filters, minimizing systematic uncertainty in the final SEDs. We present an example of applying PyGFit to real data and perform simulations to test its fidelity. The uncertainty in the best-fit flux rises sharply as a function of nearest-neighbor distance for objects with a neighbor within 60% of the PSF size. Similarly, the uncertainty increases quickly for objects blended with a neighbor more than four times brighter. For all other objects the fidelity of PyGFit's results depends only on flux, and the uncertainty is primarily limited by sky noise.
INTRODUCTION
Astronomy has increasingly benefited from high-resolution imaging, exemplified by the Hubble Space Telescope, with a point-spread function (PSF) size of < 0.1′′ (Dressel 2011). Obtaining ancillary multiwavelength data at comparable resolution is often impractical, and it is commonly necessary to work with mixed resolution data sets. For instance, lower resolution ground-based data are often used in conjunction with high resolution space-based imaging, and at infrared wavelengths higher-resolution imaging is often not available or feasible. In such cases, effective crowding can vary substantially as a function of wavelength, and the quality of the final data set is limited by the reliability of fluxes extracted from the most crowded images. So long as sources remain unresolved, PSF fitting provides a viable method for extracting fluxes in crowded fields. However, a different procedure is needed to measure magnitudes of resolved or marginally-resolved sources in crowded fields. With mixed-resolution datasets, such a procedure must measure magnitudes in a consistent way despite differences in PSF, resolution, and crowding.
We present a new program, Python Galaxy Fitter (PyGFit) 3 aimed at solving these problems. PyGFit is not the first program to address these issues (see for example Fernández-Soto et al. 1999, Labbé et al. 2005, Laidler et al. 2007, and de Santis et al. 2007). Indeed, PyGFit and the well-known codes TFIT (Laidler et al. 2007) and ConvPhot (de Santis et al. 2007) are conceptually similar. TFIT and ConvPhot work by taking cutouts from a high-resolution image (HRI), convolving them with the low-resolution PSF, and fitting these models directly to the low-resolution image (LRI). As a result, the HRI and LRI must be astrometrically aligned, and their pixels have to be properly matched. In the case of ConvPhot the pixel scale must be the same in the low- and high-resolution images, and any offsets between the images must be integer pixel offsets and have to be fed into the program. In the case of TFIT, the pixel scale of the LRI must be an integer factor of the pixel scale of the HRI, and the images must cover exactly the same area on the sky. In both cases any sub-pixel offsets between the two images can introduce errors into the final results (Laidler et al. 2007).
PyGFit, however, uses analytic source models. It works on the basis of a high-resolution catalog (HRC) which gives the parameters of a model fit (i.e. a Sérsic profile) for every object in the HRI. PyGFit fits those models to the LRI, simultaneously fitting blended sources. The use of models minimizes the impact of shot noise in the HRI, especially for objects with low S/N ratio. Also, this decouples the HRI and LRI, such that the LRI can have an arbitrary pixel scale and can cover larger or smaller areas on the sky. Surveys with high resolution imaging routinely fit model profiles to all visible sources, which means that PyGFit can often build off of already existing catalogs. Moreover, PyGFit is relatively fast, in many cases taking just a few minutes to fit the LRI for an area of the sky corresponding to a single HST pointing. PyGFit performs an alignment step to account for any zeroth order offsets between the WCS of the HRI and LRI, and can also account for small sub-pixel shifts between the two images, which may arise due to either small imperfections in the WCS solutions or morphological k-corrections that subtly shift the object centroid. PyGFit currently supports two models, a point source and Sérsic model, and is extensible to include any analytic profile. It also has a built-in capability to quantify uncertainties via simulations of artificial galaxies.
The intended purpose of PyGFit is to measure PSF-matched photometry from mixed-resolution datasets, especially for marginally-resolved sources in crowded fields. This enables reliable measurements of galaxy SEDs and consequently, stellar mass fits. However, PyGFit is not limited to this single application. As a profile fitting routine, it has a number of other potential uses. For instance, PyGFit can be used to subtract foreground sources from an image to search for faint background sources (such as gravitational arcs). It can also subtract objects identified in one image from another image (presumably taken at a different wavelength), a feature that can be used, for instance, to identify high-z dropout candidates.
This paper is structured as follows. Section 2 describes PyGFit's fitting procedure. Section 3 demonstrates PyGFit's usage on real data and discusses some relevant limitations. Section 4 describes the simulations built into PyGFit and uses them to measure the fidelity of PyGFit. Finally, Section 5 gives our conclusions. All magnitudes are on the Vega system, and we assume a WMAP 7 cosmology (Komatsu et al. 2011; Ω_m = 0.272, Ω_Λ = 0.728, h = 0.704) throughout.
Overview
The fundamental goal of PyGFit is to enable matched photometry in mixed-resolution data sets that is robust to the effects of crowding in the lower resolution images. In the limiting case where a source is effectively a point source in the lower resolution data set, this problem has long been solved as it is effectively a matrix inversion process (see e.g. the MOPEX software for MIPS photometry, Makovoz & Marleau 2005, or DAOPHOT, Stetson 1987). Optimal deblending, however, becomes more challenging and computationally intensive when sources are even marginally-resolved, and the convolution of the PSF and underlying galaxy profile must be considered.
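To make the unresolved limiting case concrete, the sketch below sets it up as a single weighted linear least-squares solve. This is an illustration of the general technique, not PyGFit's code; the array names and the inverse-sigma weighting are our own assumptions.

```python
# Minimal sketch of point-source deblending as a linear problem.
import numpy as np

def deblend_point_sources(image, rms, psf_stamps):
    """Solve for the fluxes of blended point sources.

    image, rms : 2-D cutout of the blend and its per-pixel uncertainty.
    psf_stamps : list of 2-D unit-flux PSF images, one per source,
                 already shifted to each source position.
    """
    w = 1.0 / rms.ravel()                        # inverse-sigma weights
    # Design matrix: one column per source, rows are weighted pixels.
    A = np.column_stack([p.ravel() * w for p in psf_stamps])
    b = image.ravel() * w
    # Weighted linear least squares: the fluxes minimizing chi^2.
    fluxes, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fluxes
```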
With PyGFit we present an approach that is designed to be fast, flexible, and reliable. This code was originally designed to enable such robust photometry in the crowded cores of high-redshift galaxy clusters using the combination of HST and Spitzer data, but is generally applicable to any situation in which one desires profile-matched photometry between mixed resolution data sets. As described below, PyGFit can successfully deblend the photometry of two sources as long as their intrinsic separation is more than approximately 60% of the FWHM of the PSF in the low resolution data. It is also important to note that PyGFit makes the implicit assumption that the shape of the underlying profile is the same at all wavelengths, effectively an assumption that morphological k-corrections are small. In cases where the morphology changes strongly with wavelength, such as a starburst galaxy with an underlying old stellar population, the results from PyGFit should be treated with care. Such cases may be flagged through the galaxies' colors and SEDs.
At its core, PyGFit uses position and shape information of objects in a high-resolution image (HRI) to determine how to divide the luminosity of overlapping objects in a low-resolution image (LRI) among the constituent components. As such, the primary input into PyGFit is a high-resolution catalog (HRC) that gives positions and shapes of all objects in the HRI. PyGFit's procedure can be broadly separated into four steps: object detection and segmentation of the LRI, alignment of the HRC with the LRI, object fitting, and final catalog generation. There are five primary inputs into PyGFit which must be provided: the HRC, the LRI, an RMS map for the LRI, the PSF image of the LRI, and a Source Extractor configuration file for the LRI.
The HRC should give Sérsic model parameters for all objects in the HRI which are resolved in the LRI. This requires fitting a Sérsic profile to every object in the HRI, a task which is becoming common for surveys with HST imaging. While any program can be used to fit models to the HRI, the modeling routines built into PyGFit use precisely the same equations as GALFIT (Peng et al. 2002, 2010), enabling the output from GALFIT to be fed directly into PyGFit. Therefore, the simplest way to build the HRC is by using a program that can run GALFIT and fit a Sérsic profile to every object in the image (for example GALAPAGOS, Häußler et al. 2011).
The first step PyGFit executes, object detection and segmentation, is performed by running Source Extractor (Bertin & Arnouts 1996) on the LRI. The primary goal of this step is to generate a segmentation map of the LRI. This provides a convenient method for determining which objects in the HRC are blended together and hence must be modeled together, and it also divides the process into manageable chunks. Source Extractor also creates a low-resolution catalog (LRC) and a background map. PyGFit stores the LRC and includes any desired information from it in the final output catalog. The background map is used to estimate the sky for all objects, and is subtracted from the LRI before fitting. This is followed by an alignment step between the HRC and the LRI which serves two purposes. First, it accounts for any zeroth order offsets between the WCS of the HRC and the LRI. Next, it accounts for any miscentering of the low-resolution PSF image. PyGFit performs this global alignment by finding isolated objects and calculating the offset via least squares minimization. PyGFit takes the median best-fit position offset and then adjusts the positions of objects in the HRC accordingly.
PyGFit then moves on to fitting all the objects. It iterates through the segmentation regions of the LRI (i.e. the low-resolution sources) and finds all overlapping objects from the HRC. PyGFit then generates and stores a model for all matching objects from the HRC, convolves each with the low-resolution PSF as needed, and performs a χ² minimization using a Levenberg-Marquardt algorithm to fit the models to the low-resolution source. During the χ² minimization only the positions and fluxes of the objects are left as free parameters. All other Sérsic parameters (radius, Sérsic index, aspect ratio, and position angle) are held fixed. The positions are restricted to small shifts (typically less than a pixel) and the fluxes are constrained to be positive.
Finally, the output catalog is generated. This consists of the final fluxes measured by PyGFit for the objects in the HRC, any additional information requested from the LRC, and various diagnostics of each object. At this stage PyGFit also generates a residual image for quick quality control and assessment. Figure 1 gives a high level overview of PyGFit's procedure, showing the primary inputs required by PyGFit on the top, the main steps it executes, and how the various inputs feed into each step.
Object Detection and Segmentation
The first thing PyGFit does is to run Source Extractor on the low resolution image. It feeds the RMS map into Source Extractor, and detection limits are set in a Source Extractor configuration file. PyGFit uses three data products from this Source Extractor run: the segmentation map, the LRC, and the background map. The most important output from Source Extractor is the segmentation map, which PyGFit uses to separate the process into distinct blocks. We refer to each region of the segmentation map as a low resolution source. A single low resolution source can have any number of objects from the HRC associated with it. In practice, PyGFit ignores all low resolution sources which do not have any overlapping objects from the HRC.
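As an illustration of how a segmentation map naturally groups catalog objects into blends, the sketch below assigns each high-resolution object to the segmentation region its position lands on. The array layout and function name are our own assumptions, not PyGFit's internals.

```python
# Sketch: group catalog objects by Source Extractor segmentation region.
from collections import defaultdict

def group_by_segment(segmap, x_pix, y_pix):
    """segmap : 2-D integer map (0 = background, >0 = source id).
    x_pix, y_pix : object positions in low-resolution pixel coordinates."""
    blends = defaultdict(list)
    for i, (x, y) in enumerate(zip(x_pix, y_pix)):
        seg_id = segmap[int(round(y)), int(round(x))]
        if seg_id > 0:          # objects on background pixels are ignored
            blends[seg_id].append(i)
    return blends               # {low resolution source id: [catalog indices]}
```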
One advantage of Source Extractor is that it has an easily configurable level of deblending. It is preferable to minimize the amount of source deblending done by Source Extractor and rely on PyGFit to simultaneously fit the photometry for blended objects; however, a complete lack of deblending with Source Extractor can result in large segments of the LRI being assigned to one low resolution source. These large segments in turn can cause unreasonably large execution times as PyGFit attempts to perform χ² minimization for a problem with hundreds of free parameters. In such cases allowing Source Extractor to perform a small amount of deblending can dramatically decrease execution time with little to no loss of fidelity in the final results.
Source Extractor also generates a LRC which PyGFit stores. PyGFit does not use any of the information in the LRC but simply passes it along to the final catalog. By default, PyGFit extracts positions and auto-magnitudes from the LRC and copies them into the final catalog. However, it can also pass along additional parameters from Source Extractor if desired. Finally, PyGFit subtracts the background map from the LRI to remove its contribution from the low-resolution sources.
Catalog Alignment
Next PyGFit runs an alignment step. The alignment step accounts for any zeroth-order misalignment of the HRC and LRI, as well as any miscentering of the low resolution PSF image. PyGFit begins the alignment step by identifying isolated sources, i.e. low resolution sources which only have one associated object from the HRC. PyGFit has configurable parameters to determine how many isolated sources should be used for the alignment step, as well as to limit them to a particular magnitude range.
After identifying isolated sources, PyGFit fits them using its normal routine (Section 2.4) but with a larger allowed position shift than during normal fitting. The precise size of the allowed position offset is configurable by the user, and should be large enough to account for any potential offset between the HRC and LRI. Since there are only three free parameters being fit to the cutout from the LRI (x, y, and magnitude), there is no degeneracy and PyGFit easily recovers the position of the high resolution object in the LRI. It is then straightforward to measure the median difference between the object positions in the HRC and the LRI and correct the HRC accordingly. This step also accounts for any miscentering of the PSF template, which may happen when PSFs are determined empirically. If the PSF is not properly centered then the galaxy models will also be miscentered after PSF convolution. The fitting process will naturally account for this offset, such that the final object positions will be shifted by the PSF offset. Therefore when PyGFit performs the alignment step, it automatically corrects the HRC in such a way that the PSF-convolved galaxy models will be properly aligned with the LRI.
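The essential logic of the alignment step can be sketched in a few lines: fit each isolated source with a generous position range, then shift the whole catalog by the median best-fit offset. This is a schematic of the idea only; fit_single_source is a hypothetical stand-in for the normal fitting routine described in Section 2.4.

```python
# Sketch of global catalog alignment via a robust median offset.
import numpy as np

def align_catalog(isolated, catalog_xy, max_shift=5.0):
    """isolated   : list of (cutout, rms, model) tuples for isolated sources.
    catalog_xy : (N, 2) array of catalog positions in LRI pixels."""
    offsets = []
    for cutout, rms, model in isolated:
        dx, dy, _flux = fit_single_source(cutout, rms, model,
                                          max_shift=max_shift)  # hypothetical
        offsets.append((dx, dy))
    # The median is robust to the occasional badly fit source.
    return catalog_xy + np.median(np.array(offsets), axis=0)
```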
Fitting
Cutouts
The first step in the fitting process is to identify all low resolution sources which have matching objects from the aligned HRC. Objects from the HRC are matched with a low resolution source if the object falls on one of the pixels identified by Source Extractor as belonging to the segmentation region for an object in the LRI. Low resolution sources without any overlapping objects are ignored. Any desired further analysis of these sources can be performed separately using the residual image. Fitting is done with the background-subtracted LRI, and fitting proceeds from the brightest low resolution sources to the faintest. After each source is fit, the best-fit model is subtracted from the LRI to remove its contribution to any nearby sources.
PyGFit generates a cutout of the blend from the LRI and extracts a matching cutout from the RMS map. The extracted cutout is square and is large enough to enclose the full segmentation region of the low resolution source. The cutout is further extended in every direction by the size of the allowed position shift during the fitting process, and an extra two pixels of buffer are added on each side. If the resultant cutout image extends off of the LRI then PyGFit shifts the cutout to abut the edge of the image.
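The cutout bookkeeping described above amounts to growing a square box around the segmentation region and clamping it to the frame. The following is a minimal sketch under those rules; the exact padding conventions in PyGFit may differ.

```python
# Sketch: square cutout enclosing a segmentation region, grown by the
# allowed position shift plus a 2-pixel buffer, shifted to stay on-image.
def cutout_bounds(seg_slice, max_shift, shape, pad=2):
    """seg_slice : (yslice, xslice) bounding the segmentation region.
    shape : (ny, nx) of the low-resolution image."""
    ys, xs = seg_slice
    half = max(ys.stop - ys.start, xs.stop - xs.start) // 2
    half += int(max_shift) + pad
    cy, cx = (ys.start + ys.stop) // 2, (xs.start + xs.stop) // 2
    # Shift (rather than truncate) the box so it abuts the image edge.
    y0 = min(max(cy - half, 0), shape[0] - 2 * half)
    x0 = min(max(cx - half, 0), shape[1] - 2 * half)
    return y0, y0 + 2 * half, x0, x0 + 2 * half
```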
Model Generation
PyGFit then generates a model image for every matching object from the HRC. PyGFit currently supports two models, a point source and Sérsic model, and is extensible to include any analytic profile. Model generation is very straightforward for point sources, which are simply a copy of the PSF image shifted to match the HRC position and scaled to match the total flux of the first guess used during the fit (Section 2.4.3). Shifting is accomplished with third-order spline interpolation.
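For point sources the whole model step fits in two lines, as in the sketch below (our own phrasing of the operation, using SciPy's spline-based shift):

```python
# Sketch: point-source model = PSF shifted by cubic splines, scaled to flux.
from scipy.ndimage import shift

def point_source_model(psf, dx, dy, flux):
    """PSF stamp moved by (dx, dy) pixels and normalized to `flux`."""
    model = shift(psf, (dy, dx), order=3)   # third-order spline interpolation
    return model * (flux / model.sum())
```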
Sérsic model generation begins by calculating the average surface brightness (Σ) of the Sérsic model in each pixel of the cutout. The Sérsic profile depends upon the effective radius (r_e), Sérsic index (n), axis ratio (B/A), position angle (PA), total flux (F_tot), a boxiness parameter (c), and the profile center (x_cent, y_cent). From these eight parameters PyGFit derives two more parameters: the surface brightness at the effective radius (Σ_e) and a coupling factor (κ) that ensures that the effective radius is also the half-light radius (see for example Peng et al. 2002). The surface brightness as a function of radius is then given by

\Sigma(r) = \Sigma_e \exp\left\{ -\kappa \left[ (r/r_e)^{1/n} - 1 \right] \right\},

where κ is defined implicitly by \Gamma(2n) = 2\gamma(2n, \kappa), with γ the lower incomplete gamma function.
During model generation, PyGFit sets the flux according to its first guess for χ² minimization (Section 2.4.3); the total flux of the profile is

F_{tot} = 2\pi r_e^2 \Sigma_e e^{\kappa} n \kappa^{-2n} \Gamma(2n) \, (B/A) / R(c).

Γ(2n) is the gamma function and R(c) is given by:

R(c) = \frac{\pi (c+2)}{4\, \beta\!\left( \frac{1}{c+2},\ 1 + \frac{1}{c+2} \right)}.

In this equation β is the beta function with two parameters. All of these definitions precisely match those for GALFIT (Peng et al. 2002, 2010), which is done intentionally for ease of use. If GALFIT is used to fit Sérsic profiles to the HRI, then the output from GALFIT can be passed directly into PyGFit without modification. PyGFit must calculate the flux in each pixel of the model image. The most straightforward way to do this is to integrate the Sérsic function over each pixel. However, the integration time of the Sérsic function can be computationally prohibitive, and PyGFit would be dramatically slower if it attempted to integrate the Sérsic function over every pixel. Instead PyGFit performs a numerical integration by splitting each pixel into subpixels, evaluating the Sérsic function at each subpixel, and averaging their values together. The level of resampling is finer towards the center of the model, with different levels of resampling for r > 2r_e, r < 2r_e, and the central pixel. For these regions PyGFit resamples the model image such that the size of each subpixel is at most r_e/2, r_e/20, and r_e/200, respectively. Extensive testing has shown that this methodology provides a reasonable execution time without compromising the results. The only exception is for galaxies with small radii and high Sérsic indices (n ∼ 8), where we find that the only way to reliably calculate the flux at the center of the Sérsic profile is by directly calculating its integral. However, these cases are easy to detect and, if desired, PyGFit can automatically switch from its default treatment to a full integration to guarantee that all galaxies are properly modeled. After generating the Sérsic model PyGFit then convolves it with the low-resolution PSF.
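The pieces above can be made concrete with a short numerical sketch: κ is solved from Γ(2n) = 2γ(2n, κ), and the pixel flux is approximated by subsampling. This is not PyGFit's implementation; the fixed oversampling factor and the circular-radius simplification (ellipticity and boxiness omitted) are our own.

```python
# Sketch of Sersic evaluation with the half-light normalization.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammainc   # regularized lower incomplete gamma

def solve_kappa(n):
    # gammainc(2n, kappa) = 1/2  <=>  Gamma(2n) = 2 * gamma(2n, kappa)
    return brentq(lambda k: gammainc(2 * n, k) - 0.5, 1e-6, 1e3)

def sersic_sb(r, sigma_e, r_e, n, kappa):
    """Sigma(r) = Sigma_e * exp(-kappa * ((r / r_e)**(1/n) - 1))."""
    return sigma_e * np.exp(-kappa * ((r / r_e) ** (1.0 / n) - 1.0))

def pixel_flux(x0, y0, sigma_e, r_e, n, kappa, oversample=5):
    """Mean surface brightness over a unit pixel centered at (x0, y0),
    estimated by evaluating the profile on an oversampled subgrid."""
    offs = (np.arange(oversample) + 0.5) / oversample - 0.5
    xx, yy = np.meshgrid(x0 + offs, y0 + offs)
    r = np.hypot(xx, yy)
    return sersic_sb(r, sigma_e, r_e, n, kappa).mean()
```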
At the end of the model generation process PyGFit has a model image for every high-resolution object associated with a given low resolution source. The generated model image matches the cutout for the blend. The total flux of the model has been normalized to match the first guess that goes into the χ 2 minimization (Section 2.4.3), and the model has been convolved with the low-resolution PSF. Therefore, all necessary steps have been performed to prepare the model images for fitting to the cutout of the low resolution source.
Fitting
PyGFit uses a Levenberg-Marquardt algorithm to minimize χ² and fit the models to the low-resolution cutouts. The cutout from the RMS image gives the uncertainty of every pixel in the cutout. Each model has only three free parameters: x, y, and flux. For Sérsic models all other parameters (n, r_e, B/A, and PA) are held fixed to the values found in the HRC. Since only the position and flux of the objects are free parameters, PyGFit does not need to generate a new model image at every iteration of the χ² minimization. Instead, PyGFit takes the stored model image, shifts it to match the new position (using third-order spline interpolation), and rescales it to match the new flux. This is done for point source models as well as for Sérsic models. For each iteration of the χ² minimization PyGFit takes the adjusted models, adds them together to make a total model image, and then calculates χ² in the standard way.
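A schematic of this loop, using SciPy's Levenberg-Marquardt driver, is shown below. The parameter packing and the use of scipy.optimize.least_squares are our own choices for illustration; the key point matches the text: models are shifted and rescaled rather than regenerated.

```python
# Sketch: fit (dx, dy, flux) per model by chi^2 minimization.
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import least_squares

def fit_blend(cutout, rms, models, p0):
    """models : unit-flux, PSF-convolved stamps the same shape as cutout.
    p0     : flat first-guess vector [dx1, dy1, f1, dx2, dy2, f2, ...]."""
    def residuals(p):
        total = np.zeros_like(cutout)
        for i, m in enumerate(models):
            dx, dy, f = p[3 * i: 3 * i + 3]
            total += f * shift(m, (dy, dx), order=3)  # adjust stored model
        return ((cutout - total) / rms).ravel()       # per-pixel chi terms

    return least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
```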
The total number of free parameters (n_f) for each blend is given by n_f = 3 n_HRC, where n_HRC is the number of objects from the HRC associated with the low resolution source, and the total degrees of freedom for each fit is the number of pixels in the cutout minus the number of free parameters. As a first guess for model positions PyGFit uses the object positions from the HRC after the alignment step. The first guess value for the flux of each model is the magnitude of the object from the HRC converted to a flux using the zeropoint of the LRI.
During the fit the positions are constrained to move within a fixed distance of the first guess. The size of the allowed position shift is easily configurable. Rather than placing a constraint directly on the χ² minimization, PyGFit uses a mapping function to convert the position offset calculated by the χ² minimization from an infinite range to a finite range. This keeps the positions within the desired offset without any modifications to the χ² fitting routine. We also force the fits to have positive fluxes.
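Any smooth map from the real line onto a finite interval serves this purpose; a tanh-based version is sketched below. PyGFit's exact mapping function is not specified in the text, so this particular choice is our assumption.

```python
# Sketch: keep fitted offsets bounded without constraining the minimizer.
import numpy as np

def bounded_offset(u, max_shift):
    """Map an unconstrained parameter u to an offset in (-max_shift, +max_shift)."""
    return max_shift * np.tanh(u)

# The minimizer varies u freely; the model only ever moves by
# bounded_offset(u, max_shift). Positive fluxes can be enforced the
# same way, e.g. flux = np.exp(u_flux).
```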
Final Catalog Output
After fitting has been completed for all low resolution sources, PyGFit generates a final catalog. The final catalog combines data from a number of sources. It includes the best fitting magnitudes and fluxes for all matching objects in the HRC. The catalog also includes the object number and auto-magnitude for the low resolution source from Source Extractor, plus any other selected Source Extractor parameters. All the information from the HRC is copied to the final output catalog. Finally, PyGFit computes a number of diagnostic measures which can be included in the output catalog. These include values such as the total number of high-resolution objects associated with the low-resolution source, the distance and best-fit magnitude of the nearest object in the blend, the best-fit magnitude of the brightest object in the blend, the total best-fit flux and magnitude of all objects in the blend, and the fraction of the blend flux which is accounted for by each object.
[Fig. 3 caption (partial): The center row of panels shows the LRI with the same field of view, while the bottom row of panels shows the residuals of the LRI after fitting. The LRI for the left uses ground-based R band imaging, while the example in the center is taken from the 4.5µm imaging and the example on the right is taken from the IRAC 3.6µm imaging. The left column shows a galaxy with extended features which cannot be described by a Sérsic profile. The center column shows a galaxy which is isolated in F160W but which is blended with another source in 4.5µm. The right column shows a galaxy near the edge of the F160W image which is blended with a bright source which is outside of the F160W image. Further details are in the text.]
APPLYING PYGFIT TO REAL DATA
Our initial test case for PyGFit involved measuring SEDs of galaxies in high redshift galaxy clusters. The data and project are described in detail in Mancone et al. (submitted). In summary, we use 13 galaxy clusters with 1 < z < 1.9 observed with broadband photometry in eleven filters. All the clusters were observed with ground-based optical imaging in the B_w, R, and I bands, ground-based NIR imaging in the J, H, and K_s bands, space-based NIR imaging in all four IRAC bands, and finally HST WFC3/F160W imaging. We use GALAPAGOS (Häußler et al. 2011) to run GALFIT and fit a single Sérsic profile to every galaxy in our F160W images. We then run PyGFit on each of the bands using the GALFIT catalog as the HRC. Figure 2 illustrates typical results from PyGFit. It shows the original images in four different bands and their residuals after fitting in the center of ISCS J1434.5+3427, a galaxy cluster at z = 1.243. PyGFit cleanly subtracts all the visible objects. This is very typical for our ground-based imaging, especially in the NIR where the residuals are indistinguishable from sky noise in virtually all cases.
At 4.5µm (far right of Figure 2) there are small residuals visible in the very centers of many objects. These residuals are common to both our 3.6µm and 4.5µm filters but are not visible in the 5.8 and 8.0µm filters. The primary difference between the IRAC images is the size of the PSF, which varies from 1.66′′ to 1.98′′. Our IRAC images have been dithered and resampled to have a pixel scale of 0.865′′/pixel. For this pixel scale the 3.6µm and 4.5µm PSFs are slightly undersampled, while the longer wavelengths are Nyquist sampled.
Without a fully resolved PSF, interpolation (which PyGFit performs during model generation and fitting) can introduce artifacts, and this is likely the source of the small residuals observed in our blue IRAC bands. However, our simulations (Section 4) conclusively demonstrate that PyGFit can reliably extract magnitudes and fluxes from the observations, and that the primary source of uncertainty is simply sky noise.
An examination of our residual images reveals a few classes of problems which can result in PyGFit failures. We show a few examples of these cases in Figure 3. One source of difficulty arises when a galaxy is not well represented by a Sérsic function. In the example in Figure 3 (far left) a galaxy has extended features which cannot be modeled by a single Sérsic profile. As a result, the central region of the galaxy is over-subtracted, while the outer region is under-subtracted. As long as the galaxy does not have a substantial amount of flux outside of the model radius, PyGFit can still return an approximately correct total flux. If the galaxy does have substantial flux outside of the model radius, PyGFit will underestimate the total flux of the galaxy. However, any error introduced by a mismatched model will be the same for all filters. Therefore, when using PyGFit to measure SEDs, this class of problem can lead to an underestimated SED normalization but will not introduce any additional filter-to-filter uncertainty in the SED.
PyGFit can fail catastrophically when objects in the LRI are missing from the HRC. If, in the LRI, an object in the HRC is blended with another object which is not in the HRC, then PyGFit will assign flux from the second object to the first, overestimating its flux. This can happen in a number of ways, two of which are illustrated in Figure 3. The top central panel of Figure 3 shows a faint galaxy. The center panel shows that, in the 4.5µm image, there appears to be a significant elongation towards the bottom right, which cannot be accounted for from the F160W image. After subtraction (bottom center) there appears to be an object left over below and to the right of the F160W source. The only way to explain this is with the presence of an object which is bright in 4.5µm but nearly invisible in F160W, and which happens to be blended with the object visible in F160W. As a result, the object from the HRC is overfit to account for the flux from the additional low-resolution object, and therefore its fit is unreliable.
The right set of panels in Figure 3 shows the same class of problem in another context. This shows what can happen when the LRI extends past the HRC. The top right panel shows an object which is near the edge of the F160W image. In the 3.6µm image (center right) a bright object happens to be nearby but is just outside of the F160W field of view, and is therefore missing from the HRC. Although this second object is outside of the F160W field of view, it is bright enough to contribute substantially to the flux near the object of interest. As a result, PyGFit overestimates the 3.6µm flux of the object which is in the HRC. While this particular problem can likely occur for any image, we see it most commonly in our IRAC images. This is because our IRAC images have the highest source densities and the largest PSF of all of our images, and this combination increases the likelihood of having such a blend.
Obviously, PyGFit cannot account for objects which are missing from the HRC. This fact should be kept in mind when using PyGFit and care must be taken to include all sources which will be visible in the LRI, or to reject sources that are blended with objects missing from the HRC. The residual image generated by PyGFit can be used to identify drop-outs, and the χ 2 statistics returned by the code can be used to identify objects that are poorly fit due to blends with objects that are missing in the HRC.
SIMULATIONS
We use simulations to estimate the errors for the best-fit magnitudes and fluxes from PyGFit, as well as to evaluate its fidelity and limitations. To aid in this process we developed a companion routine to PyGFit named PyGSTI (Python Galaxy Simulation Tool for Images). PyGSTI uses the same model generation routines developed for PyGFit, generates simulated galaxies, and inserts them into images. We have designed PyGFit to use PyGSTI in a fully automated fashion. We note that while PyGSTI is packaged with PyGFit, it can also run as a stand-alone program and is convenient for generating simulated galaxies and stars for any number of applications.
[Fig. 6 caption: Difference between input and output magnitude for simulated galaxies as a function of flux ratio (left) and separation relative to the size of the PSF (right). To cleanly separate the two competing effects, the left panel only includes galaxies separated by at least 1 PSF FWHM, and the right panel only includes galaxies which are the brightest galaxy in the pair.]
When running simulations, PyGFit randomly selects galaxies from the HRC, assigns them magnitudes from the magnitude distribution of the LRC, places them into random locations in the original image, runs PyGFit on the simulated frame, and repeats this process many times. PyGFit limits the number of artificial galaxies placed into each simulated frame to prevent an excessive increase of crowding. By default, the source density in PyGFit's simulated frames is 2.5% higher than in the original LRI, and PyGFit generates 100 simulated frames. Both of these parameters are easily configurable.
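One simulated frame, following the recipe above, might be generated as in this sketch. The insert_galaxy helper and the catalog containers are hypothetical; only the sampling logic follows the text.

```python
# Sketch: build one simulated frame for the built-in simulations.
import numpy as np

def make_simulated_frame(lri, hrc, lrc_mags, n_fake, rng=np.random):
    """Copy the LRI and inject n_fake galaxies drawn from the HRC, with
    magnitudes resampled from the low-resolution magnitude distribution
    and positions placed at random."""
    frame = lri.copy()
    truth = []
    for i in rng.choice(len(hrc), size=n_fake, replace=True):
        mag = rng.choice(lrc_mags)               # empirical magnitude draw
        x = rng.uniform(0, frame.shape[1])
        y = rng.uniform(0, frame.shape[0])
        insert_galaxy(frame, hrc[i], x, y, mag)  # hypothetical helper
        truth.append((i, x, y, mag))
    return frame, truth
```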
Once PyGFit runs on all simulated frames, a final catalog is created with input and output magnitudes and fluxes for the simulated galaxies, along with all of the output parameters normally recorded by PyGFit (Section 2.5). This includes information on the number of objects the simulated galaxy was blended with, how close and bright the nearest neighbor is, and other environmental indicators.
Simulation Results
The simulations show that for a small percentage (∼2%) of galaxies, PyGFit dramatically underfits the model, effectively assigning them zero flux. This occurs only for faint galaxies blended with brighter neighbors. In the simulations these galaxies are easily recognized as having a best-fit magnitude more than five magnitudes (100 times) fainter than the brightest galaxies in the blend. We find similar galaxies in our real data and note that they are easily detected by the same criteria. Such galaxies should always be removed from any real sample as their magnitudes are completely unreliable. Similarly, we remove them from our simulated galaxy sample.
[Figure caption (partial): The bottom row of panels shows the corresponding error as a function of magnitude, which is calculated by binning the simulated galaxies in magnitude space and measuring the standard deviation in each bin. Error bars are calculated with bootstrap resampling. The solid curve in the bottom row of panels shows an estimate of the magnitude error introduced by sky noise for an aperture magnitude with a diameter of 4′′. While the magnitudes returned by PyGFit are model fits rather than aperture magnitudes, this aperture provides a useful reference point for comparison because it is a common choice for Spitzer IRAC surveys (Ashby et al. 2013).]
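The screening criterion described above is straightforward to apply; a sketch (our own formulation) follows.

```python
# Sketch: flag objects fit >5 mag fainter than the brightest blend member.
def flag_underfits(blends, mags, threshold=5.0):
    """blends : {blend id: [object indices]}; mags : best-fit magnitudes."""
    bad = set()
    for members in blends.values():
        brightest = min(mags[i] for i in members)
        bad.update(i for i in members if mags[i] - brightest > threshold)
    return bad
```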
For the R and J bands the magnitude errors that result from PyGFit are comparable to the expected sky noise for a 4′′ diameter aperture magnitude. These errors (solid circles in Figure 5) are substantially better than for a simple 4′′ aperture (open circles in Figure 4), due to the impact of crowding on the aperture magnitudes. PyGFit does not however achieve performance comparable to the sky-noise limit for a 4′′ aperture magnitude in our 3.6µm data. A close examination of the top right panel of Figure 5 shows that there are poorly fit galaxies (|M(Input) − M(PyGFit)| > 0.75) driving this scatter. Our simulations reveal that PyGFit begins to break down when two galaxies are very close together or when a galaxy is blended with a much brighter one. To quantify this, we perform another simulation where we insert pairs of galaxies into an image with the same noise properties, pixel scale, and PSF as the 3.6µm image. These simulated pairs have separations between 0.2′′ and 3′′, magnitude differences between 0 and 3 (i.e. flux ratios between 1 and ∼ 15), and the brighter galaxy in the pair has a magnitude between 15 and 17. We drop these pairs into an otherwise blank image and measure PyGFit's fidelity as a function of flux ratio and separation for close pairs. Figure 6 illustrates the result. The left panel shows the magnitude error for simulated galaxies as a function of the flux ratio of a galaxy and its neighbors. To isolate the influence of the flux ratio, this panel excludes galaxies separated by less than the FWHM of the PSF (1.66′′ in 3.6µm). The right panel shows the magnitude error for simulated galaxies as a function of the separation between the pair relative to the FWHM of the PSF. This panel only shows galaxies which are the brightest galaxy in the pair. We find that our 3.6µm PyGFit results become unreliable for galaxies with flux ratios < 0.25 (i.e. more than 1.5 magnitudes fainter than the blended object) or separations below ∼60% (∼1′′) of the PSF FWHM. Tests show that our other filters encounter a similar issue for such pairs. However, source density is by far the highest in our IRAC images. Because the crowding is less of an issue in our other bands, these limits have a smaller impact in our real and simulated data for our non-IRAC bands. We therefore remove from our sample simulated galaxies within 1′′ separation of another object, or within 3′′ of another object that is > 1.5 mag brighter, and plot in Figure 7 the fidelity of PyGFit's results for the remaining galaxies. Figure 7 shows that after removing this problematic class of galaxies from our sample, the quality of the IRAC results is significantly improved. The standard deviation also decreases for the shorter wavelength bands, and is better than the 4′′ sky noise limit at faint magnitudes. These simulations demonstrate that PyGFit provides reliable results fitting galaxies with separations as small as ∼ 60% of the PSF FWHM, and in blends where the neighboring galaxy is less than 1.5 mag brighter.
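The reliability cuts derived from these pair simulations can be expressed as a simple neighbor search; the KD-tree implementation below is our own sketch, with the 1′′, 3′′, and 1.5 mag thresholds taken from the text.

```python
# Sketch: apply the close-pair reliability cuts with a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def reliable(xy_arcsec, mags):
    """Keep objects with no neighbor within 1", and none within 3"
    that is more than 1.5 mag brighter."""
    tree = cKDTree(xy_arcsec)
    keep = np.ones(len(mags), dtype=bool)
    for i, nbrs in enumerate(tree.query_ball_point(xy_arcsec, r=3.0)):
        for j in nbrs:
            if j == i:
                continue
            sep = np.hypot(*(xy_arcsec[i] - xy_arcsec[j]))
            if sep < 1.0 or mags[j] < mags[i] - 1.5:
                keep[i] = False
    return keep
```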
We examine how PyGFit performs as a function of environmental diagnostics. In Figure 8 we show the fidelity of PyGFit's results as a function of the number of objects blended together, the distance to the nearest object, the flux ratio between an object and its nearest neighbor, and the fraction of the blend accounted for by the simulated object. We only plot simulated galaxies in this figure if they have [3.6] < 19.0 and pass the cuts discussed above (i.e. flux ratio > 0.25 and separation > 1′′). There are no strong correlations in Figure 8, demonstrating that the quality of PyGFit's results is independent of the degree of crowding or other environmental factors. Similarly, we also find that there is no relationship between the uncertainty of PyGFit's results and any of the Sérsic parameters. Other than magnitude, PyGFit's fidelity is independent of r_e, n, B/A, and PA.
CONCLUSIONS
We present PyGFit, a program which generates PSF-matched photometry from images with disparate pixel scales and PSF sizes. PyGFit takes model fits from high resolution images, fixes shape parameters, and fits the models to low resolution images allowing only the magnitudes to vary along with small position shifts.
The code is publicly available at http://www.baryons.org/pygfit, along with additional documentation to facilitate use. We apply PyGFit to real images and also perform simulations to measure PyGFit's fidelity. With the exception of some small residuals in the two bluest IRAC filters where the PSF is undersampled, PyGFit is able to cleanly subtract galaxies from the LRI. Especially in the ground-based images, where the PSF is well resolved, only sky noise is visible in the residual images. Simulations show that the uncertainty in PyGFit's magnitudes is consistent with being limited by sky noise.
Our simulations identify a few classes of problems which can introduce errors into the PyGFit results. Most important are catalog problems, i.e. incorrect or missing high resolution data. Primary examples of catalog problems include galaxy models fit in the HRI which are a poor match to the galaxy, or the presence of objects in the LRI that are missing from the HRC. The latter commonly happens because of differences in filter wavelength or because of the finite size of the HRI. Our simulations also show that a small fraction (∼ 2%) of faint, blended galaxies are effectively assigned zero flux. While we find no obvious predictors for when this happens, such cases are rare and easy to detect/remove. We further find that (as expected) PyGFit cannot deblend galaxies with arbitrarily close neighbors or arbitrarily bright companions. This effect is important for our 3.6µm data where crowding is the most prominent. We find that PyGFit's results are reliable down to separations as small as ∼ 60% of the PSF size.
"year": 2013,
"sha1": "5e9bd3a8e549e17e508760df6d3b2d05e3b50186",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1310.6046",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5e9bd3a8e549e17e508760df6d3b2d05e3b50186",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
17527239 | pes2o/s2orc | v3-fos-license | Changing pattern of infective endocarditis in Iran: A 16 years survey
Objectives: To investigate the changes in characteristics of patients with infective endocarditis in Iran, comparing the results with the changing profiles of Infective Endocarditis (IE) in other countries. Methodology: We studied all patients with definite or possible IE seen at four referral teaching hospitals in Iran from Jan. 1995 to Dec. 2010. The data was analyzed both collectively and separately in two consecutive eight-year periods, i.e. 1995-2002 and 2003-2010. Results: A total of 286 episodes of IE, 172 males and 114 females, were reviewed, of which 162 episodes were in the first eight-year period and 124 episodes in the second one. Mean age of the patients was significantly increased in the second eight-year period (24.2±11 vs 39.4±15 years old, p value = 0.01). The increase in the episodes caused by Staphylococcus aureus was significant (40.7% vs 22.8%, p value = 0.01). The mean size of the vegetation was noticeably higher among IDUs than non-IDUs (1.53±0.1cm vs 0.76±0.2cm, p value < 0.001). As well as extra cardiac complications, the mortality rate was noticeably higher among the patients with vegetation size ≥ 1cm (34.4% vs 16.3%, p value = 0.003). There was not a significant difference regarding the mortality rate between the conservatively and surgically treated patients (20.7% vs 22.9%, p value = 0.07). Conclusion: The most important changing characteristic of IE which influences the outcome of the disease seems to be vegetation size, which can serve as an outcome predictor.
INTRODUCTION
Known as a serious infection of heart valves, Infective Endocarditis (IE) is still associated with high mortality and morbidity despite advances in medical and surgical interventions. [1][2][3] Historically, infective endocarditis was a disease which occurred predominantly in patients with an underlying heart problem, particularly congenital heart disease (CHD) and rheumatic heart disease (RHD). It was mostly a result of community-acquired bacteremia, including infections acquired following invasive dental or medical procedures or during hospitalization in medical centers. Streptococcus viridans was the most common causative agent and, compared to other heart valves, the aortic and mitral valves were more frequently involved. [2][3][4][5] Over the past two decades, new trends in the epidemiology of IE have been noticed which influenced the manifestations and outcome of the disease. [4][5][6][7][8] Recent studies indicate a significant change in the population at risk, causative microorganisms, involved valves and survival of the patients. [7][8][9][10][11][12][13] For instance, the prevalence of RHD is decreasing while the number of clinically ill patients with IE is increasing. 9,10 Endocarditis has increasingly become a disease of the elderly. 11 In the developed countries, about one-half of all IE cases occur in patients over the age of 60, and the median age of patients has increased steadily during the past decades. 3,12 There is limited information about the characteristics of IE in Iran and its potential changes compared to the past. In this study, we investigated all patients with definite or possible IE seen at four referral teaching medical centers affiliated with Shahid Beheshti University of Medical Sciences (Loghman Hakim, Shahid Modarres, Labbafinejad and Bu-Ali Hospitals) in Tehran, Iran, from Jan. 1995 to Dec. 2010, in an effort to identify the changes in the characteristics of patients with infective endocarditis in Iran and to compare the results with the changing profile of IE in other countries.
METHODOLOGY
We reviewed medical records of all patients with a discharge diagnosis or postmortem findings of IE hospitalized in the four teaching hospitals from January 1995 to December 2010. These hospitals are considered tertiary referral hospitals for infectious, cardiac and cardiothoracic surgical diseases, serving patients from all over the country. In this study, the cases of IE which didn't meet the Duke criteria for definite or possible IE were excluded. We also excluded the patients' records which didn't contain the expected details. For the microbiologic findings, we considered three sets of blood cultures drawn one hour apart before the introduction of antibiotic treatment as a required inclusion criterion. For echocardiographic studies, at least one transthoracic echocardiogram containing sufficient detail, particularly about vegetation size, was required. Using medical records, we also extracted the required epidemiologic and demographic data including age, sex, predisposing conditions such as heart diseases, recent surgical procedures, intravenous drug injection, underlying diseases and co-infections.
In order to define the potential changes in the clinical presentation of the patients with IE, this study was divided into two chronologic parts: the first eight-year period (from January 1995 to December 2002) and the second eight-year period (from January 2003 to December 2010). The characteristics of the disease were analyzed both collectively (over the whole 16 years) and separately (in the eight-year intervals). The chi-square test was used to analyze the differences between the eight-year time periods in distributions of the categorical variables. Student's t test and one-way analysis of variance were used to analyze the mean differences. In order to compare the blood culture groups, we used the Kruskal-Wallis analysis of variance as a nonparametric method. All statistical calculations and analyses were performed using IBM SPSS (PASW) Statistics (version 19.1). A p value of less than 0.05 was considered significant.
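For readers reproducing the analysis outside SPSS, the same tests are available in SciPy; the sketch below uses illustrative data shaped like the study's (the contingency counts and group draws are placeholders, not the actual data).

```python
# Sketch: the statistical tests named above, via scipy.stats.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Chi-square test for a categorical variable across the two periods.
table = np.array([[63, 99], [52, 72]])          # illustrative 2x2 counts
chi2, p_cat, dof, expected = stats.chi2_contingency(table)

# Student's t test for a mean difference (e.g., age between periods).
ages1 = rng.normal(24.2, 11, 162)               # first-period placeholder
ages2 = rng.normal(39.4, 15, 124)               # second-period placeholder
t, p_age = stats.ttest_ind(ages1, ages2)

# Kruskal-Wallis nonparametric comparison of blood culture groups.
h, p_bc = stats.kruskal(rng.normal(size=40), rng.normal(size=40),
                        rng.normal(size=40))
```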
RESULTS
From Jan. 1995 to Dec. 2010, a total of 286 episodes of IE was reviewed, of which 162 episodes were in the first eight-year period and 124 episodes in the second one. Two hundred and four patients were diagnosed as definite IE and 82 patients as possible IE. There were 172 male patients, with a male to female ratio of 1.5:1. The mean age of all patients was 30.2 ± 16 years, with ages ranging from 3 to 81 years. There was a significant increase in the mean age of the patients in the second eight-year period compared to the first interval (24.2±11 vs 39.4±15 years old, p value = 0.01). The mean intervals between the initiation of the symptoms and the admission to the hospital in the first and second eight-year intervals were 15.6(±8) days and 10.5(±3) days, respectively (p value = 0.113). Collectively, the most frequent clinical presentation was fever (≥ 38°C) in 227 (79.4%) of the patients, followed by loss of appetite in 109 (38.1%) of the patients (Table-I). The most frequently observed predisposing cardiac conditions were rheumatic heart disease among the patients in the first period of time (38.9%) and intravenous drug injection among the patients in the succeeding eight years (41.9%). There was a significant increase in the prevalence of blood-borne infections (HIV, HCV, and HBV) in the second period of time compared with the first eight-year period (Table-II). Echocardiographic findings are presented in Table-III. It is worth mentioning that overall, the mean maximal dimension of the vegetation was 1.01 ± 0.2cm. In the separate analysis, it was found that the mean maximal dimension of the vegetation was significantly higher in the Jan.2003-Dec.2010 period than in the Jan.1995-Dec.2002 interval (0.89±0.4cm vs 1.31±0.3cm, p value = 0.01). Also, the mean size of the vegetation was noticeably higher among IDUs than non-IDUs (1.53±0.1cm vs 0.76±0.2cm, p value < 0.001). Outcome: Sixty-four patients (22.3%) died, of whom 42 belonged to the first eight-year period of time (mortality rate = 25.9%) and 22 to the second eight-year period (mortality rate = 17.7%). Thirty-two deaths happened during hospital admission, 18 took place between 1 and 3 months, and 14 occurred between three months and one year after the admission. There was no significant difference regarding the mortality rate between the conservatively treated and surgically treated patients (20.7% vs 22.9%, p value = 0.07), but compared to conservative treatment, more patients died with surgical treatment within one month of admission (54.3% vs 32.1%, p value = 0.041). The mortality rate was noticeably higher among patients with vegetation size ≥ 1cm compared to those with vegetation size < 1cm (34.4% vs 16.3%, p value = 0.003). Extra cardiac complications were also significantly increased when the vegetation size was larger than 1cm.
DISCUSSION
Characteristics of infective endocarditis are changing all over the world. We reviewed 286 episodes of infective endocarditis over 16 years to evaluate the possible changes in the characteristics of IE in Iran, including changes in the demographic, microbiologic, and echocardiographic characteristics and the outcome of the disease. Changes in demographic characteristics: The mean age of the patients with IE increased during the last 16 years. Also, the comparison of the two consecutive eight-year intervals of our study showed a significant increase in the mean age of the patients in the second period. Most previous studies in developing countries reported RHD as the predisposing factor for IE in a high proportion of cases. 14-17 Some of them reported it in as high as 53% of patients. 14 In this study, the overall prevalence of RHD as the predisposing heart condition was 26.6%. Also, a noticeable decrease was observed in the IE episodes predisposed by RHD during the second eight-year period compared to the first one and the previous studies. Putting these facts together and considering the higher mean age of the patients in the second eight-year period, we can propose that the decrease in the prevalence of RHD, caused by improvements in the diagnosis and treatment of streptococcal pharyngitis, could be an important explanation for the increase in the mean age of the patients.
Consistent with previous studies that evaluated the characteristics of IE in Iran during the last decade, 18 this study shows a significant increase in IE cases associated with intravenous drug injection during the second eight-year period compared with the first. This is also compatible with the changing profile of IE in most other countries, particularly developed ones. 19,20 On the other hand, in contrast to previous studies from developed countries, 8,19 we did not observe a significant increase in IE episodes related to hemodialysis. This could be related to the relatively younger population of IE patients in developing countries compared with developed ones. Changes in microbiologic characteristics: We observed an increasing trend in IE episodes caused by Staphylococcus aureus. Many previous studies from both developed and developing countries have reported the same changing pattern. 16,19,20 This could be attributed to the increase in the population of IE patients with a history of intravenous drug injection.
Changes in echocardiographic characteristics:
Consistent with most previous studies from Iran and other developing countries, 15-17 the mitral valve was overall the most frequently involved cardiac valve in our study; however, when analyzed separately, the tricuspid valve was involved more often than the other valves during the second eight-year period. This significant increase in cases with tricuspid valve involvement again reflects the growing population of IDUs with IE, in whom the right side of the heart, and particularly the tricuspid valve, is frequently involved. Changes in outcome: Compared with previous studies in Iran, 14,18 the mortality rate increased in our study. In a separate analysis, the mortality rate was higher in the second eight-year period than in the first. This increasing pattern in the mortality of IE has also been reported by other studies from both developing and developed countries. 2,3,8,19 Some studies considered specific characteristics of the disease as predictors of mortality: Cabell et al 8 identified Staphylococcus aureus infection as an independent predictor of higher mortality, whereas Thuny et al 21 reported vegetation size as a strong predictor of new embolic events and mortality. In our study, vegetation size was the most important prognostic predictor. We found that mortality and extracardiac complications were significantly higher among patients with a vegetation size ≥ 1 cm.
All this shows that the characteristics of IE are changing all over the world, with some differences between developed and developing countries. These changes seem to reflect, in part, changes in medical practice in the post-antibiotic era and the emergence of intravenous drug injection-associated episodes of IE. Furthermore, these changing characteristics have considerably influenced the outcome of the disease. Among them, vegetation size seems to have the greatest impact on mortality, and it could be considered one of the outcome predictors in patients with infective endocarditis.
"year": 2013,
"sha1": "74b8f3aa65b202271d60e7d5562d6134d07448d3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.12669/pjms.291.2682",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "74b8f3aa65b202271d60e7d5562d6134d07448d3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
(No) time for control: Frontal theta dynamics reveal the cost of temporally guided conflict anticipation
During situations of response conflict, cognitive control is characterized by prefrontal theta-band (3- to 8-Hz) activity. It has been shown that cognitive control can be triggered proactively by contextual cues that predict conflict. Here, we investigated whether a pretrial preparation interval could serve as such a cue. This would show that the temporal contingencies embedded in the task can be used to anticipate upcoming conflict. To this end, we recorded electroencephalography (EEG) from 30 human subjects while they performed a version of a Simon task in which the duration of a fixation cross between trials predicted whether the next trial would contain response conflict. Both their behavior and their EEG activity showed a consistent but unexpected pattern of results: The conflict effect (increased reaction times and decreased accuracy on conflict as compared to nonconflict trials) was stronger when conflict was cued, and this was associated with stronger conflict-related midfrontal theta activity and functional connectivity. Interestingly, intervals that predicted conflict did show a pretarget increase in midfrontal theta power. These findings suggest that temporally guided expectations of conflict do heighten conflict anticipation, but also lead to less efficiently applied reactive control. We further explored this post-hoc interpretation by means of three behavioral follow-up experiments, in which we used nontemporal cues, semantically informative cues, and neutral cues. Together, this body of results suggests that the counterintuitive cost of conflict cueing may not be uniquely related to the temporal domain, but may instead be related to the implicitness and validity of the cue.
evidence of theta-band (3- to 8-Hz) oscillatory activity as the underlying "language" of communication within this network (see Frank, 2014, and Cohen, 2014, for reviews), where the MFC has been proposed to be a "hub" for theta phase-synchronized information transfer (Cohen, 2011).
Cognitive control is a transient response, waxing and waning depending on the presence or absence of risks or demands such as response conflict. Indeed, because frontally mediated cognitive control is effortful, it is inefficient to recruit these mechanisms continuously (Ridderinkhof, Ullsperger, et al., 2004). Here, conflict is defined as the incongruence between a task-relevant learned response and a task-irrelevant stimulus feature, which results in slower and more error-prone behavior relative to nonconflict (the "conflict effect"). The immediate trial history in a typical conflict task influences the level of activated control in the subsequent trial (the "congruency sequence effect" [CSE]; Egner, 2007; Gratton, Coles, & Donchin, 1992). When the previous trial imposed conflict, the cognitive control system is engaged, resulting in better performance on the subsequent conflict trial. Importantly, these trial-to-trial fluctuations in behavioral conflict effects have been shown to covary with trial-to-trial variability in midfrontal theta activity (Cohen & Cavanagh, 2011).
Although the CSE could be regarded as a form of anticipatory control through conflict adaptation (Botvinick et al., 2001; Egner, 2007), it is still reactive in nature: Earlier conflict detection boosts adaptive control, as called forth by conflict encountered on the subsequent trial. Anticipatory control can be triggered proactively as well, by means of contextual cues (Gratton et al., 1992). For example, when an informative red cross-sign symbol was always presented before an incongruent arrow-flanker target, performance improved relative to when an uninformative question-mark symbol was used as the cue (Correa, Rao, & Nobre, 2009). Interestingly, this behavioral cueing effect was accompanied by the attenuation of a frontal N2 component (see also Strack, Kaufmann, Kehrer, Brandt, & Stürmer, 2013), a potential neural marker of conflict processing (van Veen & Carter, 2002; cf. Cohen & Donner, 2013). Similar cueing effects have been found with other conflict tasks and types of cues, ranging from semantically informative word cues preceding the target (Alpay, Goerke, & Stürmer, 2009; Wühr & Kunde, 2008), to the spatial location (Corballis & Gratton, 2003; Crump, Gong, & Milliken, 2006) or color (Lehle & Hübner, 2008) of the target itself.
One potentially relevant source of contextual information has received surprisingly little attention in the conflict-cueing literature: time. Temporal contingencies between events are ubiquitous in our natural environment and provide information about which actions to take and when. For example, while approaching a traffic light, seeing it change from green to yellow triggers a cascade of temporal predictions (e.g., how long is the light going to be yellow before it turns red, when will I arrive at the traffic light given my speed?), which ultimately results in a decision: Should I stop or not? The literature on temporal orienting has shown that temporally predictable stimuli trigger time-dependent preparatory neural dynamics, as well as faster and more accurate behavioral responses (see Nobre, Correa, & Coull, 2007, for a review). For example, in a color-word Stroop task in which the intervals between the irrelevant (color) and relevant (word) stimulus dimensions were predictable rather than random, subjects were able to strategically allocate attention in time to reduce the cost of Stroop interference (Appelbaum, Boehler, Won, Davis, & Woldorff, 2012). Although these accounts relate to temporal predictions about when a stimulus will occur and when to respond to it, it is less clear whether temporal predictions can be made about when conflict is most likely to occur. In other words, can temporal information be used as a cue to predict conflict?
To our knowledge, only one prior study has specifically addressed this question. In a letter-flanker task, Wendt and Kiesel (2011) varied the contingencies between the proportion of incongruent trials and the duration of a pretarget fixation cross (the "warning signal," also sometimes called the "foreperiod"), such that subjects could predict the likelihood of upcoming conflict on the basis of temporal information. According to the authors, this is a purely endogenous form of proactive control: The internally generated estimation of the fixation-cross duration provides the conflict-predicting information, not the exogenous presentation of the fixation cross per se. Their behavioral findings seemed to indicate that subjects were indeed able to use these temporal cues to prepare for upcoming conflict, but only when long (1,200-ms) rather than short (200-ms) durations were associated with a high probability of conflict.
The goal of the present study was twofold. First, we aimed to replicate the temporal-cueing effect observed by Wendt and Kiesel (2011), using another type of conflict task. Second, we reasoned that measuring electroencephalography (EEG) while using temporal cues would provide a valuable tool to investigate the online neural dynamics of cognitive control during temporal conflict anticipation. Here, we used a color-location Simon task (Simon & Rudell, 1967), in which we manipulated the duration of a pretarget fixation cross so as to predict, with 80 % validity, the congruency of the upcoming trial, analogous to the Wendt and Kiesel paradigm. Specifically, we hypothesized (1) that the behavioral conflict effect would be reduced when conflict rather than nonconflict was cued, and (2) that conflict-related midfrontal theta activity, often observed as being locked to the response (Cohen & Cavanagh, 2011), would shift to the pretarget conflict-predicting intervals. Finally, we assessed in three follow-up behavioral experiments whether our findings extended to (i) nontemporal symbolic cues, (ii) explicit word cues, and (iii) noninformative versus deterministic cues.
Materials and method

Subjects
For the EEG experiment, 34 subjects participated in exchange for €20 or course credit. The data of four subjects were excluded because of excessive muscle artifacts in the EEG signal, problems during the EEG recording, or performance at chance level (accuracy ~50 %) in one or more blocks of the task. Thus, in total the data of 30 subjects were included in the analyses (age range 19-32 years, M = 22.9; 24 females, six males, two left-handed). For the three follow-up behavioral experiments, a total of 51 subjects participated (Follow-Up Exp. 1: N = 20, 14 females, six males, age range 18-31 years, M = 22.2; Exp. 2: N = 16, ten females, six males, age range 19-24 years, M = 21.4; Exp. 3: N = 15, 12 females, three males, age range 18-30 years, M = 23.3). One subject was excluded from the analysis of Follow-Up Experiment 2 because of chance-level performance. In all experiments, the subjects reported no neurological or psychiatric disorders and no use of psychotropic drugs, and all reported having normal or corrected-to-normal vision. Subjects signed an informed consent form before participation. The experiments were approved by the local ethics committee, and all procedures complied with the relevant laws and institutional guidelines.
Task
In all experiments, subjects performed a modified version of the Simon task. In each trial, a colored circle (hereafter referred to as the "target") appeared on a light gray screen. Subjects were instructed to respond as quickly as possible with their left thumb for blue and yellow targets, and with their right thumb for red and green targets (or vice versa; the color-response mapping was counterbalanced across subjects). Targets subtended 0.70 deg of visual angle (dva) and appeared for 100 ms at 5.02 dva to the left or right of a fixation point, which consisted of a small dark gray square of 0.10 dva in the center of the screen. Trials ended upon responding, or after a response window of 1,000 ms had passed, in which case feedback on response speed ("respond faster!") was presented. The trial end was followed by an intertrial interval of 1,000 ms, during which the fixation point remained on screen.
Response conflict was induced on incongruent trials, in which the location of the target corresponded to the spatially incompatible response hand (e.g., if the blue target, which required a left-hand response, appeared to the right of the fixation point). On congruent trials, the target location always corresponded to the spatially compatible response hand. Half of the trials were incongruent, and congruent and incongruent trials were presented in random order (see below for the randomization procedure).
Each trial started with a nonconflict pretarget cue that predicted the congruency of the upcoming trial. In the EEG experiment and the three follow-up behavioral experiments, we manipulated the nature of this pretarget cue. First we describe the EEG experiment in depth, followed by our EEG measurement and analysis approach. At the end of this section, we then describe the follow-up experiments.
In the EEG experiment, the pretarget cue was a white fixation cross (0.55 dva), superimposed upon the fixation point, of variable duration. From here on we refer to this fixation cross as the "warning signal" (WS). The WS duration could be either short (400 ms) or long (1,400 ms), after which the fixation point reappeared for 300 ms (the "pretarget interval" or PTI; see below), followed by the target.
Crucially, the association between WS duration and trial congruency was determined by an experimental between-subjects condition: 15 subjects performed the early-conflict condition, in which 80 % of the short WSs were followed by incongruent trials (indicating high conflict probability), and 20 % of the long WSs were followed by incongruent trials (indicating low conflict probability). The other 15 subjects performed the late-conflict condition, in which these proportions were reversed: 20 % of the short WSs were followed by incongruent trials (low conflict probability), and 80 % of the long WSs were followed by incongruent trials (high conflict probability). In both conditions, the short and long WSs were presented equally often, keeping the overall proportion of incongruent trials at 50 % and the temporal expectation of target occurrence balanced. The order of congruent and incongruent trials, together with the order of the WSs, was pseudorandomized, such that there was never a repetition of the same combination of WS, trial type, and stimulus properties (e.g., two consecutive incongruent trials with a blue circle presented on the left, each preceded by a 1,400-ms WS); a sketch of this constraint is given below. See Fig. 1a and b for an overview of the experimental design.
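One way to implement such a constraint is rejection sampling: reshuffle until no combination immediately repeats. The MATLAB sketch below is illustrative rather than the authors' code; trialcodes is a hypothetical vector in which each unique WS x trial-type x stimulus combination carries a distinct integer code.

```matlab
% Minimal sketch of the pseudorandomization constraint (illustrative):
% reshuffle until no identical combination occurs on consecutive trials.
ok = false;
while ~ok
    seq = trialcodes(randperm(numel(trialcodes)));  % candidate trial order
    ok  = all(diff(seq) ~= 0);                      % reject immediate repeats
end
```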
The primary motivation for using a between-subjects design was to avoid transfer effects of, or switch costs between, the learned associations between WS and conflict across conditions. That is, the association is opposite in the early- and late-conflict conditions, and subjects were uninformed about the nature of the association (see below). Moreover, the between-subjects manipulation allowed us to increase the number of trials per condition. Although we acknowledge the potential limitations of a between-subjects design (i.e., low numbers of subjects per early-/late-conflict condition, and possible unaccounted-for group differences), we believe that these do not outweigh the importance of controlling for transfer effects and switch costs. Moreover, and to foreshadow some of our results, the between-subjects factor Group did not show interaction effects with either conflict or conflict probability.
The motivation for using an additional PTI of a fixed 300 ms after the WS was to temporally isolate the WS, thereby making it more salient. Moreover, we reasoned that the PTI would control for confounding effects of temporal orienting (Nobre et al., 2007). That is, after both a short WS of 400 ms and a long WS of 1,400 ms, the target always appeared 300 ms after PTI onset. Thus, even though conditions differed in total pretarget duration, the PTI was meant to "reset" the temporal hazard function of target onset (the probability of target onset given that it has not yet occurred) across all conditions. In contrast, uncertainty remained with respect to conflict: For instance, after 400 ms of the long WS in the late-conflict condition, the upcoming trial would be an incongruent conflict trial only 80 % of the time. In other words, we were interested in conflict expectation, not stimulus occurrence expectation.
Before the start of the experiment, subjects were informed about the different durations of the fixation cross, but not about the association of these durations with congruency. Subjects completed one practice block of 50 trials during which feedback on accuracy ("Correct," "Incorrect") was provided upon response in each trial. In the practice block, the temporal cues had a validity of 100 %, to enhance learning of the cue-conflict contingencies. The main task consisted of ten blocks of 100 trials. Between consecutive blocks, there were self-paced breaks during which feedback on task performance (average reaction time and accuracy) was shown on screen. After the main task, subjects were asked whether they noticed the duration-conflict associations, and if so, whether they had used a particular strategy in preparing for conflict based on the temporal cues. Although this was not assessed quantitatively, one subject indeed noticed the association, but did not report having used a particular strategy. One other subject noticed that "there was something about" the WS durations, but could not formulate what. Yet another subject who performed in the early-conflict condition did not notice the WS-conflict association in particular, but did report having used the strategy of trying to pay more attention and respond faster when the target came early. All other subjects explicitly reported not having noticed the cue manipulation, nor having used any strategy.
EEG data collection and preprocessing
During this first experiment, EEG data were acquired at 512 Hz from 64 channels placed according to the international 10-20 system (BioSemi ActiveTwo system; http://biosemi.com). Additional electrodes were placed under and above the left eye for the vertical electrooculogram (EOG), to the left and right sides of the left and right eyes, respectively, for the horizontal EOG, and on both earlobes for referencing. Offline, the EEG data were high-pass filtered at 0.5 Hz and epoched from -3.2 to 2 s, locked to target onset. This wide range avoided edge artifacts resulting from time-frequency decomposition (see below). All epochs were linearly baseline-corrected with a 200-ms pretarget baseline and visually inspected for artifacts. Epochs containing electromyographic or other artifacts not related to eye blinks were manually removed, resulting in an average of 63 rejected epochs per participant (SD = 40). On the resulting epochs, an independent component analysis was performed with the EEGLAB software package (Delorme & Makeig, 2004) in MATLAB (The MathWorks). Components related to eye blinks, or to artifacts in the signal that could be clearly distinguished from brain activity, were removed from the data. On average, 1.33 components (range = 1-4) were removed. The EOG signal was included in the independent component analysis but was left out of the further analyses. Next, the surface Laplacian of the EEG data was estimated (Perrin, Pernier, Bertrand, & Echallier, 1989), which is equivalent to the current source density approach (Kayser & Tenke, 2006). This method has previously been applied for sharpening EEG topography and performing synchronization analyses (Cavanagh et al., 2010; Cohen et al., 2009; van Driel, Ridderinkhof, & Cohen, 2012). The Laplacian accentuates local effects while filtering out distant effects due to volume conduction (i.e., deeper brain sources that project onto multiple electrodes, thereby obscuring neurocognitively modulated long-range functional connectivity; Oostendorp & Oosterom, 1996). For estimating the surface Laplacian, we used a 10th-order Legendre polynomial, and lambda was set at 10^-5.
EEG time-frequency decomposition
The target-locked epoched EEG time series were decomposed into their time-frequency representations with custom-written MATLAB scripts, by convolving them with a set of Morlet wavelets with frequencies ranging from 1 to 50 Hz in 40 logarithmically scaled steps. These complex wavelets were created by multiplying perfect sine waves (sine wave = e^(i2πft), where i is the complex operator, f is the frequency, and t is time) with a Gaussian (Gaussian = e^(-t²/(2s²)), where s is the width of the Gaussian). The width of the Gaussian was set to four cycles [s = 4/(2πf)], in order to have a good trade-off between temporal and frequency resolution. The fast Fourier transform (FFT) was applied to both the EEG data and the Morlet wavelets, and these were then multiplied in the frequency domain (equivalent to convolution in the time domain), after which the inverse FFT was applied. From the resulting complex signal Z_t (down-sampled to 40 Hz), an estimate of frequency-specific power at each time point was defined as real(Z_t)² + imag(Z_t)², and an estimate of the frequency-specific phase at each time point was defined as arctan[imag(Z_t) / real(Z_t)]. The trial-averaged power was decibel normalized (dB power_tf = 10 · log10[power_tf / baseline power_f]), where for each channel and frequency the condition-averaged power signal during an interval of -250 to -50 ms relative to WS onset served as the baseline activity.
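As a reading aid, the following MATLAB sketch illustrates this convolution pipeline under stated assumptions: eegdata is a hypothetical channels x time x trials matrix of surface-Laplacian epochs sampled at 512 Hz, and baseidx indexes the -250 to -50 ms pre-WS baseline samples. It is not the authors' code.

```matlab
% Minimal sketch of FFT-based Morlet wavelet convolution (illustrative).
srate = 512;
frex  = logspace(log10(1), log10(50), 40);        % 40 log-spaced frequencies, 1-50 Hz
wavt  = -2:1/srate:2;                             % wavelet time axis
halfw = floor(numel(wavt)/2);
nTime   = size(eegdata,2);  nTrials = size(eegdata,3);
nConv   = nTime*nTrials + numel(wavt) - 1;

chan  = 1;                                        % e.g., index of FCz
dataX = fft(reshape(eegdata(chan,:,:), 1, []), nConv);   % concatenated trials

tf = zeros(numel(frex), nTime, nTrials);
for fi = 1:numel(frex)
    s    = 4 / (2*pi*frex(fi));                   % Gaussian width: four cycles
    cmw  = exp(1i*2*pi*frex(fi).*wavt) .* exp(-wavt.^2 ./ (2*s^2));
    cmwX = fft(cmw, nConv);
    cmwX = cmwX ./ max(abs(cmwX));                % amplitude-normalize the wavelet
    as   = ifft(dataX .* cmwX);                   % frequency-domain convolution
    as   = as(halfw+1 : end-halfw);               % trim wavelet edges
    tf(fi,:,:) = reshape(as, nTime, nTrials);
end

pow = abs(tf).^2;                                 % power = real^2 + imag^2
phs = angle(tf);                                  % phase = arctan(imag/real)

% decibel normalization against the condition-averaged pre-WS baseline
basepow = mean(mean(pow(:, baseidx, :), 2), 3);   % frequency-specific baseline
dbpow   = 10*log10(bsxfun(@rdivide, mean(pow,3), basepow));
```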
Intersite phase clustering (ISPC) measures the similarity between pairs of channels of their time-frequency phase values across trials. This measure of phase synchronization is thought to reflect interregional functional connectivity (Fries, 2005; Siegel, Donner, & Engel, 2012). ISPC is computed as ISPC_jk = |N^(-1) Σ_{n=1..N} e^(i[ϕ_k(n) - ϕ_j(n)])|, where N is the number of trials, n is the trial number, ϕ is the phase angle, and k and j are the two channels. ISPC can range from 0 (no phase synchrony between channels) to 1 (identical phase angles between channels) for each time-frequency (tf) point. ISPC values were baseline transformed into percent signal change (100 · [ISPC_tf - ISPCbase_f] / ISPCbase_f), using the same baseline time window as for trial-averaged power. We used a condition-specific baseline for ISPC in order to control for spurious results induced by differences in the number of trials for the different trial types. Time-frequency decomposition was performed both target-locked and response-locked (i.e., the time series of each EEG epoch were re-sorted to be time-locked to the buttonpress).
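In MATLAB, this reduces to a short computation over the phase arrays from the decomposition above. The sketch below is illustrative (phase1 and phase2 are hypothetical frequencies x time x trials phase arrays for the two channels, over the same trials), not the authors' code.

```matlab
% Minimal sketch of ISPC and its percent-change baseline normalization.
ispc = abs(mean(exp(1i*(phase1 - phase2)), 3));   % cluster phase-angle differences over trials
ispcbase = mean(ispc(:, baseidx), 2);             % frequency-specific, condition-specific baseline
ispcpct  = 100 * bsxfun(@rdivide, bsxfun(@minus, ispc, ispcbase), ispcbase);
```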
Electrodes, frequency bands, and time windows of interest
To a priori select channels and time-frequency windows of interest for further statistical analyses, we took the following approach: First, we computed condition-average (i.e., averaging over groups, congruencies, and conflict probabilities) topographical maps of response-locked theta (3-8 Hz) power, for several time points around the response. This revealed a clear locus of theta activity around electrodes Cz and FCz; these electrodes were pooled as one midfrontal electrode pair. Next, we computed a condition-average time-frequency map of this midfrontal region, which revealed a clear "hotspot" of theta-band (3- to 8-Hz) activity from -200 to 100 ms relative to the response, which we selected as our main time-frequency window of interest (see Fig. 2a and b). Note that because of condition averaging, this selection procedure was orthogonal to, and thus not biased by, potential condition differences between (1) congruent or incongruent trials, (2) the conflict probability conditions, and (3) interactions between these factors. For ISPC analyses, the midfrontal electrodes FCz/Cz now served as "seeds" of targeted synchronization in the theta band. First, we plotted a condition-averaged topographical map of ISPC over the same time-frequency window as used for the power analysis. This revealed two bilateral regions of interest of functional connectivity with our midfrontal electrode pair: one over lateral frontal sites (AF3/AF4), and one over lateral parieto-occipital sites (CP5/CP6). Subsequent time-frequency maps of ISPC between these two target regions and the midfrontal region indeed showed strong connectivity in the theta frequency band around the time of the response (Fig. 3a). Importantly, the selection procedure of target electrodes for this seeded-synchrony analysis was data driven, and unbiased because of our condition-averaging approach (see above). Although interregional connectivity could be expected, given the communicative cognitive control signals between, for example, MFC and DLPFC (see the introduction), the exact electrodes were determined through this approach.
In addition to reactive (i.e., response-locked) control mechanisms, we were also interested in cue-related, proactive (i.e., during the WS/PTI) control processes. To this end, we used the same frequency band (3-8 Hz) and electrode pair (FCz/Cz) as for the response-locked power analysis, and looked for pretarget dynamics of theta power during the WS and the PTI. We reasoned that during the first 400 ms of either the short or the long WS, subjects could not infer its predictive value. This time window could thus always be regarded as noninformative. The PTI, on the other hand, could always be regarded as an informative time window: After WS offset (irrespective of whether this was after 400 or 1,400 ms), subjects could infer whether conflict probability was high or low. The interval from 400 to 1,400 ms after WS onset provided an informative cue about conflict probability for the long-WS trials only: For the late-conflict group, a long WS cued high conflict probability, whereas for the early-conflict group, a long WS cued low conflict probability. Importantly, evaluating the effect of conflict probability during this informative time window of the long-WS trials was by definition a between-group comparison. First, we computed the average midfrontal theta-band activity over these three time windows (0-400, 400-1,400, and 1,400-1,700 ms relative to WS onset), separately for high and low conflict probability. Second, as an exploratory analysis, we additionally plotted separate midfrontal time-frequency maps of the short- and long-WS trials, averaged over the two groups and thereby over cued conflict probability, to identify different time-frequency windows of interest (Fig. 4a). On the basis of these plots, we computed the average activity in the alpha (8-14 Hz) and beta (15-25 Hz) frequency bands during the same noninformative, informative, and PTI time windows, and performed the same statistical analysis (see below) as for theta activity.
Finally, we tested whether cued conflict probability could have an effect on spatial attention toward the stimulus location, since this was the conflicting dimension in the Simon task. Spatial attention has been shown to elicit posterior alpha (8-14 Hz) suppression contralateral to the attended hemifield (Sauseng et al., 2005; Thut, Nietzel, Brandt, & Pascual-Leone, 2006). Thus, we analyzed epochs on the basis of presentation side (left/right), conflict probability (high/low), and target congruency (congruent/incongruent). Topographical maps of alpha-band power at several posttarget time windows revealed a clear decrease in alpha activity at electrodes PO7/O1 for right-presented stimuli, and PO8/O2 for left-presented stimuli (Fig. 5a). Time-frequency maps of contralateral minus ipsilateral activity of these channels (i.e., the average of PO7/O1 - PO8/O2 for right stimuli, and PO8/O2 - PO7/O1 for left stimuli, collapsed across conflict probability and congruency) confirmed this alpha suppression effect to be present during a 270- to 550-ms window (Fig. 5b). Lateral alpha power was thus defined as the average power at these posterior channels in this time-frequency window.
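For concreteness, the lateralization index can be computed as below. The sketch is illustrative (window-averaged 8-14 Hz, 270-550 ms power per channel pool and presentation side is assumed to exist under hypothetical names), not the authors' code.

```matlab
% Minimal sketch of lateralized (contralateral minus ipsilateral) alpha.
latRight = pow_PO7O1_right - pow_PO8O2_right;   % contra minus ipsi, right-presented targets
latLeft  = pow_PO8O2_left  - pow_PO7O1_left;    % contra minus ipsi, left-presented targets
lateralAlpha = (latRight + latLeft) / 2;        % collapse across presentation sides
```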
Statistical analyses
The aims of our analyses were (1) to test whether the temporal cue could be used to reduce the conflict effect, and (2) to assess whether conflict-related theta-band activity, reflecting control processes (Cavanagh & Frank, 2014; Cohen, 2014), would already have commenced during the conflict-predicting intervals, rather than being present exclusively around the time of the response. We therefore analyzed both behavior (reaction times [RTs] and accuracy) and brain activity (power and ISPC) using repeated measures analysis of variance (ANOVA) with the within-subjects factors of interest Current Trial Congruency (congruent vs. incongruent) and Conflict Probability (high vs. low). Since the target congruency of a previous trial has been shown to influence behavior on a current trial (e.g., Egner, 2007; Gratton et al., 1992; Kerns et al., 2004), we also included Previous Trial Congruency (congruent vs. incongruent) as a factor, to account for confounding influences of previous congruency. Furthermore, although we were not interested in a possible effect of WS duration per se, we included the between-subjects factor Group as well, so that possible group differences or interactions would indicate stronger temporal-cueing effects for longer or shorter WS durations. In all ANOVAs, when the assumption of sphericity did not hold, the Greenhouse-Geisser correction was applied, although we report the original degrees of freedom for ease of interpretation. Post-hoc dependent-samples t tests were performed to explore any interaction effects. Error and posterror trials were excluded from all analyses, except for the behavioral analysis of accuracy. For brain activity, we computed power and ISPC for each subject, averaged over the time-frequency windows and electrodes specified above.
In the pretarget analysis, we conducted a separate independent-samples t test for activity during the informative window of the long WS, in which the late- and early-conflict groups were directly compared, reflecting a comparison of high and low conflict probability, respectively. In the posterior alpha lateralization analysis, we used a repeated measures ANOVA with the within-subjects factors Laterality (contra vs. ipsi), Conflict Probability (high vs. low), and Current Trial Congruency (congruent vs. incongruent), and the between-subjects factor Group.
To assess the relation between our behavioral and electrophysiological effects, we performed Spearman rank-correlation tests. For the correlations, a single measure was computed for the difference in the conflict effects (incongruent [I] minus congruent [C]) induced by conflict probability (the conflict-cueing effect): (I - C)_high - (I - C)_low. This measure was computed for RTs, power, and ISPC (for the RT analysis, within-subjects standardized RTs were used in order to make the measure more comparable between subjects; for power and ISPC, the decibel- and percent-change-corrected values were used, respectively). On the basis of the previous literature (Cohen & Donner, 2013; Cohen & Ridderinkhof, 2013), we expected to find positive correlations, and thus set the statistical test of significant correlation to be one-tailed.
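A compact way to express this measure and its cross-subject rank correlation is sketched below; it is illustrative (the variable names are hypothetical per-subject condition means; corr requires MATLAB's Statistics Toolbox), not the authors' code.

```matlab
% Minimal sketch of the conflict-cueing measure and its rank correlation.
% rt*/th*: subjects x 1 condition means (z-scored RTs, dB theta power).
cueRT    = (rtIhigh - rtChigh) - (rtIlow - rtClow);   % (I-C)high - (I-C)low for behavior
cueTheta = (thIhigh - thChigh) - (thIlow - thClow);   % the same measure for theta power
[rho, p] = corr(cueRT, cueTheta, 'type','Spearman', 'tail','right');   % one-tailed test
```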
Finally, to evaluate the time course of one of our significant ANOVA effects of EEG power (see below), we performed time-wise permutation testing with cluster-based thresholding, as a correction for multiple comparisons. Specifically, the permutation test transformed the average condition difference power value at each time point from decibels into a z value with respect to a null distribution of surrogate condition difference values, obtained by swapping condition labels for a random half of subjects at each of 1,000 permutations. The resulting z scores were thresholded at p < .05. With an additional 1,000-iteration permutation test, a distribution of cluster sizes of contiguous significant time points under the null hypothesis of no condition difference was computed, and only clusters that exceeded the 95th percentile of this distribution were retained.
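The following MATLAB sketch illustrates one way to implement this procedure. It is not the authors' code: cond1 and cond2 are hypothetical subjects x timepoints matrices of theta power (dB), the threshold is applied two-tailed here, and for brevity a single set of permutations serves both the z-map and the cluster-size null, whereas the paper describes two separate 1,000-iteration runs. bwconncomp and norminv/prctile require the Image Processing and Statistics Toolboxes, respectively.

```matlab
% Minimal sketch of time-wise permutation testing with cluster-size thresholding.
nperm   = 1000;
obsdiff = mean(cond1 - cond2, 1);                        % observed condition difference
permd   = zeros(nperm, size(cond1,2));
for pi = 1:nperm
    flip = sign(rand(size(cond1,1),1) - 0.5);            % swap labels for a random subset (expected half)
    permd(pi,:) = mean(bsxfun(@times, cond1 - cond2, flip), 1);
end
zmap    = (obsdiff - mean(permd,1)) ./ std(permd,[],1);  % z-score against the null
zthresh = norminv(1 - 0.05/2);                           % two-tailed p < .05 threshold
sig     = abs(zmap) > zthresh;

% null distribution of maximum cluster sizes
maxclust = zeros(nperm,1);
for pi = 1:nperm
    permz = (permd(pi,:) - mean(permd,1)) ./ std(permd,[],1);
    cc = bwconncomp(abs(permz) > zthresh);
    if cc.NumObjects > 0
        maxclust(pi) = max(cellfun(@numel, cc.PixelIdxList));
    end
end

% keep only observed clusters exceeding the 95th percentile of the null
cc = bwconncomp(sig);
for ci = 1:cc.NumObjects
    if numel(cc.PixelIdxList{ci}) <= prctile(maxclust, 95)
        sig(cc.PixelIdxList{ci}) = false;                % prune subthreshold clusters
    end
end
```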
Single-trial regression analysis
The above-described analyses were all performed on trial-averaged data. Additionally, it is revealing to take into account within-subjects intertrial variability (Cohen & Cavanagh, 2011; Pernet, Sajda, & Rousselet, 2011): In this way, one can more readily infer that proactive control triggered by conflict-predicting time intervals, and reactive control triggered by actual conflict, are dynamic, single-trial adaptive processes. To this end, we assessed whether the online neural dynamics directly reflected (i) anticipated conflict during the conflict-predicting intervals, (ii) experienced conflict at the time of the target, and (iii) the validity of the conflict expectation at the time of the target.
First, we computed, for each subject, single-trial midfrontal theta power averaged over three time windows: the PTI (-300 ms to target onset) as the conflict-predicting interval, the response-related interval (-200 to 100 ms, response-locked), and the noninformative time window (0-400 ms post-WS onset) as a control interval. Second, we determined the trial-type labels: Each trial had a WS that predicted either low or high conflict probability, a target that was either congruent or incongruent, and the predictive cue could either match (e.g., high conflict probability followed by an incongruent trial) or not match (e.g., high conflict probability followed by a congruent trial) the eventual level of conflict. Next, we tested whether single-trial pretarget and response-related theta power was a reliable predictor of these trial-type labels. For each subject, we fitted three logistic regression models on the average single-trial power values of the three different time windows. Each resulting β term consisted (in addition to the intercept) of three regression weights that corresponded to the degree to which theta power predicted each trial-type label. For each subject, these regression weights were binarized as 1 if the regression weight indicated that increased theta power over trials predicted a high-conflict-probability cue, an incongruent trial, or a match between cue and congruency, and as 0 if increased theta power predicted a low-conflict-probability cue, a congruent trial, or a nonmatch between cue and congruency. The regression weights were tested at the group level against .5 (reflecting chance-level predictive value of the binarized regression weights) using one-sample Wilcoxon signed-rank tests, which were considered significant if they exceeded a Bonferroni-corrected threshold of p < .0167 (i.e., .05 divided by the three time windows tested). See Cohen and Donner (2013) for a more detailed description of this approach. The rationale behind the binary recoding of regression weights was that because the condition labels are binary, a continuous beta weight of theta power predicting such a binary variable would be less intuitive. We confirmed that repeating the analyses with continuous regression weights led to the same pattern of results.
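One reading of this procedure, sketched below for a single subject and time window, fits a separate logistic model per trial-type label with theta power as the predictor, and then binarizes the sign of the slope. This is illustrative (theta is a hypothetical trials x 1 vector; cueHigh, incong, and cueMatch are 0/1 label vectors; glmfit and signrank require the Statistics Toolbox), not the authors' code.

```matlab
% Minimal sketch of the single-trial logistic regression with binarized weights.
labels = {cueHigh, incong, cueMatch};
bw = zeros(1, numel(labels));                 % binarized regression weights
for li = 1:numel(labels)
    b = glmfit(zscore(theta), labels{li}, 'binomial', 'link', 'logit');
    bw(li) = double(b(2) > 0);                % 1 if more theta predicts the label
end
% Group level: stack bw across subjects (subjects x 3) and test each
% column against chance, e.g.: p = signrank(allbw(:,1) - 0.5);
```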
Follow-up behavioral experiments
To assess whether the effects of the temporal cues extended to nontemporal cues, we conducted three additional behavioral (no EEG) experiments. The tasks and procedures were identical to those of the EEG experiment, apart from the type of cue used. All of the task parameters and statistical analyses of behavioral performance were in line with those from the EEG experiment, except where noted.
Follow-up Experiment 1: nontemporal implicit cueing In this experiment, the cue consisted of a horizontal black bar (height 0.25 dva) presented at fixation. The cue duration was always 900 ms (i.e., the average of the 400- and 1,400-ms WS durations in the EEG experiment), and was followed by the PTI and target. Crucially, the width of the horizontal bar cue was either 0.5 or 1.26 dva. In the short-conflict condition, a short (vs. long) horizontal bar predicted the upcoming trial to be incongruent (vs. congruent) with 80 % validity. In the long-conflict condition, the cues had the opposite meanings. Ten subjects were randomly assigned to the short-conflict condition, and the other ten were assigned to the long-conflict condition. The rationale behind this experiment was that the width (spatial length) of the bar should be analogous to the duration (temporal length) of the WS from the EEG experiment in serving as a cue. However, here the prediction of the cue could be inferred upon its presentation (because the difference in width could be instantaneously perceived), whereas in the EEG experiment this could only be inferred after a certain time had passed (because the WS was perceptually identical in all trials, except for its duration).
Follow-up Experiment 2: probabilistic semantic cueing In this experiment, the task was identical to the task of the first follow-up experiment, except that we used the semantically meaningful words "HARD" and "EASY" as the conflict-predicting cues (with the same 80 % validity), instead of the horizontal bars. These cues allowed for a complete within-subjects design (i.e., no Group factor was needed to counterbalance the cue-probability mapping across subjects). This experiment started with two practice blocks: a noncue version of the Simon task (with an intertrial interval of 1,000 ms) of 25 trials, and a 100 %-validity practice block of 25 trials using the word cues. The experiment consisted of six blocks of 100 trials each, separated by self-paced breaks.
Follow-up Experiment 3: informative versus uninformative cueing In this experiment, the task was identical to the task of the second follow-up experiment, except for the cue validity and the inclusion of a neutral cue. Here, the word "HARD" was always followed by an incongruent trial, the word "EASY" was always followed by a congruent trial, and the word "NEUTRAL" was followed equally often (50 % each) by incongruent and congruent trials. Because the informative cues were 100 % valid in this experiment, this design allowed for a comparison of the conflict effects following informative versus neutral, uninformative cues. Thus, in this experiment the factor Cue Type had the two levels informative versus uninformative, instead of high versus low conflict probability as in the other experiments.
Results
Behavioral results from the EEG experiment

On average, the subjects responded correctly on 92.76 % (SD = 3.04) of the trials, with an average response speed of 467.40 ms (SD = 57.12). On correct trials (also excluding posterror trials), RTs increased significantly on incongruent as compared to congruent trials [F(1, 28) = 32.56, p < .001], reflecting the classic conflict effect induced by the irrelevant spatial dimension of the stimulus. In addition, we found a significant interaction between current and previous trial congruency [F(1, 28) = 26.92, p < .001]. As can be seen in Fig. 1c (upper panel), the conflict effect on RTs decreased when the preceding trial was incongruent [t(29) = 1.32, p = .12], relative to when it was congruent [t(29) = 7.18, p < .001]. This replicates earlier findings of the congruency sequence effect (Egner, Ely, & Grinband, 2010; Gratton et al., 1992). Both the conflict effect and the conflict sequence effect were present in accuracy as well: Subjects performed worse on incongruent than on congruent trials [F(1, 28) = 8.74, p = .006], and this decrease in performance was attenuated when the previous trial was incongruent [F(1, 28) = 11.88, p = .002].
Importantly, our manipulation of pretarget conflict cueing based on temporal information revealed an unexpected finding (Fig. 1c, lower panel). When the WS duration predicted high conflict probability, the conflict effect on RTs increased, as compared to when the WS duration predicted low conflict probability [F(1, 28) = 32.56, p < .001]. For accuracy, we found a similar, though nonsignificant, effect [F(1, 28) = 3.78, p = .062]. Moreover, these effects did not depend on whether the subjects performed the early- or the late-conflict condition, as indicated by the absence of an interaction with the between-subjects factor Group [RT: F(1, 28) = 3.23, p = .083; accuracy: F < 1]. In other words, when conflict could be anticipated on the basis of the duration of a fixation cross (irrespective of whether this predicting interval was short or long), conflict resolution was hampered.
We further explored this negative effect of cueing on conflict by examining RT distributions through delta plots, in which the conflict effect (the difference in RTs between incongruent and congruent trials) is plotted as a function of average RT. This approach has been shown to be sensitive to variations and dynamics in conflict effects that are otherwise lost in regular trial-average scores (Ridderinkhof, 2002; van den Wildenberg et al., 2010). Figure 1d shows that throughout the RT distribution, the conflict effect was stronger for high- than for low-conflict-probability cueing [Congruency × Conflict Probability interaction, F(1, 29) = 24.0, p < .001]. Irrespective of conflict probability, on the other hand, the conflict effect became reduced with longer RTs [Congruency × RT Bin interaction: F(3, 27) = 75.95, p < .001], as evidenced by a negative slope of the delta plot [F(1, 29) = 142.4, p < .001], which is consistent with previous reports of the Simon task and is interpreted as the selective suppression of location-based response capture (Burle, van den Wildenberg, & Ridderinkhof, 2005; Ridderinkhof, 2002). Since congruency, RT bin, and conflict probability did not interact [F(3, 27) = 1.99, p = .16], selective suppression was not influenced by the temporal cueing of conflict.
The effect of conflict probability on current trial congruency did not further interact with previous trial congruency, which suggests that the mechanism driving the conflict-cueing effect was different from the mechanism mediating conflict adaptation (Alpay et al., 2009; Egner, 2007). This is an important finding, because conflict adaptation, as reflected by the CSE, can be considered a form of anticipatory control as well (see the introduction). Furthermore, we observed no main effect of conflict probability, and in general the two groups did not differ in RTs (all Fs < 1). The early-conflict group did perform better than the late-conflict group in terms of accuracy [F(1, 28) = 4.62, p = .04]. An additional analysis of the possible influence of the WS duration of the preceding trial on the RT of the current trial showed no main effects of current or previous trial WS duration (all Fs < 1), nor an interaction between these factors [F(1, 28) = 1.96, p = .17], signifying that our manipulation of different WS durations, which resulted in different intervals between successive targets, did not affect general, nonspecific preparation to respond to these targets (Los & Agter, 2005).
Together, these behavioral dynamics point to an effect of conflict cueing that is opposite from what has been reported previously (Wendt & Kiesel, 2011): When the probability of conflict could be inferred on the basis of temporal information, behavioral responses to conflict further deteriorated.
Response-related EEG time-frequency power
On the basis of a condition-orthogonal contrast of response-related oscillatory power against baseline, we chose a two-channel (FCz and Cz) midfrontal pair of electrodes for our main analyses (see the Materials and method section). As is shown in Fig. 2a and b, this midfrontal pair exhibited robust theta-band power in the window from -200 to 100 ms surrounding the buttonpress. Average activity in this time-frequency window concurred with our behavioral findings described above (Fig. 2c). First, we found a conflict-related (i.e., I - C) increase in midfrontal theta power [F(1, 28) = 26.39, p < .001], which was stronger when the previous trial was congruent than when it was incongruent [F(1, 28) = 20.75, p < .001], corroborating previous findings (e.g., Cohen & Donner, 2013; Pastötter, Dreisbach, & Bäuml, 2013). Second, the midfrontal conflict-related theta power was modulated by conflict probability [F(1, 28) = 10.82, p = .003], independent of previous trial congruency (F < 1): Conflict-related theta power was stronger after high than after low conflict probability, mimicking the (unexpected) behavioral cueing effect. Indeed, the cueing effect on theta power correlated across subjects with the cueing effect on behavior (Fig. 6a): Subjects who exhibited more conflict-related behavioral slowing after a high- than after a low-conflict-probability cue also showed a stronger conflict-related increase in midfrontal theta power after a high- than after a low-conflict-probability cue (r = .31, p = .049 [r = .39, p = .020 when excluding one marked outlier]). Interestingly, this cross-subject correlation seemed to be driven mostly by the effect of conflict probability on congruent trials (r = .33, p = .037 [r = .38, p = .024, when excluding the earlier outlier]), and less so on incongruent trials (r = .26, p = .084 [r = .25, p = .097, when excluding the outlier]).
No further interaction effects with group occurred, nor any main effects of conflict probability or (previous trial) WS duration (all ps > .1). Thus, in line with the behavioral findings, temporal cueing of conflict produced a specific increase in local conflict-related theta-power dynamics, again independent of the actual duration of the cue.
Because we restricted this analysis to an a-priori-chosen time window, we next explored the time course of this effect, which is illustrated in the line plot in Fig. 2d. The effect of current trial congruency on midfrontal theta clearly dropped to zero before and after the response. However, over a longer time window of -500 to 50 ms around the response, conflict-related theta power was significantly elevated when high rather than low conflict probability was cued (as revealed by time-wise permutation testing with cluster-size thresholding). Thus, the modulation of conflict-related midfrontal theta power by conflict cueing was already present around target onset, and may even have extended to a pretarget time window (average RTs were below 500 ms). A pretarget conflict-related effect may seem odd, given that actual conflict is not yet known, but careful inspection of Fig. 2d shows that around this time, the two lines that are separated on the basis of cued conflict probability together average out to zero. Thus, this preresponse (and possibly pretarget) effect is most likely driven by the cued likelihood of upcoming conflict; this possibility is examined directly in the pretarget analyses reported below.
Response-related intersite phase clustering
In addition, we were interested in whether conflict cueing modulated interregional connectivity as well. To test this, we computed intersite phase clustering (ISPC; see the Materials and method section) between the midfrontal electrode pair used for the power analysis (here used as the "seed") and all other electrodes, which revealed theta-band synchronization between this region and a bilateral prefrontal region (AF3/AF4), as well as a bilateral centro-parietal region (CP5/CP6; see Fig. 3a). Importantly, the selection of these electrodes was data driven and unbiased, because this selection was orthogonal to potential condition differences. Although midfrontal connectivity with lateral prefrontal electrodes could be expected on the basis of earlier findings (e.g., Cohen & Cavanagh, 2011), the finding of midfrontal-centro-parietal connectivity was not hypothesized a priori. Using the same time-frequency windows as for the power analysis, a similar repeated measures ANOVA revealed a marginal effect of current trial congruency for the lateral prefrontal region only [F(1, 28) = 4.11, p = .052; centro-parietal: F(1, 28) = 1.36, p = .25], with incongruent trials eliciting stronger ISPC than congruent trials. Previous and current congruency did not further interact [lateral prefrontal: F(1, 28) = 2.04, p = .17; centro-parietal, approaching significance: F(1, 28) = 3.79, p = .062].
Similar to the power results, the cueing effect on centro-parietal (FCz/Cz-CP5/CP6) theta-band connectivity correlated with the cueing effect on behavior (Fig. 6b): Subjects who showed stronger conflict-related RT slowing after a high- than after a low-conflict-probability cue also showed a stronger conflict-related increase in theta ISPC after a high- than after a low-conflict-probability cue (r = .33, p = .036). Again, as with power, this correlation was stronger when considering only congruent trials (r = .48, p < .004), and was absent for incongruent trials (r = -.04, p = .59). The theta ISPC between the midfrontal and lateral prefrontal regions (AF3/AF4) showed no significant correlations (all ps > .1).
Cue-related pretarget EEG time-frequency power
The results above provide evidence that the conflict-predicting temporal cue affected both behavioral performance and the associated brain dynamics, though in the opposite direction from the one expected. To examine whether this could be explained by changes in pretarget cue-related activity, we plotted time-frequency power locked to the WS onset, for both short- and long-WS trials, collapsed over groups (i.e., averaged over conflict probability). As can be seen in Fig. 4a, this revealed modulations in the theta, alpha (8-14 Hz), and beta (15-25 Hz) bands. We reasoned that these pretarget dynamics could carry a neural signature of conflict anticipation. To test this, we first restricted our analysis to the theta band, and computed midfrontal theta power in three time windows of interest: the first 400 ms over both short- and long-WS trials (the "noninformative" time window), 400-1,400 ms for long-WS trials only (the "informative" time window), and the PTI for both short and long WSs (see the Materials and method section for the rationale behind these time window labels). As expected, during the noninformative window, midfrontal theta power did not differ between the high- and low-conflict-probability conditions [F(1, 28) = 0.96, p = .33], nor did it interact with group [F(1, 28) = 0.12, p = .74]. However, during the informative window, subjects for whom the long WS predicted high conflict probability showed stronger midfrontal theta activity than did subjects for whom the long WS predicted low conflict probability [t(29) = 2.38, p = .024]. During the subsequent PTI, this difference between high and low conflict probability persisted, for both short and long WSs [i.e., a main effect of conflict probability: F(1, 28) = 4.91, p = .035, without an interaction with group, F < 0.1]. Thus, pretarget midfrontal oscillatory dynamics showed a preparatory effect of conflict cueing with a hazard-function characteristic: When the conditional probability of conflict increased (vs. decreased) over time, given that the WS had not yet ended, midfrontal theta power concomitantly increased (vs. decreased).
Second, we explored alpha- and beta-band power, because the condition-averaged time-frequency maps of the long and short WSs also showed activity in these bands (see Fig. 4a). During the same time windows used for the theta-band analysis, alpha power was not modulated by any of our factors, nor by their interactions (all ps > .1). However, beta power showed an interaction between conflict probability and group during the PTI [F(1, 28) = 4.39, p = .048], for which a post-hoc independent-samples t test revealed that after a low-conflict-probability cue, beta suppression was stronger for the early-conflict group than for the late-conflict group [t(28) = 2.54, p = .017]. That is, beta suppression was stronger after a long WS than after a short WS, because the long WS was a low-conflict-probability cue for the early-conflict group, and the short WS was a low-conflict-probability cue for the late-conflict group. This result can be explained by an effect of duration on beta suppression, which typically develops in strength over time when preparing for a motor response (de Jong, Gladwin, & 't Hart, 2006). Interestingly, this effect was absent when both a short and a long WS indicated a high probability of conflict [t(28) = 1.51, p = .14].
In sum, these results provide evidence for a neural signature of conflict anticipation (increase in pretarget midfrontal theta power), triggered by temporal cues that predict conflict.
Within-subjects single-trial regression
Conflict anticipation triggered by conflict-predicting time intervals, and reactive control triggered by actual conflict, should manifest at the single-trial level. We hypothesized that the degree of midfrontal theta power would fluctuate over the course of the trial, depending on whether (i) the temporal cue predicted conflict, (ii) the trial subsequently contained a conflict target, and (iii) the prediction of the cue was valid. To this end, we performed a logistic regression analysis (see the Materials and method section), in which we used single-trial midfrontal theta power averaged over three time windows (the noninformative window, the PTI [see Fig. 4b], and the response-related window used for the general power analysis [see Fig. 2b]) to predict these condition labels. The results are shown in Fig. 7, where the y-axis shows the percentage of subjects who exhibited a positive relationship between theta and the condition label. A Wilcoxon signed-rank test corroborated the trial-averaged group-level results. First, theta activity during the noninformative window remained at the chance prediction level (all ps > .4). Second, during the PTI, theta power predicted the type of cue (p = .011), with stronger theta associated with high-conflict-probability cues. Finally, during the response-related window, theta power predicted the actual congruency (p < .001), with stronger theta power associated with incongruent trials. Interestingly, stronger response-related theta was also predictive of the cue-congruency match (p = .001), reflecting at the single-trial level the group-level interaction between conflict probability and current trial congruency described above. For example, when a high-conflict-probability cue was followed by an incongruent trial, such a trial was likely to result in stronger theta activity; and similarly, a low-conflict-probability cue followed by a congruent trial was also likely to result in stronger theta.

Fig. 7 Single-trial midfrontal theta regression results. Plotted are subject-average binary beta weights (0 = negative prediction, 1 = positive prediction) as a function of time window, shown separately for the trial-type predictors conflict probability, congruency, and their interaction. Regression models including these three predictors were fitted per time window, on the single-trial midfrontal theta power averaged over the respective time window. The gray dotted horizontal line denotes chance prediction by the model. Colored asterisks denote significant regressions between single-trial theta and trial type (Wilcoxon signed-rank tests, p < .017).
In sum, this analysis revealed evidence of single-trial conflict anticipation based on temporal information, expressed in midfrontal theta-band activity. Moreover, the single-trial activity corroborated the seemingly contradictory cueing effect around the response, of increased conflict-related midfrontal theta power when this conflict could be anticipated.
Target-related lateralized alpha
We next tested whether the cueing of conflict could have affected low-level processing of stimulus features as well. In the Simon task, the feature of particular interest is the target location, as this is the irrelevant dimension that leads to response conflict. We reasoned that posterior alpha suppression contralateral to the side of stimulus presentation would be a strong correlate of lateralized attentional processing (Sauseng et al., 2005; Thut et al., 2006). The spatial location of the stimulus indeed elicited a strong contralateral alpha suppression over parieto-occipital regions [F(1, 28) = 39.70, p < .001], around 300 ms after target onset. This means that, for example, when a stimulus was presented on the left side of the screen, alpha power suppression was relatively stronger over right than over left parieto-occipital sites (see Fig. 5a-c). This alpha lateralization effect was modulated by current trial congruency [F(1, 28) = 15.43, p = .001], with congruent trials eliciting stronger alpha lateralization than incongruent trials [t(29) = 3.67, p = .001; Fig. 5d]. However, conflict probability did not interact with stimulus location (F < 1), nor was there a three-way interaction between conflict probability, current trial congruency, and stimulus location (F < 1). Importantly, these null findings suggest that cueing conflict did not affect bottom-up processing of sensory features, as it did not modulate a neural index of spatial attention (Sauseng et al., 2005; Thut et al., 2006). On the other hand, these results do show how spatial attention was reduced (as indicated by relatively less lateralized alpha power suppression) after targets whose spatial location was incongruent with the required response. Given the absence of an effect of cueing, however, this spatial attention effect should be regarded as reactive rather than proactive.
Behavioral results of follow-up experiments
We set out to further pin down this surprising finding by means of three follow-up experiments, in which we varied the symbolic and probabilistic nature of the cues (see Materials and method above) in the same Simon task.
Follow-Up Experiment 1: nontemporal cueing

In short, this task was identical to the temporal-cueing task, except that the cues consisted of horizontal bars of equal duration, with the width forming a cue for conflict probability. As expected, this task resulted in slower responses on incongruent than on congruent trials [F(1, 18) = 22.79, p < .001], and this conflict effect was reduced when the previous trial was incongruent as compared to when it was congruent [F(1, 18) = 42.98, p < .001].
The same pattern was found for accuracy (both ps < .05). Importantly, this task elicited the same contradictory effect of conflict cueing [F(1, 18) = 4.64, p = .045; Fig. 8a]. Similar to the manipulation of time intervals as cues, when the nontemporal symbolic information (i.e., the width of a bar) cued high conflict probability, the conflict effect in RTs increased, as compared to when the cued conflict probability was low [t(19) = 2.09, p = .050]. Again, this effect did not interact with group [F(1, 18) = 2.22, p = .15], nor with previous trial congruency (F < 1). For accuracy, we did not observe an interaction between the current trial congruency and conflict probability in this experiment [F(1, 18) = 1.11, p = .31].

Follow-Up Experiment 2: probabilistic semantic cueing

To investigate whether the symbolic nature of the cue is an important variable in the results above, we repeated the same experiment, except that the cues comprised words that were semantically informative about conflict (e.g., "HARD"), while still predicting conflict with 80 % certainty. Although we found the same general conflict and sequence effects (all ps < .05), any effect of cueing was absent. That is, no main effect of conflict probability emerged [RT: F(1, 14) = 1.36, p = .26; accuracy: F < 1], nor any interaction with current and previous trial congruency (all Fs < 1; Fig. 8b).
Follow-Up Experiment 3: informative versus uninformative cueing

Introducing an uninformative cue (the word "NEUTRAL") that predicted conflict with 50 % probability, together with the "HARD" and "EASY" words, which now predicted the upcoming congruency as informative cues, with 100 % validity, showed a pattern of results that replicated earlier findings (Strack et al., 2013). First, subjects responded faster [F(1, 14) = 22.88, p < .001] and more accurately [F(1, 14) = 9.60, p = .008] when the cues were informative than when they were uninformative (Fig. 8c). Second, the conflict effect for RTs was higher following informative than following uninformative cues [t(14) = 3.58, p = .003], which was reflected by an interaction between the factors Cue Type and Current Trial Congruency [F(1, 14) = 12.80, p = .003]. In other words, people benefited from the informativeness, or validity, of the cue in preparing for incongruent trials [t(14) = 3.91, p = .002], but this effect was even stronger when preparing for congruent trials [t(14) = 5.12, p < .001]. These interactions were not present for accuracy (all Fs < 1).
Discussion
In this study, we hypothesized that if temporal information derived from between-trial intervals correlated with the probability of future instances of conflict (Wendt & Kiesel, 2011), subjects could use this contingency to prepare for upcoming conflict through anticipatory proactive control (Correa et al., 2009). We predicted that the conflict effect (increased RTs and decreased accuracy for incongruent vs. congruent trials) would be reduced when conflict could be expected on the basis of the duration of a "warning signal" (an intertrial fixation cross). Surprisingly, the present data point to the exact opposite conclusion. In fact, the conflict effect was present only when conflict probability was cued to be high (80 % incongruent trials), and disappeared when conflict probability was cued to be low (80 % congruent trials).
Although this result was contrary to our predictions and in sharp contrast with previous findings of conflict-reducing effects of temporal cueing (Wendt & Kiesel, 2011), this pattern of behavioral results was internally consistent with several distinct manifestations of EEG dynamics, and generalized to other task settings. First, we obtained strong evidence of increased conflict-related midfrontal theta-band (3-8 Hz) power, and stronger conflict-related interregional theta synchrony, specific to situations of high conflict likelihood. These neurophysiological underpinnings of cognitive control (Cohen, 2014) correlated across subjects with the behavioral cueing effect.
Second, in addition to temporal cues, nontemporal, symbolic cues (here, horizontal bars of different width) increased the conflict effects as well. Only when the cues were semantically meaningful words that were 100 % valid (e.g., the word "hard" always appeared before an incongruent trial) did we find a behavioral benefit with respect to a neutral, uninformative (50 % valid) cue. However, this effect was most pronounced for congruent trials.
Frontal theta dynamics reflect both anticipatory proactive and posttarget reactive control
Our general EEG results of increased frontal theta power as well as interregional phase synchrony after conflict are in accordance with a growing body of findings that have tied frontal theta-band activity to various cognitive control processes, including conflict adaptation (Cohen & Cavanagh, 2011; Pastötter et al., 2013), error processing (Luu, Tucker, & Makeig, 2004; van Driel et al., 2012), task switching (Cunillera et al., 2012), and reinforcement learning (Cavanagh et al., 2010; van de Vijver, Ridderinkhof, & Cohen, 2011). An important contribution of this article to the literature is that midfrontal theta increases can already be observed before a conflict target, elicited by a conflict-predicting cue. Moreover, by using time intervals as such cues, we observed that these anticipatory dynamics waxed and waned around temporal windows during which the cue became informative. These intertrial events were perceptually identical except for duration, showing that midfrontal theta activity can, in addition to reflecting posttarget control processes, be linked to an endogenously generated conflict anticipation process that is based on an internal representation of time. The surprising finding is that this conflict anticipation did not produce adaptive behavior; it is thus questionable whether these anticipatory processes could be regarded as proactive control. However, a recent study has shown that under certain circumstances, cue-induced cognitive control can indeed impair rather than facilitate behavior (Bocanegra & Hommel, 2014).
Although we observed the "classic" effect of response-locked conflict-related increases in midfrontal theta power after target onset (Cohen, Ridderinkhof, Haupt, Elger, & Fell, 2008; Nigbur, Ivanova, & Stürmer, 2011), this effect emerged only when conflict probability was cued to be high. This cue-conflict interaction was present at the single-trial level and paralleled the behavioral findings, and both effects correlated across subjects. Moreover, the theta effect was present well before the response, and thus may be a result of the pretarget cue-related increase in midfrontal theta. Although this interpretation is post hoc, it may provide an explanation of our unexpected findings: It is possible that the cue-related theta effect is, in terms of the underlying mechanism, qualitatively different from conflict-related theta (Cohen, 2014). This view is in accordance with several studies demonstrating that anticipatory activity in the medial frontal cortex can be independent from, and can dampen, subsequent conflict-related medial frontal activity (Aarts, Roelofs, & van Turennout, 2008; Brown, 2009; Ide, Shenoy, Yu, & Li, 2013; Luks, Simpson, Dale, & Hough, 2007; Oliveira, Hickey, & McDonald, 2014). In our task, these processes may have interfered around the time of the response (i.e., during action selection), resulting in less efficiently applied reactive control. Varying the interval between cue and target, thereby teasing apart these processes in time, may be a way to further investigate this hypothesis.
In addition to conflict-related local power, we reported interregional theta phase synchrony between midfrontal and lateral frontal (Cohen & Cavanagh, 2011; Hanslmayr et al., 2008), as well as posterior (Anguera et al., 2013; Cohen & van Gaal, 2013), sites. This large-scale functional connectivity was stronger after incongruent than after congruent targets, exclusively following a cue that was associated with high conflict probability; after a low-conflict-probability cue, interregional theta-band connectivity reversed, becoming stronger after congruent than after incongruent targets. This seems inconsistent with a recently proposed interpretation of frontal theta phase synchrony reflecting the top-down implementation of control in response to general signals of "surprise" (Cavanagh & Frank, 2014). That is, the more obvious hypothesis, that unexpected (i.e., surprising) events should require relatively more control, would predict the exact opposite. Nonetheless, our connectivity effects were remarkably strong and were consistent with the local theta power effects, in terms of both the group-level effects and the cross-subject correlations.
From an anatomical perspective, mid-lateral frontal theta synchrony has been proposed to reflect MFC-DLPFC functional connectivity, which increases after conflict has been encountered (Cohen & Ridderinkhof, 2013). The current axiom in the cognitive control literature is that the MFC monitors for possible instances of conflict, and upon conflict detection, communicates the need for increased control to the DLPFC, which further implements control through top-down signals to motor and task-relevant sensory areas (Botvinick et al., 2004; Botvinick, Nystrom, Fissell, Carter, & Cohen, 1999; Kerns et al., 2004; MacDonald et al., 2000; Ridderinkhof et al., 2010; Ridderinkhof, Ullsperger, et al., 2004; Ridderinkhof, van den Wildenberg, Segalowitz, & Carter, 2004). However, direct regulatory top-down signals from MFC to guide behavior in situations of conflict have also been observed (Cohen et al., 2009; Danielmeier et al., 2011; Kennerley, Walton, Behrens, Buckley, & Rushworth, 2006; Ridderinkhof, Ullsperger, et al., 2004), suggesting a more integrative function of the MFC (Shenhav, Botvinick, & Cohen, 2013). Our findings of theta synchrony between midfrontal and posterior parietal regions are in accordance with this view. This debate notwithstanding, implemented control signals, be they directly from MFC activity, or in concert with DLPFC, should increase after uncertain, high-conflict situations; here, we found that these signals became stronger when a prediction (incongruent or congruent) was met. We now turn to possible alternative explanations for this unexpected finding.
Temporal cues and attention to time
One possible, albeit speculative, explanation of our findings could be that time intervals are special in serving as cues triggering specific anticipatory activity (Sperduti, Tallon-Baudry, Hugueville, & Pouthas, 2011; but see the next paragraph). Given that the intertrial fixation stimuli were valid predictors only with respect to their duration, subjects may have increased their attention to the passage of time especially when this duration signaled conflict. Research on "temporal orienting" has shown that attention to time can boost bottom-up processing of perceptual information (Cravo, Rohenkohl, Wyart, & Nobre, 2011; Jepma, Wagenmakers, & Nieuwenhuis, 2012; Nobre et al., 2007; Rohenkohl, Cravo, Wyart, & Nobre, 2012), which can interfere with cognitively controlled action selection (Correa, Cappucci, Nobre, & Lupiáñez, 2010), presumably because the irrelevant conflicting information is processed to a stronger degree. Thus, in our study, the temporal cues that predicted high conflict likelihood may have resulted in more instead of less conflict, by enhancing sensory processing of the irrelevant (conflicting) spatial location of the stimulus, through increased attention to time. However, this explanation would have predicted that neural activity related to spatial processing would increase after cues that signaled high conflict probability. In contrast, we found that contralateral posterior alpha suppression decreased after incongruent targets; these alpha dynamics were not affected by cueing. This finding is consistent with lateralized alpha power reflecting an index of top-down control over spatial attention (Klimesch, 2012; Sauseng et al., 2005; Thut et al., 2006), and should here be interpreted as reactive, because the top-down signal was observed after conflict was encountered.
Moreover, our behavioral follow-up experiment in which we used nontemporal, symbolic instead of temporal cues also argues against the explanation of time intervals being special.
That is, when the pretarget stimuli comprised horizontal bars that differed in spatial length instead of duration (i.e., "temporal length"), subjects were again not able to use these cues to improve behavior, as was evidenced by a similar increase in conflict effects after high-conflict-probability cueing. Nonetheless, the use of time as a source of information for abstract inferences (such as predicting future conflict likelihood) has received little emphasis (Appelbaum et al., 2012; Wendt & Kiesel, 2011), and thus the underlying processes of encoding this temporal information as a usable contextual cue remain largely unknown.
Alternative explanations
Our behavioral results initially seem to be in contrast with those from studies that have shown beneficial effects of cueing in conflict tasks (Crump et al., 2006; Fischer, Gottschalk, & Dreisbach, 2014; Ghinescu, Schachtman, Stadler, Fabiani, & Gratton, 2010; Gratton et al., 1992; King, Korb, & Egner, 2012). However, a careful examination of this literature provides some leverage for relating our study to previous findings.
First, several studies have reported the effect of cueing to be most prominent on congruent trials (Aarts et al., 2008; Alpay et al., 2009; Klein & Ivanoff, 2011; Stoffels, 1996; Strack et al., 2013; Wühr & Kunde, 2008). This is consistent with our findings. For example, we found that the behavioral effect of cueing correlated with both midfrontal theta power and midfrontal-parieto-occipital theta phase synchrony, only for congruent and not for incongruent trials. Second, some studies have failed to show clear benefits of cueing in general (Goldfarb & Henik, 2013; Luks et al., 2007), or on incongruent trials specifically (Strack et al., 2013), and, depending on the specific task settings, have even reported opposite effects (Alpay et al., 2009; Wühr & Kunde, 2008). Third, the types of cues and conflict tasks have differed widely across studies, while often lacking a detailed rationale as to why these settings were chosen. For example, although some have argued that a sufficiently long time interval is required between cue and target in order to generate an expectation about upcoming conflict (Correa et al., 2009; Monsell, 2003), others have shown that varying the cue-target interval does not modulate the cueing effect (Wühr & Kunde, 2008). Indeed, the observation that features of the target itself can trigger conflict adaptation "on the fly" (King et al., 2012; Lehle & Hübner, 2008) argues for a fairly rapid and flexible form of proactive control.
Over and above the timing of the cue and target, the nature of the conflict paradigm itself may be crucial in determining whether cues can be used for proactive control. Superficially, conflict tasks such as the Stroop, flanker, and Simon tasks appear very similar, since they all produce qualitatively similar behavioral conflict effects; however, the exact neurocognitive processes that produce these effects may be markedly different (Hommel, 2011). In the Simon task, the task-irrelevant dimension of the target location needs to be processed before the task-relevant dimension of color can be processed. That is, the target needs to be located before the color can be determined. In contrast, in the flanker task, the task-relevant information, which is the central target, can be processed directly, without processing the task-irrelevant peripheral distractors. It might be that conflict anticipation only translates to improved proactive control in conflict paradigms in which the location of the conflicting, task-irrelevant dimension is known a priori. Indeed, pretarget cueing of conflict has been shown to result in behavioral improvement in the flanker task (Correa et al., 2009).
In the Stroop task, the task-irrelevant and task-relevant stimulus dimensions share the same location (e.g., the word RED shown in green). A previous study (Appelbaum et al., 2012), however, showed that indeed, when these dimensions are untangled in time (e.g., the word RED first appears in white, and only subsequently changes color) and when these moments in time were predictable, performance improved. Thus, the temporal and spatial predictability of the conflicting stimulus dimension seems important. Applying this argument to our findings, it could be that expecting conflict on the basis of a conflict-predicting cue results in further increased instead of decreased conflict, because one is not able to a priori suppress spatial attention when the conflicting stimulus location is unknown. In other words, conflict is increased by both the cue and the target, in an additive manner. However, it is harder to envisage how this would explain reduced conflict after a low-conflict-probability cue.
Another alternative explanation for increased behavioral conflict effects after cues signaling high conflict probability is that these cues may trigger a generalized cautious response mode of proactive slowing, possibly through an increased decision threshold. However, several features of the data argue against this account. First, general proactive slowing after a warning cue that signals high conflict likelihood would still predict a decreased conflict effect. That is, responses to low-conflict (congruent) trials are usually fast, which would predict that responses to these trials would be most affected by a cautious response mode by becoming slower. In contrast to this prediction, we found that responses to congruent trials after cues that predicted high conflict probability became faster. Second, high-conflict trials by themselves are already characterized by slower responses; thus, a proactive cautious response mode should affect these trials less; in contrast, we found that responses to incongruent trials became even slower when these were cued, as compared to when congruent trials were cued. Third, one could argue that, by means of a speed-accuracy trade-off (see Egner, 2007), a cautious, conservative response mode would be expressed both by slower RTs and higher accuracy. In other words, the effects on RTs should be similar to the effects on accuracy. In contrast to this prediction, we observed an inverse pattern in accuracy as compared to RTs, in relation to cueing.
From symbolic, probabilistic cueing to semantic, deterministic cueing

Our follow-up experiments suggest three additional variables in conflict cueing that may prove important in disambiguating our findings. First, the semantic level of the cue may influence whether and how proactive control can develop (Fischer et al., 2014; Umbach, Schwager, Frensch, & Gaschler, 2012). In our study, time intervals and horizontal bars had, in contrast to the words "HARD" and "EASY," no a priori relationship with conflict. Using the word cues, we found no cueing effect when these cues were probabilistic (i.e., predicting upcoming conflict with 80 % validity), which is in accordance with results from another study (Alpay et al., 2009). Other studies have varied in the semantic level of the cue. For example, it can be argued that a red cross and a green checkmark (Correa et al., 2009) contain intrinsic information about conflict to a stronger degree than do contextual target features (e.g., the color of a flanker stimulus; Vietze & Wendt, 2009). In addition, the semantic level of arbitrary symbols can change depending on the task instructions and training (Ghinescu et al., 2010), which may alter task strategies and the conscious experience of (pretarget) conflict. It has been shown that the latter influence can modulate the behavioral conflict effect in an opposite direction (Desender, Van Opstal, & Van den Bussche, 2014).
Second and third, the validity of the cue may be important (Lai & Mangels, 2007; Vossel, Thiel, & Fink, 2006), which may or may not require the inclusion of a neutral cue. In our study, changing the cueing paradigm from probabilistic (80 % validity) to deterministic (100 % validity) resulted in behavioral improvements, especially on congruent trials and in comparison with neutral cues, replicating previous findings (Alpay et al., 2009; Strack et al., 2013). This effect can be more easily explained: Neutral cues predict uncertainty, resulting in more cautious response strategies for both congruent and incongruent trials. On the other hand, valid, explicit cues result in faster responses in general, which favors the faster "direct" route through which the irrelevant stimulus dimension (location) is processed, over the slower "deliberate" route that processes the relevant stimulus dimension (color) (Ridderinkhof, 2002; van den Wildenberg et al., 2010). Thus, although conflict-predicting cues speed up behavior, this increased impulsivity results in a stronger conflict effect. Interestingly, a study by Cavanagh, Zambrano Vazquez, and Allen (2012) did not show effects of cue validity on behavior, and they observed a very modest decrease of midfrontal theta EEG dynamics for informative relative to uninformative cues. In designing future studies involving probabilistic temporal and spatial cues, researchers may wish to include a neutral cue, because this could provide more insight into whether the obtained costs of cueing may be related to either impulsivity or cautiousness.
Conclusions
We found a behavioral cost of time-based conflict anticipation, which was mirrored in frontal theta EEG dynamics and replicated in other cue and task settings. Previous findings on pretarget cueing have been mixed, and some of our results can be linked to these anomalies. However, how exactly informative cues can hurt instead of help performance in terms of what is cued needs further research. | 2019-04-05T18:11:13.160Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "5a84d124b679411639ca6ac9a5656f26db3fd6cc",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.3758/s13415-015-0367-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "22693f25e4e910f768e6b57e1abb35ff270cfbca",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": []
} |
16380380 | pes2o/s2orc | v3-fos-license | Current and charge distributions of the fractional quantum Hall liquids with edges
An effective Chern-Simons theory for the quantum Hall states with edges is studied by treating the edge and bulk properties in a unified fashion. An exact steady-state solution is obtained for a half-plane geometry using the Wiener-Hopf method. For a Hall bar with finite width, it is proved that the charge and current distributions do not have a diverging singularity. It is shown that there exists only a single mode even for the hierarchical states, and the mode is not localized exponentially near the edges. Thus this result differs from the edge picture in which electrons are treated as strictly one dimensional chiral Luttinger liquids.
In an approach initiated by Wen [11] (see also Ref. [9]), which is different from ours, a one-dimensional edge action is added to the original action in order to assure the gauge invariance for the Chern-Simons gauge field for restricted geometries. This has been followed by a number of authors [5,[12][13][14][15] who tried to explain the FQH effect based only on the 1d theory (edge picture) [16,17]. If correct, it would have the attractive feature that the celebrated Tomonaga-Luttinger liquids are realized at the edges of FQH systems [18][19][20]. In those theories, however, the global properties, like the distributions of current, charge and electromagnetic fields, are apparently assumed to be insignificant. Thus one of the main consequences is that the charges and currents are localized near the edges.
Recently Nagaosa and Kohmoto [21] examined boundary conditions for FQH liquids and it was shown that the gauge invariance is not violated if one imposes a physically suitable boundary condition. Thus one need not modify the original action as was done by Wen. In this way, it was possible to study the edge and bulk properties on an equal footing. They showed the fractional quantization of Hall conductance by considering both edge and bulk.
We begin with the composite fermion picture of the FQH effect in Sec. II. Then the Chern-Simons effective field theory to describe the composite fermions and the boundary condition are introduced in Sec. III.
Following the line of Ref. [21], the charge and current distributions of the FQH liquids in steady states are discussed in Sec. IV. It is shown that the effect of the interaction cannot be treated perturbatively from the non-interacting situation. The approach we are taking for this correlated electron problem is totally non-perturbative. The following two geometries are considered: (i) half-plane (subsection A) and (ii) Hall bar (subsection B).
(i) An exact solution is obtained using the Wiener-Hopf method. This method was used, in the same geometry, by Thouless [29] to solve the equations of MacDonald, Rice and Brinkman [30] for the non-interacting integer quantum Hall liquid.
(ii) The asymptotic behavior near the edges is obtained using the theorems of singular integral equations (Hilbert transformation). It is shown that there exists only a single mode even for the hierarchical states. This mode is not localized exponentially along the boundaries, and our results are not consistent with the edge picture of the FQH effect.
It is also shown that the charge distribution is finite over the sample, including the edges. This is indispensable to obtain a well-defined theory, since the electron number density must be positive. If the charge density has singularities [30], one cannot avoid a negative divergence in the electron number density, which leads to an ill-defined theory.
II. COMPOSITE FERMION PICTURE OF FRACTIONAL HALL LIQUIDS
Filling factor is defined by $\nu = N_e/N_\phi$, where $N_e$ is the number of electrons and $N_\phi$ is the number of flux quanta. For $\nu = 1$ the first Landau level is totally filled and the higher levels are all empty. Let us see why Hall liquids with inverse filling factor equal to an odd integer might be special. Recall that in this case the background magnetic field contains an odd number of magnetic flux quanta per electron.
The first argument we cite is due to Jain. [22,23] Imagine an adiabatic process in which we somehow move some of the flux quanta so that $p$ units of flux are attached to each electron. For $p$ even, the additional Dirac-Aharonov-Bohm phase associated with moving one electron around another is $e^{i\pi p} = 1$, and so the statistics of the electron is unchanged.
The electrons are now moving in a reduced magnetic field $B_{\rm eff} = B - 2\pi p n$, where $n$ is the number density of electrons. (Note that in our convention, the unit of flux is $2\pi$.) The filling factor has been increased to $\nu_{\rm eff}$, given by $\nu_{\rm eff}^{-1} = (B - 2\pi p n)/2\pi n = \nu^{-1} - p$. For $\nu_{\rm eff} = m$ an integer, we have $\nu^{-1} = p + m^{-1}$, i.e. $\nu = m/(mp+1)$. Thus, fractional Hall systems with $\nu = m/(mp + 1)$ ($p$ even) may be adiabatically changed into an integer Hall system with filling factor $m$, as was also emphasized by Greiter and Wilczek [24]. Note that this argument gives us more than we had hoped for. The case we wanted to understand, with $\nu^{-1}$ an odd integer, is obtained for $m = 1$.
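For concreteness, a few even-$p$ cases of this relation read as follows (an added worked example, not part of the original):

```latex
\nu^{-1} = p + \frac{1}{m} \;\Longrightarrow\; \nu = \frac{m}{mp+1}:
\qquad
(p,m)=(2,1)\to\nu=\tfrac{1}{3},\quad
(2,2)\to\tfrac{2}{5},\quad
(2,3)\to\tfrac{3}{7},\quad
(4,1)\to\tfrac{1}{5}.
```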
In this way a Hall liquid with ν = m/(mp + 1) with m an integer and p an even integer, is related to the integer Hall liquid. We may thus want to argue that since the integer Hall liquid is incompressible the fractional Hall liquid is also incompressible.
Another argument, historically earlier than Jain's argument, is due to Zhang, Hansson, and Kivelson [1]. Imagine attaching all of the flux to the electrons; each electron then has an odd number of flux quanta attached to it. By our preceding argument, the electrons with the attached flux quanta become bosons. We now have bosons moving in the absence of a background magnetic field. Since bosons can condense according to Bose and Einstein, we conclude that a Hall liquid with inverse filling factor equal to an odd integer is energetically favored.
III. EFFECTIVE FIELD THEORY
The situation described in Sec. II above may be effectively represented by the Chern-Simons gauge field theory coupled with an external electromagnetic potential. The Chern-Simons term plays the role of attaching fluxes to electrons. By solving the equation of motion and the Maxwell equation consistently, we can obtain the charge density, current and potential profiles.
The effective Chern-Simons gauge Lagrangian density (3.1) in the dual representation [25][26][27] consists of a Chern-Simons term and a Maxwell term, where $a_{I\mu}$ is the Chern-Simons gauge field. The integer-valued symmetric matrix $K$ is written $K = pC + I$, where $I$ is the $m \times m$ identity matrix and $C$ is the $m \times m$ matrix in which every element is unity. The matrix $K$ specifies the coupling among the Chern-Simons gauge fields and is also related to the filling factor by $\nu = \sum_{I,J} (K^{-1})_{IJ}$ [28]. The Maxwell term is $(1/g) f_{I\mu\nu} f_I^{\mu\nu}$, where $g$ is the coupling constant and $c$ is the velocity of the Bogoliubov mode [2]. The vector potential $A_\mu$ ($\mu = 0, x, y$) of the electromagnetic field is coupled to the $\mu$-th component of the charge current density $J^\mu = \sum_I J_I^\mu$, where the contribution from the $I$-th conserved current density is $J_I^\mu = \frac{1}{2\pi} \epsilon^{\mu\nu\lambda} \partial_\nu a_{I\lambda}$. Note that the vector potential for the constant external magnetic field $B_0$ has already been taken into account in the structure of the $K$ matrix, and is not included in $A_\mu$.
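The relation between $K$ and the filling factor can be checked numerically; the following is a small illustrative script (added here, not part of the original paper):

```python
import numpy as np
from fractions import Fraction

def filling_factor(m, p):
    """nu = sum_{I,J} (K^{-1})_{IJ} for K = p*C + I, where C is the
    m x m all-ones matrix; this should equal m/(mp+1)."""
    K = p * np.ones((m, m)) + np.eye(m)
    return np.linalg.inv(K).sum()

for m, p in [(1, 2), (2, 2), (3, 2), (1, 4)]:
    print((m, p), filling_factor(m, p), Fraction(m, m * p + 1))
# e.g. (2, 2) -> 0.4, in agreement with nu = 2/5
```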
Similarly, $a_{I\mu}$ and the density $J_I^0$ are measured from their average values in the following discussion.
The Lagrangian density is integrated over a sample $S$. On the boundary $\partial S$ we impose $\sum_{\alpha=x,y} n_\alpha J_I^\alpha = 0$ (3.4), where $n = (n_x, n_y)$ is the unit vector normal to the boundary. This boundary condition simply expresses the physical condition that the current cannot flow through the boundary $\partial S$. Since it is a physical requirement, it is obviously invariant with respect to a gauge transformation $a_{I\mu} \to a_{I\mu} + \partial_\mu \phi_I$. A remarkable fact is that the Chern-Simons term in the Lagrangian density (3.1) is also gauge invariant after integration over $S$ with the boundary condition (3.4).
IV. CHARGE AND CURRENT DISTRIBUTION
We consider a FQH liquid with filling $\nu = m/(mp + 1)$, where $m$ is an integer and $p$ is an even integer. To study steady-state distributions of charges and currents, we need the equation of motion derived from the effective action for the Lagrangian density (3.1). The resulting equation (4.2) is expressed in terms of the current density only and is gauge invariant. Let us investigate the stationary distributions of potential and charge in a Hall bar which is uniform in the y-direction; then one may put $\partial_t J_I^\mu = \partial_t A^\mu = 0$ in (4.2). From the homogeneity of the system in the y-direction, we also impose $\partial_y J_I^\mu = \partial_y A^\mu = 0$. Thus we obtain equations (4.3) and (4.4). Combining these, we have an equation (4.5) for $J^0$. Here the potential $A^0$ is connected to the charge current $J^0(x) \equiv \sum_I J_I^0$, where $\xi$ is a constant having the dimension of velocity [L/T]. Once $J^0$ is determined, the current $J^y$ is obtained from (4.3) or (4.4).
To diagonalize these matrix equations, introduce an orthogonal matrix $O$ which diagonalizes $C$; since every element of $C$ is unity, its eigenvalues are $m$ (for the uniform eigenvector) and $0$ ($(m-1)$-fold). Then (4.5) becomes the decoupled equations (4.6) and (4.7). The densities $J_I^0 - J_m^0$ ($I = 1, \cdots, m-1$), which are orthogonal to $J^0(x)$ and are sometimes called "neutral modes", are unphysical degrees of freedom, since they do not have electromagnetic couplings.
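As an illustrative numerical check of this diagonalization (added, not from the original):

```python
import numpy as np

# The all-ones matrix C has a single nonzero eigenvalue m, carried by the
# uniform vector (1, ..., 1)/sqrt(m); the remaining m-1 eigenvalues vanish,
# so only one linear combination of the J_I^0 couples to the gauge field.
m = 3
C = np.ones((m, m))
eigvals, O = np.linalg.eigh(C)          # columns of O: orthonormal modes
print(np.round(eigvals, 12))            # -> [0. 0. 3.]
charged = O[:, np.argmax(eigvals)]      # uniform, up to an overall sign
print(np.round(charged * np.sqrt(m), 12))
```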
Therefore there is only a single mode even for the hierarchical FQH states, and it is not exponentially localized near the edge, as shown below. This result is in sharp contrast with the claims of Wen [12] and MacDonald [17] that there exist a number of edge branches in the hierarchical FQH liquids.
By solving (4.7), all the long-range behavior of the electronic density and current can be obtained. We have two characteristic length scales, $\lambda_1$ (4.9) and $\lambda_2$ (4.10). These two scales should be much larger than the magnetic length scale, since our starting point is the long-range effective theory described by the Lagrangian density (3.1). The length scale $\lambda_1$ appears as the localization length of an edge mode in [21]. This edge mode, however, does not exist when the Hall conductance is quantized, i.e. when the longitudinal voltage drop is zero. In what follows, we will denote the ratio of the two scales as $\eta = \lambda_1/\lambda_2$ and study the effects of the parameter $\eta$.
In the Lagrangian density (3.1), the particle-particle repulsive interaction is represented by the Maxwell term (more specifically, by the spatial part $c^2 f_{Ixy}^2$). Thus if $\eta = \lambda_1/\lambda_2 = 0$, (4.6) and (4.7) represent a non-interacting case. Remarkably, these equations with $\eta = 0$ are essentially identical to those of MacDonald, Rice and Brinkman [30], obtained to study the charge and potential profiles in the integer quantum Hall effect. This is quite unexpected, since our method is based on the effective field theory and is completely different from theirs.
A. Half-plane
The Wiener-Hopf method is used to obtain the charge, current and potential profiles of the FQH liquids in an infinitely wide Hall bar with a single edge located at $x = 0$. For the integer quantum Hall liquid, i.e. $\eta = 0$, Thouless [29] analytically solved the equations of MacDonald, Rice and Brinkman [30]. The effect of the interaction cannot be treated perturbatively from the non-interacting situation ($\eta = 0$); the approach we are taking for this correlated electron problem is totally non-perturbative.
In the present geometry, (4.6) and (4.7) are written in the form (4.12). To apply the Wiener-Hopf technique, we separate $J^0(x)$ into parts supported on $x > 0$ and $x < 0$, and extend (4.12) to the region $x < 0$, where $\theta(x)$ is the step function: $\theta(x) = 1$ for $x > 0$ and $\theta(x) = 0$ for $x < 0$, and a prime denotes differentiation with respect to $x$.
By substituting $z = \lambda_2 k$ into (4.24), we obtain a factorization in which $D(\lambda_2 k)$ converges in the upper half complex $k$-plane and $N(\lambda_2 k)$ converges in the lower half complex $k$-plane, and (4.23) is rewritten accordingly. Now we study the asymptotic behavior of $J^0(x)$ for $\eta > 0$. From the differential equation for $J^0$, we obtain the asymptotic forms of $\varphi(k; \eta)$. To study the behavior of $J^0$ on the edge, we estimate it near $x = 0$; here we used (4.27) and (4.29), and the constants $c_i$, $c'_i$ are obtained by combining $\alpha_\pm$ and $\beta_\pm$ suitably. From this, it is expected that the charge density takes a finite value at the edge ($x = 0$). In other words, $J(x)$ has no divergence if $\eta > 0$. Since the Maxwell term in the effective Lagrangian partially comes from the Coulomb repulsion between electrons, this is a natural consequence. Furthermore, we can obtain the Fourier transform of $f(k)$ using the zeroth Bessel function of the first kind, $J_0(z) = 1 - \frac{z^2}{4} + \frac{z^4}{64} - \cdots$. From these and the results of Thouless [29], we obtain the behavior shown in Fig. 1, and it can be shown from (4.4) that the asymptotic behavior of the current distribution near the edge follows.
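As a small added check (not in the original) that the quoted series is indeed the zeroth Bessel function:

```python
import numpy as np
from scipy.special import j0, factorial

def j0_series(z, nterms=20):
    """Truncated series J0(z) = 1 - z^2/4 + z^4/64 - ...;
    the general term is (-1)^k (z/2)^(2k) / (k!)^2."""
    k = np.arange(nterms)
    return np.sum((-1.0) ** k * (z / 2.0) ** (2 * k) / factorial(k) ** 2)

z = 1.7
print(j0(z), j0_series(z))  # the two values agree to machine precision
```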
B. Hall bar

For a Hall bar of finite width, we rescale the parameters as $\lambda_i/L$; we will write these rescaled parameters as $\lambda_i$ for simplicity. The derivative of (4.6) is represented by the Hilbert transformation (4.36), where $T_x$ denotes the Hilbert transformation $T_x[f] = \frac{1}{\pi}\,\mathrm{p.v.}\int_{-1}^{1} \frac{f(y)}{y - x}\, dy$; here $\mathrm{p.v.}\int dy$ denotes the principal value integral. From (4.6), (4.7) and (4.36), the equations (4.38) for the currents $J^0(x)$ and $J^y(x)$ follow.

i) $\lambda_1 = 0$ ($\eta = 0$): Suppose that the charge distribution $J^0(x)$ has singularities at the edges $x = \pm 1$, as it has in the half-plane Hall bar. The singularities must be integrable, namely, $J^0(x) \sim (1 \pm x)^{-\alpha}$ with $0 < \alpha < 1$. It can be shown from Theorem II in the Appendix that if $\alpha \neq 1/2$, the r.h.s. of (4.38) has singularities $\sim (1 \pm x)^{-\alpha - 1}$, which leads to a contradiction.
Thus the singularity is not allowed except for $\alpha = 1/2$: the factor $\cot(\alpha\pi)$ in Theorem II vanishes if $\alpha = 1/2$, and the above argument against the existence of singularities does not hold.
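The special role of the $\alpha = 1/2$ (square-root) edge behavior can be illustrated numerically with the finite Hilbert transform. The sketch below is an added illustration, assuming the kernel convention $T_x[f] = \frac{1}{\pi}\,\mathrm{p.v.}\int_{-1}^{1} f(y)/(y-x)\,dy$; it verifies the classical pair $T_x[\sqrt{1-y^2}] = -x$, which is free of edge singularities:

```python
import numpy as np
from scipy.integrate import trapezoid

def finite_hilbert(f, x, n=200001):
    """(T f)(x) = (1/pi) p.v. int_{-1}^{1} f(y)/(y - x) dy, computed by
    subtracting f(x) to remove the singularity:
    p.v. int f/(y-x) dy = int (f(y)-f(x))/(y-x) dy + f(x) ln((1-x)/(1+x))."""
    y = np.linspace(-1.0, 1.0, n)
    g = np.zeros_like(y)
    mask = y != x
    g[mask] = (f(y[mask]) - f(x)) / (y[mask] - x)   # bounded integrand
    return (trapezoid(g, y) + f(x) * np.log((1 - x) / (1 + x))) / np.pi

for x in (-0.5, 0.0, 0.7):
    print(x, finite_hilbert(lambda y: np.sqrt(1.0 - y * y), x))  # approx -x
```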
The inverse operation of the Hilbert transformation (Theorem III in the Appendix) gives a systematic expansion (4.39) of $J^0(x)$ for $\lambda_2 > 1$, in which the $j_n(x)$'s satisfy a recursion relation. At the edges $j_0(x)$ has singularities of power $-1/2$, and $j_n(\pm 1) = 0$ for $n \geq 1$. Theorems I and III in the Appendix give the explicit solutions for $j_0$ and $j_1$. Using (4.39), we obtain the current distribution (4.43), which is symmetric. This is contrasted with the edge picture, in which the currents flow in opposite directions at the two edges. The density, current and potential profiles thus obtained are plotted in Fig. 2.
ii) $\lambda_1 > 0$ ($\eta > 0$): In this case the singularities of the charge distribution at the edges are suppressed due to the second derivative term on the r.h.s. of (4.38).
If $J^0(x)$ has a logarithmic singularity $J^0(x) \sim \log(1-x) - \log(1+x)$ (which we will call a "simple" logarithmic singularity), one can prove that (4.38) does not hold. For other logarithmic singularities, such as higher powers of the logarithm, finding explicit Hilbert transformations becomes more cumbersome; however, the same conclusion is expected from a power-counting argument: the second derivative of such a logarithmic singularity gives a singularity with power $-2$. Then $J^0(x)$ cannot have any logarithmic singularity, which is a contradiction. Thus we assert Proposition II: A solution of (4.38) with $\lambda_1 > 0$ ($\eta > 0$) has neither a power singularity nor a (simple) logarithmic singularity at the edges ($x = \pm 1$).
Since the Coulomb interaction makes the particles repel each other, a divergent singularity at an edge is physically unacceptable. This observation is consistent with Proposition II. Thus we claim that the charge density is finite at the edges. Note that there is a singularity at the edge in the noninteracting case $\eta = 0$ [29]. This is caused by the absence of particle repulsion and disappears once an interaction is taken into account.
In order to obtain $J^0(x)$ we expand it in powers of $\lambda_2$ (4.45). By substituting this into (4.38), we obtain equations order by order in $\lambda_2$. The solution of the inhomogeneous differential equation (4.46) is written in terms of $\mathrm{Ei}(x)$, $\mathrm{Shi}(x)$ and $\mathrm{Chi}(x)$, the exponential, hyperbolic sine and hyperbolic cosine integrals, respectively, and we have used the corresponding Hilbert transformation. Using (4.6) and (4.39), we obtain (4.53). Note that the current density $J^y$ in (4.53) has no divergence at the edges, and it is again symmetric. This is contrasted with the edge picture, in which the currents flow in opposite directions at the two edges. The lowest order approximation of $A^0(x)$ is obtained in the same way. The electron density is plotted in Fig. 3 (a) in the lowest order approximation ($\lambda_2 = 0$) (4.45) and (b) with the correction (4.48). The current distribution is plotted in Fig. 4 (a) in the lowest order approximation ($\lambda_2 = 0$) and (b) with the correction (4.53).
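The special functions appearing here are available in standard libraries; a brief illustrative evaluation (added, not from the original) also checks the identity $\mathrm{Shi}(x) + \mathrm{Chi}(x) = \mathrm{Ei}(x)$ for $x > 0$:

```python
from scipy.special import expi, shichi

x = 0.8
Ei = expi(x)          # exponential integral Ei(x)
Shi, Chi = shichi(x)  # hyperbolic sine and cosine integrals
print(Ei, Shi + Chi)  # the two numbers coincide
```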
Note that in the lowest order approximation, both the electron density and the current are localized near the edges exponentially, with the localization length $\lambda_1$. This behavior, however, completely disappears once the corrections are taken into account, as seen from Fig. 3 (b) and Fig. 4 (b), where the bulk currents do not vanish.
The potential is plotted in Fig. 5. Note that in the lowest order approximation, $A^0(x)$ has a maximum and a minimum near the edges, in contrast to the non-interacting case $\eta = 0$ (4.44). This maximum and minimum of $A^0(x)$ survive even if the next leading contribution is taken into account, as seen from Fig. 5 (b).
ACKNOWLEDGMENTS
It is a pleasure to thank Y. Avishai for very important help. We also thank T. Eguchi and N. Nagaosa for useful discussions.
APPENDIX A: THEOREMS OF THE SINGULAR INTEGRAL EQUATIONS
The following Theorems I, II and III [31] are used in the main text.
Theorem II. Let $f(x)$ be an $L^p$-function ($p > 1$) which in a small neighborhood $(-1, -1+\delta)$ ($\delta > 0$) of the point $x = -1$ can be written in the form $f(x) = \frac{A + g(x)}{(1+x)^{\alpha}}$ with $0 \leq \alpha < 1$, where $A$ is a constant, $g(x)$ vanishes at $x = -1$ and satisfies (uniformly) a Lipschitz condition of positive order $\epsilon$, i.e. $|g(x_1) - g(x_2)| \leq K |x_1 - x_2|^{\epsilon}$.
Then the Hilbert transform of $f(x)$ has the asymptotic representation $T_x[f] \sim A \cot(\alpha\pi)\,(1+x)^{-\alpha}$ if $0 < \alpha < 1$, and the asymptotic representation $T_x[f] \sim -\frac{A}{\pi} \log(1+x)$ if $\alpha = 0$, up to less singular terms. If the point $x = -1$ is replaced by the point $x = +1$, then all remains the same except that $\cot(\alpha\pi)$ is changed into $-\cot(\alpha\pi)$ and $-\log(1+x)$ is changed into $+\log(1-x)$.
Theorem III. If a given function $f(x)$ belongs to the class $L^{4/3+\epsilon}$ for sufficiently small $\epsilon > 0$, the equation $T_x[\varphi] = f$ has the solution $\varphi(x) = -\frac{1}{\pi\sqrt{1-x^2}}\,\mathrm{p.v.}\int_{-1}^{1} \frac{\sqrt{1-y^2}\, f(y)}{y - x}\, dy + \frac{C}{\sqrt{1-x^2}}$, where $C$ is an arbitrary constant.

Fig. 2. (a) Electron density $J^0(x)$, which has singularities with power $-1/2$ at the edges $x = \pm 1$. (b) Current $\frac{8\pi}{g(1+mp)} J^y(x)$; note that it is symmetric. (c) Potential $\frac{1}{\pi\xi} A^0(x)$; note that it is a monotonic function and the electric field is in the same direction in the sample. | 2014-10-01T00:00:00.000Z | 1996-07-18T00:00:00.000 | {
"year": 1996,
"sha1": "2548f8c2f9f9fd8ed16ef2837e6c07467ddf400a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9607130",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2548f8c2f9f9fd8ed16ef2837e6c07467ddf400a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
86651422 | pes2o/s2orc | v3-fos-license | PRECIPITATING FACTORS CONTRIBUTING SEVERE HYPOGLYCEMIA AMONG TYPE-2 DIABETIC PATIENTS ADMITTED IN A TERTIARY CARE HOSPITAL OF BANGLADESH
Background: Hypoglycemia is the commonest acute emergency in the diabetic population. This study intends to find out the factors precipitating severe hypoglycemia in type-2 diabetes patients admitted in a tertiary care hospital in Bangladesh. Methods: This cross-sectional study was conducted among 311 diabetic patients admitted in BIRDEM general hospital with hypoglycemia (plasma glucose concentration <70 mg/dl or 3.9 mmol/l), with or without altered level of consciousness, or with neurological recovery occurring after normalization of blood sugar, during the period from March 2014 to April 2015. After obtaining informed written consent, a questionnaire focusing on the probable factors of severe hypoglycemia was supplied to the subjects or attendants at the bedside. Relevant reports were collected from the record book. Results: The mean age of the study respondents was 49.02 (±15.99) years, ranging from 18 to 90 years, with male predominance (59%). Mean duration of diabetes was 8.5 (±5.4) years, with a range from 3 to 15 years. The majority (85.5%) had an HbA1c of more than 6.5%. Severe hypoglycemia was revealed mostly (61.41%) in insulin users; among them, 74 (33.5%) and 79 (35.7%) patients were using premixed and self-mixed regimens, respectively. Factors causing severe hypoglycemia were assessed by multivariate analysis. Significant associations were found for meal-related factors (p<0.001) (missed, delayed or inadequate meal) and insulin-related factors (p<0.001) (newer dose, miscalculation, faulty technique, defective absorption). Other factors were renal impairment (p<0.001), gastroparesis (p<0.01) and aging (p = 0.04). Conclusion: Lifestyle factors are the most important precipitating causes of hypoglycemia. Therefore, self- and periodic re-evaluation of the patient's knowledge, attitude and practice towards hypoglycemia is of utmost importance to prevent this common but life-threatening complication.
Introduction
Hypoglycemia is the most feared complication and a common barrier to achieving strict glycemic control. This was observed in two landmark studies, the Diabetes Control and Complications Trial (DCCT, 1997) and the U.K. Prospective Diabetes Study (UKPDS, 1998), which demonstrated the benefits of early intensive glycemic control in type 1 and type 2 diabetes, respectively. 1,2 Large clinical trials have shown significant cardiovascular and cerebrovascular morbidities associated with hypoglycemia. 3 Hence, individualized glycemic targets have been emphasized by the American Diabetes Association (ADA), particularly in patients with long duration of DM and co-morbidities, to reduce the risk of hypoglycemia. 4 According to the ADA (2013), severe hypoglycemia is an event requiring the assistance of another person to actively administer carbohydrate, glucagon or other resuscitative actions. Plasma glucose measurements may not be available during such an event, but neurological recovery attributable to the restoration of plasma glucose to normal is considered sufficient evidence that the event was induced by a low plasma glucose concentration. 10 The symptoms of hypoglycemia are markedly varied. Florid symptoms can develop within the normal range of blood sugar in long-standing, poorly controlled groups, whereas symptoms may be nonspecific or less intense even in severe hypoglycemia with increasing age, in type 1 diabetes, after recurrent attacks of hypoglycemia, and with autonomic neuropathy. 5 In 2014 the American Association of Clinical Endocrinology conducted a survey on symptomatic hypoglycemia among 2530 type 2 diabetic patients in the USA, which revealed that about half of the study population were unaware of the precipitating factors and sequelae of symptomatic hypoglycemia. 6 Against this background, awareness and knowledge regarding factors causing hypoglycemia need to be addressed properly to prevent recurrence of hypoglycemia in the future. We therefore conducted this study to reveal the precipitating factors of severe hypoglycemia among type 2 diabetic patients admitted to a tertiary care hospital (BIRDEM) in Bangladesh.
Materials and Methods
This cross-sectional study was conducted among the diabetic patients admitted in BIRDEM general hospital with hypoglycemia (plasma glucose concentration <70 mg/dl or 3.9 mmol/l), with or without altered level of consciousness, or with neurological recovery occurring after normalization of blood sugar, during the period from March 2014 to April 2015. Hypoglycemia in non-diabetic subjects and pregnant women was not included in the study. The sample size was calculated at a 95% confidence interval with 5% precision, which yielded 164 using a prevalence of 70%. 7 After obtaining informed written consent from the subjects after recovery, or from attendants, a questionnaire was given to be filled out. The questionnaire focused on all the probable factors of severe hypoglycemia. Relevant reports were collected from the record book. This study was approved by the ethical committee of the institution.
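For reference, the standard single-proportion sample-size formula can be sketched as follows. This is an added illustration, not the authors' calculation; note that p = 0.70 with an absolute precision of 0.05 gives about 323, whereas a precision of about 0.07 reproduces the reported 164, so the precision actually used here is an assumption.

```python
import math

def sample_size(p, d, z=1.96):
    """Single-proportion sample size n = z^2 * p * (1 - p) / d^2 for
    prevalence p and absolute precision d at 95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

print(sample_size(0.70, 0.05))  # -> 323
print(sample_size(0.70, 0.07))  # -> 165, close to the reported 164
```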
Results:
Total 339 patients were approached in the study. Of them, 18 refused to be enrolled. Data from the remaining 311 patients were collected. In Figure 1, the frequencies of the different insulin regimens are shown. Hypoglycemia was most frequent among those using premixed (33.5%) and self-mixed (35.7%) regimens, respectively. The lowest frequency (4.97%) was observed in patients using a long-acting analogue only.
Discussion:
A key component of diabetes care is to maintain target blood sugar to prevent or delay diabetes-related complications. However, iatrogenic hypoglycemia is a major barrier to achieving this. In this hospital-based study, three hundred and eleven diabetic patients with hypoglycemia were included. Among them 184 were male (59.2%) and the rest were female (40.8%). The highest percentage (33.3%) of patients belonged to the age group of 51 to 60 years. Several studies also support this finding (3,7,8). This indicates that advanced age is an important factor of hypoglycemia, possibly because of age-related cognitive impairment, multiple co-morbidities, and an attenuated sympathoadrenal response to hypoglycemia. This study found no statistically significant difference in the incidence of hypoglycemia across demographic groups; however, the incidence was much higher (74.3%) in less educated persons than in those who are highly educated. Better communication and repeated discussion appropriate to the patient's literacy level may reduce the risk. Although low literacy is a risk factor for severe hypoglycemia, it is presumably modifiable. Duration of diabetes was not found statistically significant. In this study more than eighty percent of the patients who experienced severe hypoglycemia had HbA1c >7%. Similar findings were reported in a study based on HbA1c and hypoglycemia (9). A possible explanation might be the lack of proper self-management education in this class of subjects. In Chicago, in June 2013, the ADA described an association between HbA1c and self-reported hypoglycemia, though no causal pathway between the two was established. They concluded that hypoglycemia was common irrespective of glycemic status (10). Analysis of treatment modalities revealed a significant association of hypoglycemia with insulin treatment. Among insulin-treated patients, overall 60% were using premixed or self-mixed regimens, whereas users of long-acting insulin only had a significantly lower frequency of hypoglycemia (4.97%). The hallmark UKPDS study found 70-80% of hypoglycemic episodes in insulin-treated patients (2). Other large trials, ACCORD and ADVANCE, found fatal vascular outcomes following hypoglycemia in intensively insulin-treated patients (3,11). Appropriate insulin technique and self-adjustment were not properly addressed in this study population, which might underlie such remediable events.

Factors considered to be responsible for hypoglycemia were assessed in this study, which demonstrated that missed meals and insulin-related factors (e.g., intensification of dose, error by patient or physician, miscalculation of dose, faulty technique, deliberate overdose, and injection-site issues) were most common. Though the commonest, these factors are presumably modifiable through a collaborative approach. Background diseases that contributed to severe hypoglycemia were renal and hepatic impairment. It is known that exogenous insulin persists longer, and more unpredictably, as renal and hepatic metabolism declines, and the contribution of glucose through gluconeogenesis is also reduced, which causes recurrent hypoglycemia (12,13). In this study 22.18% had renal impairment and 4.5% had hepatic dysfunction. Autonomic neuropathy was found to be another important precipitating factor (p = 0.01) in this study. Hypoglycemia unawareness associated with autonomic neuropathy is another alarming situation which should be addressed by physicians with great emphasis, to prevent fatal outcomes induced by severe hypoglycemia (14,15). Even though most diabetics in this study had more than 5 years of disease duration, knowledge of hypoglycemia was not satisfactory. More than half knew about the symptoms and what to do during an episode. Less than half knew the precipitating factors for severe hypoglycemia, and only 32.79% were aware of its complications. Very few (30.22%) knew of self-monitoring of blood glucose (SMBG) as an appropriate tool to prevent further attacks by self-adjustment. Education for patient empowerment should be emphasized for a better understanding of self-management strategies. In particular, the interactive relationship of hypoglycemia with medication, meal plan, physical activity and special situations like sick-day management should be clearly explained to limit hypoglycemic episodes, thus improving the standard of living of diabetics (16).

Conclusion: In most of the cases, lifestyle factors are the precipitating causes of hypoglycemia, which are readily recognizable and easily modifiable. Therefore, appropriate diabetic education and periodic reevaluation of the patient's knowledge, attitude and practice towards hypoglycemia is of utmost importance to prevent this common but life-threatening complication. | 2019-03-28T13:33:22.733Z | 2019-01-22T00:00:00.000 | {
Factors considered to be responsible for hypoglycemia were assessed in this study which demonstrated missing meals and insulin related factors eg; intensification of dose, error by patient or physician, miscalculation of dose, faulty technique, deliberate overdose, injection site issue were most common. Though commonest but these factors are presumably modifiable by collaborative approach. Background diseases contributed to severe hypoglycemia were renal and liver impairment. It is known that exogenous insulin remains in longer duration with unpredictability as renal and hepatic metabolism declines and contribution of glucose through gluconeogenesis is also reduced which causes recurrent hypoglycemia (12,13) . In this study 22.18% had renal impairment and 4.5% had hepatic dysfunction. Autonomic neuropathy was found to be another important precipitating factor (p 0.01) in this study. Autonomic neuropathy associated hypoglycemic unawareness is another alarming situation which should be addressed by physician with great emphasis to prevent severe hypoglycemia induced fatal outcomes. (14,15) Even though most diabetics in this study had more than 5 years of duration, knowledge on hypoglycemia was not satisfactory. More than half knew about symptoms and what to do while an episode. Less than half knew what were the precipitating factors for severe hypoglycemia, only 32.79%were aware of complications related to it. Very few (30.22%) knew SMBG as an appropriate tool to prevent further attacks by self adjustment. Education for patient empowerment should be emphasized for better understanding on self management strategies. Particularly the interactive relationship of hypoglycemia with their medication, meal plan, physical activity and special situation like sick days management should be clearly explained to limit hypoglycemic episodes thus improving standard of living in diabetics. (16) Conclusion: In most of the cases, lifestyle factors are the precipitating causes of hypoglycemia, which are readily recognizable and easily modifiable. Therefore, appropriate diabetic education and periodic reevaluation of patient's knowledge, attitude and practice towards hypoglycemia is of utmost importance to prevent this common but life-threatening complication. | 2019-03-28T13:33:22.733Z | 2019-01-22T00:00:00.000 | {
"year": 2019,
"sha1": "e38f293f8d1a10c91043e2fa2ec5e5a32131f733",
"oa_license": null,
"oa_url": "https://www.banglajol.info/index.php/BJMED/article/download/39920/30246",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "71218d4c90b4b142a7c63bee1b0eaa6ae9c148bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34988790 | pes2o/s2orc | v3-fos-license | Roles of the cytoskeleton in regulating EphA2 signals
The lateral organizations of receptors in the cell membrane display a tremendous amount of complexity. In some cases, receptor functions can be attributed to specific spatial arrangements in the plasma membrane. We recently found that one member of the largest subfamily of receptor tyrosine kinases (RTKs), EphA2, is organized over micrometer length scales by the cell’s own cytoskeleton, and that this can regulate receptor signaling functions. Spatial organization of the receptor was found to be highly associated with invasive character, and mechanical disruption of receptor organization altered key down-stream events in the EphA2 signaling pathway. In this Addendum article, we put forth possible models for why EphA2 and other receptors may employ mechanical and spatial inputs mediated by the cytoskeleton. We speculate that this class of input may be common, and contributes to the intricacies of cellular signaling.
Receptor Spatial Organization
The spatial organization of receptors in the cell membrane spans multiple length scales, from the molecular to the size of the cell itself. Signaling assemblies consisting of tens, to tens of thousands, of molecules can apparently function as cooperative units. Hierarchical organization of signaling receptors can directly feed into signaling pathways to regulate collective cell signaling outcomes. [1][2][3] For example, T-cell receptor activation was found to be dependent on the spatial organization within the immunological synapse, 1,[4][5][6][7][8][9] and in a recent report we found that EphA2 receptor signaling is similarly regulated by the spatial organization of the receptor. 3

There are distinct biophysical mechanisms that regulate receptor spatial organization and associated biochemical functions. The most commonly studied is direct protein-protein interaction. For example, the ligand-induced dimerization of RTKs is widely considered as the prototypical mechanism for their activation. 2,[10][11][12][13] Another effector that influences protein organization is lipid-membrane driven separation of proteins into discrete assemblies. The formation of such lipid membrane compartments may be based on the immiscibility of specific lipid components in the plasma membrane [14][15][16] or mechanical bending effects at an intermembrane junction. 17 A third cellular regulator of protein organization is the network of cytoskeleton filaments, which can act as scaffolds, with the aid of adaptor proteins, for corralling or directly moving receptors across the cell membrane. 1,3,7,18 The interplay between these mechanisms exerts hierarchical and dynamic control of receptor organization and cell function. The role of the cytoskeleton is typically studied in the context of adhesion proteins such as integrins, and its role in the arrangement of free-floating membrane proteins is poorly defined. 18,19 This is because the connectivity between free-floating receptors and the cytoskeleton is not clear and little is known about these associations.
The EphA2 Signaling Pathway
RTKs play important roles in receiving and amplifying signals from other cells and from the immediate environment. The Eph family of receptors constitute the largest subfamily of RTKs, and these contribute to cellular development and morphogenesis in a wide range of tissues. Abnormal expression and function of the EphA2 receptor is implicated in a range of human malignancies including breast, lung and ovarian cancers. In particular, 40% of human breast cancers overexpress EphA2, which is associated with a poor prognosis and the development of drug resistance. [20][21][22] The ligand to EphA2 is a membrane-associated GPI-linked protein expressed on the surface of adjacent cells. 22,23 Because both the ligand and receptor are in membranes, EphA2 binding and activation can only proceed through direct physical contact between cells. Structural studies of EphA receptors indicate that ligand-binding can lead to dimerization and the formation of higher order aggregates. 22,23 Clustering of Eph-ephrin complexes is thought to be enhanced by specific domains. 24 These include the fibronectin type III repeats, the SAM domain of the Eph receptors and PDZ domain proteins. 25 Ligand-induced clustering of the EphA2 receptor results in autophosphorylation and recruitment of downstream signaling molecules through Shc and Grb2 adaptor proteins. Receptor activation stimulates the PI3K, Akt and MAPK pathways and results in recruitment of the c-Cbl adaptor protein and a disintegrin and metalloprotease 10 (ADAM10), which regulate signaling through receptor degradation.
Seeking Signals: Cytoskeleton Transport of Ligand-Bound EphA2
Using live-cell fluorescence microscopy techniques, we found that the EphA2 receptor rapidly formed clusters as a result of ligand binding. Clusters grew and coalesced until they were transported to the center of the cell-supported membrane junction. The motion of the EphA2-ephrin-A1 clusters was highly correlated with the motions of the actin cytoskeleton. This was measured using two-color total internal reflection fluorescence microscopy (TIRFM) tracking of ephrin-A1 and enhanced green fluorescent protein (EGFP)-actin. Eph receptors are known to play a role in remodeling the actin cytoskeleton and to elicit actomyosin contraction through the Rho family of guanosine triphosphate hydrolases (GTPases). 24 Ephrin-A1 stimulation of EphA2 is reported to lead to RhoA-dependent actomyosin contractility, which is in agreement with the observed cellular phenotypes in our experiments. 3,26,27 Interestingly, we found that the translocation of ligand-bound EphA2 followed that of the actomyosin contractility, thus suggesting a physical association between them. In order to identify the mechanism of actin reorganization and its connection with EphA2 transport, the Rho-kinase inhibitor Y27632 was used to block the cytoskeleton contraction pathway. 27 Analysis of the results revealed that the mechanism of ligand-receptor transport was mediated through a Rho-dependent pathway that actively transports EphA2 receptor clusters.

To elucidate the relation between EphA2 receptor motions and specific signaling cascades, we used the "spatial mutation" strategy. 3,5,7,8 In this approach, physical barriers fabricated onto the underlying substrate guide the mobility of molecules in the supported membrane. These structures additionally impede the lateral motion of cell surface receptors through their action on bound ligands in the supported membrane. The technique is highly specific because only ligand-bound EphA2 receptors expressed on the cell surface are spatially reorganized.

Interestingly, cells with spatially mutated EphA2 receptor organization showed: (a) altered f-actin morphology and (b) a decrease in the recruitment and localization of the ADAM10 metalloprotease. Given the roles of secretases (such as ADAM10) and the cytoskeleton, the physical reorganization of the EphA2 receptor may have wide implications across multiple signaling pathways. ADAM10, for example, plays an important role in EphA receptor signaling since it takes part in ephrin ligand shedding, thus allowing for release of the physical tether between adjacent cells engaged in juxtacrine signaling. The spatial mutation strategy avoids off-target effects that are common when using pharmacological or genetic inhibition, and thus it provided a direct link between receptor transport, ADAM10 recruitment and actin dynamics.
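The correlation between receptor cluster motion and actin motion described above was quantified from paired two-color TIRFM trajectories. The sketch below shows one minimal way such a displacement correlation could be computed; the synthetic trajectories, noise levels and function name are hypothetical illustrations, not the analysis pipeline of the original study.

```python
import numpy as np

def displacement_correlation(track_a, track_b):
    """Pearson correlation between frame-to-frame displacements of two
    trajectories, e.g., an ephrin-A1 cluster and a nearby EGFP-actin feature.

    track_a, track_b: (n_frames, 2) arrays of x, y positions sampled at the
    same time points.
    """
    da = np.diff(track_a, axis=0)  # per-frame displacement vectors
    db = np.diff(track_b, axis=0)
    # Correlate each spatial component, then average the two coefficients.
    return float(np.mean([np.corrcoef(da[:, k], db[:, k])[0, 1] for k in (0, 1)]))

# Hypothetical example: two noisy trajectories sharing a common drift, as would
# be expected for cargo coupled to a moving cytoskeletal network.
rng = np.random.default_rng(0)
drift = np.cumsum(rng.normal(0.0, 0.05, size=(200, 2)), axis=0)
cluster = drift + rng.normal(0.0, 0.01, size=(200, 2))
actin = drift + rng.normal(0.0, 0.01, size=(200, 2))
print(f"displacement correlation: {displacement_correlation(cluster, actin):.2f}")
```

Strongly coupled trajectories would yield a correlation near one, whereas independently diffusing features would give values near zero.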
Why Mechanical Force? Biological Roles for Receptor Transport
The direct consequences of EphA2 transport are two-fold (see Fig. 1). The first is altering the size and the distribution of EphA2-ephrin-A1 clusters across the cell-cell junction. One may be tempted to draw parallels with the immunological synapse, where the location of the T-cell receptor (TCR) can affect phosphorylation states. 5,7,8 This is not the case here, and we did not observe clear differences in EphA2 phosphorylation as a function of receptor spatial organization. 3 The cellular mechanism of receptor transport may be similar, but the signal outputs are different. A second consequence of EphA2 transport is the potential to apply mechanical strain on the EphA2-ephrin-A1 complex. If the ligand or the receptor encounters resistance to lateral transport, then the complex will experience tension and may undergo a conformational switch. This type of signaling mechanism (the mechanotransduction model) is commonly cited for proteins involved in cellular adhesion, such as the integrin family. [28][29][30][31] An important question pertains to why the EphA2 pathway might incorporate sensitivity to force. The formation of complex tissues with controlled form and tensional homeostasis implies that cells can sense and react to very subtle changes in the mechanical properties of their environments. 28,32,33 In this regard, much attention has been focused on the integrins and associated focal adhesion proteins as master regulators of force. 31,34 However, our work suggests that other receptors may incorporate sensitivity to force, and we propose that mechanotransduction may be a common motif in signaling pathways. Mechanical aspects of the cellular microenvironment will change the spatial organization and the tension forces acting on all receptors whose ligands are surface associated. Correspondingly, it is likely that natural selection processes have explored this component of signal regulation.
In the case of EphA2, the increased ADAM10 recruitment found in the unrestricted EphA2 receptor clusters suggests that there may be enhanced levels of ligand cleavage and endocytosis of the ligand-receptor complex. The accepted mechanism for termination of ephrin forward signaling involves the regulated cleavage of ligands by the ADAM10 protease. 35,36 Enhanced rates of EphA2 endocytosis would consume available ephrin-A1 ligand and may bias the input-response function of the entire system. Importantly, the EphA2 receptors interact with other signaling pathways, such as the chemokine receptors, integrins and cadherins, 37 which suggests that increased EphA2 receptor endocytosis may affect other signaling cascades.
Irrespective of the actual biological purpose for these behaviors, the experimental tools developed (the spatial mutation strategy) offer a route to uncovering mechanisms and signaling roles for receptor spatial organization. The technique enables single cell manipulations and facilitates quantitative characterization. 8 Importantly, this approach differs fundamentally from conventional tools employed for deconstructing the signaling roles of the cytoskeleton. For example, the most widely used approach consists of drug targeting of the cytoskeleton, which affects many signaling pathways and thus lacks specificity.
An open question to be addressed is the mechanism mediating the physical association of EphA2 to the cytoskeleton. The ERM family of intracellular proteins (which includes ezrin, radixin and moesin) are possible candidates, since they are known to mediate dynamic binding between actin filaments and the cytoplasmic face of several transmembrane proteins. 38 ERM proteins are known to display diversity in their functions across different cell lines. For example, ezrin and moesin play an active role in the human T cell activation pathway by influencing the spatial organization of the immunological synapse. 1,7,17,39 A multidisciplinary approach that combines advances in biophysical chemistry, optical microscopy/nanoscopy and cell biology will be required to identify and characterize the proteins mediating this coupling.
Given the mechanical sensitivity of the EphA2 pathway, it seems plausible that many other receptor pathways are susceptible to mechano-regulation. We speculate that receptor transport is a general mechanism used by a range of cells and receptors and, in different contexts, may be used to achieve specific goals. The advent of physical methods, such as the spatial mutation strategy, marks a clear path toward investigating the subtleties of mechanical/spatial transduction, one that involves the confluence of biophysics, surface chemistry and cell biology.
| 2018-04-03T02:47:23.051Z | 2010-09-01T00:00:00.000 | {
"year": 2010,
"sha1": "45eb55d7b059e3078702ee97e4cc315b2f28b9a0",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/cib.3.5.12418?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "65d3db1585ade6e3a29b3bd1a8e67f9cc4f0d446",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
24959879 | pes2o/s2orc | v3-fos-license | Molecular determinants of nuclear receptor-corepressor interaction.
Retinoic acid and thyroid hormone receptors can act alternatively as ligand-independent repressors or ligand-dependent activators, based on an exchange of N-CoR- or SMRT-containing corepressor complexes for coactivator complexes in response to ligands. We provide evidence that the molecular basis of N-CoR recruitment is similar to that of coactivator recruitment, involving cooperative binding of two helical interaction motifs within the N-CoR carboxyl terminus to both subunits of a RAR-RXR heterodimer. The N-CoR and SMRT nuclear receptor interaction motifs exhibit a consensus sequence of LXXI/HIXXXI/L, representing an extended helix compared to the coactivator LXXLL helix, which is able to interact with specific residues in the same receptor pocket required for coactivator binding. We propose a model in which discrimination of the different lengths of the coactivator and corepressor interaction helices by the nuclear receptor AF2 motif provides the molecular basis for the exchange of coactivators for corepressors, with ligand-dependent formation of the charge clamp that stabilizes LXXLL binding sterically inhibiting interaction of the extended corepressor helix.
Nuclear receptors (NRs) are a large class of DNA-bound transcription factors, many of which are regulated by the binding of specific ligands and which control numerous critical biological events in development and homeostasis (for review, see Beato et al. 1995; Mangelsdorf et al. 1995). Over the past few years, it has become clear that the transcriptional functions of unliganded and liganded receptors are regulated by coactivators and corepressors that associate with the carboxy-terminal ligand-binding domain (LBD) (for review, see Horwitz et al. 1996; McKenna et al. 1999; C.K. Glass and M.G. Rosenfeld, in prep.). Ligand-dependent transcriptional activation by NRs has been found to depend on a highly conserved motif in the LBD, referred to as AF2 (Danielian et al. 1992; Durand et al. 1994; Barettino et al. 1994; Tone et al. 1994). Crystal structures of the LBDs of multiple NRs have revealed that they are folded into a three-layered, anti-parallel, α-helical sandwich. A central core layer of three helices is packed between two additional layers of helices to create a molecular scaffold that establishes a ligand-binding cavity at the narrower end of the domain.
In the unliganded retinoid X receptor (RXR) structure, the AF2 helix extends away from the LBD (Bourguet et al. 1995), whereas in the agonist-bound retinoic acid receptor γ (RARγ), thyroid hormone receptor α (TRα), estrogen receptor (ER), and peroxisome proliferator-activated receptor γ (PPARγ) LBD structures, the AF2 helix is tightly packed against the body of the LBD and makes direct contacts with ligand (Renaud et al. 1995; Wagner et al. 1995; Brzozowski et al. 1997; Darimont et al. 1998; Nolte et al. 1998; Shiau et al. 1998). These studies have suggested that ligand-dependent changes in the conformation of the AF2 helix result in the formation of a surface (or surfaces) that facilitates coactivator interactions.
A surprising number of coactivators associate with NRs in a ligand-dependent manner and may combinatorially and/or sequentially be involved in transcriptional activation (for review, see Freedman 1999; McKenna et al. 1999; C.K. Glass and M.G. Rosenfeld, in prep.). Many of these putative coactivators interact based on the presence of helical motifs containing an LXXLL core consensus (Le Douarin et al. 1996; Heery et al. 1997; Torchia et al. 1997a; Ding et al. 1998). Cocrystal structures of PPARγ with a region of steroid receptor coactivator-1 (SRC-1) containing two LXXLL motifs, and of liganded ER and thyroid hormone receptor (T3R) with a peptide comprising one LXXLL motif of glucocorticoid receptor interacting protein-1 (GRIP-1) (Darimont et al. 1998; Nolte et al. 1998; Shiau et al. 1998), indicate that a critical conserved glutamic acid residue in receptor AF2 helices and a critical conserved lysine residue in helix 3 of the LBD make hydrogen bonds to the backbone amides and carbonyls of leucine 1 and leucine 5, respectively. These contacts form a "charge clamp" that positions and orients the hydrophobic face of the LXXLL helix, allowing the leucine residues to pack into a hydrophobic pocket formed by the surfaces of receptor helices 3, 4, 5 (PPARγ) or 3, 5, 6 (T3R). A critical determinant of coactivator binding is the length of the LXXLL helix, which fits precisely between the conserved glutamate and lysine residues upon closure of the AF-2 in the presence of ligand. Residues outside the core motif appear to provide receptor- and ligand-dependent specificity (Darimont et al. 1998; McInerney et al. 1998; Mak et al. 1999). The structure of the PPARγ-SRC-1 cocrystal indicated that two LXXLL motifs from a single SRC-1 molecule interacted with the AF-2 domains of both subunits of the LBD dimer.
Intriguingly, the structures of the ER LBD bound to the antagonists raloxifene or 4-hydroxytamoxifen (OHT) demonstrate a distortion in the position of the AF2 helix (Brzozowski et al. 1997; Shiau et al. 1998). Because of the presence of an additional side chain in these antagonists, the AF2 helix is unable to pack normally and, instead, is translocated to a position that overlaps with the site of coactivator interaction. This conformation prevents coactivator binding and conversely facilitates corepressor binding (for review, see Wurtz et al. 1996; C.K. Glass and M.G. Rosenfeld, in prep.).
A search for proteins that could function as corepressors of the thyroid hormone receptor TR and RAR led to the molecular cloning of cDNAs encoding the nuclear receptor corepressor (N-CoR) (Hörlein et al. 1995; Kurokawa et al. 1995; Zamir et al. 1996), a retinoid X receptor (RXR) interacting protein 13 (RIP 13) (Lee et al. 1995), and the highly related factor SMRT [silencing mediator for retinoic acid and thyroid hormone receptors] (Chen and Evans 1995), or T3-associated factor (TRAC2) (Sande and Privalsky 1996). Both N-CoR and SMRT interact with unliganded RARs and TRs via a bipartite nuclear receptor interaction domain, in a manner that is enhanced by antagonists or removal of the AF2 domain. Several lines of evidence indicate that N-CoR and SMRT are required for the active repression functions of unliganded retinoic acid and thyroid hormone receptors (Chen and Evans 1995; Hörlein et al. 1995; Seol et al. 1996; Zamir et al. 1996; Li et al. 1997; Wong and Privalsky 1998). N-CoR and SMRT are also effective corepressors of Rev-Erb (Zamir et al. 1996), COUP-TF (Shibata et al. 1997), and DAX1 (Crawford et al. 1998). Although unliganded steroid hormone receptors do not appear to interact effectively with N-CoR or SMRT, clear interactions are observed in the presence of antagonists (Vegeto et al. 1992; Lanz and Rusconi 1994; Xu et al. 1996; Jackson et al. 1997; Smith et al. 1997; Lavinsky et al. 1998; Zhang et al. 1998a,b), and these interactions appear to be essential for full antagonist activity (Lavinsky et al. 1998; Norris et al. 1999).
In this paper we investigate the molecular mechanisms that determine interactions of corepressors with unliganded TR and RAR and their dissociation by ligand. Our data suggest that the N-CoR/SMRT corepressors interact with unliganded nuclear receptors in a fashion analogous to that utilized by coactivators with liganded receptors, but that the amino-terminal extension of conserved N-CoR interaction helices, when compared to the LXXLL consensus for coactivator interaction motifs, constitutes a critical distinction in the alternative ligand-independent binding of corepressors and ligand-dependent recruitment of coactivators to nuclear receptors.
Receptor interaction domains in N-CoR
The domain structure of N-CoR is diagrammed in Figure 1A, illustrating the two carboxy-terminal regions involved in NR interactions. N-CoR has been suggested to bind to unliganded receptors in vivo and to be released on ligand binding. This premise is based on the effects of ligands on interactions with DNA-bound receptors in vitro and in yeast two-hybrid experiments (Chen and Evans 1995), although ligand-dependent release of N-CoR is less evident when evaluated on NRs in solution. We therefore first wished to confirm the ligand-dependent release of N-CoR from RARs bound to the RAR promoter in cells, utilizing the chromatin immunoprecipitation (ChIP) assay (Braunstein et al. 1993; Luo et al. 1998). In this experiment we utilized N-CoR-specific IgG to immunoprecipitate sheared, chromatinized DNA prepared from cells cultured in the presence or absence of all-trans retinoic acid. As shown in Figure 1B, N-CoR could be cross-linked to the RAR promoter in the absence, but not in the presence, of ligand. This observation provides evidence that on endogenous, regulated, chromatinized transcription units, N-CoR is physiologically associated with unliganded, but not liganded, DNA-bound RAR. Therefore, exchange of N-CoR occurs on specific promoters in the intact cell, as exemplified by the regulated RAR promoter.
The carboxy-terminal NR interaction domain (ID-C) previously has been suggested to be localized to a 60-amino-acid region, located between amino acids 2240-2300 (Chen and Evans 1995; Hörlein et al. 1995), although an amino-terminal region spanning amino acids 2040-2239 was later identified (Zamir et al. 1997; Cohen et al. 1998). To further explore the molecular basis for association of the corepressor complex with unliganded receptors, and to provide an explanation for ligand-dependent dissociation, we systematically mapped the amino-terminal interaction domain (ID-N) of N-CoR. Binding experiments using a series of 50-amino-acid overlapping fragments from amino acids 2040-2180 of N-CoR, fused to GST, suggested that residues from amino acids 2040-2090 encompassed the most potent interaction region (Fig. 1C). Further mapping revealed that residues 2060-2080 were critical for interaction in vitro (Fig. 1D). Based on the previous identification of residues 2268-2298 as the critical region in the ID-C, potential sequence alignments between the two regions were considered. One alignment (Fig. 2A) appeared to be the most likely and was further supported upon consideration of the corresponding sequences in both murine and human N-CoR and SMRT. In light of the similarity of this alignment segment with the LXXLL coactivator motif (Heery et al. 1997; Darimont et al. 1998; Ding et al. 1998), structural predictions of these regions were evaluated using several algorithms, including the self-optimized prediction method (Geourjon and Deleage 1994), which predicted an extended helical structure for these putative N-CoR and SMRT recognition domains that is at least one helical turn longer than the LXXLL helix (Fig. 2A).
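As a concrete illustration of the two consensus patterns, a peptide sequence can be scanned for the corepressor LXXI/HIXXXI/L motif and the shorter coactivator LXXLL motif with simple regular expressions. The code and the toy sequences below are illustrative assumptions, not software used in this study.

```python
import re

# Consensus LXX(I/H)IXXX(I/L): L at position 1, I/H at 4, I at 5, I/L at 9.
COREPRESSOR_MOTIF = re.compile(r"L..[IH]I...[IL]")
# The shorter coactivator consensus, LXXLL.
COACTIVATOR_MOTIF = re.compile(r"L..LL")

def scan(pattern, sequence):
    """Return (1-based start, matched peptide) for every non-overlapping hit."""
    return [(m.start() + 1, m.group()) for m in pattern.finditer(sequence)]

# Toy sequences invented to contain one match each.
print(scan(COREPRESSOR_MOTIF, "MASTLEDIIRKALMGKYDE"))  # [(5, 'LEDIIRKAL')]
print(scan(COACTIVATOR_MOTIF, "GSKILHRLLQDSS"))         # [(5, 'LHRLL')]
```

A real motif search would of course also weigh the flanking residues and predicted helicity, which, as described above, modulate receptor binding.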
Cooperative ID recruitment to DNA-bound receptor heterodimers
Based on this alignment, a series of mutations in ID-C and ID-N were generated to test the potential importance of the predicted leucine and isoleucine residues. Sequences spanning amino acids 2062-2084 or 2268-2289 were each capable of detectable, specific interactions with unliganded TR (data not shown). This was further explored using a mammalian two-hybrid approach involving recruitment of a VP16-RAR fusion protein (Lipkin et al. 1996) to a GAL4/T3R carboxy-terminal fusion protein. In this assay, effective interaction was observed for the ID-C peptide, but the ID-N peptide region could interact only weakly with either TR or RAR. Therefore, a region from amino acids 1954-2215 spanning the ID-N domain was utilized in the two-hybrid assay (Fig. 2A). Simultaneous mutation of L1, I5, and I/L9 to alanine residues in either ID-N or ID-C abolished interaction with unliganded NR.

Figure 1 legend (partial): (B) The experiments reproducibly revealed the presence of N-CoR on the RAR promoter in the absence of ligand, but not in the presence of RA; no detectable precipitation of promoter was observed with control preimmune IgG (ct IgG). (C) GST pull-down assay testing binding of overlapping fragments of ID-N to T3R. Sequences spanning amino acids 2040-2090 are required for effective interaction. (D) Mapping of the critical residues in the 2040-2090 interaction domain. Cluster mutation of the indicated five adjacent amino acids to alanine residues was performed across the interval. GST pull-down analysis revealed that residues spanning amino acids 2060-2080 were quantitatively the most critical for interaction, with some contribution from amino acids 2080-2090.
To further examine the possibility of cooperativity between the amino- and carboxy-terminal interaction motifs, we introduced mutations of the L1, I5, and L/I9 residues of both ID-C and ID-N into an N-CoR carboxy-terminal sequence (amino acids 2053-2453) that encompassed the two interaction domains. Interaction of wild-type and mutant N-CoR with DNA-bound RAR/RXR heterodimers was assessed using the avidin-biotin complex DNA (ABCD) assay (Kurokawa et al. 1995). Mutations of the L1/I5/L9 residues in the ID-N domain abolished detectable binding, whereas mutations of the comparable residues in the ID-C domain markedly diminished the interaction (Fig. 2B). These data are consistent with a model in which both partners of the DNA-bound heterodimer can bind one of the two N-CoR corepressor interaction motifs, with cooperative recruitment of unliganded receptors (Fig. 2B). Addition of ligand caused the release of N-CoR, consistent with previous ABCD data (Heinzel et al. 1997; Zamir et al. 1997).
We therefore wished to investigate whether peptides corresponding to the minimal binding regions could inhibit TR binding to N-CoR in vitro or in the context of the intact cell.

Figure 2 legend: (A) Alignment of the N-CoR and SMRT ID-N and ID-C motifs reveals a conserved LXXI/HIXXXI/L extended helix compared to that of the LXXLL motif of SRC1 or the AF2 domain of RXR. Clustered mutation of these residues in ID-C or in a region (1954-2215) encompassing ID-N resulted in loss of interactions in GST pull-down assays or a mammalian two-hybrid assay, confirming the critical importance of the L1, I5, and I9 residues. (B) ABCD analysis of N-CoR binding to RXR/RAR heterodimers on a DR+5 element. An N-CoR interaction region spanning amino acids 2053-2453 was bound effectively in the absence, but not in the presence, of RA; mutation of the three conserved L and I residues in either ID-N or in ID-C markedly diminished or abolished N-CoR interaction with the DNA-bound receptor heterodimer. (C) Synthetic peptides used for competition studies. (D) Peptide competition of N-CoR binding by T3R: addition of the ID-C peptide gives clear competition at 50 µM, the RXR AF2 peptide does not compete even at higher concentration, and the SRC1-LXD2 peptide gives a slight but detectable competition at high concentrations. (E) Peptide competition of GAL4/T3R carboxyl terminus fusion protein-dependent inhibition of a UAS × 3/tk-lacZ reporter in single cell nuclear microinjection assays in Rat-1 cells. The nuclear receptor interaction domain (amino acids 570-843 and 626-783) of SRC1 was expressed as a GAL4 fusion protein under control of the CMV promoter.
We made synthetic peptides of 23 and 22 residues, corresponding to the amino- and carboxy-terminal motifs, respectively. We also evaluated the effects of a 22-amino-acid sequence corresponding to LXD2, the LXXLL domain motif of SRC-1, documented previously to inhibit activation events in the cell, as well as a 20-amino-acid peptide corresponding to the RXR AF2 motif that was effective for biochemical competition, on N-CoR interactions with unliganded receptor. As seen in Figure 2D, even the minimal ID-C peptides could inhibit the ability of the N-CoR carboxyl terminus (amino acids 2040-2300) to bind T3R, whereas the RXR AF2 peptide did not compete. The LXD2 peptide, although effective in blocking coactivator function, was only minimally effective as a competitor. The ability of a minimal corepressor motif peptide to inhibit N-CoR interaction was studied further using the single cell nuclear microinjection assay, comparing the ability of each peptide to inhibit active repression by a GAL4/T3R carboxyl terminus fusion protein (Fig. 2E). In this assay a GAL4/T3R fusion protein inhibited expression of the UAS/tk reporter around fivefold. Both the N-CoR minimal ID-N and ID-C peptides proved capable of effectively inhibiting active repression function, despite the relatively weak interactions of these minimal interaction domains in vitro. In contrast, the RXR AF2 peptide failed to relieve the repression function of the T3R carboxyl terminus. Injection of either the SRC-1 LXD2 peptide or overexpression of a transcription unit encoding the SRC-1 nuclear interaction domain (amino acids 626-783) also failed to reverse N-CoR-dependent T3R repressor function (Fig. 2E).
Residues critical for ID-C and ID-N interactions
Based on these data, we further explored the residues that might be required for interactions of either domain with unliganded T3R by mutation of single or adjacent amino acids in the ID-C (Fig. 3A). Mutations at the extreme amino or carboxyl terminus of the ID-C were found not to inhibit binding to the T3R (Fig. 3B). Clustered mutations of amino acids 2271-2275 or mutation of L1 (amino acid 2277) caused only partial loss of binding; in contrast, mutations of amino acids 2282-2285 or 2286-2290 strongly inhibited binding, as did mutation of L9 (amino acid 2285) (Fig. 3B). The mammalian two-hybrid assay was utilized to confirm that similar requirements for interaction occur in cells: the wild-type, 22-amino-acid ID-C sequence interacted with RAR, but mutation of L1, I5, or L9 abolished this interaction. As in the case of the biochemical assay, point mutation of the extreme amino- or carboxy-terminal residues did not affect interaction (Fig. 3C). However, clustered mutations of the five residues flanking the core motif at the amino terminus, as well as at the carboxyl terminus, abolished interaction, as did mutation of the L1 (amino acid 2277) residue, indicating that for RAR, L1 is a critical residue and that flanking amino acids modulate receptor interactions.
In the case of ID-N, the 23-amino-acid (2062-2084) core motif exhibited weak but detectable binding to T3R, which was enhanced with a 43-amino-acid region (2053-2094), and fully effective interactions were observed with a region extending from amino acid 1954 to 2215. In the context of the 1954-2215 ID-N, L1, I5, and I9 were each found to be required for NR binding (Fig. 3E). A cluster mutation of residues 2060-2064 diminished interaction only mildly (Fig. 3E), whereas mutation of residues 2071-2072 (aspartic acid; histidine), intervening in the core motif between the L1 and I5 residues, abolished interactions. In contrast, a mutation of the residues carboxy-terminal to the I9 residue (amino acids 2078-2080), which are not predicted to be part of the extended helix, caused only a small decrease in the interactions as determined using the mammalian two-hybrid assay. Together, these data suggest that residues within the LXXI/HIXXXI/L core are critical for corepressor-NR interaction, although interactions are clearly strengthened in the context of additional amino- or carboxy-terminal sequences.
Determinants on the T3R for N-CoR interaction
The similarity of the two N-CoR core nuclear receptor interaction domains with coactivator interaction motifs, and the ability of the AF2 helix to inhibit corepressor interaction, strongly suggested that coactivators and corepressors utilize an overlapping interaction surface. Inspection of the T3R crystal structure (Wagner et al. 1995; Feng et al. 1998) indicated that the corepressor interaction surface might involve interactions with either helices 1, 3, 5, 6, or 11. We therefore evaluated the effects of arginine substitutions of specific residues in these helices, based on amino acid positions most likely to represent sites available for interaction. Several mutations in the amino terminus of helix 1 did not significantly affect binding to N-CoR, whereas the previously investigated mutations of conserved residues at the carboxyl terminus (A223, H224, T227 mutated to glycine) fully abolished N-CoR binding. Here, we show the same effect mutating A223, H224 to glycine or to arginine, and even the single mutation of H224 to alanine diminishes binding (Fig. 4A,B). Mutations of residues in helices 5 and 6 that are critically involved in coactivator binding (Feng et al. 1998; Nolte et al. 1998) were found to markedly disrupt binding of N-CoR (Fig. 4B). Thus, mutation of V279 and K283, required to position the coactivator helix, impaired or abolished interactions of N-CoR with the T3R (Fig. 4A,B). Additional residues that disrupt coactivator binding (C293/I297, C304/I307) also disrupted or impaired binding of the corepressor interaction domains. These data were confirmed in the intact cell using a two-hybrid interaction assay with VP16 N-CoR(2053-2453) to detect protein-protein interactions with the T3R carboxyl terminus (data not shown). Finally, a detailed analysis of helix 11 was performed, introducing mutations throughout the length of the entire helix to test the possibility that the extended corepressor interaction helix contacted specific residues in helix 11. This mutational analysis failed to detect any residues that appeared to specifically affect corepressor binding (Fig. 4A,B). As a control, the ability to bind the nuclear receptor interaction domain of SRC-1 was evaluated, finding, as predicted, that mutations of helix 5/6 disrupted coactivator binding (Fig. 4C) and blocked ligand-dependent activation in cotransfection assays (data not shown), consistent with previous reports (Feng et al. 1998).
In parallel, each mutant form of the T3R was evaluated for its ability to function as an N-CoR-dependent repressor on a UAS × 3/tk-lacZ reporter. Mutations in the coactivator binding pocket that disrupted corepressor binding in vitro also abolished active repression, whereas mutations in helix 11 that did not affect binding also did not disrupt active repression function (Fig. 4D and data not shown). Together, these data suggest that a series of critical residues in the coactivator binding pocket are also essential for binding and function of the corepressor (N-CoR).
These observations raise the intriguing question of how the corepressor recognition helix interacts with the coactivator-binding site without the requirement for the AF2 charge clamp that stabilizes coactivator interactions. The structural prediction of an amino-terminally extended helix in the corepressor interaction motif (Fig. 5A) suggested an essential role for these residues. This possibility was tested by introducing single or double amino acid substitutions into the ID-N, converting each residue to glycine to break the putative helical extension (Fig. 5B). Conversion of either or both amino acids to glycine abolished interaction with T3R in binding assays (data not shown) or in the mammalian two-hybrid assay (Fig. 5C). Therefore, our data are compatible with the suggestion that the corepressor binding motif represents a three-amino-acid amino-terminal helical extension (LXX) (residues 1-3) beyond a core I/HIXXXI/L (residues 4-9) that permits binding to the same hydrophobic pocket of the receptor occupied by coactivators. To determine whether this extension was sufficient for discrimination of corepressor and coactivator interaction motifs, we tested whether converting the carboxy-terminal portion of the helix, IXXXL (residues 5-9), to an LXXLL "coactivator consensus" motif could also mediate binding. Interestingly, with this modification, no binding was observed (Fig. 5C), consistent with a model in which an extended helix is prevented from binding to the coactivator pocket in the presence of ligand because it is too long to be accommodated by the charge clamp.
Discussion
The ligand-dependent exchange of corepressor for coactivator complexes appears central to regulation of gene expression by NRs. Many of the numerous proposed coactivators of nuclear receptors have proven to share a core recognition domain consisting of a short α helix of consensus sequence LXXLL (Ding et al. 1998; Heery et al. 1997; Torchia et al. 1997; Voegel et al. 1998). This helix appears to be oriented and positioned by a conserved glutamic acid residue in the AF2 helix and a conserved lysine at the end of helix 3. Upon ligand binding, these residues form a charge clamp that makes contacts with the polypeptide backbone at the ends of the LXXLL helix and allows packing of the leucine residues into the hydrophobic receptor pocket formed by helices 3, 4, 5, and 6 (Darimont et al. 1998; Nolte et al. 1998; Le Douarin et al. 1996; Shiau et al. 1998). A critical determinant of the specificity of coactivator interaction is that the charge clamp can only accommodate a helix of a particular length.

Figure 5 legend (partial): (B) …(Mut1, Mut2, Mut3) of ID-N or to convert residues 5-9 into an LXXLL consensus (Mut4). (C) Mammalian two-hybrid assay of the interaction between wild-type and mutant GAL4 N-CoR(1954-2215) and VP16/RAR using a UAS × 3/p36 luciferase reporter. (D) Ribbon diagram of the corepressor extended helix (in red) predicted to contact the hydrophobic (coactivator) binding pocket formed by helices 3, 5, and 6. An idealized helix [sequence (A5)LAAIIAAALRL] was built and transformed into the coactivator binding site by superimposing the LAAII residues onto the corresponding LXXLL residues of the coactivator peptide, using the PPARγ/SRC-1/BRL49653 complex as a model. This idealized helix position was then transformed onto T3R by superimposing PPARγ onto T3R. The carboxy-terminal end of the helix points at helix 1, and the amino-terminal end of the helix is sterically blocked by the position of the AF-2 helix (in yellow). The binding of the shorter coactivator helix of GRIP-1 to the same pocket is represented in green. Below is shown an expanded view of the AF-2 (yellow), corepressor (red), and coactivator (green) helices. (E) Model of the ligand-dependent exchange of corepressor for coactivator. The two related N-CoR interaction helices are suggested to be cooperatively recruited into the helix 3, 5, 6 binding pocket of RXR/RAR or RXR/T3R heterodimers on DNA, with no requirement for the conserved glutamic acid residues of the AF2 helix. Ligand binding induces exchange for coactivators, which contain the short LXXLL helical motifs, requiring the conserved glutamic acid residue of the AF-2 helix for effective orientation and positioning into the receptor binding pocket.

Figure 4 legend: (A) The position of a series of mutations introduced into T3R, involving residues in helices 1, 3, 5, 6, and 11, is imposed on the known structure of the T3Rα LBD (Wagner et al. 1995), with the ligand removed and the position of AF2 rotated. The effect of the mutations on N-CoR binding is listed. (B) Analysis of these mutations in GST pull-down assays using GST-N-CoR(2040-2300) and 35S-labeled T3R. (C) Similar analysis of the effect of T3R mutations on GST-SRC(631-763) binding. (D) Repressor function of mutated T3R in a single cell nuclear microinjection assay performed in Rat-1 cells using a GAL4/T3R carboxy-terminal fusion protein and a UAS × 3/tk-lacZ reporter.
Furthermore, the cocrystal structure of a portion of the SRC-1 nuclear receptor interaction domain containing two LXXLL motifs on a PPARγ LBD dimer (Nolte et al. 1998) supports the suggestion that each member of the receptor homo- or heterodimer pair of DNA-bound NRs can cooperatively recruit one molecule of p160 coactivator. This model has raised intriguing questions of whether a similar strategy might be utilized in recruitment of the corepressor.
In this paper we have provided evidence that, in an analogous fashion, N-CoR contains two related, but putatively extended, helical motifs with amphipathic properties, in which critical spacing of hydrophobic residues constitutes a structural determinant of high-affinity interaction with unliganded RARs and TRs. These motifs appear to share the consensus LXXI/HIXXXI/L, which is predicted to represent an amino-terminally extended helix when compared to the known LXXLL coactivator helices (Fig. 5B). Based on both biochemical and functional assays, this amino-terminal extension in the N-CoR interaction motif appears to be required for effective binding to unliganded receptor. As would be expected from results with the LXXLL coactivator helical motifs (Darimont et al. 1998; McInerney et al. 1998), residues flanking this motif are of quantitative importance, consistent with additional contacts that stabilize binding.
The critical determinants of corepressor binding appear to reside in the "coactivator" receptor-binding pocket formed by helices 3, 5, and 6. Thus, the corepressor uses, at least in an overlapping fashion, the hydrophobic pocket that is required for coactivator binding. This raises the question of why the AF2 helix is fully inhibitory for N-CoR binding to most NRs, whereas it only quantitatively diminishes interactions in the case of the TRs and RARs. Even in the case of RARs and TRs, antagonists that cause distinct placement of the AF2 helix increase binding and function of the corepressor (Lavinsky et al. 1998), indicating that the AF2 helix is inhibitory, and that the conserved glutamic acid residue of AF2, critical for coactivator binding and function, is not required in the case of corepressor binding. The amino-terminal extension of the corepressor helix has been modeled on the T3R carboxyl terminus (Fig. 5D). We suggest that the extended helix functions to displace the AF2 helix out of the pocket and to make contact with the receptor coactivator pocket. This is in contrast to the coactivator LXXLL motif, which actually requires the AF2 helix to be effectively positioned for packing into the hydrophobic coactivator-binding pocket. As shown in Fig. 5D, I/L9 acts as a fulcrum for motion of the helix, and predictions made by energy minimization suggest that the presence of an isoleucine at position 5 is essential to allow L1 to drop deep into the receptor pocket; in this model, I5 gives a better fit than L5 against the sloping wall of the receptor.
Although it was clearly possible that N-CoR contact with helix 11 facilitates binding, our data strongly suggest that helix 11 of the NR is not a component of the corepressor binding contact. Thus, these observations suggest that the extended helix of the corepressor permits binding into the hydrophobic pocket without any requirements for the glutamic acid residue within the AF2 helix, critical for positioning LXXLL coactivator motifs.
Modeling of the corepressor LXXL/IIXXXI/L helix (Fig. 5B) indicates that it cannot make contact with helix 1, although a break in the carboxy-terminal helix could permit contact of more carboxy-terminal residues with helix 1. However, the positions of the H and T residues (Wurtz et al. 1996) are more consistent with the idea that these residues of TRs and RARs interact with and affect the precise placement of other helices sufficiently to facilitate N-CoR binding to unliganded receptors. Modeling also suggests that the AF2 helix, displaced by corepressors, might interact with the carboxyl terminus of helix 1, further facilitating corepressor binding. In receptors such as the estrogen receptor, we propose that the repositioning of the AF2 helix by tamoxifen, as opposed to the unoccupied or agonist-bound receptor, moves the AF2 helix to a position that now permits corepressor binding into the hydrophobic pocket, even without the structural feature of TR and RAR helix 1 (Le Douarin et al. 1996). Intriguingly, using tamoxifen-bound ERs and an unbiased phage display selection assay, novel related peptides binding to ER were selected (Norris et al. 1999). Several of these peptides tested do not compete with N-CoR for binding to the TR and RAR (V. Perissi et al., unpubl.), suggesting that a distinct surface may be involved for corepressor binding, consistent with the proposal by Norris et al. (1999) that there may be distinct receptor interactions for tamoxifen-mediated partial agonist function, perhaps by binding distinct coactivators (Imhof and McDonnell 1996).
Thus, we suggest that a critical evolutionary adaptation of the LBD has been the selection of an LXXLL helix, critical in ligand-dependent coactivator binding, and an extended LXXH/IIXXXL/I helix, which has acquired the properties required to permit corepressor binding in the absence of ligand, and that cooperative recruitment on DNA-bound receptor heterodimers occurs in each case (Fig. 5E).
ChIP assay
The ChIP assay for acetyl-histone H4 was conducted as per the Upstate Biotechnology ChIP assay protocol. For N-CoR association with the RAR promoter, 293 cells (2 × 10^6 cells/10-cm dish) were serum-stripped for 24 hr and treated with 10^-6 M RA for 10 min; protein complexes were cross-linked to DNA with 1% formaldehyde for 30 min at 37°C. Cell pellets were resuspended in SDS lysis buffer, sonicated, and precleared with salmon sperm DNA/protein A agarose. Samples of the supernatants were used for input measurement, and the rest was incubated with anti-N-CoR antibody (Santa Cruz, CA) at 4°C overnight. Immune complexes were isolated and cross-linking was reversed at 65°C for 4 hr. Samples were subjected to proteinase K digestion, and DNA was extracted and precipitated. Detection of the promoter was determined by PCR amplification with specific primers.
DNA-dependent protein-protein interaction (ABCD) and GST pull-down assays
ABCD assay and GST pull-down assay were performed as described previously (McInerney et al. 1998). | 2018-04-03T04:03:08.309Z | 1999-12-15T00:00:00.000 | {
"year": 1999,
"sha1": "490f7ca802d261ebe58fdc4feaee64fa3be34255",
"oa_license": null,
"oa_url": "http://genesdev.cshlp.org/content/13/24/3198.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3ff283352bb12d2ac184567d56425e1d3004f1fe",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
8223347 | pes2o/s2orc | v3-fos-license | Planning horizon affects prophylactic decision-making and epidemic dynamics
The spread of infectious diseases can be impacted by human behavior, and behavioral decisions often depend implicitly on a planning horizon—the time in the future over which options are weighed. We investigate the effects of planning horizons on epidemic dynamics. We developed an epidemiological agent-based model (along with an ODE analog) to explore the decision-making of self-interested individuals on adopting prophylactic behavior. The decision-making process incorporates prophylaxis efficacy and disease prevalence with the individuals’ payoffs and planning horizon. Our results show that for short and long planning horizons individuals do not consider engaging in prophylactic behavior. In contrast, individuals adopt prophylactic behavior when considering intermediate planning horizons. Such adoption, however, is not always monotonically associated with the prevalence of the disease, depending on the perceived protection efficacy and the disease parameters. Adoption of prophylactic behavior reduces the epidemic peak size while prolonging the epidemic and potentially generates secondary waves of infection. These effects can be made stronger by increasing the behavioral decision frequency or distorting an individual’s perceived risk of infection.
INTRODUCTION
Human behavior plays a significant role in the dynamics of infectious disease (Ferguson, 2007;Funk, Salathé & Jansen, 2010). However, the inclusion of behavior in epidemiological modeling introduces numerous complications and involves fields of research outside the biological sciences, including psychology, philosophy, sociology, and economics. Areas of research that incorporate human behavior into epidemiological models are loosely referred to as social epidemiology, behavioral epidemiology, or economic epidemiology (Manfredi & D'Onofrio, 2013;Philipson, 2000). We use the term 'behavioral epidemiology' to broadly refer to all epidemiological approaches that incorporate human behavior. While the incorporation of behavior faces many challenges (Funk et al., 2015), one of the goals of behavioral epidemiology is to understand how social and behavioral factors affect the dynamics of infectious disease epidemics. This goal is usually accomplished by coupling models of social behavior and decision making with biological models of contagion (Perrings et al., 2014;Funk, Salathé & Jansen, 2010).
Many social and behavioral aspects can be incorporated into a model of infectious disease. One example is the effect of either awareness or fear spreading through a population (Funk et al., 2009;Epstein et al., 2008). In these types of models, the spread of beliefs or information is treated as a contagion much like an infectious disease, though the network for the spread of information may differ from the biological network (Bauch & Galvani, 2013). Other models focus on how individuals adapt their behavior by weighting the risk of infection with the cost of social distancing (Fenichel et al., 2011;Reluga, 2010) or other disincentives (Auld, 2003). Still others model public health interventions (e.g., isolation, vaccination, surveillance, etc.) and individual responses to them (Del Valle et al., 2005). Many of these models sit at the population level, incorporating the effects of social factors and abstracting away details about the individuals themselves.
The SPIR model (Susceptible, Prophylactic, Infectious, Recovered) developed here, in contrast, is an epidemiological model that couples individual behavioral decisions with an extension of the SIR model (Kermack & McKendrick, 1927). In this model, agents that are vulnerable to infection may be in one of two states, susceptible or prophylactic, which is determined by their behavior. Agents in the susceptible state engage in the status quo behavior, while agents in the prophylactic state employ preventative behaviors that reduce their chance of infection. We use a rational choice model to represent individual behavioral decisions, where individuals choose, between prophylactic behavior (e.g., hand-washing or wearing a face mask) and non-prophylactic behavior (akin to the status quo), whichever yields the larger utility. We also allow for the fact that individuals may not perceive the risk of getting infected accurately, but rather receive distorted information, for example, through the media.
We use the SPIR model to understand how an individual's planning horizon (the time in the future over which individuals calculate their utilities to make a behavioral decision) affects behavioral change, and how that, in turn, influences the dynamics of an epidemic.
One of our key findings is that individuals choose to engage in prophylactic behavior only when the planning horizon is "just right." If the planning horizon is set too far into the future, it is in an individual's best interest to become infected (i.e., get it over with); if the horizon is too short, individuals dismiss the future risk of infection (i.e., live for the moment). What counts as "just right," too short, and too far (i.e., the time scale of the planning horizon) depends on the disease in question. We explore two hypothetical contrasting diseases, one with long recovery time and acute severity, and another with short recovery time and mild severity.
Figure 1: State-transition diagram of agents in the Disease Dynamics Model. S, P, I, and R represent the four epidemiological states an agent can be in: Susceptible, Prophylactic, Infectious, and Recovered, respectively. The parameters over the transitions connecting the states represent the probability per time step that agents in one state move to an adjacent state: i is the proportion of infectious agents in the population; b_S and b_P are the respective probabilities that an agent in state S or P, encountering an infectious agent I, becomes infected; g is the recovery probability; ρ is the reduction in the transmission probability when an agent adopts prophylactic behavior, linearly relating b_S and b_P, i.e., b_P = ρb_S; d is the behavioral decision-making probability; and W(i) is an indicator function returning value 1 when the utility of being prophylactic is greater than the utility of being susceptible and 0 otherwise (see details in Behavioral Decision Model).
The SPIR model
The SPIR model couples two sub-models: one, the Disease Dynamics Model, reproducing the dynamics of the infectious disease, and another, the Behavioral Decision Model, that determines how agents in the disease dynamics model make the decision to engage in prophylactic or non-prophylactic behavior.
Disease dynamics model
The disease dynamics model reproduces the dynamics of the infectious disease in a constant population of N agents. Each agent can be in one of four states: Susceptible (S), Prophylactic (P), Infectious (I), or Recovered (R). The difference between agents in states S and P is that the former engage in non-prophylactic behavior and do not implement any measure to prevent infection, while the latter adopt prophylactic behavior, which decreases their probability of being infected (e.g., wearing a mask, washing hands, etc.). The adoption of prophylactic behavior, however, imposes costs on individuals that may prevent them from engaging in such behavior. In Western countries, for example, wearing a mask, besides being uncomfortable, may signal a lack of trust in others or an unhealthy state, thus reducing social contacts. Agents in the infectious state I are infected and infective, while those in the recovered state R are immune to and do not transmit disease. The transition between states is captured with the state-transition diagram shown in Fig. 1. For reference, the parameters and variables of the SPIR model are listed and defined in Table 1. This sub-model assumes that in each time step, three types of events occur: (1) interactions among agents and any infections that may result, (2) behavioral decisions to engage in prophylactic or non-prophylactic behavior, and (3) recoveries.

Table 1: Variables and parameters of the SPIR model.
s: Proportion of susceptible agents in the population.
p: Proportion of prophylactic agents in the population.
i: Proportion of infectious agents in the population.
r: Proportion of recovered agents in the population.
b_S: Probability that an agent in the susceptible state becomes infected upon interacting with an infectious agent.
b_P: Probability that an agent in the prophylactic state becomes infected upon interacting with an infectious agent.
ρ: Reduction in the transmission probability or rate when adopting prophylactic behavior: b_P = ρb_S (0 ≤ ρ ≤ 1). Note that we refer to 1 − ρ as the protection.
d: Probability per time step that a susceptible or prophylactic agent makes a behavioral decision.
g: Probability that an infectious agent recovers.
W(i): Indicator function returning value 1 when the utility of adopting prophylactic behavior is greater than the utility of adopting non-prophylactic behavior and 0 otherwise.

All agents interact by pairing themselves with another randomly selected agent in the population. Given four possible states, there are ten possible pairwise interactions. However, only two types of interactions can change the state of an agent: (S, I) and (P, I). For the interaction (S, I), the susceptible agent S is infected by the infectious agent I with probability b_S. For the interaction (P, I), the prophylactic agent P is infected by the infectious agent I with probability b_P, where b_P ≤ b_S. The probability b_P is linearly related to the probability b_S by the coefficient ρ (i.e., b_P = ρb_S). Thus (1 − ρ) is the protection acquired by adopting prophylactic behavior. Assuming well-mixed interactions, the proportion of infectious agents i represents the probability that an agent is paired with an infectious agent, and the per-time-step probability of an agent in state S or P being infected is ib_S or ib_P, respectively. In addition to interacting, susceptible and prophylactic agents have probability d per time step of making a behavioral decision to engage in prophylactic or non-prophylactic behavior. The agents' behavioral decision is reflected in the indicator function W(i) (see details in Behavioral Decision Model), and they engage in the prophylactic behavior when W(i) = 1 (i.e., adopt state P) and the non-prophylactic behavior when W(i) = 0 (i.e., adopt state S). Infectious agents have probability g per time step of recovering. We implemented an agent-based version of the disease dynamics model using the Gillespie algorithm (Gillespie, 1976) (source code available in Supplemental Information 1).
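To make these transition rules concrete, a minimal synchronous, fixed-step sketch of the agent dynamics is given below. The published implementation uses the Gillespie algorithm with asynchronous per-agent events, so this simplified loop, its parameter values, and the placeholder decision rule are illustrative assumptions only.

```python
import random

def spir_step(states, bS, rho, g, d, W):
    """Advance all agents one time step under the SPIR transition rules."""
    n = len(states)
    i = states.count('I') / n  # current prevalence of infection
    new_states = []
    for s in states:
        if s in ('S', 'P'):
            # (1) Pair with a random agent; infection only if the partner is infectious.
            beta = bS if s == 'S' else rho * bS
            if random.random() < i * beta:
                new_states.append('I')
                continue
            # (2) With probability d, re-decide behavior via the indicator W(i).
            if random.random() < d:
                s = 'P' if W(i) == 1 else 'S'
            new_states.append(s)
        elif s == 'I':
            # (3) Recover with probability g.
            new_states.append('R' if random.random() < g else 'I')
        else:
            new_states.append('R')
    return new_states

# Illustrative run: 1,000 agents, 10 initially infectious, and a placeholder
# decision rule that never favors prophylaxis (W always 0).
population = ['I'] * 10 + ['S'] * 990
for _ in range(300):
    population = spir_step(population, bS=0.1, rho=0.25, g=0.05, d=0.01,
                           W=lambda i: 0)
print({state: population.count(state) for state in 'SPIR'})
```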
Our agent-based model (ABM) is a useful tool for exploring the aggregated effects of individual decision making, including scenarios where populations are heterogeneous (e.g., some individuals may be more risk-averse than others). A drawback of this model is that it requires simulations that can be computationally intensive. The stochasticity of the ABM further exacerbates the computational burden when mean results are desired. If, however, we assume that the population is well-mixed, the dynamics can be generated using a system of ODEs (see Sec. S7 in Supplemental Information 1 for a comparison):

ds/dt = −βsi + δ[(1 − W(i))p − W(i)s],
dp/dt = −ρβpi + δ[W(i)s − (1 − W(i))p],    (1)
di/dt = βsi + ρβpi − γi,
dr/dt = γi,

where s, p, i, and r are the proportions of susceptible, prophylactic, infectious, and recovered agents in the population. The parameters β, γ, and δ are the transmission, recovery, and decision rates, whose equivalent probabilities are, respectively, the transmission (b_S), recovery (g), and decision (d) probabilities (Table 2). The parameter ρ refers to the reduction in the transmission rate when adopting prophylactic behavior. We convert between rates and probabilities using the equations x = −ln(1 − y) and y = 1 − e^(−x), where x and y are rate and probability values, respectively (Fleurence & Hollenbeak, 2007). One unit of continuous time in the ODE model corresponds to N time steps in the ABM.
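A direct numerical integration of Eq. (1) can be sketched as follows; the threshold rule standing in for W(i) and all parameter values are illustrative assumptions (the full model derives W(i) from the utility comparison described in the next subsection).

```python
import numpy as np
from scipy.integrate import solve_ivp

def spir_rhs(t, y, beta, gamma, delta, rho, W):
    """Right-hand side of the SPIR ODE system in Eq. (1)."""
    s, p, i, r = y
    w = W(i)  # 1 if prophylactic behavior has the larger utility, else 0
    ds = -beta * s * i + delta * ((1 - w) * p - w * s)
    dp = -rho * beta * p * i + delta * (w * s - (1 - w) * p)
    di = beta * s * i + rho * beta * p * i - gamma * i
    dr = gamma * i
    return [ds, dp, di, dr]

# Rates obtained from illustrative per-time-step probabilities via x = -ln(1 - y).
beta, gamma, delta = -np.log(1 - 0.1), -np.log(1 - 0.05), -np.log(1 - 0.01)
rho = 0.25
threshold_W = lambda i: 1 if i > 0.05 else 0  # stand-in for the utility rule
solution = solve_ivp(spir_rhs, (0.0, 400.0), [0.99, 0.0, 0.01, 0.0],
                     args=(beta, gamma, delta, rho, threshold_W), max_step=0.5)
s, p, i, r = solution.y[:, -1]
print(f"final fractions: s={s:.3f} p={p:.3f} i={i:.4f} r={r:.3f}")
```

The small max_step keeps the integrator well-behaved across the discontinuity introduced by the threshold form of W(i).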
Behavioral decision model
Recall that agents have per-step probability d of making a behavioral decision; here we specify how those decisions are made. The behavioral decision model, in principle, can be any model that enables agents to decide whether or not to engage in prophylactic behavior. Our decision model is a rational choice model that assumes agents are self-interested and rational; thus they adopt the behavior with the largest utility over the planning horizon, H. Note that the planning horizon is a construct used to calculate utilities and it does not affect the time until an agent has an opportunity to make another decision within the disease dynamics model. The planning horizon is the time in the future over which agents calculate their utilities. In order for the agent to make these calculations, we make the following assumptions about agents tasked with making a decision: (1) Agents have identical and complete knowledge of the relevant disease parameters, b_S, b_P, and g. (2) Prophylactic behavior has the same protection efficacy, ρ, for all agents. (3) Agents believe the current prevalence of the disease is i (in the case of no distortion; see below). (4) Agents assume that i will remain at its current value during the next H time steps. (5) Agents compute expected waiting times based on a censored geometric distribution. Specifically, they believe that they will spend the amount of time T_X in state X ∈ {S, P}, where the time T_X has a geometric distribution with parameter ib_X, censored at the planning horizon H. When their time in X is over, agents know they will move to state I, where the amount of time they expect to spend in state I has a geometric distribution with parameter g censored at the time remaining, H − T_X. When their time in state I is over, they know they will move to state R, where they will remain until time H. (6) Agents know the per-time-step payoff for each state, u_Y, where Y ∈ {S, P, I, R}, and all agents are assumed to have the same set of payoff values. (7) Agents calculate the sum of the utility from now until time H under the two possible behavioral decisions (D_S or D_P).
Perfect knowledge of i. To calculate the utilities when they have perfect knowledge of i, agents use the length of time they expect to spend in each state. We begin by deriving the expected time in state X, where X ∈ {S,P}, and then use this result to derive the expected time in I and, finally, R. The expected time in state I is conditioned on T_X because agents are rational and average the time they expect to spend in state I over all possible hypothetical combinations involving X and I up to H. To simplify notation, it is helpful to first define the force of infection for state X, f_X: f_S = i b_S and f_P = i ρ b_S. For the agent considering hypothetical futures, the planning horizon serves to censor all waiting times greater than H, giving them value H. This leads to the following probability mass function for the time spent in state X should they decide on behavior X (denoted T_{X|D_X}):

$$P\left(T_{X|D_X} = t\right) = \begin{cases} f_X (1 - f_X)^t, & t = 0, 1, \ldots, H-1,\\ (1 - f_X)^H, & t = H. \end{cases} \tag{2}$$

The expected time spent in state X can be expressed as

$$E\left[T_{X|D_X}\right] = \sum_{t=0}^{H-1} t \, f_X (1 - f_X)^t + H (1 - f_X)^H, \tag{3}$$

which simplifies to the desired expectation,

$$E\left[T_{X|D_X}\right] = \left(\frac{1}{f_X} - 1\right)\left(1 - (1 - f_X)^H\right). \tag{4}$$

Notice that $\frac{1}{f_X} - 1$ is the expected time to remain in state X before moving to state I for an uncensored geometric with minimum value of zero, while the second parenthetical term rescales the expectation to the interval [0, H].
Next, we derive the expected time spent in I by conditioning on T_{X|D_X}:

$$E\left[T_{I|D_X}\right] = E\left[E\left[T_{I|D_X} \mid T_{X|D_X}\right]\right] = \sum_{t=0}^{H} P\left(T_{X|D_X} = t\right)\left(\frac{1}{g} - 1\right)\left(1 - (1-g)^{H-t}\right). \tag{5}$$
After considerable algebra (see Sec. S3 in Supplemental Information 1), we get the expectation,

$$E\left[T_{I|D_X}\right] = \left(\frac{1}{g} - 1\right)\left(1 - (1 - f_X)^H - \frac{f_X (1-g)\left[(1 - f_X)^H - (1-g)^H\right]}{g - f_X}\right). \tag{6}$$

Again, the expectation of the uncensored geometric is $\frac{1}{g} - 1$. The second parenthetical term compresses the expected time into the interval between $E[T_{X|D_X}]$ and H. Notice that Eq. (6) is defined only so long as $f_X \neq g$. When $f_X = g$, we instead have

$$E\left[T_{I|D_X}\right] = \left(\frac{1}{g} - 1\right)\left(1 - (1 + Hg)(1-g)^H\right). \tag{7}$$

Finally, the agent calculates the expected time in state R by subtracting the expectations for X and I from H,

$$E\left[T_{R|D_X}\right] = H - E\left[T_{X|D_X}\right] - E\left[T_{I|D_X}\right]. \tag{8}$$

Notice that for each of the expected waiting times calculated in Eqs. (4), (6) and (7), as H goes to infinity, the rescaling terms go to one, so that the equations yield the familiar expected values for the uncensored geometrics.
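The closed forms above are easy to check numerically. The following sketch (ours, not the authors' code) evaluates Eqs. (4) and (6)-(8) and cross-checks the closed form of Eq. (6) against the direct sum of Eq. (5):

```python
import math

def e_time_X(f_X, H):
    """Eq. (4): expected (censored) time in state X before infection."""
    return (1.0 / f_X - 1.0) * (1.0 - (1.0 - f_X) ** H)

def e_time_I(f_X, g, H):
    """Eqs. (6)/(7): expected time in state I, conditioning on the time in X."""
    if math.isclose(f_X, g):
        return (1.0 / g - 1.0) * (1.0 - (1.0 + H * g) * (1.0 - g) ** H)
    correction = f_X * (1.0 - g) * ((1.0 - f_X) ** H - (1.0 - g) ** H) / (g - f_X)
    return (1.0 / g - 1.0) * (1.0 - (1.0 - f_X) ** H - correction)

def e_time_I_direct(f_X, g, H):
    """Eq. (5): direct sum over the censored-geometric PMF of T_X (sanity check)."""
    total = 0.0
    for t in range(H + 1):
        p_t = (1.0 - f_X) ** H if t == H else f_X * (1.0 - f_X) ** t
        total += p_t * (1.0 / g - 1.0) * (1.0 - (1.0 - g) ** (H - t))
    return total

f_X, g, H = 0.02, 0.05, 90                 # illustrative values
assert math.isclose(e_time_I(f_X, g, H), e_time_I_direct(f_X, g, H))
e_R = H - e_time_X(f_X, H) - e_time_I(f_X, g, H)   # Eq. (8)
```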
Having calculated these expected waiting times, the agent then calculates the utility for the two possible behaviors using

$$U(D_S) = u_S\, E\left[T_{S|D_S}\right] + u_I\, E\left[T_{I|D_S}\right] + u_R\, E\left[T_{R|D_S}\right] \tag{9}$$

and

$$U(D_P) = u_P\, E\left[T_{P|D_P}\right] + u_I\, E\left[T_{I|D_P}\right] + u_R\, E\left[T_{R|D_P}\right]. \tag{10}$$

Note that when agents calculate expected times for states S and P, they need not consider the possibility of alternating to the other state in the future. This is because they assume a constant i, which implies the best strategy now will remain the best strategy at all times during H. Thus, $E[T_{S|D_P}]$ and $E[T_{P|D_S}]$ are both zero. To be clear, this constraint pertains only to calculating utilities; agents are not constrained in how many times they actually switch states during the epidemic. Because decisions reflect the largest utility and because the population is homogeneous, the behavioral decision can be expressed as the indicator function W(i) defined by

$$W(i) = \begin{cases} 1, & \text{if } U(D_P) > U(D_S),\\ 0, & \text{otherwise.} \end{cases} \tag{11}$$

Distorting knowledge of i. Recall that assumption (3) underlying the behavioral decision model is that agents know the prevalence of the disease accurately. We relax this assumption to investigate how distorting this information affects the epidemic dynamics. To achieve this, we replace i with $i^{1/\kappa}$ in the calculation of utilities, where κ serves as a distortion factor. When κ = 1, i is not distorted; when κ > 1, the agent perceives i to be above its real value, and when κ < 1 the opposite is true. To implement this distortion, we redefine f_X in the expected waiting time equations (i.e., Eqs. (3)-(7)) with $f_X = i^{1/\kappa} b_S$ when X = S and $f_X = i^{1/\kappa} \rho b_S$ when X = P.
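Reusing the waiting-time helpers `e_time_X` and `e_time_I` from the sketch above, the utilities of Eqs. (9)-(10), the indicator W(i) of Eq. (11) with the distortion factor κ, and a brute-force scan for switching points might look as follows (the payoff values are illustrative placeholders, not the paper's Table 3 values):

```python
def utility(state, i, b_S, rho, g, H, payoffs, kappa=1.0):
    """Eqs. (9)-(10): utility of deciding on behavior S or P at prevalence i."""
    i_perceived = i ** (1.0 / kappa)              # distorted prevalence
    f_X = i_perceived * (b_S if state == "S" else rho * b_S)
    t_X = e_time_X(f_X, H)
    t_I = e_time_I(f_X, g, H)
    t_R = H - t_X - t_I                           # Eq. (8)
    u = payoffs                                   # dict with keys "S","P","I","R"
    return u[state] * t_X + u["I"] * t_I + u["R"] * t_R

def W(i, **kw):
    """Eq. (11): 1 if prophylactic behavior has the larger utility, else 0."""
    return int(utility("P", i, **kw) > utility("S", i, **kw))

# Scan prevalence for switching points (sign changes of the decision).
params = dict(b_S=0.05, rho=0.25, g=0.015, H=90,
              payoffs={"S": 1.0, "P": 0.95, "I": 0.1, "R": 0.95})
grid = [k / 1000.0 for k in range(1, 1000)]
switches = [i for i, j in zip(grid, grid[1:]) if W(i, **params) != W(j, **params)]
```

Depending on the parameters, `switches` is empty (region A), contains one crossing (region B), or two (region C), mirroring the heat map regions discussed in the Results.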
RESULTS AND ANALYSIS
The SPIR model can be applied to a diverse range of infectious disease epidemics to explore how they might be impacted by human behavior. To illustrate specific characteristics of the model, however, we focus here on two contrasting diseases characterized by their severity, recovery time, and harm: Disease 1 is acute, has a long recovery time, and may cause chronic harm, whereas Disease 2 is mild, has a short recovery time, and causes no lasting harm. Table 3 shows the biological and behavioral parameter values used to generate the results discussed next, unless stated otherwise. Note that the results were generated using the ODE model, which is more computationally efficient than the ABM but generates the same results (see Sec. S7 in Supplemental Information 1 for a comparison).
Behavioral decision analysis
Here we analyze the behavioral decision model used by the agents to decide whether or not to engage in prophylactic behavior. In particular, we are interested in identifying the level of disease prevalence above which agents would switch behavior, i.e., a switching point. A switching point is defined as the proportion of infectious agents beyond which it would be advantageous for an agent to switch from non-prophylactic to prophylactic behavior or vice versa. We can visualize switching points by plotting the expected utility for the susceptible and prophylactic states as a function of the proportion of infectious agents. A switching point is where the expected utilities cross, if they cross. Figure 2A illustrates the situation in which the utilities do not cross, so there is no switching point. Figure 2B illustrates the situation in which there is a single switching point; below the switching point the susceptible state has the higher utility, whereas above that point the prophylactic state has the higher utility. Figure 2C shows the situation in which the utilities cross twice, giving two switching points; between the switching points the prophylactic state has the higher utility, whereas the susceptible state has higher utility on the margins. Figure 2D shows a heat map of switching points for Disease 1 (see Fig. S1 for a heat map for Disease 2). The figure is divided into three regions (A, B, and C) that correspond to the three different utility situations illustrated in Figs. 2A-2C. Region A corresponds to the situation in which agents never engage in prophylactic behavior because the utility of being in the susceptible state is never less than that of the prophylactic state, regardless of disease prevalence (Fig. 2A). This situation occurs for low protection efficacy or short planning horizons. In the case of low protection efficacy, agents have no incentive to adopt prophylactic behavior because they expect to get infected regardless of their behavior. Thus, their best strategy is to become infected and then recover in order to collect the recovered payoff as quickly as possible (i.e., "get it over with"). In the case of short planning horizons, the relative contributions of the expected times of being in the susceptible or prophylactic state dominate the utility calculations, as shown in Fig. 3. The figure illustrates how, when the planning horizon is short, the expected percentage of time spent in the susceptible or prophylactic states is much greater than the expected percentage of time spent in the infectious or recovered states. Given that the susceptible payoff is greater than the prophylactic payoff, agents never adopt prophylactic behavior. The figure also shows how increasing the planning horizon changes the distribution of time spent in each state, which reduces the influence of the difference between the susceptible and prophylactic payoffs on the behavioral decision. Returning to Fig. 2D, region B corresponds to the situation in which agents will adopt non-prophylactic or prophylactic behavior depending on the prevalence of the disease (Fig. 2B). If the disease prevalence is smaller than the switching point, the agent opts for the susceptible behavior; otherwise it adopts the prophylactic behavior. Region C corresponds to the situation in which two switching points exist instead of a single one (Fig. 2C).
When the proportion of infectious agents is between these switching points, agents adopt prophylactic behavior, while values outside this range drive agents to adopt non-prophylactic behavior. This situation is of particular interest because it shows that the adoption of prophylactic behavior is not always monotonically associated with the prevalence of the disease.

[Figure 4 caption: Heat maps of switching points under qualitatively different payoff relationships for Disease 1 and Disease 2. The heat maps in (A)-(D) correspond to Disease 1 and the heat maps in (E)-(H) correspond to Disease 2. In (A) and (E), the payoffs of being susceptible and recovered are equal, which means that agents recover completely from the disease after infection. In (B) and (F), the recovered payoff is lower than the susceptible payoff but still greater than the prophylactic payoff, meaning that being recovered is preferable to being in the prophylactic state. In (C) and (G), the payoffs for the prophylactic and recovered states are exactly equal. In (D) and (H), the disease is permanently debilitating, such that the payoff of the recovered state is less than that of the prophylactic state. The heat maps of behavioral change assume the stated payoffs for u_S, u_P, u_I, and u_R.]

The utility calculations that agents use to decide whether to adopt a behavior are complex (see Eqs. (9) and (10)); an exhaustive exploration of the parameter space is not undertaken here. We instead investigate several paradigm cases related to the payoff ordering. We assume that the payoff for the infectious state (u_I) depends upon biological parameters of the disease and always corresponds to the lowest payoff; thus we need only consider the relationship between the other three payoffs. In particular, we are interested in situations where the recovery payoff ranges from complete recovery (case 1) to less than the prophylactic state (case 4): Case 1: u_S = u_R > u_P > u_I; Case 2: u_S > u_R > u_P > u_I; Case 3: u_S > u_R = u_P > u_I; and Case 4: u_S > u_P > u_R > u_I.
Because our model consists of a constant population of N agents (i.e., no mortality), cases in which u_S > u_R represent situations where an individual suffers chronic harm from the disease. Figure 4 displays the switching point heat maps for these different ordering cases for Disease 1 (Figs. 4A-4D) and Disease 2 (Figs. 4E-4H). The most dramatic difference between the two diseases is that changing the payoff for being recovered has a large effect on the agents' behavioral change in the case of Disease 2, but little effect in the case of Disease 1. The reason for this has to do with the biological parameters of the model; in particular, the recovery time for Disease 1 is long (recovery time = 65) whereas that of Disease 2 is short (recovery time = 8). Consequently, an agent expects to spend more time in the recovered state when considering Disease 2 than Disease 1. When these expected times are weighted by the different payoffs to calculate utilities, there is therefore less variation for Disease 1 than for Disease 2.
The effects of progressively reducing the recovered payoff are more evident for Disease 2. Reducing the recovered payoff means that lower levels of prevalence will be sufficient for agents to change their behavior. In the case of equal recovered and susceptible payoffs, agents consider changing behavior only in a narrow range of protection efficacy and planning horizon values (Fig. 4E). As the recovered payoff is progressively reduced, i.e., moving from case 1 (Fig. 4E) to case 4 (Fig. 4H), the range of parameter values that induces agents to change their behavior expands (i.e., there are large areas of the parameter space in which the agents would consider changing behavior) and the disease prevalence necessary for such change to occur decreases (i.e., the gradual shift of the color towards blue).
In addition to this numerical analysis, we have also obtained analytical results for case 2 (payoff ordering u_S > u_R > u_P > u_I), which are available in Sec. S4 of Supplemental Information 1.
Epidemic dynamics
We now turn to how the above conditions for behavioral change may influence epidemic dynamics. Here we are particularly interested in analyzing the effects of the planning horizon H and the decision rate δ on the dynamics of Disease 1 and Disease 2. Because we assume that the interactions among the population are well-mixed, we execute the simulations using the ODE model for a population of 100,000 agents (initially 99,900 agents in the susceptible state and 100 in the infectious state) with a decision rate of δ = 0.01. Figure 5 shows the effects of different planning horizons on the epidemic dynamics for both Disease 1 (Fig. 5A) and Disease 2 (Fig. 5B). For short planning horizons (i.e., H = 1), agents never consider changing behavior for either disease. This corresponds to the situation of region A (Fig. 2A), in which being prophylactic is never worth the cost; hence the epidemic dynamics are not affected. Similarly, in the cases of H = 30 for Disease 1 and H = 45 for Disease 2, neither epidemic's dynamics change. The dynamics are not affected because the disease prevalence does not reach the switching point (the switching points are indicated by the dashed lines in Fig. 5).
In the cases of H = 45 and 90 for Disease 1 and H = 30 for Disease 2, however, agents change behavior and thereby affect the epidemic dynamics. For Disease 1, the effect is characterized by a decrease in the epidemic peak size and a prolonged duration of the epidemic. Although the dynamics of Disease 2 are also affected, the effect is small because a smaller portion of the population crosses the switching point.
In other cases, increasing the planning horizon further may cause agents to never contemplate a change in their behavior. This occurs, for example, when H = 90 for Disease 2. This means that agents willingly assume the risk of getting infected and then recovering, which is intuitive given the short recovery time and mild severity of the disease.
To assess the effect of the frequency with which agents make behavioral decisions on the epidemic dynamics, we fix the value of the planning horizon for Disease 1 (H = 90) and Disease 2 (H = 30), and vary the decision rate. Figure 6 shows the effects of different decision frequencies on the epidemic dynamics. This figure illustrates how increasing the decision frequency reduces the epidemic peak size while prolonging the epidemic; it may additionally generate multiple waves of infection for Disease 1. These waves are generated because raising the decision frequency means individuals react faster to an increase in prevalence and adopt the prophylactic behavior. This bends the trajectory of disease incidence downward, but the reduction in prevalence causes the pendulum to swing back and individuals return to their non-prophylactic behavior, thus creating an environment for the resurgence of the epidemic.
Risk perception
Empirical evidence shows that humans change behavior and adopt costly preventative measures even when disease prevalence is low. This is especially true for harmful diseases with severe consequences for those infected, such as Ebola or Severe Acute Respiratory Syndrome (SARS). For example, despite the low number of recorded cases during the 2003 SARS outbreak in China (approximately 5,327 cases), people in the city of Guangzhou avoided going outside or wore masks when outside (World Health Organization, 2003; Tai & Sun, 2007). Combined with the severity of the disease, other factors like misinformation or excess media coverage may distort the perception of disease prevalence (i.e., risk perception), making individuals respond unexpectedly to an epidemic.
Several models incorporate specific mechanisms regulating the diffusion of information about the disease to understand the above factors and how they contribute to the distortion of risk perception (Epstein et al., 2008; Funk et al., 2009; Poletti, Ajelli & Merler, 2012). Here our focus is slightly different: we are interested in understanding the effects that such distorted perception has on the epidemic dynamics. Thus, in our model we have incorporated a distortion factor κ that alters the agents' perception of the disease prevalence used in calculating their utilities. For κ = 1, agents have the true perception of the disease prevalence; for κ > 1, the perceived disease prevalence is inflated, and κ reflects an increase in the perceived risk of being infected; for κ < 1, the perceived disease prevalence is reduced below its true value. Distorting the perception of disease prevalence can lead to changes in the decision-making process, and consequently in the epidemic dynamics, as illustrated in Fig. 7 for Disease 1 (see Fig. S2 for Disease 2). Figure 7A shows the switching points assuming agents know the real disease prevalence (κ = 1). Distorting the perceived disease prevalence (κ = 1.5) reduces the real proportion of infectious agents necessary for agents to engage in prophylactic behavior, as shown in Fig. 7B. Hence, the distortion of disease prevalence makes agents engage in prophylactic behavior even when the chance of being infected is low. This affects the epidemic dynamics by reducing the epidemic peak size but prolonging the epidemic and generating multiple waves of infection, as shown in Fig. 7C.
DISCUSSION
Individuals acting in their own self-interest make behavioral decisions to reduce their likelihood of getting infected in response to an epidemic. We explore a decision making process that integrates the prophylaxis efficacy and the current disease prevalence with individuals' payoffs and planning horizon to understand the conditions in which individuals adopt prophylactic behavior.
Our results show that the adoption of prophylactic behavior is sensitive to the planning horizon. Individuals with a short planning horizon (i.e., "live for the moment") do not engage in prophylactic behavior because of its adoption costs. Individuals with a long planning horizon also fail to adopt prophylactic behavior, but for different reasons: they prefer to "get it over with" and enjoy the benefits of being recovered. In both these situations, the epidemic dynamics remain unchanged because the individuals do not have an incentive to engage in prophylactic behavior even when the disease prevalence is high. For intermediate planning horizons, however, individuals adopt prophylactic behavior depending on the disease parameters and the prophylaxis efficacy. The effects on disease dynamics include a reduction in the epidemic peak size, but a prolonged epidemic.

[Figure 7 caption: (A) and (B) show the proportion of infectious agents above which the prophylactic behavior is more advantageous than the non-prophylactic behavior, given the percentage of protection obtained by adopting the prophylactic behavior, (1 − ρ)×100, and the planning horizon H. See Fig. 2 for more details on interpreting switching point heat maps. (A) No perception distortion: κ = 1. (B) A distortion factor κ = 1.5, which reduces the proportion of infectious agents above which the prophylactic behavior is more advantageous. (C) Epidemic dynamics for different distortion factors show how increasing κ reduces the epidemic peak size, prolongs the epidemic, and generates secondary waves of infection.]
The time scale of a planning horizon (i.e., what constitutes short, intermediate, and long), however, depends on the disease parameters. While the time scale for Disease 2 is on the order of days, for Disease 1 the time scale is on the order of months to years.
These results are consistent with the findings of Fenichel et al. (2011), who also concluded that behavioral change is sensitive to a planning horizon. The SPIR and Fenichel et al. models generate similar results, but differ in several aspects. In the latter, susceptible agents optimize their contact rate by balancing the expected incremental benefits and costs of additional contacts. Moreover, the agents take into consideration only the payoffs of being susceptible and recovered when optimizing the contact rates. In the SPIR model, however, agents maintain a constant contact rate, yet adopt prophylactic behavior that reduces the chance of getting infected. When agents are deciding to engage in prophylactic behavior, they take into account the payoff of all possible epidemiological states. The fact that we reach the same conclusion using different models further supports the claim that the planning horizon is a relevant decision making factor in understanding epidemic dynamics.
Although associated with the prevalence of disease, the adoption of prophylactic behavior is not always monotonically related to it; its adoption depends on the behavioral decision parameters. For severe diseases with long recovery times (e.g., Disease 1), the adoption of prophylactic behavior is less sensitive to changes in the payoffs (Figs. 4A-4D) compared with less severe diseases with shorter recovery times (e.g., Disease 2; Figs. 4E-4H). This implies that understanding the payoffs related to each disease is critical to proposing effective public policies, especially because there is no "one-size-fits-all" solution.
Another aspect to highlight is that the beneficial adoption of prophylactic behavior can be achieved through two different public policies: changing the risk perception or introducing incentives that reduce the difference between the susceptible and prophylactic payoffs (i.e., reduce the cost of adopting prophylactic behavior). One problem with increasing the risk perception is that, if overdone, it can lead in some situations to the opposite of the desired result: because individuals perceive their chance of getting the disease as highly probable, they prefer to "get it over with" and enjoy the benefits of being recovered. In contrast, the more prophylaxis is incentivized, the better the results (e.g., a greater reduction of the epidemic peak size).
Similar to our SPIR model, Perra et al. (2011) and Del Valle, Mniszewski & Hyman (2013) also proposed extensions to the SIR model that include a new compartment that reduces the transmission rate between the susceptible and infectious states. A clear distinction between these models and the SPIR model is that their agents do not take into account the costs associated with moving between the susceptible compartment and this new compartment. While in Perra et al. (2011) agents make the decision to move between compartments based on the disease prevalence, in Del Valle, Mniszewski & Hyman (2013) constant transfer rates are defined to handle the transition.
In addition to these differences, an advantage of the SPIR model with respect to other models that implement some form of behavioral change is the distinction between the disease dynamics model and the behavioral decision model. This distinction renders the SPIR model flexible by making it easier to, for example, implement other decision-making processes. Consequently, this modular and flexible model architecture facilitates comparative studies with different behavioral decision models coupled to a common epidemiological model, which we plan to perform as future work. For example, an interesting avenue of investigation is to relax our assumptions about rationality. There are a number of psychosocial theories that have been developed to predict, explain, and change health behaviors, such as the Health Belief Model (Becker, 1974) and Social Cognitive Theory (Bandura, 1986). The effects of these kinds of decision-making processes on epidemic dynamics have not been fully explored.
"year": 2016,
"sha1": "b067e4d6a8cde1a84b7c2ee7d3292f03642bc77a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.2678",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a09daa4101801d9b65bcc053116505b0022e0640",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Maternal omega-3 fatty acids regulate offspring obesity through persistent modulation of gut microbiota
The early-life gut microbiota plays a critical role in host metabolism in later life. However, little is known about how the fatty acid profile of the maternal diet during gestation and lactation influences the development of the offspring gut microbiota and subsequent metabolic health outcomes. Here, using a unique transgenic model, we report that maternal endogenous n-3 polyunsaturated fatty acid (PUFA) production during gestation or lactation significantly reduces weight gain and markers of metabolic disruption in male murine offspring fed a high-fat diet. However, maternal fatty acid status appeared to have no significant effect on weight gain in female offspring. The metabolic phenotypes in male offspring appeared to be mediated by comprehensive restructuring of gut microbiota composition. Reduced maternal n-3 PUFA exposure led to significantly depleted Epsilonproteobacteria, Bacteroides, and Akkermansia and higher relative abundance of Clostridia. Interestingly, offspring metabolism and microbiota composition were more profoundly influenced by the maternal fatty acid profile during lactation than in utero. Furthermore, the maternal fatty acid profile appeared to have a long-lasting effect on offspring microbiota composition and function that persisted into adulthood after life-long high-fat diet feeding. Our data provide novel evidence that weight gain and metabolic dysfunction in adulthood is mediated by maternal fatty acid status through long-lasting restructuring of the gut microbiota. These results have important implications for understanding the interaction between modern Western diets, metabolic health, and the intestinal microbiome.
Background
The gut microbiota is a complex microbial ecosystem lining the intestinal tract that critically regulates host metabolism, immune responses, and a number of other key physiological pathways [1][2][3]. The composition and function of the gut microbiota are profoundly influenced by environmental factors such as diet, mode of delivery at birth, and antibiotic usage [4,5]. Due to the essentiality of the commensal microbiota in host physiological homeostasis, environmental disturbance of gut microbiota composition and function can play a causal role in weight gain and metabolic dysfunction [3,4,6]. The microbiota is primarily established at birth through vertical transmission from the mother, while some reports also indicate microbial exposures in utero [7][8][9]. Indeed, the maternal vaginal microbiota closely resembles the infant microbiota soon after birth [7,10]. Hence, disruption of the initial structuring of the microbiota in early life may interfere with host metabolism and increase the risk of later-life metabolic disease.
Disruption of the normal gut microbiota composition ("dysbiosis") is often characterized by elevated relative abundance of pathogenic bacteria, such as lipopolysaccharide (LPS)-producing Enterobacteriaceae, or depletion of commensal species that maintain gut homeostasis, such as Akkermansia muciniphilia [11][12][13]. Recent research has examined how mode of delivery at birth, breastfeeding duration, and antibiotic use influence this structuring of the offspring microbiota [4,14]. However, little is known about how the maternal prenatal or early-postnatal diet affects the phylogenetic architecture of the offspring microbiota and the subsequent effects this may have for metabolic disease risk in later life. Indeed, maternal nutritional inadequacies adversely affect fetal metabolic programming in the offspring leading to negative health consequences in later life [15]; however, the role of the microbiota in this transgenerational process remains underexplored.
Previous research has shown that a parental Western diet is associated with elevated plasma LPS and heightened colonic immune responses in standard chow-fed offspring [16]. These immune-modulating responses are dependent upon the comprehensive changes induced to the offspring microbiota, as a result of the parental diet. Indeed, maternal high-fat diets also induce compositional changes to the offspring microbiota in non-human primates [17]. Consequently, as the maternal microbiota correlates with that of the infant [7], optimization of maternal diet and microbiota composition may therefore enhance infant microbiota development.
Although it has been shown that high-fat diets contribute to perturbation in gut microbial balance and inflammation-induced obesity, recent evidence suggests that saturated and polyunsaturated fatty acids (PUFA) differ in their interaction with the gut microbiota and subsequent metabolic outcomes [18,19]. Omega-3 (n-3) and omega-6 (n-6) PUFA play opposing roles in the inflammatory response [20]. n-6 PUFA typically upregulate inflammation by acting as precursors to pro-inflammatory eicosanoids, while n-3 PUFA resolve inflammation by competing within the same enzymatic pathway. The evolutionary ratio of n-6/n-3 PUFA has been estimated at 1:1; however, this has increased to 10-50:1 in the modern Western diet, which many have suggested has contributed to the epidemic of chronic inflammatory diseases such as obesity [21]. Indeed, there is convincing evidence that lowering the n-6/n-3 ratio can restore disrupted metabolism in the context of chronic disease [22,23]. From an early-life perspective, lowering the n-6/n-3 ratio in obese mothers can reduce offspring weight gain and associated inflammatory outcomes in mice [24]. Similar results have been reported for improving insulin sensitivity [25]. Furthermore, observational and intervention studies in humans have found negative correlations for n-3 PUFA status and positive correlations for n-6 status and offspring adiposity [26,27].
Indeed, n-6 and n-3 PUFA appear to have opposing effects on intestinal homeostasis. We have recently reported that, relative to dietary n-6 PUFA, n-3 PUFA reduce Enterobacteriaceae relative abundance, elevate relative Bifidobacterium abundance, and dampen intestinal inflammation partially through the upregulation of certain anti-microbial peptides in mice [28,29]. Similarly, n-3 PUFA deficiency induces a state of gut dysbiosis through alteration of gut microbiota composition and impairment of short-chain fatty acid production [30]. Changes to gut microbiota composition induced by omega-3 deficiency in mice are also associated with behavioral impairments even before adolescence, suggesting an essential role for n-3 PUFA within the developing gut-brain axis [31].
Data on the role of maternal n-3 PUFA on offspring microbiota are limited. Some evidence suggests that high doses of maternal fish oil supplementation impair the immune response in offspring and lead to the overgrowth of pathogenic bacteria [32]. However, these studies utilized crude fish oil in pharmacologically excessive doses (18% energy), and hence, further studies are warranted to examine the relationship between nutritionally relevant maternal n-3 PUFA intake and the offspring microbiota. In addition, the role of maternal n-3 PUFA intake in the context of offspring obesity and its association with gut microbiota has not been previously examined. Furthermore, there is a lack of evidence as to the differences between pre- and postnatal n-3 PUFA status for offspring metabolic health.
Based on previous evidence reporting the critical opposing roles that n-3 and n-6 PUFA play in microbiota-associated metabolic endotoxemia [28], we aim to investigate this in the context of maternal n-6/n-3 PUFA status and subsequent effects on obese offspring. To examine this, we utilize the fat-1 transgenic mouse model, which is capable of endogenous conversion of n-6 PUFA to n-3 PUFA [33] and thereby eliminates confounding factors of diet. Therefore, using a single diet for both treatment groups, this model allows the generation of two phenotypes: (a) fat-1 mice with a balanced tissue n-6/n-3 ratio (~1:1) and (b) wild-type (WT) mice with a high n-6/n-3 ratio similar to the Western diet (> 10:1). By examining the WT offspring of these two maternal genotypes and cross-fostering the offspring to mothers of a different genotype at birth, we are able to identify the effect of maternal n-3 PUFA status during gestation and/or lactation on offspring health outcomes. Our results show that a lower maternal n-6/n-3 ratio during gestation and/or lactation significantly reduces weight gain and metabolic disruption in offspring fed a high-fat diet (HFD), which appears to be mediated through persistent restructuring of the gut microbiota. These studies provide novel evidence for the critical role of the maternal diet in early-life gut microbiota development and the subsequent impact on later-life metabolic disease states.
Results
Maternal fatty acid status during gestation and lactation differentially modulates offspring fatty acid profile

To examine the effects of varying maternal fatty acid status during gestation and lactation on the tissue fatty acid profile of offspring, we created a cross-fostering protocol and measured offspring fatty acid composition at different time points (Fig. 1). Tissue fatty acid profiles of WT and transgenic fat-1 mothers were assessed in addition to those of their WT offspring at weaning (4 weeks old) and following 3 months of HFD feeding (Fig. 1a). As expected, the n-6/n-3 PUFA ratio in tail tissue was significantly greater in WT mothers than in fat-1 mothers (p < 0.0001, Fig. 1b). Maternal fatty acid profiles appeared to be transferred to offspring: following 4 weeks of lactation, male offspring who had a WT mother during both gestation and lactation (WT/WT) displayed a significantly greater n-6/n-3 ratio compared with all other groups (p < 0.0001, Fig. 1c). Conversely, offspring who had a transgenic fat-1 mother during both perinatal periods (fat-1/fat-1) displayed the lowest tail n-6/n-3 ratio. Interestingly, in the crossover groups, WT/fat-1 offspring (those who had a WT mother during gestation and a fat-1 mother during lactation) had a significantly lower n-6/n-3 ratio than fat-1/WT offspring, suggesting that the maternal fatty acid profile during lactation had a more pronounced effect on the offspring fatty acid profile than the maternal fatty acid profile during gestation.
Interestingly, differences in adolescent tail n-6/n-3 ratio as a result of maternal genotype were eliminated following 3 months of HFD feeding (Fig. 1d), and similarly, there were no differences in n-6/n-3 PUFA ratio in offspring liver (Additional file 1: Table S3).

[Figure 1 caption: (a) Experimental design: fat-1 mothers (n = 15) and WT mothers (n = 9) were mated while on a high-n-6 PUFA diet (10% corn oil). At birth, WT offspring were cross-fostered to different mothers for the period of lactation (4 weeks) to produce four experimental groups based on the mothers' genotype (biological mother's genotype/foster mother's genotype): fat-1/WT (n = 8 males, n = 7 females), WT/fat-1 (n = 9 males, n = 10 females), fat-1/fat-1 (n = 9 males, n = 5 females), and WT/WT (n = 10 males, n = 10 females). During lactation, mothers were continued on the high-n-6 PUFA diet. After 4 weeks of lactation, offspring were weaned onto a high-fat diet (HFD; 60% kcal from fat) for 3 months, during which body weights were assessed along with a number of other parameters. (b) WT mothers displayed a significantly greater tail n-6/n-3 ratio compared with fat-1 mothers. (c) Following lactation for 4 weeks and prior to HFD, WT/WT male offspring had a significantly greater tail n-6/n-3 ratio compared with all other groups. (d) However, after 3 months on a HFD, differences in tail n-6/n-3 fatty acid ratio were eliminated and there were no significant differences between groups. Data shown as mean ± SEM. n = 5-15 per group, n = 1-4 per cage. Bars with different letters are significantly different.]

Perinatal n-3 PUFA exposure reduces weight gain under high-fat diet

To examine whether maternal fatty acid status affects offspring weight gain, offspring of WT and fat-1 mothers were fed a HFD (60% kcal from fat) for 3 months. There were no significant differences in offspring weights at week 4, prior to introduction of the HFD. In female offspring, there were no significant differences in weight gain between groups following 3 months of HFD feeding (Fig. 2a, b). However, at 10 weeks of age, following 6 weeks of HFD feeding, WT/WT males had gained significantly more weight than WT/fat-1 males (p < 0.05), suggesting that the influence of maternal n-3 PUFA status on offspring weight gain is gender-dependent. This effect of maternal fatty acid status on male offspring weight gain strengthened with continuation of the HFD, whereby WT/WT males continued to gain significantly more weight than all other groups throughout the experimental period (Fig. 2c). At sacrifice, following 3 months of HFD, WT/WT mice had gained 232% body weight, significantly more (p = 0.012) than all other groups (fat-1/WT: 189%; WT/fat-1: 195%; fat-1/fat-1: 197%; Fig. 2d). Importantly, food intake did not differ significantly between groups. There were also no significant differences in body composition between groups (Additional file 1: Figure S1).
Maternal n-3 PUFA dampen markers of metabolic disruption in offspring fed a high-fat diet
We then examined markers of metabolic disruption following HFD feeding to assess whether differences in offspring weight gain induced by maternal n-3 PUFA availability translated into differences in metabolic health. Lipopolysaccharide-binding protein (LBP), a marker of metabolic endotoxemia, was lowest in fat-1/fat-1 offspring (p < 0.05, Fig. 2e). Intestinal permeability (IP) is a measure of gut epithelial integrity relating to the passage of microbial endotoxins such as LPS into circulation. Again, the fat-1/fat-1 group exhibited the lowest IP (p = 0.05, Fig. 2f). However, at this time point, there were no observed differences between groups in concentrations of the circulating serum cytokines IL-10, IL-1β, IL-6, MCP-1, or TNFα (Additional file 1: Figure S2A-E). Analysis of subcutaneous adipose tissue inflammation revealed no differences in mRNA expression of TNFα, F4/80, or CCL2; however, TLR4 expression was lower (p < 0.05) in fat-1/WT compared with both fat-1/fat-1 and WT/WT (Additional file 1: Figure S2F-I).
There were subtle differences in glucose tolerance pre-HFD whereby WT/WT offspring had significantly greater circulating glucose (p < 0.05) at two time points during a glucose tolerance test (GTT). These differences were eliminated following HFD feeding (Additional file 1: Figure S3A-C). Insulin sensitivity also appeared to be differentially regulated post-HFD with the fat-1/fat-1 group displaying the lowest insulin area under the curve (AUC) following glucose administration (p = 0.02, Additional file 1: Figure S3D-H).
Maternal n-3 PUFA status has a profound influence on offspring microbiota composition in early life, which persists into adulthood

To examine whether differences in weight and metabolic health between offspring groups were mediated by changes in the gut microbiota, 16S rRNA gene sequencing was performed on fecal DNA of mothers and offspring both pre- and post-HFD feeding. 16S rRNA gene sequencing of the mothers', offspring pre-HFD, and offspring post-HFD fecal microbiota generated a total of 23 million sequenced reads. Principal coordinate analysis (PCoA) revealed distinct clustering of WT and fat-1 mothers based on microbiota composition (Additional file 1: Figure S5A). Whole-microbiome significance testing using PERMANOVA with the Bray-Curtis dissimilarity index showed significant differences between fat-1 and WT mothers (p = 0.0044, F = 4.874; Additional file 1: Figure S5A). In the offspring, clustering was evident among treatment groups, as assessed using PERMANOVA with the Bray-Curtis dissimilarity index, both pre-HFD (p = 0.0006, F = 3.497) and post-HFD (p = 0.0001, F = 9.821; Fig. 3a). Both prior to and after HFD feeding, offspring clustered significantly according to the foster mother's genotype, such that the fat-1/WT and WT/WT groups clustered together, and the fat-1/fat-1 and WT/fat-1 groups clustered together, according to Bray-Curtis dissimilarity index testing. Assessment of α-diversity using the Shannon index revealed no significant differences between WT and fat-1 mothers (Additional file 1: Figure S5B) or between post-HFD groups (Fig. 3b). However, microbial diversity as measured by the Shannon index was significantly reduced in the WT/WT group compared with WT/fat-1 prior to HFD feeding (Fig. 3b).
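As a hedged illustration of this analysis pipeline (not the authors' code), the Bray-Curtis/PERMANOVA/PCoA/Shannon steps can be reproduced with scikit-bio given an OTU count table; `counts`, `sample_ids`, and `groups` below are hypothetical placeholders:

```python
import numpy as np
from skbio.diversity import beta_diversity
from skbio.diversity.alpha import shannon
from skbio.stats.distance import permanova
from skbio.stats.ordination import pcoa

# Hypothetical inputs: rows = samples, columns = OTUs
counts = np.random.poisson(5, size=(20, 100))           # placeholder OTU table
sample_ids = [f"S{k}" for k in range(20)]
groups = ["fat-1/WT", "WT/fat-1", "fat-1/fat-1", "WT/WT"] * 5

bc = beta_diversity("braycurtis", counts, ids=sample_ids)   # Bray-Curtis matrix
result = permanova(bc, grouping=groups, permutations=999)   # whole-microbiome test
ordination = pcoa(bc)                                       # PCoA for Fig. 3a-style plots
alpha = [shannon(row) for row in counts]                    # Shannon index per sample
```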
Relative abundances of phylum-level taxa revealed distinct differences between groups (Fig. 3c, d). In mothers, Proteobacteria was the only phylum significantly different between groups following FDR correction (p = 0.05) and was present in greater relative abundance in fat-1 females (Additional file 1: Figure S5C). Taxonomic distribution appeared to be most significantly influenced by the lactating/foster mother's genotype as opposed to the biological mother's genotype. Hence, Proteobacteria was significantly greater in both WT/fat-1 and fat-1/fat-1 offspring, both pre-HFD (p = 0.018) and post-HFD (p < 0.001). Verrucomicrobia were also highest in fat-1/fat-1 offspring post-HFD (p = 0.023). Conversely, Firmicutes were significantly greater in fat-1/WT and WT/WT offspring post-HFD (p < 0.001).

[Figure 3 caption: Phylum compositional and diversity differences in fecal microbiota between offspring of fat-1 and WT mothers. (a) Analysis of beta diversity identified significant clustering of offspring groups by foster maternal genotype. (b) α-diversity as measured by the Shannon index did not differ between post-HFD groups, but appeared to be reduced in WT/WT compared with WT/fat-1 before HFD feeding. (c) The Firmicutes:Bacteroidetes ratio was lowest in fat-1/fat-1 offspring. (d) Fecal microbiota composition differed significantly between offspring groups at the phylum level and appeared dependent on foster mother genotype. Significant differences were determined by non-parametric analysis using the Kruskal-Wallis test followed by the Mann-Whitney test. n = 5-10 per group, n = 1-4 per cage. Groups with different letters are significantly different.]

Due to the significant PCoA clustering between groups based on the foster mother's genotype, offspring were grouped according to the foster mother's genotype, and linear discriminant analysis effect size (LEfSe) was performed on these two groups (1. WT/fat-1 + fat-1/fat-1; 2. fat-1/WT + WT/WT) as a biomarker discovery tool to identify taxa that may be contributing to differences between groups (Fig. 4a and Additional file 1: Figure S4). LEfSe analysis elucidated the observed phylum-level differences such that offspring who had a WT foster mother were more abundant in species within the Firmicutes phylum (Clostridia), and offspring who had a fat-1 foster mother were more abundant in species of the Bacteroidetes (primarily Bacteroides) and Proteobacteria phyla (primarily Epsilonproteobacteria) (Additional file 1: Figure S5).
Further analysis at lower taxonomic levels revealed a number of taxa (13 genera pre-HFD and 26 genera post-HFD) that differed significantly between groups (Fig. 4b). Clostridia and Parabacteroides were significantly greater in fat-1/WT and WT/WT offspring post-HFD (Fig. 4c). Conversely, Akkermansia appeared to be significantly elevated in fat-1/fat-1 offspring post-HFD. Bacteroides were also significantly lower in offspring with WT foster mothers. Interestingly, the reduced Proteobacteria observed in WT mothers and offspring fostered to WT mothers was driven primarily by depleted Epsilonproteobacteria (Fig. 4c) and Deltaproteobacteria, whereas the relative abundance of Gammaproteobacteria was unchanged (Additional file 1: Figure S5). Helicobacter, a commensal member of the Epsilonproteobacteria, which constituted 9-14% of the fat-1/fat-1 and WT/fat-1 post-HFD microbiota, was almost entirely depleted (< 0.1%) in offspring of WT foster mothers.

[Figure 4 caption: Genus-level distribution of fecal microbiota between offspring of fat-1 and WT mothers. (a) Linear discriminant analysis effect size (LEfSe) on groups clustered by foster maternal genotype revealed that Firmicutes were more abundant in offspring of WT foster mothers, whereas Bacteroidetes and Proteobacteria were more abundant in offspring of fat-1 foster mothers. (b) Normalized fecal microbiota relative abundance data revealed distinct differences between offspring groups; microbiota composition appeared similar in groups of the same foster mother genotype. Data represent OTUs with significantly different relative abundances between treatment groups (as determined by Kruskal-Wallis testing and Benjamini-Hochberg multiple-testing correction). Each row represents an OTU labeled by lowest taxonomic description and OTU ID, normalized to the row maximum. Data normalized per taxonomic read. (c) Clostridia displayed significantly higher relative abundance in offspring of WT foster mothers. Bacteroides displayed higher relative abundance in offspring of fat-1 foster mothers. Akkermansia displayed the highest relative abundance in fat-1/fat-1 offspring. Epsilonproteobacteria were almost entirely depleted (< 0.1%) in offspring fostered to WT mothers. n = 5-10 per group, n = 1-4 per cage. Significant differences were determined by non-parametric analysis using the Kruskal-Wallis test followed by the Mann-Whitney test and FDR correction by Benjamini-Hochberg testing. Groups with different letters are significantly different.]
Network interactions reveal host-microbiome interactions driven by fatty acid status
To assess the overall measure of correlation between the n-6/n-3 ratio-induced metabolic changes and the microbiota, an RV coefficient was calculated. An RV coefficient of 0.456 (p = 0.001) was found between the relative abundance of microbial genera (pre- and post-HFD) and host phenotypes (mother and offspring pre-HFD n-6/n-3 ratio, LBP, IP, and total body weight gain).
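For reference, the RV coefficient is a multivariate generalization of the squared correlation between two data matrices with matched samples; a compact sketch is shown below (significance, such as the p = 0.001 reported here, is typically obtained by permuting sample rows, which we omit):

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two data matrices with matched rows (samples).

    RV = tr(Sx Sy) / sqrt(tr(Sx Sx) * tr(Sy Sy)), where Sx = Xc Xc' and
    Sy = Yc Yc' are sample-by-sample cross-product matrices of the
    column-centered data; values range from 0 (no association) to 1.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sx = Xc @ Xc.T
    Sy = Yc @ Yc.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))
```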
Network-based analytical approaches have the potential to help disentangle complex host-microbe interactions [34]. Pairwise correlations between offspring n-6/n-3-induced changes in microbiota and host parameters with a significant Spearman's non-parametric rank correlation coefficient were employed to generate correlation networks for both pre-HFD and post-HFD (Fig. 5a, b). Correlation analysis with pre-HFD microbiota data resulted in a correlation network of 92 microbial parameters (taxa and α-diversity indices) and 4 host parameters (pre-HFD n-6/n-3 ratio, final body weight gain, LBP, and IP) for which at least one correlation could be found. The network consists of 118 edges (70 positive (green) and 48 negative (red) correlations) and 98 nodes (microbial and host parameters). Three modules were observed in the pre-HFD network (1. n-6/n-3 ratio; 2. weight + LBP; 3. IP). Accordingly, the largest module (n-6/n-3 ratio) showed high modularity (blue nodes) comprising both positive (P; red lines) and negative (N; green lines) correlations with a number of taxa, namely Firmicutes (P), Bacteroidetes (N), Proteobacteria (N), Clostridia (P), Epsilonproteobacteria (N), Ruminococcaceae (P), Porphyromonadaceae (N), Bacteroides (N), and Akkermansia (P) (Fig. 5a). A number of similar outcomes persisted in the post-HFD correlation network, whereby body weight was negatively associated with Proteobacteria and LBP negatively correlated with Akkermansia (Fig. 5b). Networks were also constructed between the mothers' n-6/n-3 ratio, pre- and post-HFD microbiota data, and host parameters (Additional file 1: Figure S7).
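A minimal sketch of how such a network can be assembled, assuming a samples-by-features table that mixes microbial and host parameters (our illustration; the authors' thresholds and layout tooling may differ):

```python
import networkx as nx
from scipy.stats import spearmanr

def correlation_network(df, alpha=0.05):
    """Build a Spearman correlation network from a pandas DataFrame.

    Rows are samples; columns mix microbial and host parameters. Edges
    connect feature pairs whose rank correlation is significant at `alpha`,
    with sign recorded for positive/negative coloring.
    """
    G = nx.Graph()
    cols = list(df.columns)
    G.add_nodes_from(cols)
    for a in range(len(cols)):
        for b in range(a + 1, len(cols)):
            rho, p = spearmanr(df[cols[a]], df[cols[b]])
            if p < alpha:
                G.add_edge(cols[a], cols[b], weight=rho,
                           sign="positive" if rho > 0 else "negative")
    return G
```

Modules like those in Fig. 5a-b can then be detected on `G` with any community-detection routine (e.g., modularity-based clustering).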
To further investigate the identified groups of correlations between changes in microbiota composition and host parameters, we built partial least squares (PLS) models for selected correlations of interest (Additional file 2; goodness of fit Q²cum > 0.8). The combination of offspring pre-HFD n-6/n-3 ratios and all microbiota data explained 68%, 72%, and 61% of the variability in body weight gain, LBP, and IP, respectively (Additional file 2: data models 4-6). A model relating all host parameters (weight gain, LBP, and IP) to all microbiota data and the mothers' and offspring n-6/n-3 ratios (model 12) explained 45%, 32%, and 35% of the variability in the host parameters, respectively (Fig. 5c). Variable importance in the projection (VIP) scores for the variables in each PLS model can be found in Supplementary Data File 6 (M1-14).
Next, parameters contributing to the multivariate PLS models were compared with the corresponding modules identified in the correlation networks. Notably, 42 out of 50 pre-HFD microbial parameters that exhibited direct correlations with the offspring n-6/n-3 ratio in the network analysis (the largest module) had VIP values ≥ 1 in the PLS model (model 8).
The selected microbial parameters are shown with R² and VIP scores in Supplementary Data File 6 (M8). Twenty-eight out of 44 pre-HFD and six out of nine post-HFD microbial parameters that correlated with the mothers' n-6/n-3 ratio and LBP in the network analysis (the largest module in the pre- and post-HFD networks) had VIP values ≥ 1 in the PLS model (model 3 and M3).
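For readers wanting to reproduce a PLS model with VIP scoring, a sketch using scikit-learn is given below; the VIP formula is the standard one, and the input matrices are hypothetical placeholders rather than the study data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in projection (VIP) for a fitted PLSRegression.

    VIP_j aggregates, over latent components, how much of the explained
    Y-variance is attributable to predictor j; VIP >= 1 is the usual cutoff.
    """
    T = pls.x_scores_          # latent scores (n_samples, n_components)
    W = pls.x_weights_         # predictor weights (n_features, n_components)
    Q = pls.y_loadings_        # response loadings (n_targets, n_components)
    p, a = W.shape
    ssy = np.array([(T[:, k] ** 2).sum() * (Q[:, k] ** 2).sum() for k in range(a)])
    wnorm = W / np.linalg.norm(W, axis=0)
    return np.sqrt(p * (wnorm ** 2 @ ssy) / ssy.sum())

# Hypothetical use: X = microbial parameters, Y = host phenotypes
X = np.random.rand(30, 50)
Y = np.random.rand(30, 3)      # e.g., weight gain, LBP, IP
pls = PLSRegression(n_components=2).fit(X, Y)
important = np.where(vip_scores(pls) >= 1)[0]   # VIP >= 1 predictors
```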
Finally, these relationships were visualized using multiple factor analysis (MFA), which allowed microbial parameters to be superimposed onto the host parameters (Fig. 5d). The superimposed microbiome and host data separated depending upon the n-6/n-3 ratio of both the biological and foster mother. Pairwise comparison (Monte Carlo simulations, p = 0.008) between groups with superimposed host and microbiota data showed the importance of the lactating mother's n-6/n-3 ratio when offspring were born to WT mothers.
Discussion
In this study, we observed that the perinatal maternal tissue fatty acid profile profoundly and persistently restructures the offspring gut microbiota, which may have long-term implications for metabolic health. Maternal n-3 PUFA significantly reduced offspring weight gain into adulthood during high-fat feeding, as has been demonstrated previously in humans and animals [26,35,36]. A number of mechanisms may contribute to this. Firstly, maternal tissue and milk n-3 PUFA correlate with umbilical cord and infant n-3 PUFA [37,38]; therefore, the beneficial effects on weight may be attributed to direct nutrient transport from mother to infant and the subsequent anti-adipogenic effects of n-3 PUFA [39]. The intriguing aspect of these findings is that the fat-1/WT group had a higher n-6/n-3 ratio than the WT/fat-1 group, suggesting that the offspring n-3 status is more influenced by dietary n-3 PUFA during lactation, via milk, than by maternal n-3 PUFA status during gestation. It has previously been reported that the n-3/n-6 content in milk from fat-1 females is greater than that of WT females [33]. Additionally, there appeared to be a strong gender difference between male and female offspring regarding weight gain as a result of maternal n-3 status.
Maternal n-3 PUFA reduced weight gain in offspring males but not females. Such gender-dependent differences have been reported previously in humans, an effect which may be mediated by the interaction of female sex hormones with adipogenesis and fatty acid metabolism [40][41][42].
We also observed that maternal fatty acid status influenced immune regulation in the offspring, which may affect weight. It appeared that the lower n-6/n-3 ratio in fat-1 mothers may have dampened maternal and placental inflammation, inducing an anti-inflammatory and anti-obesigenic environment in the offspring. This effect of maternal tissue fatty acid status on offspring inflammation has been demonstrated previously: using the same fat-1 model, it has been shown that maternal obesity induces maternal and placental inflammation, which is transmitted to the offspring, resulting in a number of metabolic disruptions that are not evident in offspring of fat-1 mothers [24]. Indeed, in this study, both IP and LBP were significantly reduced by maternal n-3 PUFA. This interesting finding suggests that the chronic low-grade inflammation that mediates the obesigenic phenotype [43] may be transferred from the mother to the infant during the perinatal period. This inflammatory phenotype appears to originate in the intestines through degradation of the intestinal barrier and the consequent translocation of bacterial endotoxins.

[Figure 5 caption: Network analyses of microbiome and host metabolic phenotype interactions. (a, b) Host-microbiota interaction networks built from Spearman's non-parametric rank correlation coefficients (P < 0.05) between host parameters (mother and offspring pre-HFD n-6/n-3 ratio, body weight, IP, and LBP) and microbial parameters (pre- and post-HFD OTUs with FDR-corrected p values < 0.05, FIR/BAC ratio, and Shannon ADI) for (a) pre-HFD and (b) post-HFD. Each node was colored according to the modularity score, and nodes were grouped into three (a) or four (b) modules. Lines represent statistically significant correlations and are colored red for positive and green for negative correlations. (c) Partial least squares (PLS) regression loading score plot illustrating the association between host parameters (dependent variables, Y) and microbial parameters (explanatory variables, X; red dots). Explanatory variables of interest with variable importance in the projection (VIP) scores ≥ 1 are labeled with circles on the red dots. Samples from the four groups (fat-1/WT, WT/fat-1, fat-1/fat-1, WT/WT) are shown (green dots) and grouped using circles based on where they cluster on the plot. Leave-one-out cross-validation (LOO-CV) was applied. (d) Multiple factor analysis (MFA) using Spearman-type principal component analysis of host and microbiota data. One end of each connecting line for an observation indicates the host parameters (colored by group) and the other end (red) indicates the microbiota.]

As has been reported previously, this inflammatory phenotype is induced by a disturbed microbiota [19,44]. The composition of an infant's microbiota is strongly influenced by that of the mother [7]. Therefore, the anti-inflammatory fat-1 microbiota that has been described previously [28] may have been transmitted vertically to the offspring, thereby reducing the microbiota-associated weight gain. The gut microbiota has a well-established role in energy metabolism and obesity by regulating energy harvest from macronutrients [6]. Interestingly, the observed differences in weight gain in this study were independent of dietary intake, which did not differ, suggesting that the WT/WT microbiota displayed an increased capacity for energy harvest.
Previous hypotheses surrounding the "obesigenic" microbiota have concentrated on the energy-harvesting capacity of Firmicutes and the production of short-chain fatty acids (SCFA). Indeed, we observed this effect here, whereby Firmicutes and their SCFA-producing families (Lachnospiraceae and Ruminococcaceae) were significantly greater in WT mothers and their foster offspring. We have previously demonstrated that differing ratios of dietary fatty acids significantly alter SCFA production in mice [30]. The role of the microbiota in regulating gut epithelial integrity and the subsequent inflammatory response has also been hypothesized to play a role in obesity [45,46]. Here, we observed that IP and LBP were lowest in fat-1/fat-1 offspring. These changes to IP were independent of changes to tight junction protein expression (Additional file 1: Figure S6). Conversely, Akkermansia was greater in offspring of fat-1 mothers and hence may mediate the protective effect of maternal n-3 PUFA on weight gain in the offspring, as has been shown previously in fish oil-fed mice [19]. There is growing evidence that Akkermansia plays a critical role in metabolic health and can reduce weight gain and metabolic endotoxemia in mice and humans [12,13,47]. Epsilonproteobacteria, and its genus Helicobacter, were also significantly depleted in offspring of WT foster mothers in response to a lack of perinatal n-3 PUFA. Our data extend the work of Ma et al., who reported that a maternal HFD in primates induced loss of non-pathogenic Helicobacter and Campylobacter, another member of the Epsilonproteobacteria, in offspring [17]. Interestingly, offspring of non-human primates fed a high-fat diet exhibit reduced plasma n-3 PUFA [48].
This unique model and study design also allowed us to distinguish the role of the prenatal versus postnatal maternal tissue fatty acid profile on offspring outcomes. Interestingly, as with the n-6/n-3 ratio, the gut microbiota of the offspring appeared to strongly match that of the lactating/foster mother rather than the biological mother. Indeed, it has previously been shown in cross-fostering models that the nursing mother causes a permanent shift in the offspring microbiome [49]. It has been assumed that the fetus is sterile; however, recent studies may suggest otherwise [9]. The biological mother imprints a unique microbiota on the infant at birth [7]. The results reported here suggest that the labile "birth microbiota" is quickly and comprehensively altered by the foster mother, presumably through differences in milk/dietary fatty acids and the foster mother's microbiota following birth. These results would suggest that n-3 PUFA status in milk during lactation has a stronger impact on the infant microbiota than maternal fatty acid status during gestation. Hence, postnatal n-3 PUFA exposure may rescue and recover a "dysbiotic" offspring microbiota induced by maternal prenatal n-3 PUFA insufficiency. Furthermore, despite the differences in n-3 and n-6 PUFA disappearing in adulthood after HFD, the observed differences in the microbiota persisted, suggesting that maternal fatty acid status and the early neonatal feeding regime may have a persistent effect on the offspring microbiota throughout life.
Conclusions
Much evidence exists indicating that obesity and its associated disorders have their origins in the fetal and neonatal periods. As the gut microbiota plays a critical role in the pathogenesis of these disorders and the chronic low-grade inflammation that defines them, nutrition research must now focus on maternal and early-life interventions that target the gut microbiota. This study has demonstrated that maternal fatty acid status persistently restructures the offspring microbiota and the associated metabolic homeostasis related to obesity. In addition, the unique transgenic model used here challenges the concept of a direct diet-microbiota interaction in obesity and instead uncovers the importance of the underlying tissue fatty acid profile in microbiota-metabolic interactions. These results have important implications for the current chronic disease epidemic. Excessive n-6/n-3 ratios in the Western diet have contributed to a transgenerational epidemic of chronic metabolic disease, which may be partially attributed to persistent gut microbiota dysbiosis. Consequently, maternal n-3 PUFA intake, especially during lactation, holds promise as an effective therapeutic measure to restore gut microbiota homeostasis and ameliorate the metabolic disturbances associated with modern chronic disease.
Animals and diets
Generation of transgenic fat-1 mice was performed as previously described [33], followed by backcrossing onto a C57BL/6 background. The fat-1 phenotype was confirmed by gas chromatography-flame ionization detection (GC-FID) through identification of increased tissue n-3 PUFA compared with WT. The fat-1 genotype was confirmed by RT-PCR. Mice were housed in the Massachusetts General Hospital (MGH) animal facility in a biosafety room (level 2) in hard top cages with filtered air. Mice were maintained in a temperature-controlled room (22-24°C) with a 12-h light/dark diurnal cycle. Food and water were provided ad libitum. A subset of 3-month-old female C57BL/6 WT mice was purchased from Charles River Laboratories and allowed to acclimatize to the facility conditions for 1 week prior to mating. Fat-1 and WT mating pairs were fed a diet high in n-6 PUFA (AIN-76A with 10% corn oil) from LabDiet in order to maintain fat-1 and WT phenotypes. At postnatal day (PND) 28, male and female offspring were weaned onto a high-fat diet (HFD) with 60% kcal from fat (D12492, Research Diets Inc.). Detailed fatty acid profiles of both diets are outlined in Additional file 1: Table S1. Body weight and food intake were measured weekly using an electronic balance. Body composition (fat mass, lean mass, water mass) was assessed on the day of sacrifice using a Minispec mq bench-top NMR spectrometer (Bruker Instruments). Animals were sacrificed using CO2. Dissected tissues were flash frozen in liquid nitrogen. All animal procedures in this study were performed in accordance with the guidelines approved by the MGH Subcommittee on Research Animal Care.
Breeding and cross-fostering
Three-month-old female fat-1 (n = 15) and WT (n = 9) mice were mated with age-matched WT males. Mating pairs were housed in individual cages, and males were separated from the females following confirmation of pregnancy. Within 48 h of parturition, newborn litters were fostered to new mothers until weaning at PND 28. Briefly, the newborn litter was removed from the biological mother's cage and then mixed with the bedding of the foster mother in the hand of the investigator. The litter was then placed in the empty nest of the new foster mother. The foster mother was held above the new litter until she urinated on the litter in order to disguise their scent. The foster mother pairs were chosen such that offspring were fostered to mothers who had given birth within 48 h to a litter of similar size. The cross-fostering procedure was carried out in order to generate offspring of four distinct experimental groups as follows: n6-n3 group, WT biological mother cross-fostered to a fat-1 mother; n3-n6 group, fat-1 biological mother cross-fostered to a WT mother; n3-n3 group, fat-1 biological mother cross-fostered to a new fat-1 mother; n6-n6 group, WT biological mother cross-fostered to a new WT mother. Cross-fostering was carried out in the n3-n3 and n6-n6 groups as a control for the cross-fostering procedure in the other two groups. The study design is outlined in Fig. 1.
At PND 10, the tails of the offspring were clipped with scissors and genotyping was performed on the tail tissue by RT-PCR. Following confirmation of genotype, fat-1 mice were removed from the litter such that only the WT offspring remained. At PND 28, WT offspring were separated from their mothers, grouped in separate cages (randomized by experimental group, n = 1-4 animals per cage, n = 3-4 cages per experimental group), and weaned onto the HFD.
Fatty acid analysis
Fatty acid analysis of tail and liver tissues was performed as previously described [50]. Briefly, frozen tissue samples were ground to a powder under liquid nitrogen using a mortar and pestle. Lipid extraction and fatty acid methylation were performed by the addition of 14% (w/v) boron trifluoride (BF3)-methanol reagent (Sigma-Aldrich) followed by heating at 100°C for 1 h. Fatty acid methyl esters (FAME) were analyzed using a fully automated HP5890 gas chromatography system equipped with a flame-ionization detector (Agilent Technologies, Palo Alto, CA). The fatty acid peaks were identified by comparing their relative retention times with commercial mixed standards (NuChek Prep, Elysian, MN), and the area percentage for all resolved peaks was analyzed using a PerkinElmer M1 integrator.
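To make the quantification step concrete, the sketch below shows how area percentages and an n-6/n-3 ratio are derived from integrated FAME peak areas. This is a minimal Python illustration rather than the integrator's actual software; the peak names and area values are hypothetical.

```python
# A minimal sketch of the area-percent calculation behind GC-FID fatty acid
# profiling; peak areas below are hypothetical, not study data.

# Integrated peak areas for resolved FAME peaks (arbitrary units)
peak_areas = {
    "18:2n-6": 520.0,   # linoleic acid
    "20:4n-6": 310.0,   # arachidonic acid
    "18:3n-3": 40.0,    # alpha-linolenic acid
    "20:5n-3": 55.0,    # EPA
    "22:6n-3": 90.0,    # DHA
    "16:0":    700.0,   # palmitic acid
}

total = sum(peak_areas.values())
area_percent = {fa: 100.0 * a / total for fa, a in peak_areas.items()}

# n-6/n-3 ratio from the summed area percentages of each family
n6 = sum(v for fa, v in area_percent.items() if fa.endswith("n-6"))
n3 = sum(v for fa, v in area_percent.items() if fa.endswith("n-3"))
print(f"n-6/n-3 ratio = {n6 / n3:.2f}")
```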
Intestinal permeability
Intestinal permeability testing was performed as described previously [28]. Briefly, mice were fasted for 6 h, and then FITC-dextran (70 kDa, Sigma-Aldrich, in PBS solution) was administered to mice by oral gavage at a dose of 600 mg/kg body weight. Following gavage, blood samples were collected from the facial vein after 90 min. Serum was diluted with an equal volume of PBS, and fluorescence intensity was measured using a fluorospectrophotometer (PerkinElmer) with an excitation wavelength of 480 nm and an emission wavelength of 520 nm. Serum FITC-dextran concentration was calculated from a standard curve generated by serial dilution of FITC-dextran in PBS.
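The back-calculation from the standard curve reduces to inverting a linear fit and correcting for the 1:1 serum dilution. The following Python sketch illustrates this under the assumption of a linear fluorescence response; all standard and sample values are made up for illustration.

```python
# A minimal sketch of back-calculating serum FITC-dextran from a standard
# curve, assuming a linear fluorescence response; all numbers are illustrative.
import numpy as np

# Serial dilution standards: concentration (ug/mL) vs fluorescence intensity
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
std_fluor = np.array([12.0, 250.0, 495.0, 980.0, 1950.0, 3900.0])

# Fit fluorescence = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_fluor, 1)

def serum_concentration(fluorescence, dilution_factor=2.0):
    """Invert the standard curve and correct for the 1:1 PBS dilution."""
    return (fluorescence - intercept) / slope * dilution_factor

print(serum_concentration(1500.0))  # ug/mL in undiluted serum
```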
Serum LBP
Concentrations of lipopolysaccharide-binding protein (LBP) in serum were assayed using a commercial ELISA kit (NeoBioLab, Cambridge, MA) according to the manufacturer's instructions.
Stool DNA extraction and 16S rRNA gene sequencing
Bacterial genomic DNA was extracted from mice fecal pellets using the QIAamp DNA Stool Mini Kit (Qiagen, UK) according to the manufacturer's instructions. DNA was quantified, and purity was subsequently assessed by measuring absorbance and determining the A260/A280 ratio. DNA was stored at − 20°C until analysis.
16S rRNA gene sequencing library preparation was performed on DNA samples according to the Illumina 16S rRNA gene sequencing library protocol in order to generate V3-V4 amplicons. DNA samples were subjected to an initial PCR reaction utilizing primers specific for amplification of the V3-V4 region of the 16S rRNA gene (forward primer 5′ TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG; reverse primer 5′ GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACHVGGGTATCTAATCC). Clean-up and purification of the PCR product was performed using the Agencourt AMPure XP system (Labplan, Dublin, Ireland). Following clean-up and purification, a second PCR reaction was performed in order to incorporate a unique indexing primer pair into each sample (Illumina Nextera XT indexing primers, Illumina, Sweden). The PCR products were purified a second time using the Agencourt AMPure XP system. Quantification of samples was performed using the Qubit broad range DNA quantification assay kit (Bio-Sciences, Dublin, Ireland). Following quantification, samples were pooled in equimolar amounts (10 nM) and sequenced at Clinical-Microbiomics (Copenhagen, Denmark) using Illumina MiSeq 2 × 300 bp paired-end sequencing.
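Equimolar pooling at 10 nM is simple C1V1 = C2V2 arithmetic. The sketch below illustrates the dilution calculation in Python; the library concentrations and volumes are hypothetical, not values from this study.

```python
# A minimal sketch of the C1*V1 = C2*V2 arithmetic behind equimolar pooling;
# library concentrations are hypothetical, not measured values from the study.

target_nM = 10.0      # pooling concentration
per_sample_uL = 5.0   # volume of each normalized library added to the pool

libraries_nM = {"S1": 42.0, "S2": 18.5, "S3": 27.3}  # stock concentrations

for name, conc in libraries_nM.items():
    # Volume of stock library to dilute to per_sample_uL at target_nM
    v_stock = target_nM * per_sample_uL / conc
    v_diluent = per_sample_uL - v_stock
    print(f"{name}: {v_stock:.2f} uL library + {v_diluent:.2f} uL diluent")
```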
16S rRNA gene sequencing bioinformatics analysis
The 64-bit version of USEARCH 8.1.1825 [51] and mothur v.1.36.1 [52] were used for bioinformatic analysis of the sequence data. These were used in combination with customized in-house programs, as detailed below.
Using USEARCH's -cluster_otus command, sequences were clustered at 97% sequence similarity, using the most abundant strictly dereplicated reads as centroids ("usearch8.1.1825_i86osx64 -cluster_otus rua16s.sorted.fasta -otus rua16s.clustered.fasta -uparseout rua16s.clustered.table"). Using USEARCH's -uchime_ref [53] command, suspected chimeras were discarded based on comparison with the Ribosomal Database Project classifier training set v9 [54]. Using mothur's classify.seqs command, taxonomic assignment of OTUs was performed using the method by Wang et al. [55] with mothur's PDS version of the RDP training database v14 ("classify.seqs(fasta=abrecovery.fasta, template=otu.fasta, taxonomy=trainset14_032015.pds.fasta, method=wang)"). The Wang parameters used were ksize = 8, iters = 100, and cutoff = 0. A bootstrap threshold of 80 was subsequently applied using R, and values < 80 were assigned as "unclassified". Alpha diversity was calculated using mothur's Shannon command. To reduce bias from variation in sample read numbers, samples were rarefied to the sample with the lowest read count, 10,597 reads. Rarefaction can introduce bias into data and thereby affect outcomes [56,57]. To ensure our data processing did not affect outcomes, we performed PERMANOVA with Bray-Curtis dissimilarity testing to examine whether rarefying caused differences to the sequencing data. No significant differences were observed between rarefied and non-rarefied data. Similarly, no significant differences were observed in alpha diversity (Shannon index) or for any single OTU (Kruskal-Wallis followed by Mann-Whitney testing) between rarefied and non-rarefied data.
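The two core operations of this step, subsampling each sample without replacement down to the minimum read depth and computing the Shannon index, can be expressed compactly. The Python sketch below is a minimal stand-in for the mothur workflow described above; the OTU counts are simulated, not study data.

```python
# A minimal sketch of rarefying each sample to the minimum read depth and
# computing the Shannon index (the study rarefied to 10,597 reads).
import numpy as np

rng = np.random.default_rng(0)
otu_table = np.array([[120, 30, 0, 15],    # samples x OTU counts (simulated)
                      [400, 10, 25, 60],
                      [80, 55, 5, 0]])

depth = otu_table.sum(axis=1).min()  # rarefy to the shallowest sample

def rarefy(counts, depth, rng):
    """Subsample reads without replacement down to the target depth."""
    reads = np.repeat(np.arange(counts.size), counts)  # one entry per read
    keep = rng.choice(reads, size=depth, replace=False)
    return np.bincount(keep, minlength=counts.size)

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

rarefied = np.array([rarefy(row, depth, rng) for row in otu_table])
print([round(shannon(row), 3) for row in rarefied])
```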
Statistical analysis
Statistical analysis was performed using SPSS (v19, NY, USA), GraphPad Prism (v6, CA, USA), and R (v3.2.4). One-way analysis of variance (ANOVA) was performed to assess differences between groups, followed by Tukey's or LSD post-hoc test. Repeated-measures two-way ANOVA (time and group) with Tukey's post-hoc test was used for weight gain data. For 16S rRNA gene sequencing data, principal coordinate analysis was conducted using PAST software (v3.18) with Bray-Curtis dissimilarity testing. To assess whether significant differences existed between specific taxa, non-parametric analysis was performed using the Kruskal-Wallis test followed by the Mann-Whitney test. False discovery rate (FDR) analysis was subsequently performed using the Benjamini-Hochberg method, and significance was set at q < 0.05.
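The per-taxon testing and FDR correction follow a standard pattern: one Kruskal-Wallis test per OTU across the four groups, then Benjamini-Hochberg adjustment of the resulting p values. A minimal Python sketch with simulated abundances (not study data) is shown below.

```python
# A minimal sketch of per-OTU Kruskal-Wallis testing followed by
# Benjamini-Hochberg FDR at q < 0.05; abundances are simulated.
import numpy as np
from scipy.stats import kruskal
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
groups = ["fat-1/fat-1", "fat-1/WT", "WT/fat-1", "WT/WT"]
n_otus, n_per_group = 50, 8

pvals = []
for _ in range(n_otus):
    # One simulated abundance vector per group for this OTU
    samples = [rng.poisson(lam, n_per_group)
               for lam in rng.uniform(5, 50, size=len(groups))]
    pvals.append(kruskal(*samples).pvalue)

reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} of {n_otus} OTUs significant at q < 0.05")
```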
Linear discriminant analysis (LDA) effect size (LEfSe) is a biomarker discovery tool for high-dimensional data that provides effect size estimation [58]. Microbiota-based biomarker analysis was performed with LEfSe using the online Galaxy server (https://huttenhower.sph.harvard.edu/galaxy/). LDA scores (> 3.0) derived from LEfSe analysis were used to show the relationships among taxa using a cladogram (circular hierarchical tree). Levels of the cladogram represent, from the inner to outer rings, phylum, class, order, family, and genus.
For host-microbiome interaction analysis, an RV coefficient (a multivariate generalization of the Pearson correlation coefficient) was calculated between the microbial (pre- and post-HFD) and host parameters (mother and offspring pre-HFD n-6/n-3 ratio, LBP, IP, and body weight). The data were rescaled from 0 to 1 before analysis.
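For reference, the RV coefficient between two column-centered blocks X and Y measured on the same samples is tr(XXᵀYYᵀ)/√(tr((XXᵀ)²)·tr((YYᵀ)²)). The Python sketch below computes it on random placeholder matrices; it illustrates the statistic itself, not the software used in the study.

```python
# A minimal sketch of the RV coefficient between two data blocks measured on
# the same samples; matrices are random placeholders for host/microbial data.
import numpy as np

def rv_coefficient(X, Y):
    """RV = tr(XX'YY') / sqrt(tr((XX')^2) * tr((YY')^2)) on centered blocks."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sx, Sy = X @ X.T, Y @ Y.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(2)
host = rng.random((20, 4))       # e.g., n-6/n-3 ratio, body weight, IP, LBP
microbes = rng.random((20, 30))  # e.g., OTU relative abundances, rescaled 0-1
print(round(rv_coefficient(host, microbes), 3))
```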
Pairwise correlations between each taxon and host parameter were calculated using Spearman's non-parametric rank correlation coefficient. Network-based analytical approaches have been used previously to disentangle host-microbe interactions [59,60]. Data were rescaled from 0 to 1 before analysis. Based on these correlation coefficients, a correlation network (label adjust and no overlap layout) was built in which nodes represent either a taxon or a host parameter. For each taxon and host parameter, an undirected edge was added between the corresponding nodes in the correlation network. Edges (red links indicate positive and dark green links indicate negative associations) represent statistically significant correlations (P < 0.05). Correlations were calculated in PAST software version 2.17, and the network was visualized in Gephi Graph Visualization and Manipulation software version 0.9.2. Nodes were colored based on a "modularity" community detection algorithm. A "module" in the network is a set of nodes connected to each other by many links but connected by few links to nodes of other groups; modules are thus elementary units of any biological network (each assigned a unique color). Degree centrality of nodes was employed as an index of node centrality.
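The network construction amounts to testing every taxon-parameter pair with Spearman's correlation, keeping edges with P < 0.05, and partitioning nodes with a modularity-based community algorithm. The Python/networkx sketch below illustrates this on simulated data; it approximates, rather than reproduces, the PAST/Gephi workflow used here.

```python
# A minimal sketch of a significance-thresholded Spearman correlation network
# with modularity-based module detection; the data matrix is simulated.
import numpy as np
import networkx as nx
from scipy.stats import spearmanr
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(3)
names = [f"OTU{i}" for i in range(8)] + ["BodyWeight", "IP", "LBP"]
data = rng.random((24, len(names)))  # samples x (taxa + host parameters)

G = nx.Graph()
G.add_nodes_from(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho, p = spearmanr(data[:, i], data[:, j])
        if p < 0.05:  # keep only statistically significant correlations
            G.add_edge(names[i], names[j], weight=rho,
                       sign="positive" if rho > 0 else "negative")

modules = greedy_modularity_communities(G)
print(f"{G.number_of_edges()} edges, {len(modules)} modules")
```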
Partial least squares regression (PLS-R) was used to associate the microbial composition with host parameters, including jackknife-based variable selection as reported previously [60]. For all models, the data were rescaled from 0 to 1 before PLS-R and centered as well as reduced during PLS-R. Leave-one-out cross-validation (LOO-CV) was applied. The Q² cumulated index (Q²cum) was used as a measure of the global goodness of fit and the predictive quality of the models, and to test the validity of the models against over-fitting. A Q²cum threshold of > 0.8 was applied. The resulting plot displays the dependent variables on the c vectors and the explanatory variables on the w* vectors, which allows visualizing the global relationship between the variables. The w* are related to the weights of the variables in the models. The results are also presented in PLS scatter plots for subject clustering and variables. The R² (coefficient of determination) indicates the percentage of variability of the dependent variable (Y) that is explained by the explanatory variables (X). The relative importance of each x-variable is expressed by variable importance in the projection (VIP) values. A VIP value ≥ 1.0 is considered influential and > 1.5 highly influential. Results of all of the above-mentioned statistics are given in Supplementary file 6. All analyses were performed using the XLSTAT software version 2017.6.
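As an illustration of the model-quality metrics above, the sketch below fits a two-component PLS model, estimates Q² by leave-one-out cross-validation (1 − PRESS/TSS), and computes VIP scores with the standard formula. It uses simulated data and scikit-learn rather than XLSTAT, so it is a sketch of the procedure, not the study's exact implementation.

```python
# A minimal sketch of PLS regression with LOO-CV Q2 and VIP scoring.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(4)
X = rng.random((24, 15))                           # microbial predictors, 0-1
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.1, 24)  # one host parameter

pls = PLSRegression(n_components=2).fit(X, y)

# Q2 via leave-one-out cross-validation: 1 - PRESS / total sum of squares
press = 0.0
for train, test in LeaveOneOut().split(X):
    fold = PLSRegression(n_components=2).fit(X[train], y[train])
    press += ((y[test] - fold.predict(X[test]).ravel()) ** 2).sum()
q2 = 1 - press / ((y - y.mean()) ** 2).sum()

# VIP: per-variable contribution to the Y-variance explained by each component
W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_
ssy = (T ** 2).sum(axis=0) * (Q ** 2).ravel()      # Y-variance per component
vip = np.sqrt(X.shape[1] * (W ** 2 / (W ** 2).sum(axis=0)) @ ssy / ssy.sum())
print(f"Q2 = {q2:.2f}; variables with VIP >= 1: {(vip >= 1).sum()}")
```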
Associations between the host and microbiome datasets were also assessed by multiple factor analysis (MFA). The MFA methodology proceeds in two phases: (i) a principal component analysis (PCA) (Spearman type) is carried out successively for each dataset (Dataset 1, host parameters; Dataset 2, pre- and post-HFD microbiota data), and the first eigenvalue of each analysis is stored in order to weight the various datasets in the second phase of the analysis; (ii) a weighted PCA on the columns of all the datasets leads to each indicator variable having a weight that is a function of the frequency of the corresponding category. After these two phases, the coordinates of the projected points in the space resulting from the MFA are displayed. The projected points correspond to projections of the observations in the spaces reduced to the dimensions of each dataset. Based on the eigenvalues of the weighted PCA, the first two factors (F1/F2) covered almost 60% of the variability in this analysis. To test whether the four groups with superimposed host and microbiome data were separated from each other, Kruskal-Wallis testing was performed on the coordinates of the projected points, and p values were obtained using 10,000 Monte Carlo permutations. One end of each line for an observation indicates the host parameters (differently colored to indicate the groups) and the other end (red) indicates the microbiota.
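The core of the two-phase MFA can be approximated as: standardize each block, divide it by the square root of its first PCA eigenvalue so every block's leading eigenvalue becomes 1, then run a global PCA on the merged table. The Python sketch below illustrates this on placeholder data; a full MFA implementation (e.g., in XLSTAT) adds the categorical weighting and projection plots described above, which are not shown here.

```python
# A minimal sketch of the MFA core: per-block PCA to obtain the first
# eigenvalue, block weighting, then a global weighted PCA; data are random
# placeholders for the host and microbiota datasets.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
host = rng.random((24, 4))       # Dataset 1: host parameters
microbes = rng.random((24, 30))  # Dataset 2: pre-/post-HFD microbiota

weighted = []
for block in (host, microbes):
    Z = StandardScaler().fit_transform(block)
    lam1 = PCA(n_components=1).fit(Z).explained_variance_[0]
    weighted.append(Z / np.sqrt(lam1))  # equalize each block's influence

global_pca = PCA(n_components=2).fit(np.hstack(weighted))
print("F1/F2 variance explained:",
      global_pca.explained_variance_ratio_.round(2))
```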
Additional file
Additional file 1: Figure S1. Male and female body composition. Figure S2. Serum cytokines and subcutaneous adipose tissue inflammatory gene expression. Figure S3. Glucose tolerance and insulin tolerance testing. Figure S4. LDA scores following LEfSe analysis of pre-HFD and post-HFD microbiota grouped by foster mother genotype. Figure S5. Mothers' microbiota and offspring Proteobacteria abundance. Figure S6. Ileal tight junction protein expression. Figure S7. Correlation network of maternal fatty acid status and offspring microbiota. Table S1. Fatty acid profile of diet. Table S2. Tail fatty acid profiles of mothers and offspring before and after high-fat diet feeding. Table S3. Liver fatty acid profiles of offspring after high-fat diet feeding. | 2023-01-29T15:30:51.075Z | 2018-05-24T00:00:00.000 | {
"year": 2018,
"sha1": "66b8d6c429c8ad64183bdb366290bd55d0d7277e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s40168-018-0476-6",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "66b8d6c429c8ad64183bdb366290bd55d0d7277e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
256097643 | pes2o/s2orc | v3-fos-license | Pluripotent stem cell differentiation as an emerging model to study human prostate development
Prostate development is a complex process, and knowledge about this process is increasingly required for both basic developmental biology studies and clinical prostate cancer research, as prostate tumorigenesis can be regarded as the restoration of development in the adult prostate. Using rodent animal models, scientists have revealed that the development of the prostate is mainly mediated by androgen receptor (AR) signaling and that some other signaling pathways also play indispensable roles. However, there are still many unknowns in human prostate biology, mainly due to the limited availability of proper fetal materials. Here, we first briefly review prostate development with a focus on the AR, WNT, and BMP signaling pathways. Based on the current progress in in vitro prostatic differentiation and organoid techniques, we propose human pluripotent stem cells as an emerging model to study human prostate development.
The prostate originates from the urogenital sinus (UGS), and cell lineage analysis demonstrates that the UGS is derived from the endoderm and gives rise to the entire urethra [1]. The UGS is partitioned into two parts, the urogenital sinus epithelium (UGE) and the surrounding urogenital sinus mesenchyme (UGM). Recombinant tissue experiments indicate that the interaction between the UGM and the UGE is sufficient to generate a well-developed prostate [2,3]. It was known very early that the development of the prostate is under the control of androgens, which are secreted from the testes [4]. The fetal testes start to produce testosterone at approximately 9 weeks of gestation in humans and at E13-14 in mice, and testosterone is further converted into dihydrotestosterone, a more effective androgen, by 5α-reductase [5]. The androgen receptor (AR) pathway plays an extremely important role in the induction of the prostate [6].
Epithelial prostatic budding occurs at approximately 9-10 weeks of gestation in humans [7] and at E17.5 in mice [8] (Fig. 1). FOXA1, an important pioneer gene in endoderm-derived epithelial cells, is highly expressed in prostate epithelial cells [9]. FOXA1 can interact with closed chromatin and loosen nucleosomes as a pioneer factor, therefore allowing AR to bind to the DNA [10]. More importantly, it has been demonstrated that FOXA1 mutations disturb prostate differentiation [11]. In addition to AR, NKX3-1 is indispensable in prostate specification. NKX3-1 starts to be expressed 2 days prior to prostatic budding, suggesting that NKX3-1 may define the initiation of prostate organogenesis. Moreover, NKX3-1 uniquely marks the prostate epithelium and is not detected in other tissues of the male urogenital system. Functionally, a defect in NKX3-1 alters prostate development in mice [12]. In addition, the combination of AR, FOXA1, and NKX3-1 is identified as a driver for prostate organogenesis by a computational system approach named the Master Regulator Inference algorithm [13]. Moreover, these master regulators are used to reprogram induced epithelial cells derived from mouse fibroblasts into prostate tissue. These reprogrammed prostate-like cells exhibited appropriate histological and molecular properties of the prostate after grafting into a mouse model [13].
The next key events after prostatic budding are ductal outgrowth and branching morphogenesis. Budding starts during late fetal development, and the most prominent bud outgrowth occurs during the first two postnatal weeks in rodents [14]. The elongation and branching morphogenesis of human prostatic buds starts at 11 weeks of gestation and peaks at 16 weeks of gestation. In contrast to that in rodents, most of the prostate epithelium in humans is in the form of canalized ducts undergoing secretory cytodifferentiation before birth [15]. The theory of branching and annihilating random walks (BARWs) describes a process whereby the active tips elongate in all directions to randomly form ducts; ducts can branch through stochastic tip bifurcation at any moment or terminate when the tips meet with an existing duct [16]. The duct network greatly increases the exchange surface between the prostate epithelium and lumen. This model explains duct network heterogeneity and its spatiotemporal pattern. The FGF10/FGFR2IIIB interaction is reported to upregulate the expression of SHH and BMP7, which together contribute to duct branching morphogenesis. Additionally, SHH can downregulate FGF10 expression as a negative feedback loop and upregulate BMP4 expression, which controls duct elongation [17]. In addition, PI3K/mTOR signaling is required for prostate epithelial invasion and therefore regulates prostatic branching morphogenesis [18].
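The BARW model lends itself to a very small simulation: tips take random steps on a lattice, bifurcate with a fixed probability, and annihilate when they step onto an existing duct. The Python sketch below is an illustrative toy with arbitrary rates, not a fitted model of prostate ductal growth.

```python
# A minimal sketch of a branching-and-annihilating random walk on a 2D
# lattice; the branch probability and step count are illustrative only.
import random

random.seed(7)
BRANCH_P, STEPS = 0.05, 200
occupied = {(0, 0)}   # duct sites laid down so far
tips = [(0, 0)]       # active elongating tips

for _ in range(STEPS):
    new_tips = []
    for x, y in tips:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nxt = (x + dx, y + dy)
        if nxt in occupied:
            continue              # tip terminates on meeting an existing duct
        occupied.add(nxt)
        new_tips.append(nxt)
        if random.random() < BRANCH_P:
            new_tips.append(nxt)  # stochastic tip bifurcation
    tips = new_tips
    if not tips:
        break

print(f"duct sites: {len(occupied)}, active tips: {len(tips)}")
```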
AR/WNT/BMP signaling pathways in prostate development
AR is the most studied signal in both prostate development and prostate cancer. AR is recognized as a nuclear receptor but only enters the nucleus upon binding with androgen hormones and then exerts its function as a transcription factor [19]. Immediately after sex determination, dihydrotestosterone converted from testosterone by 5α-reductase binds to AR to activate the expression of NKX3-1, FOXA1, PSA, and other prostatic genes, therefore promoting prostate budding, branching morphogenesis, and maturation [20]. Testicular feminization mutant (Tfm) mice carrying a natural defect in the AR locus show an absence of prostatic buds, indicating the essential role of AR [21]. Conditional deletion of AR in both stromal fibroblasts and smooth muscle cells can impair branching morphogenesis in a mouse model [22], suggesting an additional critical role in subsequent ductal branching morphogenesis. In addition, AR and circulating androgens are both abundant in the UGS of male mice, while the level of circulating androgens is quite low in females, which suggests that androgens play a priming role. In support of this notion, exogenous dihydrotestosterone can induce prostatic budding in the UGS of both wild-type female mice and Tfm male mice [23]. Androgen-independent signals are also involved in prostate initiation and budding.
The WNT signaling pathway is one of the most important signaling pathways playing a dominant role in cell fate determination [24]. The canonical WNT signaling pathway depends on the translocation of the β-catenin protein (encoded by the CTNNB1 gene), and nuclear β-catenin forms a complex with members of the TCF/LEF family to activate the transcription of WNT target genes [25]. The WNT signaling pathway regulates prostate specification and subsequent prostatic budding and epithelial branching morphogenesis as well as prostate stem cell self-renewal. Several WNT ligands and the WNT upstream regulator R-spondin 3 are present in the lower urogenital tract during prostate development and are more abundant in the male UGS than in the female UGS [26]. β-catenin and the WNT/β-catenin-responsive downstream genes Axin2 and Lef1 are highly expressed in the prostatic bud epithelium and colocalize with NKX3-1. Moreover, treatment of UGS explant cultures with a WNT antagonist, such as DKK1, not only decreases the number of prostatic buds but also inhibits NKX3-1 expression, indicating critical roles for WNT/β-catenin during prostate specification and bud formation [27]. Conditional deletion of β-catenin in the E15.5 mouse UGS prevents prostatic differentiation and bud formation. Interestingly, pretreatment of the mouse E15.5 UGS with dihydrotestosterone for 24 h can result in rudimentary bud formation, even after inducing β-catenin deletion by tamoxifen treatment [27]. This indicates that β-catenin is required for initiating prostate differentiation but not for subsequent prostate gland formation. Supporting this conclusion, the specific deletion of β-catenin in adult luminal epithelial cells in the prostate gland in Probasin-Cre mice does not affect glandular homeostasis [28]. However, treatment of cultured postnatal rat ventral prostates with either the WNT agonist WNT3A or the WNT antagonist DKK1 leads to a significantly reduced number of branches [29], suggesting a delicate dosing effect of WNT signaling on prostatic epithelial branching morphogenesis.
In addition, the noncanonical WNT/calcium pathway, in which noncanonical WNT ligands such as WNT4, WNT5A, and WNT11 mediate the induction of intracellular Ca2+ transients to activate the Ca2+-sensitive kinases CAMK2 and PKC [30], also plays a role in building a unique branch pattern during prostate branching morphogenesis. WNT5A is mainly expressed at the distal tips, and ex vivo experiments show that WNT5A treatment does not affect prostate bud initiation but regulates the size and number of buds [31].
The BMP signaling pathway is also critically involved in prostate development. BMP4 is highly expressed in the male UGS from E14 to birth. Exogenous BMP4 can inhibit prostate ductal budding in a dose-dependent manner, and in BMP4 haplo-insufficient adult mice, the prostate contains an increased number of duct tips [32]. These data demonstrate that the BMP signal inhibits prostate ductal budding to ensure an appropriate number of ductal tips for normal prostate development. In addition, activin A is weakly expressed during development, but its expression is upregulated in the prostatic epithelium during puberty. Follistatin and activin receptors are expressed throughout the prostatic epithelium. Functionally, activin A can inhibit prostatic branching in prostate organ cultures, but follistatin, an activin-binding protein that inhibits TGFβ signaling, can increase branching in vitro [33]. Taken together, these data suggest that the TGFβ/BMP signaling pathway negatively regulates prostatic ductal branching morphogenesis.
The AR, WNT, and BMP signaling pathways synergistically determine prostate development (Fig. 2). Conditionally knocking out β-catenin in the UGS results in undetectable NKX3-1 expression, but AR is still highly expressed [28]. These results indicate that prostate lineage specification depends on WNT/β-catenin signaling even when the AR signaling pathway is active. However, after the completion of prostate lineage commitment, prostate development can occur without the canonical WNT signaling pathway [28]. Both ex vivo and tissue implantation experiments demonstrate that when AR is deleted in AXIN2-expressing mouse prostate cells, the formed prostates are relatively small and immature [34]. This indicates an indispensable role for AR in WNT-responsive cells during all stages of prostate development. In the prostate cancer cell line LNCaP, WNT3A treatment can promote AR binding to the promoter regions of WNT target genes such as MYC and CYCLIN D1; additionally, AR and β-catenin can be recruited to the promoter and enhancer regions of the AR target gene PSA [35]. Another report also noted that WNT/β-catenin could increase AR expression through the binding of LEF1 to the promoter of AR [36]. In addition, active WNT/β-catenin can activate BMP signaling in prostatic bud tips to inhibit inappropriate prostatic budding and together ensure the initiation of prostate development [37]. β-catenin transcriptionally upregulates the expression of TGFβ2, TGFβ3, and BMP4 in prostate stromal cells, and the activated TGFβ pathway suppresses basal cell proliferation [37,38]. The TGFβ and AR signaling pathways in the stroma can affect the WNT signaling pathway, which helps to limit prostatic regression [39]. Therefore, a balance between the WNT and TGFβ/BMP signaling pathways is necessary for prostate budding.
Prostate lineage differentiation from pluripotent stem cells
Current knowledge on prostate development is mainly derived from rodent animal models and prostate cancer cell lines; however, there are significant differences between the human prostate and rodent prostate, such as differences in histology and morphology. Limited by the shortage of human prostate materials, especially embryonic specimens, a more widely applicable model is urgently required. Human pluripotent stem cells, including embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs), harbor the capacity to undergo multilineage differentiation to generate almost all cell types composing the body. Therefore, human pluripotent stem cells may provide a promising source to study human prostate development (Fig. 3). As early as 2006, a study by Taylor et al. showed that human ESCs are able to differentiate into the prostatic lineage by utilizing a co-transplantation assay [40]. They constructed a hetero-species recombinant tissue composed of the mouse UGM or rat seminal vesicle mesenchyme and human ESCs under the renal capsule of immunodeficient mice and observed prostate-like tissue within 8-12 weeks. Key transcription factors such as AR and NKX3-1 and the epithelial cell markers P63, CK8, and CK18 were detected at 4 weeks, while a mature prostate that expressed prostate-specific antigen (PSA) and exhibited prostate layer structures appeared 8-12 weeks after grafting [40]. However, this approach relied on co-culture and transplantation and was inefficient, hindering further mechanistic study and applications.
Fig. 2 Crosstalk between key signaling pathways during prostatic budding. The AR signaling pathway activates canonical WNT signaling, which in turn activates the master regulator NKX3-1 that drives prostatic budding from the UGE. In addition, WNT signaling can directly promote AR signaling or indirectly inhibit it by activating BMP signaling, which inhibits AR and prostate ductal budding. AR, WNT, and BMP signaling work together to ensure an appropriate number of ductal tips for normal prostate development.
Prostate organogenesis is a stepwise process. The prostate arises from the UGS, which is a caudal extension of the endoderm-derived hindgut [1]. It was demonstrated that definitive endoderm specification is mainly controlled by the NODAL/SMAD signaling pathway [41]. Consistently, human ESCs can be efficiently differentiated into the definitive endoderm lineage upon activin A treatment in low-serum cultures [42]. The endoderm germ layer is an undetermined sheet of cells; it forms a primitive tube and later becomes regionally specified along the anterior-posterior axis, and FGF signaling is necessary for establishing the posterior endoderm [43]. In addition, high β-catenin activity is observed in the posterior endoderm and inhibits the foregut fate [44]. Therefore, agonists of FGF and WNT signals are used to differentiate the definitive endoderm into CDX2-positive hindgut lineages [45].
Although many achievements have been made in efficiently differentiating human ESCs/iPSCs into the definitive endoderm and hindgut, there are very few reports on the directed differentiation from the hindgut to the UGS and subsequently to the prostate lineage. The master signals of this process are still largely unknown, but some attempts have been made to elucidate this process. Inspired by the early success achieved in generating prostate lineage cells following co-transplantation of human ESCs and the rodent UGM [40], Hepburn and colleagues first generated prostate-derived human iPSCs and differentiated iPSCs into the definitive endoderm. Then, the differentiated endoderm cells were cocultured with the UGM in vitro. They observed increased efficiency of prostatic epithelial differentiation [46]. However, there are still many problems in directed prostatic differentiation requiring further exploration.
Stem cells are capable of generating a range of differentiated cells, including prostate lineages [3]. A tissue recombination experiment using the UGM with adult bladder epithelia suggests that the UGM can induce bladder epithelial cells to form a prostate through a transdifferentiation mechanism that requires stromal TGF-β signaling to mediate epithelial WNT activity [47]. In addition, bladder urothelium differentiation from human ESC-derived endoderm has been successfully achieved in vitro. Kang and colleagues first generated bladder urothelial cells from human ESCs under serum- and feeder-free conditions with retinoic acid, and the bladder urothelial cells highly expressed urothelium-specific genes such as UPIB, UPII, UPIIIA, P63, and CK7 [48]. This may represent an alternative approach to guide prostate generation from human ESCs/iPSCs (Fig. 3).
Fig. 3 Summary and proposal of prostate differentiation from human pluripotent stem cells. Human ESCs/iPSCs can respond to the TGFβ signaling pathway to differentiate into definitive endoderm, from which there are two paths to generate prostate organoids: mimicking the prostate development process (from definitive endoderm to the hindgut stage via WNT and FGF signaling pathways, then to the UGS by certain signals, and eventually to prostate organoids via AR and other signaling pathways), or differentiating directly into bladder urothelial cells via RA and then transdifferentiating into prostate organoids.
Prostate organoids
Organoid technology mimics the hallmarks, cell types, and even the structure and functions of real organs and hence provides an alternative in vitro model for developmental study and disease modeling [49]. ESCs/iPSCs provide an ideal source to construct prostate organoids. Human ESC/iPSC-derived organoids can be used as powerful platforms in modeling human organ development and disease. For example, human gastric organoids have been generated in vitro from pluripotent stem cells through manipulation of the FGF, WNT, BMP, retinoic acid, and EGF signaling pathways as well as three-dimensional culture. Generated gastric organoids have been used to identify novel signals that regulate early endoderm patterning or to study the function of the transcription factor NEUROG3 in gastric endocrine cell differentiation. Moreover, human gastric organoids have been used to mimic the pathophysiological response of the gastric epithelium to H. pylori, which demonstrates the potential of these organoids in drug discovery and in modeling the early stages of gastric disease and even cancer [50]. Similar achievements have been made in studying human lung development during gestation, recapitulating fibrotic lung disease in vitro by introducing the mutation in HPS1 that causes an early-onset form of intractable pulmonary fibrosis [51], and modeling colonic familial adenomatous polyposis, which identified geneticin as a promising drug for APC-mutant patients [52].
Prostate organoids can be derived from pluripotent stem cells, prostatic progenitor cells, or primary prostate biopsy samples (Fig. 4). Based on the finding that the R-spondin 1-based WNT-activation culture method allows long-term propagation of murine and human prostate epithelium, both basal and luminal populations have been demonstrated to contain bipotent progenitor cells, therefore making it possible to establish murine and human prostate organoids in vitro. Basal- or luminal-derived prostate organoids express AR, NKX3-1, and prostate epithelium layer markers including P63, CK5, and CK8 [53]. Moreover, these organoids exhibit testosterone responsiveness upon dihydrotestosterone addition or withdrawal. Also, Zeb1+ prostate epithelial cells are multipotent prostate basal stem cells that can self-renew and are therefore capable of generating functional prostate organoids at the single-cell level [54]. Prostate organoids derived from iPSCs were successfully generated using a co-culture technique with the urogenital sinus mesenchyme, and early prostate organoids can be generated within several weeks [46].
Organoids can be used to study the functions of genes involved in prostate cancer initiation by gene editing [55]. Genetic ablation studies reveal the indispensable role of Zeb1 in prostate development [54]. In addition to normal prostate cells, a testosterone-responsive prostate organoid culture system derived from advanced prostate cancer tissue was developed to study prostate homeostasis and tumorigenesis [56,57]. Recent research has demonstrated that prostate stromal cells can increase organoid formation efficiency and influence organoid branching morphogenesis via cell-cell contact and secrete soluble growth factors that may regulate branching [58]. This system provides a powerful model to study the functions of developmental regulators or oncogenes, such as MYC and AKT1 [55]. The high-throughput model can rapidly generate human prostate tissue ex vivo and in vitro, which makes it a better model for studying prostate development and disease than slow, inefficient, and laborious prostate organoids derived from primary cultures [46] (Fig. 4).
Fig. 4 Application overview of human prostate organoid technology. Prostate organoids can be generated from prostate primary tissues or tumor samples directly, or derived from iPSCs through somatic reprogramming, differentiation, and co-culture technology. Prostate organoids have wide application potential and provide a valuable resource to study human development, model pathogenesis, and test drugs.
Conclusion
In this review, we have briefly introduced prostate development and summarized the major signaling pathways involved in prostate development and differentiation (i.e., the AR, WNT, and TGF-β/BMP signaling pathways). To advance our understanding of human prostate development and prostatic disease, prostate organoids based on human pluripotent stem cells would be a promising and valuable tool, though challenges remain.
"year": 2020,
"sha1": "288ec683087c4bdfb2771f4a0b21cc6d94ef0f2a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13287-020-01801-9",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "288ec683087c4bdfb2771f4a0b21cc6d94ef0f2a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
2643636 | pes2o/s2orc | v3-fos-license | A case study of global health at the university: implications for research and action
Background Global health is increasingly a major focus of institutions in high-income countries. However, little work has been done to date to study the inner workings of global health at the university level. Academics may have competing objectives, with few mechanisms to coordinate efforts and pool resources. Objective To conduct a case study of global health at Canada's largest health sciences university and to examine how its internal organization influences research and action. Design We drew on existing inventories, annual reports, and websites to create an institutional map, identifying centers and departments using the terms ‘global health’ or ‘international health’ to describe their activities. We compiled a list of academics who self-identified as working in global or international health. We purposively sampled persons in leadership positions as key informants. One investigator carried out confidential, semi-structured interviews with 20 key informants. Interview notes were returned to participants for verification and then analyzed thematically by pairs of coders. Synthesis was conducted jointly. Results More than 100 academics were identified as working in global health, situated in numerous institutions, centers, and departments. Global health academics interviewed shared a common sense of what global health means and the values that underpin such work. Most academics interviewed expressed frustration at the existing fragmentation and the lack of strategic direction, financial support, and recognition from the university. This hampered collaborative work and projects to tackle global health problems. Conclusions The University of Toronto is not exceptional in facing such challenges, and our findings align with existing literature that describes factors that inhibit collaboration in global health work at universities. Global health academics based at universities may work in institutional siloes and this limits both internal and external collaboration. A number of solutions to address these challenges are proposed.
(8–10). The first occurrence of a university using the term 'global health' in the name of a center or institution was in 1999, and by 2009, at least 41 universities in the United States and Canada had established pan-university global health institutes or centers and 11 more had established global health programs in existing departments or divisions (11). Figure 1 sets out various approaches, based on a scan of organizational forms. As Merson and Page note, 'university-wide centers have expanded the disciplinary framework for global health beyond the health professions to include business, engineering, public policy, divinity, law, and the disciplines of social science' (11, p. 2). The expansion of this field relates to changes beyond the walls of the academy, including the globalized nature of health, the rapid dissemination of news, the increased interconnectedness between people, and the framing of global health as a foreign policy objective (12–14). Manifestations of university global health activity include substantial student, faculty, and university presence at global health conferences; the development of research networks; and new coalitions of universities (15–19). There has been a sustained demand for global health education for a number of years in countries around the world (20–23). Students within the health professions, public health, anthropology, social sciences, law, and political science are increasingly undertaking part of their training abroad (24–26). Aside from academics, a complex mix of actors shape the global health agenda, including donors and funding bodies, non-governmental organizations (NGOs), advocacy groups, health professional organizations, private corporations, and governments at all levels.
Many universities see global health as part of their role as institutions in a global community which have a social accountability or social responsibility mandate (27,28). They can do so through scholarly work (knowledge generation and its dissemination), education, and decision-maker influence (29,30). Universities may manage substantial funds and grants, bring together partners from different sectors, and deliver services in high- and low-income countries, often via third-party NGOs. They are also key actors in the economy, for example, through partnering with private enterprise (31). Universities may also innovate better ways to address global health problems and then act to implement their findings (32).
In early 2011, we came together as a group of global health academics and trainees, interested in 'the state of global health'. In particular, we sought to explore ongoing challenges in managing global health at a large, research intensive university. We hypothesized that how global health is organized influences the research and action of academics, their ability and interest in collaboration internally, and the formation of external partnerships.
Methods
To conduct this case study we assembled a research team comprised of six faculty members from different disciplines (family medicine, public health, health promotion, nursing, international law/human rights, and ethics) at different stages of their careers, one medical student, and one resident, all based at the University of Toronto. We represent an 'insider view' on this issue, bringing our collective years of experience within the institution to bear on the design of the study and the analysis of the results. We obtained ethics approval from the University of Toronto Health Sciences Research Ethics Board.
We assembled a history of global health at the university based on official documents and the knowledge of research team members. Using existing inventories and annual reports, and broad searches of the main website of the University of Toronto, we identified current centers, departments, and hospitals affiliated with the university as potential sites of global health activity in the first half of 2011. A list of faculty was compiled who met the following criteria: 1) affiliation with a center, department, or hospital that was specifically focused on global health or international health, or 2) usage of the terms 'global health' or 'international health' to describe their work. We recognized that given the myriad forms and decentralized nature of global health activities, we were likely unable to identify every single person engaged in global health at the university. Nevertheless, we listed more than 100 academics engaged in global health situated across the university, often with multiple affiliations (Supplementary file).
From this list, we purposively sampled those in centers, departments, and hospitals from a range of fields to conduct key informant interviews. We sought faculty who were 'on the ground' but were also providing academic leadership in global health. One investigator (AP) carried out confidential, semi-structured interviews with each key informant during the summer and fall of 2011. The interview guide included questions about their views of global health; the institution they worked for and its relationship to the university; their experiences in global health activities, including collaboration with other divisions or colleagues; and their thoughts on what helps move global health forward at the university. Because the study was unfunded, the interviews were not audio-recorded and then transcribed, but rather detailed notes of responses were taken during the interview and returned to participants for verification. A pair of coders (AP, AtK) analyzed responses to questions thematically and compiled key themes. Synthesis involved discussion among the research team through in-person meetings and over email.
Results
Global health at the University of Toronto (1990–2011)
The University of Toronto is currently Canada's largest academic center, with more than 66,000 undergraduate students, 15,000 graduate students, and 11,000 faculty (33). Historically, clinical research leaders made major contributions to health (34) and held important roles in international health. The School of Hygiene, now the Dalla Lana School of Public Health, was one of a handful established by the Rockefeller Foundation in the 1920s (35). In 2001, the Dean established a Center for International Health, providing core funding to a full-time director, a group of part-time faculty leaders across affiliated teaching hospitals and some departments, and administrative staff. The center became an information-coordinating body for global health by cataloguing activities occurring in different faculties, departments, and hospitals; sharing information online and in annual reports; providing part-time support to faculty members; holding annual global health research days; and leading university-wide initiatives such as World AIDS Day and the University of Toronto HIV/AIDS initiative in Sub-Saharan Africa. The center supported other institutions at affiliated hospitals to apply for grants and external funding for research and training. Resources provided to the Center for International Health at the University of Toronto were modest in comparison to those of similar institutions in the US (36). As center core funds diminished, global health activities proliferated, most based within teaching hospitals (37). Within the Faculty of Medicine, at least five departments developed one or more programs related to global health. At least four separate global health-related educational programs existed for students at the university, and academics with an interest in this area were found in almost every faculty. Each of these in turn has multiple connections with low- and middle-income country (LMIC) partner universities, hospitals, and NGOs (38). In this context, global health academics at the University of Toronto worked to develop multiple research agendas and launch mostly separate interventions.
Key informant interviews
To explore how the organization of global health at the university influenced academics' research and action, we conducted key informant interviews. Among the 28 academics invited to an interview, 25 accepted the invitation. Twenty interviews could be arranged during our study period, of which 19 were in person and one was via telephone. Of the 20 key informants interviewed, 13 (65%) were female. Ten were assistant professors, four were associate professors, four were professors, and two were emeritus professors. Nine cited a teaching hospital as their primary affiliation, four were located within the university's school of public health, and two were with NGOs. One each was with the faculty of nursing, the school of rehabilitation sciences, and the school of international relations. One key informant was predominantly a university administrator.
When asked to define 'global health', a fair degree of consensus was apparent among academics, although at least three expressed skepticism about the usefulness of the term 'global health', reflecting continued questioning about a shared definition (10,39,40). The concept of achieving health equity underpinned most responses. Eight academics explicitly defined the overall objective of work within the field as achieving equity in health outcomes between different populations. Most emphasized that this included vulnerable or marginalized groups both within Canada and within LMIC. Furthermore, several academics included the concept of the social determinants of health in their definition, as well as the idea of multiple, interacting systems influencing health. For example, one stated, 'it relates to interactions between political jurisdictions and players like NGOs, corporations that are affecting the health of people all over the world'.
Regarding values that should underpin global health work, half of those interviewed cited 'equity' as central, and 'human rights' and 'justice' were each mentioned by five academics. Solidarity, mutual respect, reciprocity, and non-maleficence were common themes in the responses as well. Similar concepts were reported when academics were asked about what they emphasize in their work, with many additionally citing the importance of sustainability.
More than half of those interviewed reported that they saw themselves as collaborating with others at the university, or as members of a team within the university. The remainder felt that they were working more or less independently from others, often emphasizing that they felt isolated from others' work. Many in both groups felt that their work contributed to the university's mandate on global health, although this was predominantly around research. Those whose work predominantly involved developing education initiatives felt left out. Many assessed the university's leadership to be increasingly supportive of global health, at least in principle, as reflected in recent strategic plans.
All 20 academics expressed a negative assessment of how global health was currently organized at the University of Toronto. They perceived that their work existed in silos or in parallel to others with a lack of support from the university. Descriptors used included 'disjointed', 'incoherent', 'fragmented', 'chaos', and 'anarchic'. 'We lack vision with lots of small initiatives, but not an overarching system, structure, or principles to bring us together and create synergy'. The primary method of organizing global health efforts was noted to be around key individuals rather than specific institutional structures. Given the challenges in collaboration, several academics interviewed also noted 'glimmers of hope'. Most felt that there were many people working hard but not in collaboration with one another. An exemplar of this sentiment was, 'There are lots of people doing good things. There hasn't been a cohesive umbrella that would help catalyze and synthesize. There are people doing their own thing, and there is nothing wrong with that. It could be more efficient to use resources to have a greater impact'.
Barriers to collaboration with colleagues within the university that were highlighted by the interviewed academics included the absence of an overall strategic plan from the university around global health (at the time), a lack of time and opportunity to connect with colleagues, and limited incentives for collaboration. The large institutional size of the University of Toronto also made it difficult to connect with others. Several academics noted that fragmentation may not always be negative, in that it may facilitate multiple approaches to a problem, reflecting different ways of working in global health. Although some academics could not identify any existing factors that facilitated connecting with others, many identified the role of individuals acting as facilitators as essential to forming linkages; these were referred to as 'champions'. Also cited as important were regular meetings or forums to meet colleagues and building on personal networks. Many cited the enthusiasm of others and their willingness to share time and resources as important facilitating factors.
Those interviewed highlighted that international collaboration occurred primarily through personal and individual effort, rather than through an institutional process. Most academics that worked with colleagues outside of Canada felt that the university had not played a substantial role in establishing these relationships. Some even went so far as to indicate that the university policies and processes might have had a negative impact on such relationships. Some noted concerns around the institution's adverse influence on partnership development, including unduly onerous bureaucracy, a lack of resources, and limited recognition for their efforts.
No single solution was identified to resolve concerns about the organization of global health at the university. Broad sets of ideas proposed included developing mechanisms or structures to help people connect and share ideas and resources, such as regular meetings that would improve dialogue and communication across the university around global health. Many suggested establishing a better sense of what is happening around global health currently. For example, helping academics know about existing external partnerships. Several academics recommended that the university propose incentives to promote internal and external collaboration. Most academics recommended that the university prioritize better coherence and develop a shared strategic plan around global health. Similar to the discussion of individual supports, many academics identified funding and support for faculty and students as important priorities. Some articulated a need for systems thinking that brings together individual academics into teams that can address the complexity of global health problems (41). Leadership that can balance the needs of faculty and funders, university administrators, the academic community, philanthropists, and advocates was seen as needed, as well as the mobilization of new resources (42).
Many academics felt a strong, university-wide center for global health would be helpful. Examples of successful centers provided included ones at Johns Hopkins, Harvard, London School of Hygiene and Tropical Medicine, and specific institutions such as the Earth Institute (Columbia University), and François-Xavier Bagnoud Center for Health and Human Rights (Harvard University). It was proposed that such a center could provide supports to academics and be a place to share ideas. It could also help in developing and driving a shared vision and strategic plan, working with stakeholders to clarify areas for collaboration, resource development and adding value, as has been developed in collaborative research centers (43).
However, several academics were also wary of too much control over individual academic agendas. A tension was evident in the responses given, and the misgivings of faculty towards such a center would need to be carefully addressed through a strategic planning process. Such tensions are apparent in other academic planning processes in higher education institutions, which value the integrity of academic inquiry led by individuals while at the same time wanting to benefit from the stimulation of cross-disciplinary initiatives and resource mobilization that can come through working together.
Discussion
We have presented a case study of how global health activities have been organized at the University of Toronto.
Through key informant interviews, we established that there is consensus among academics from a variety of disciplines and centers that fragmentation and siloed efforts are a major concern. Not only does this limit joint efforts through internal collaboration, it hampers the establishment of external partnerships. The lack of a central vision has potentially hampered the mobilization of the substantial resources of the university toward taking large-scale action in global health. A well-resourced, university-wide center was identified as one potential solution by our participants. Subsequent to our study, a strategic planning process did get underway in the Faculty of Medicine (38). Our initial findings were used in the development of a 'roadmap' for global health in the Faculty of Medicine. Furthermore, a new Institute for Global Health Equity and Innovation has been proposed, based at the Dalla Lana School of Public Health, but with cross-university participation. A global health summit to engage all academics and institutions across the university in the development of such an Institute is scheduled for November 2014. It remains unclear what the Institute will look like, but much can be learned from the experience of others. At Emory University (Atlanta, Georgia, USA), after deliberations involving faculty, staff, students, and alumni on the role of the university in global health, a Global Health Institute was developed. Substantial start-up funds were obtained and a clear vision and strategic plan was developed with explicit performance metrics. An internal advisory committee specifically works to foster cross-unit cooperation and resolve barriers to collaboration (28). At Johns Hopkins University (Baltimore, MD, USA) the center for global health was developed as a hub that 'interdigitates' with medicine, public health, and nursing. The center has explicitly laid out objectives that include a multidisciplinary approach to solving global health problems (44). At Vanderbilt University (Nashville, Tennessee, USA), the Institute for Global Health took a 'center-without-walls' approach to nurture non-competitive partnerships among and within departments and schools. Part of the role of the Institute is to maintain an ongoing repository of global health activities across the university (45). At the University of California, San Francisco (San Francisco, CA, USA), global health sciences operates across dentistry, medicine, nursing, and pharmacy, supported by existing centers and institutes whose directors serve on its executive committee. A database of ongoing projects, faculty experience, and interests is maintained, and the focus is on demonstrating the value-add of participation rather than increasing competition (46). Finally, the University of Virginia (Charlottesville, VA, USA) established a center for global health building on past experience with international health. The center ensures that there are dedicated personnel to provide leadership and coordinate communication and collaboration among faculty, administrators, and students across departments and schools (47).
We note several limitations to our study. The findings may not be generalizable to other institutions that differ in size, composition, history, and context. However, we feel that the views of academics captured here may be similar to those of others working in global health at other universities. We conducted only 20 interviews, representing a subset of all disciplines involved in global health, but we found a great deal of consistency in responses and did not feel that further interviews would reveal new themes. Finally, the organization of global health at the University of Toronto is continually changing, and this study presents only a cross-sectional view. Longitudinal research that tracked the evolution of collaboration in this area would be valuable to academics and administrators. Bibliometric analysis could provide greater insight into existing and potential networks of academics and how these change over time, including how academics situated in medical and non-medical institutions do or do not collaborate (48,49). Network analysis methods could assist in mapping out collaboration and understanding where information is and is not shared and where improved collaboration could happen (50). The different perspectives and activities that drive 'global health brands' at universities could be explored. Finally, organizational researchers could examine the impact of interventions (e.g. strategic planning, small grants, and networking events) on global health collaboration within higher education institutions.
We believe the findings of our exploratory study are particularly relevant to global health leaders developing capacity at their institutions, and trainees who hope to contribute to the field as academics in the future. The University of Toronto is certainly not exceptional in facing such challenges (11,36,51). Our findings align with existing literature that describes four key factors that inhibit collaboration in global health work at universities. First, institutional cultures may favor discipline-specific funding, where rewards accrue to individuals rather than teams, and foster competition between centers, schools, and departments (29,52). Second, collaboration is complicated further by the lack of a standard definition of global health to cohere efforts (9). Colleagues must be convinced of the validity and sustainability of global health as an academic field (11). As with many new fields, global health has developed organically and often disparately. Even within a single institution, it may be difficult to decide who identifies with the field and in what way. Simply knowing who is working on what and where (geographically) and with whom (organizationally) across different centers or departments can be helpful (51). Third, initiatives within a single institution often have different and competing objectives. They may emphasize research, education, or service more than other areas. They may have different views on the role of equity and take different approaches in their relationships with partners (40). Often, no institutional mechanism exists for elucidating, let alone addressing, potential conflicts. Fourth, actors must often sacrifice time and energy to coordinate their activities. Such coordination is rarely supported centrally by the institution and may take academics away from their primary activities with partners. These obstacles result in a lack of a sense of 'community' among global health academics; a sense of fragmentation for all stakeholders; and inefficiencies in service, research, and education (28).
"year": 2014,
"sha1": "31f0847de99aae65b6a148f123b4272b0c25add8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3402/gha.v7.24526",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31f0847de99aae65b6a148f123b4272b0c25add8",
"s2fieldsofstudy": [
"Political Science",
"Medicine"
],
"extfieldsofstudy": [
"Sociology",
"Medicine"
]
} |
Bayesian Inference in Auditing with Partial Prior Information Using Maximum Entropy Priors
Problems in statistical auditing are usually one-sided. In fact, the main interest for auditors is to determine the quantiles of the total amount of error, and then to compare these quantiles with a given materiality fixed by the auditor, so that the accounting statement can be accepted or rejected. Dollar unit sampling (DUS) is a useful procedure to collect sample information, whereby items are chosen with a probability proportional to book amounts and in which the relevant error amount distribution is the distribution of the taints weighted by the book value. The likelihood induced by DUS refers to a 201-variate parameter p, but the prior information concerns a subparameter θ, a linear function of p representing the total amount of error. This means that partial prior information must be processed. In this paper, two main proposals are made: (1) to modify the likelihood, to make it compatible with prior information and thus obtain a Bayesian analysis for the hypotheses to be tested; (2) to use a maximum entropy prior to incorporate limited auditor information. To achieve these goals, we obtain a modified likelihood function inspired by the induced likelihood described by Zehna (1966) and then adapt Bayes' theorem to this likelihood in order to derive a posterior distribution for θ. This approach shows that the DUS methodology can be justified as a natural method of processing partial prior information in auditing and that a Bayesian analysis can be performed even when prior information is only available for a subparameter of the model. Finally, some numerical examples are presented.
Introduction
This paper addresses the statistical problem of estimating the total amount of error in an account balance obtained from auditing. To do so, the statistical toolbox employed by the auditor must be adapted to use a Bayesian approach. The conclusions drawn from the audit process are commonly based on statistical methods such as hypothesis testing, which in turn is based on compliance testing and substantive testing. The first of these is conducted to provide reasonable assurance that internal control mechanisms are present and function adequately. Substantive testing seeks to determine whether errors are present and, if so, their size. In auditing practice, the total amount of error in a single statement, denoted by θ, and the associated substantive testing are highly important to decision making. For instance, the test H_0: θ ≤ θ_m vs. H_1: θ > θ_m can be conducted in order to accept or reject the amount of error detected in the audit, where θ_m denotes the total amount of error the auditor deems material. Johnstone (1995) [1] presented auditing evidence showing that the classical hypothesis test is incoherent and that Bayesian techniques are to be preferred.
Monetary Unit Sampling (MUS), or equivalently Dollar Unit Sampling (DUS), is commonly used to obtain sample information. In DUS, the population size is the recorded book value (B) and the sample plan consists of selecting monetary (dollar) units with an equal chance of being selected. The amount of error for each dollar selected is the difference between its book value and its audit value. The taint of the randomly selected dollar unit is given by the quotient between the error and the book values. Most of the audited values will be correct, and so the associated errors will be zero. The taints in a dollar unit sample are recorded and used to draw inferences about the parameter of interest, i.e., the total amount of error. In practice, auditors usually assume that no amount can be over- or under-estimated by an amount greater than its book value. Therefore, the range of taints extends from −100 to +100 per cent in increments of one per cent: −100, −99, …, −1, 0, 1, …, 99, 100, and the proportions of each taint are p_{−100}, p_{−99}, …, p_{−1}, p_0, p_1, …, p_{99}, p_{100}. For a DUS sample of size n, the practitioner knows the observed number of tainted dollar units in the sample with i% taints, n_i, where 0 ≤ n_i ≤ n, i = −100, …, 0, …, 100, and ∑_{i=−100}^{100} n_i = n. In practice, B is very large in relation to the sample size n, and then the multinomial model adequately reflects the likelihood function:

f(n | p) = ( n! / ∏_{i=−100}^{100} n_i! ) ∏_{i=−100}^{100} p_i^{n_i},   (1)

where p = (p_{−100}, …, p_{100}) is a parameter of dimension 201 and n = (n_{−100}, …, n_{100}).
To complete a Bayesian analysis, a prior distribution is required, and this is frequently a conjugate Dirichlet prior. However, there are certain difficulties. On the one hand, quantifying the expert's opinion as a probability distribution is a difficult task, especially for complex multivariate problems. Furthermore, although the auditor usually has an intuitive understanding of the magnitude, i.e., the total amount, of the error θ, the individual proportions p_i will be unknown. Finally, the likelihood of the observed data depends on the parameters p_{−100}, …, p_{100}. In consequence, the analyst must consider a Bayesian scenario under partial prior information, and seek to combine prior information about θ with the sample information about the individual proportions.
In a non-Bayesian context, McCray (1984) [2] introduced a heuristic procedure to obtain a maximum likelihood function. Following Hernández et al. (1998) [3], we now propose a modification of the likelihood to make it compatible with prior information on θ and then perform a Bayesian analysis. The prior distribution for the total amount of error in the population is commonly asymmetrical and right-tailed, and statistically trained auditors can readily elicit values such as the mean and/or certain quantiles. In this paper, we propose to use as the prior the maximum entropy prior with a specified mean. The advantages of this objective, "automatised" prior are that it requires only a small amount of prior information and that it is computationally feasible.
The remainder of the paper is organized as follows. Section 2 outlines the technical results needed to derive the modified likelihood we combine with prior distributions. Section 3 shows how maximum entropy priors can be incorporated into the auditing context. Section 4 then presents some numerical illustrations of the method, and the results obtained are discussed in Section 5.
The Likelihood Function
Assume the joint probability mass function given in (1) and consider that there exists a measurable function ψ(p_{−100}, …, p_{100}) = θ such that the auditor has prior information about θ ∈ Θ, with Θ a discrete set of values of θ. Observe that by construction

θ = ψ(p) = (B/100) ∑_{i=−100}^{100} i · p_i.   (2)

The following notation will be used: Π denotes a separable metric space, A is the natural σ-field of subsets of Π, B is a sub-σ-field of A, A_b^+(Π) denotes the set of all real-valued functions f(p), p ∈ Π, which are nonnegative, bounded and A-measurable, and π is a probability measure on B. Theorem A1 ([5,6]) in the Appendix provides a modified likelihood function for the subparameter θ; the function f_B^π in Theorem A1 is the modified likelihood function desired. In fact, we have an A-measurable function ψ(p_{−100}, …, p_{100}), with A the usual Borel σ-field, and we also have prior information given on Θ with its usual σ-field. As Θ is discrete, all atoms of its σ-field are of the form {θ}, and therefore the sets ψ^{−1}({θ}) belong to the σ-field A. Let B be the sub-σ-field of A induced by ψ on Π. If we define the probability of a set ψ^{−1}({θ}) by the probability of {θ} (known a priori), we will have a probability measure on the sub-σ-field B, denoted by π. Furthermore, the sub-σ-field is generated by a countable partition, and in consequence the modified likelihood is given by

f_B^π(p) = sup { f(q) : q ∈ ψ^{−1}({ψ(p)}) },   (4)

where for simplicity we write f(p) to refer to the function in (1). Observe that the function in (4) is constant on every set ψ^{−1}({θ}), and thus we can write the modified likelihood as

L(θ) = sup { f(p) : ψ(p) = θ }.   (5)

We also note that expressions (4) and (5) are similar to empirical likelihood functions [7] and to the induced likelihood in the notation introduced by Zehna (1966) [8]. The likelihood function in (5) is B-measurable and compatible with the prior π, so Bayes' theorem now applies as

π(θ | n) = L(θ) π(θ) / ∑_{θ′ ∈ Θ} L(θ′) π(θ′).   (6)

We illustrate how to obtain the modified likelihood in (5) with a simulated example.
Example 1. Consider a DUS sample of 100 items in which no errors were discovered in 90 items, one taint is 10% in error, one more taint is 90%, and eight taints are −10% in error (understatement errors). We also assume that the monetary units are drawn from a population of accounts totaling B = $10^6. To find the likelihood of a value θ we solve the following optimization problem:

maximize (100! / (90! · 8!)) p_{−10}^8 p_0^{90} p_{10} p_{90}
subject to: p_{−10} + p_0 + p_{10} + p_{90} = 1, and
10,000 (−10 p_{−10} + 0 p_0 + 10 p_{10} + 90 p_{90}) = θ,

with all proportions nonnegative and less than one. For example, for a total amount of error θ = 12,000 the proportions obtained are p_{−10} = 0.075, p_0 = 0.894, p_{10} = 0.011 and p_{90} = 0.020, and the likelihood of this error is 0.014. All computations are easily obtained with Mathematica using the command NMaximize.
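For readers without Mathematica, the same constrained maximization can be reproduced in Python with SciPy. The sketch below is only illustrative: the SLSQP solver, the starting point, and the clipping guard inside the logarithm are implementation choices of ours, not part of the original analysis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Example 1: DUS sample of n = 100 items from a book value B = $10^6.
taints = np.array([-10.0, 0.0, 10.0, 90.0])   # observed taint values (%)
counts = np.array([8, 90, 1, 1])              # their observed frequencies
n, B = counts.sum(), 1e6

def log_multinomial(p):
    """Multinomial log-likelihood of the observed counts, as in (1)."""
    return (gammaln(n + 1) - gammaln(counts + 1).sum()
            + (counts * np.log(np.clip(p, 1e-300, 1.0))).sum())

def modified_likelihood(theta):
    """Modified likelihood (5): profile f(p) over the set
    {p : sum(p) = 1, (B/100) * sum(i * p_i) = theta}."""
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "eq",
             "fun": lambda p: (B / 100.0) * (taints @ p) - theta}]
    res = minimize(lambda p: -log_multinomial(p), counts / n,
                   bounds=[(0.0, 1.0)] * len(taints),
                   constraints=cons, method="SLSQP")
    return np.exp(-res.fun)

print(modified_likelihood(12_000.0))   # approx 0.014, as reported above
```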
The Maximum Entropy Priors
To apply Bayesian methods in auditing, a prior distribution must be assigned to the total error parameter θ. References [9,10], among others, have described how this might be done. In practice, however, Bayesian methods are not widely used because auditors frequently find it difficult to assess a prior probability function. They often lack statistical expertise in this respect, and so cannot easily assess hyperprior parameters, which might not have an intuitive meaning. In most cases, only certain descriptive summaries, such as the mean and/or median of a probability distribution, can be assigned straightforwardly. Thus, auditors tend to feel comfortable assessing certain values of the prior distribution while disregarding the other possible values of the parameter. In such a situation, the maximum entropy procedure might be an appropriate way to obtain the prior distribution required.
Let the parameter space Θ be an interval Θ = [θ_L, θ_U]. It is well known that the probability distribution π which maximizes the entropy with respect to the objective uniform prior on [θ_L, θ_U], subject to partial prior information given by

E_π(g_k(θ)) = ∫_Θ g_k(θ) π(θ) dθ = μ_k,  k = 1, …, m,   (7)

has the form

π(θ) ∝ exp( ∑_{k=1}^m λ_k g_k(θ) ),   (8)

where the λ_k are constants to be determined from the constraints in (7). Observe that the functions g_k can adopt several interesting expressions. For example, for g_1(θ) = θ and g_k(θ) = (θ − μ_1)^k, k = 2, …, m, the partial prior information consists of specifying m central moments of the distribution. Quantiles are also easy to incorporate by considering g_k(θ) = 1_{(θ_L, θ_k)}(θ). For practical applications and illustrative purposes we focus on situations where only the mean θ_0 is given, i.e., g_1(θ) = θ and μ_1 = θ_0. In such a case:

1. If θ_0 = (θ_L + θ_U)/2, then λ = 0 and π(θ) = 1/(θ_U − θ_L), that is, the uniform distribution on the interval (θ_L, θ_U).

2. If θ_0 ≠ (θ_L + θ_U)/2, then

π(θ) = λ e^{λθ} / (e^{λθ_U} − e^{λθ_L}),  θ_L ≤ θ ≤ θ_U,   (9)

where λ is obtained by solving the nonlinear equation

(θ_U e^{λθ_U} − θ_L e^{λθ_L}) / (e^{λθ_U} − e^{λθ_L}) − 1/λ = θ_0.   (10)
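Solving (10) numerically is straightforward with a one-dimensional root finder. The following sketch rescales θ to [0, 1] by the book value before root-finding; this rescaling is an assumption of ours, but it is consistent with the value λ = −25 used in the numerical illustration of Section 4 for a prior mean of $40,000 on a $10^6 book value.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_mean(lam, lo=0.0, hi=1.0):
    """Mean of the maximum entropy density pi(t) proportional to
    exp(lam * t) on [lo, hi]; cf. Equations (9) and (10)."""
    if abs(lam) < 1e-9:                      # lam -> 0 gives the uniform prior
        return 0.5 * (lo + hi)
    z = np.exp(lam * hi) - np.exp(lam * lo)  # normalizing difference
    return (hi * np.exp(lam * hi) - lo * np.exp(lam * lo)) / z - 1.0 / lam

B = 1e6              # recorded book value
theta0 = 40_000.0    # auditor's prior mean for the total error
mu = theta0 / B      # theta rescaled to [0, 1]

# mu < 1/2, so lam is negative; keep the bracket away from 0 to avoid
# catastrophic cancellation in maxent_mean.
lam = brentq(lambda l: maxent_mean(l) - mu, -500.0, -1e-3)
print(lam)           # approximately -25
```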
Numerical Illustrations
For illustrative purposes, we present a simulated audit situation in which two auditors have partial prior information about the mean and are comfortable using a maximum entropy prior in a DUS context. Let us consider the DUS data from an inventory with a reported book value of B = $10^6, a sample size of 100 items, and observed taints of 0, 5, 10, and 90 per cent in 94, 4, 1, and 1 cases, respectively. In order to decide whether to accept the auditee's aggregate account balance, the auditors then conduct the statistical hypothesis test of H_0: θ ≤ θ_m vs. H_1: θ > θ_m, where θ_m denotes an intolerable material error. Assume that a figure of five to seven per cent of the reported book value is a common value for this materiality. For instance, let us suppose that the auditors wish to test

H_0 : θ ≤ $50,000 vs. H_1 : θ > $50,000,   (11)

that is, θ_m = $50,000. Following (5), for every value of the total error θ the modified likelihood associated with the DUS data is obtained by solving

maximize p_0^{94} p_5^4 p_{10} p_{90}
subject to: p_0 + p_5 + p_{10} + p_{90} = 1, and
10,000 (0 p_0 + 5 p_5 + 10 p_{10} + 90 p_{90}) = θ,

with all proportions nonnegative and less than one. For a given maximum entropy prior on θ, we can now derive its posterior distribution using (6). Using Bayes' theorem, these priors can then be updated to posteriors conditioned on the data that were actually observed. To facilitate reproducibility of the results presented, a simplified code version is available as Supplementary Material to this paper.
To compare scenarios where no prior or only limited partial prior information is available, and so the auditors must base their decisions on the information in the data, we present the following situation.
Auditor #1 adopts a reference non-informative prior for the parameter θ, i.e., uniform on Θ. Observe that for a constant prior Bayes' theorem is applicable because the constant cancels out in (6), and the posterior distribution is equivalent to the normalized modified likelihood. Figure 1 shows the posterior distribution (in grey) of the total amount of error for the DUS data given above.
On the other hand, the partial prior information provided by Auditor #2 is given by the a priori mean θ_0 = $40,000. With this partial prior information, the maximum entropy prior for θ, deduced by solving Equation (10), corresponds to λ = −25. In practical applications, we suggest using a grid of 1000 total error points for a good approximation to the likelihood function. Figure 1 shows the prior and posterior distribution for Auditor #2 with the sample information considered above. Observe that when just a small amount of prior information is included via the mean, there are differences between the posterior distributions obtained by Auditors #1 and #2. The estimated mean total error, that is, the posterior mean of the distribution in each case, is $16,576.2 and $19,577.2, respectively, and so the posterior distribution for Auditor #1 is more right-skewed than that obtained by Auditor #2. The posterior probabilities of the null hypothesis are similar, presenting strong evidence for H_0 [11], although more so under the MEP. Table 1 details the posterior probability of the null hypothesis tested in (11). All computations were conducted using Mathematica (version 11.2). In practice, auditors commonly wish to obtain a high-probability quantile of the posterior distribution, say 0.95, and will then accept the accounting balance if this quantile represents a small proportion of the book value, for example no more than five per cent. In Table 1, which shows these quantiles, there is a significant difference between the non-informative and the maximum entropy cases, which represent 4.4% and 3.6%, respectively, of the recorded book value. In other words, the posterior probability of the actual total error in the accounting balance being less than $36,000 is 0.95, which represents a reduction of almost 18% in the 95%-quantile compared with the non-informative scenario.
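A compact sketch of the whole computation (modified likelihood on a grid, MEP prior, posterior summaries) is given below. The grid range and resolution are our own choices, and the constant multinomial coefficient is dropped because it cancels on normalization, so the printed summaries should only approximately match Table 1. Setting lam = 0.0 recovers Auditor #1's non-informative analysis, since a constant prior cancels in (6).

```python
import numpy as np
from scipy.optimize import minimize

# DUS data of this section: B = $10^6, n = 100 items,
# taints 0, 5, 10, 90 (%) observed 94, 4, 1 and 1 times.
taints = np.array([0.0, 5.0, 10.0, 90.0])
counts = np.array([94, 4, 1, 1])
n, B = counts.sum(), 1e6

def log_profile_like(theta):
    """Modified (profile) log-likelihood, up to an additive constant."""
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "eq",
             "fun": lambda p: (B / 100.0) * (taints @ p) - theta}]
    res = minimize(lambda p: -(counts * np.log(np.clip(p, 1e-300, 1.0))).sum(),
                   counts / n, bounds=[(0.0, 1.0)] * 4,
                   constraints=cons, method="SLSQP")
    return -res.fun

grid = np.linspace(100.0, 150_000.0, 400)  # grid of total-error values
lam = -25.0                                # MEP prior with mean $40,000
log_post = np.array([log_profile_like(t) for t in grid]) + lam * grid / B
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("Pr(H0: theta <= 50,000 | data):", post[grid <= 50_000.0].sum())
print("0.95 posterior quantile:",
      grid[np.searchsorted(np.cumsum(post), 0.95)])
```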
The advantages of the proposed model are highlighted by comparing it with conventional methods, namely the conjugate Bayesian approach and the classical statistical procedure.
Accordingly, let us first consider a conventional conjugate Bayesian model with a multinomial sampling distribution and a non-informative conjugate Dirichlet prior. A burn-in of 10,000 updates followed by a further 50,000 updates produces the parameter estimates θ_{0.95} = $48,880 and Pr{H_0 | DUS data} = 0.954 (the WinBUGS code is available as Supplementary Material to this paper). Therefore, both of the new Bayesian upper bounds shown in Table 1 are tighter than the above conventional Bayesian bound. Furthermore, the Bayesian Multinomial-Dirichlet model is fairly sensitive to the dimension of p, a concern which does not arise in the proposed formulation. For instance, the above numerical illustration developed with a non-informative Dirichlet prior over the range 0-100 obtains an unrealistic 95% upper bound of $295,900, in contrast with the MEP upper bound of $36,000.
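For comparison, the conventional conjugate posterior does not actually require MCMC when the Dirichlet prior is placed only on the observed taint categories: conjugacy allows direct sampling. The sketch below assumes a flat Dirichlet(1, 1, 1, 1) prior over the four observed categories; the paper's exact prior specification lives in its supplementary WinBUGS code, so the reproduced figures are approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
taints = np.array([0.0, 5.0, 10.0, 90.0])  # observed taint values (%)
counts = np.array([94, 4, 1, 1])           # observed frequencies, n = 100
B = 1e6                                    # recorded book value

# Conjugacy: with a Dirichlet(1,...,1) prior, the posterior of the
# category proportions is Dirichlet(1 + counts); draw from it directly.
p = rng.dirichlet(1 + counts, size=100_000)
theta = (B / 100.0) * (p @ taints)         # implied total error per draw

print("95% upper bound:", np.quantile(theta, 0.95))
print("Pr(H0: theta <= 50,000):", np.mean(theta <= 50_000.0))
```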
On the other hand, under a classical approach and following Fienberg et al. (1977) [12], an upper confidence bound at confidence level 1 − α with the Stringer method, based on the total overstatement error, is given by

S(1 − α) = B [ π_{0,1−α} + ∑_{i=1}^{m} (π_{i,1−α} − π_{i−1,1−α}) t_i ],

where π_{i,1−α} denotes the 1 − α upper confidence bound for the population proportion when i errors are found in the sample and t_1 ≥ … ≥ t_m are the observed taints arranged in decreasing order. Stringer used a Poisson approximation to obtain these quantities. For the case of this numerical illustration, the bound obtained is $43,950. Therefore, a "classical" auditor can conclude, with at least 95 per cent confidence, that the total overstatement error in the population does not exceed $43,950. However, for α = 0.01 we find that the 99 per cent upper confidence bound is $62,655, and the null hypothesis in (11) must then be rejected. This is somewhat confusing, as Johnstone [1] pointed out: ". . . results close about the critical accept/reject partition in conventional hypothesis tests can often result in reversed decisions . . . ". Furthermore, Table 1 shows that the Bayesian MEP upper bound is tighter than the earlier classical bound.
Discussion
In this paper, we propose a genuine Bayesian approach as an appropriate formulation for addressing statistical auditing problems. This formulation presents several advantages for the substantive test defined in Section 1: (i) it allows the incorporation of the auditor's judgements about the materiality of the error and makes it easy to derive a reasonable prior distribution; (ii) the Bayesian methodology proposed provides a sensible and straightforward formulation; (iii) the posterior quantities derived are very efficient compared to the existing classical methods, especially when errors are small. The results obtained by our procedure appear to be more reasonable than those achieved by conventional ones such as the classical (Stringer bounds) and the Bayesian (Dirichlet bounds) approaches.
Conventional (conjugate) Bayesian analysis under the DUS methodology, based on the multinomial likelihood, needs a Dirichlet prior distribution to be elicited, a requirement which in practice is unrealistic when 201 parameters are involved [13]. Elicitation in a high-dimensional parametric space is a complex task [14], but the model presented in this paper overcomes this difficulty. Obviously, alternative priors can be considered. As has been observed elsewhere, objective priors for discrete parameters are starting to be considered both in the univariate scenario [15] and in the multivariate case [16]. This constitutes an interesting line of research for future investigation.
As reported in the NRC Panel review [17], mixture distributions can be appropriate in the audit process. In practice, auditors have found that the distribution of non-zero error amounts differs markedly between types of accounting populations, for instance, between receivables and inventory [18]. This fact introduces additional complexity when we wish to model the audit process without considering the source of information (receivables or not, etc.). A further advantage of the proposed model is that it includes both over- and understatement errors.
The approach we describe in the paper is "automatic" in the sense that the model incorporates the sample information available and the partial prior information as the prior mean, and no more. No distributional assumptions are required for the likelihood function and no assumptions are made as to the priors. Given this absence of assumptions and the simplicity of the formulation, this approach may be considered reliable for audit purposes. Focusing on hypothesis testing, this paper provides a theoretical basis for using the heuristic quasi-Bayesian model [2]. The numerical illustrations presented suggest that the resulting 95%-quantiles are consistent with the priors and the likelihood considered. In both of these priors (uniform and MEP), the posterior distributions present a moderate right skew towards a higher level of error; only one taint of 90% is observed in the sample and the model is sensitive to this taint. The use of the mean as prior information yields an evident reduction in the 95% upper bound. An interesting area for further investigation of these audit test problems would be to incorporate another intuitive descriptive summary, such as the median or the mode [19].
"year": 2018,
"sha1": "4006a99112eb9891ea71dbc48de36b3a51fe6d9b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/e20120919",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4006a99112eb9891ea71dbc48de36b3a51fe6d9b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Upregulation of AXL and β-catenin in chronic lymphocytic leukemia cells cultured with bone marrow stroma cells is associated with enhanced drug resistance
Sutapa Sinha, Charla R. Secreto, Justin C. Boysen, Connie Lesnick, Zhiquan Wang, Wei Ding, Timothy G. Call, Saad J. Kenderian, Sameer A. Parikh, Steven L. Warner, David J. Bearss, Asish K. Ghosh and Neil E. Kay
Correspondence: Neil E. Kay (kay.neil@mayo.edu), Division of Hematology, Mayo Clinic, Rochester, MN, USA

Despite the advent of even more effective therapies, Chronic Lymphocytic Leukemia (CLL) is still incurable and patients often develop drug resistance. We and others have found that bone marrow stromal cells (BMSCs) are excellent models for assessing the mechanism(s) by which stroma cells nurture CLL B-cells, and we have shown that BMSCs protect leukemic B-cells from spontaneous and drug-induced apoptosis [1]. The leukemic B-cell derives survival signals from stromal cells, and the bone marrow site is able to harbor residual leukemic B-cells protected from chemotherapy. Prior evidence indicates that the facilitation of residual disease burden may be a key pathway to clonal evolution and to ultimate clinical relapses difficult to treat in CLL [2].
To further delineate additional resistance mechanisms, we evaluated alterations of critical survival pathways, including AXL [3,4], in leukemic B-cells upon co-culture with BMSCs. All experimental details are provided in the supplement. We co-cultured primary CLL B-cells (Supplementary Table 1) with BMSCs derived from healthy subjects or untreated CLL patients (Supplementary Table 2) and compared them to CLL B-cells cultured alone for 48 h. CLL B-cells were separated from BMSCs and analyzed by western blot (WB) analysis. A significant increase of AXL expression in post co-cultured CLL B-cells was detected compared to CLL B-cells cultured alone (Fig. 1A). Further analysis also found an increased accumulation of β-catenin in post co-cultured leukemic B-cells from basal levels (Fig. 1A). However, we could not detect any significant alteration of cell surface AXL levels on these leukemic B-cells by flow analysis (Fig. 1B, C), indicating an increase in AXL expression restricted to the cytoplasm. Additionally, we tested other malignant cell types and observed a significant increase of both AXL and β-catenin in B lymphoma cell lines (Mino, Raji, and SU-DHL4) upon their co-culture with BMSCs (Fig. 1D). Furthermore, by RT-PCR using specific sets of primers, we observed a significant increase of AXL mRNA levels but not β-catenin mRNA levels in post co-cultured CLL B-cells compared to that in CLL B-cells cultured alone (Fig. 1E, F). Next, we examined the expression of AXL and β-catenin in CLL B-cells after their exposure to BMSCs for 48 h using either a direct co-culture method or co-culture of CLL B-cells with BMSCs separated via transwells. Interestingly, we found increased expression of both AXL and β-catenin in CLL B-cells only when CLL B-cells were in direct contact with BMSCs but not when separated by transwells (Fig. 1G).
To explore if the CLL B-cell/BMSC interaction induces activation of AKT and ERK-1/2 MAPK, post co-cultured leukemic B-cells were analyzed for P-AKT and P-ERK-42/44 by WB. We detected significant increases in P-ERK-42/44 but not in P-AKT(S473) levels in co-cultured CLL B-cells compared to CLL B-cells cultured alone (Fig. 1H). Our further analysis to define the AXL activation status revealed no change in P-AXL(Y702), one of the critical activation sites within the kinase domain of AXL, in co-cultured CLL B-cells as compared to CLL B-cells cultured alone (Fig. 1I), despite significant overexpression of total AXL (Fig. 1I). Further, total P-AXL levels were determined
by immunoprecipitation experiments. Consistent with our findings in Fig. 1I, we also could not detect any significant alteration in the total P-AXL level in pre- or post-co-cultured CLL B-cells (Fig. 1J). Therefore, the function of the increased AXL levels in co-cultured CLL B-cells is likely independent of AXL tyrosine kinase activity [5] and is a subject of our future studies.
Upregulation of AXL and β-catenin is associated with the induction of resistance to multiple chemotherapeutic agents in human cancer cells [6-9]. To see if drug exposure caused further increases in AXL and β-catenin, we treated co-cultured CLL B-cells with the chemotherapy drugs used for CLL (fludarabine, chlorambucil) at sub-lethal doses (determined from the dose-response curves; Supplementary Fig. 1C, D); sub-lethal doses of novel agent drugs, including ibrutinib [3], the AXL inhibitor TP-0903 [3], or venetoclax, also produced upregulation of AXL and β-catenin over that seen with BMSCs alone (Fig. 1M-O). Thus, in vitro drug exposure facilitates further increases in both AXL and β-catenin in co-cultured CLL B-cells, consistent with cellular drug resistance.
Since it is known that activated ERK and stabilized β-catenin translocate to the nucleus, resulting in transcriptional activation of their target genes [10,11], we subjected CLL B-cells to cytoplasmic/nuclear fractionation following co-culture [12]. Indeed, we found increased levels of both active (non-phosphorylated) β-catenin (Ser33/37/Thr41) and P-ERK-42/44 in the nuclear fractions of CLL B-cells co-cultured with BMSCs compared to CLL B-cells cultured alone (Fig. 2A); however, increased AXL expression was found only in the cytosolic fractions (Fig. 2B). Activated ERK can inactivate GSK-3β via phosphorylation, resulting in the accumulation of β-catenin [13]. Here we found significant increases of P-GSK-3β(Ser9) together with increases in P-ERK-42/44 and β-catenin expression (Fig. 2C) in co-cultured CLL B-cells. CLL B-cells were also treated with the ERK upstream MEK inhibitor PD98059, in the presence or absence of BMSCs. After 48 h, we found decreases in the P-ERK-42/44 level and accompanying decreases in both AXL and β-catenin for CLL B-cells in the presence of BMSCs (Fig. 2D). One study reported a positive correlation between c-Jun and AXL expression levels in head and neck squamous cell carcinoma patients [14]. We indeed found increases of P-c-Jun(S73) protein, albeit at variable levels, in CLL B-cells co-cultured with BMSCs (Fig. 2E). We also detected increased levels of P-c-Jun(S73) in Mino, Raji, and SU-DHL4 cells co-cultured with BMSCs versus these cells cultured alone (Fig. 2F). Additionally, treatment of CLL B-cells in co-culture with the c-Jun upstream JNK inhibitor SP600125 reduced the AXL level and, variably, β-catenin expression (Fig. 2G). Moreover, there is evidence that AXL can modify β-catenin levels [15], so we analyzed whether AXL is upstream of β-catenin in Mino cells. We co-cultured Mino cells with BMSCs after being transduced with a lentivirus expressing Cas9 and guide RNAs targeting AXL [as efficient transfection and CRISPR experiments were not feasible in primary CLL B-cells]. CRISPR-mediated reduction of AXL expression in Mino cells did reduce β-catenin expression (Fig. 2H). Overall, these data suggest that the upregulation of AXL and β-catenin in CLL B-cells is related to the combined effects of c-Jun and ERK activation and that the increase in AXL in CLL B-cells is able to further modify β-catenin levels.

Fig. 1 AXL and β-catenin expression and their role in CLL B-cells co-cultured with BMSCs. A Increased AXL and β-catenin expression in CLL B-cells. AXL and β-catenin protein levels were determined using separated CLL B-cell lysates after 48 h of co-culture. Actin was used as a loading control. B Surface AXL expression on CLL B-cells with or without co-culture with BMSCs. Percent expression of AXL on the surface of CD5+CD19+ B-cells (n = 7) was determined after 48 h of co-culture with or without BMSCs (n = 6) by flow cytometry. C Representative histograms for surface AXL staining on B-cells (red) from two representative CLL patients (P36, P37) compared to an isotype control (black) (I) and when co-cultured with (red) or without (blue) BMSC (P17) (II). D Co-culture of Mino, Raji, and SU-DHL4 cells with BMSCs. Lymphoma B-cell lysates from Mino, Raji, or SU-DHL4 cells co-cultured with or without primary BMSCs were analyzed for AXL and β-catenin expression by WB analysis. Actin was used as a loading control. E, F AXL and β-catenin mRNA expression in CLL B-cells in co-culture with or without BMSCs. AXL and β-catenin mRNA expression was determined in the CLL B-cells by real-time (RT)-PCR. Results are presented as mean values with standard deviation (SD). G AXL and β-catenin expression in CLL B-cells; direct contact versus transwell. AXL and β-catenin protein levels were examined in lysates of CLL B-cells cultured using transwells or in direct contact with BMSCs for 48 h. Actin was used as a loading control. H Activation of ERK-42/44 in CLL B-cells co-cultured with BMSCs. CLL B-cell lysates were analyzed for the status of P-ERK-42/44 and P-AKT(S473). Total ERK-42/44 and AKT were used as loading controls. I P-AXL(Y702) levels in CLL B-cells co-cultured with or without BMSCs. CLL B-cell lysates were examined for the levels of P-AXL(Y702), AXL, P-AKT(S473), AKT, and β-catenin. Actin was used as a loading control. J Total tyrosine phosphorylation of AXL in CLL B-cells co-cultured with or without BMSCs. AXL was immunoprecipitated from the CLL B-cell lysates, followed by Western blot analysis using an anti-phosphotyrosine (4G10) antibody. The blot was stripped and reprobed with an antibody to AXL. IgG HC was used as a loading control.
To further explore the role of AXL and β-catenin in leukemic B-cell drug resistance, we cultured CLL B-cells alone or with BMSCs in the presence of fludarabine, chlorambucil, venetoclax, or TP-0903, alone or in combination with the MEK/ERK-42/44 inhibitor PD98059, and measured CLL apoptosis. Co-culture with BMSCs, as expected, protected CLL B-cells from drug-induced apoptosis (Fig. 2I-L). However, in the presence of PD98059, which downregulated both AXL and β-catenin expression in CLL B-cells in co-culture (Supplementary Fig. 1E-H), the stromal cell-mediated protection of CLL B-cells was not as effective in suppressing drug-induced killing (Fig. 2I-L). Importantly, PD98059 treatment alone had minimal or no effect on co-cultured CLL B-cell apoptosis (Fig. 2I-L). Moreover, when CLL B-cells not exposed to BMSCs, and thus expressing basal levels of both AXL and β-catenin, were treated with fludarabine, chlorambucil, venetoclax, or TP-0903 in the presence of PD98059, there was no enhancement of CLL B-cell sensitivity towards these drugs (Supplementary Fig. 1I). Thus, microenvironment-mediated signaling via BMSCs, leading to increased AXL and β-catenin, enhances the drug resistance of leukemic cells.
Finally, we studied CLL B-cells from five CLL patients where we had access to blood samples prior to therapy and then while being treated or after treatment (Supplementary Table 3). We found increases in AXL, β-catenin, P-ERK-42/44, and P-c-Jun(S73) albeit at variable levels, after therapy (Fig. 2M). This finding indicates that AXL and β-catenin presence in CLL B-cells may be a biomarker of drug resistance but further association studies are needed.
In total, our study has found that the interactions between CLL B-cells and stromal cells in the microenvironment result in modifications of pathways in the leukemic cell known to be associated with drug resistance in human malignancies. Our model highlighting the CLL-BMSC interaction and the subsequent modification of AXL and β-catenin levels is shown in Fig. 2N. In this work, we also found that co-culture of human lymphoma cell lines results in enhanced AXL and β-catenin expression, suggesting that these biologic phenomena are not limited to CLL B-cells, which extends the clinical importance of our findings. Further study of the biology resulting from this cell-cell interaction and its relationship to CLL drug resistance for patients on novel agents will add to our knowledge of the mechanisms of persistence of disease even in the era of ever more effective therapeutic approaches.

Fig. 2 Regulation of AXL and β-catenin expression and their role in the drug resistance of CLL B-cells in the presence of BMSCs. A, B Expression of β-catenin and P-ERK-42/44 in nuclear and cytosolic fractions of CLL B-cells co-cultured with or without BMSCs. Nuclear and cytosolic fractions from CLL B-cells were analyzed to detect β-catenin, P-β-catenin(Ser33/37/Thr41), P-ERK-42/44, ERK-42/44, and AXL. LaminA and actin were used as loading controls. C Increased P-GSK3β in CLL B-cells co-cultured with BMSCs. CLL B-cell lysates were analyzed for P-GSK3β(Ser9), GSK3β, AXL, β-catenin, P-ERK-42/44, and ERK-42/44. Actin was used as a loading control. D Inhibition of ERK-42/44 signaling reduces expression of both AXL and β-catenin in CLL B-cells. PD98059 (70 μM)-treated co-cultured CLL B-cell lysates were analyzed for the levels of AXL, β-catenin, P-ERK-42/44, and ERK-42/44. Actin was used as a loading control. E Increased c-Jun activity in CLL B-cells co-cultured with BMSCs. CLL B-cell lysates were analyzed for P-c-Jun(S73) and total c-Jun expression levels 48 h after co-culture. Actin was used as a loading control.
Financial support
Supported in part by Tolero Pharmaceuticals Inc., Mayo Clinic intramural funding, AKG grant CA170006, and in part by the Henry J. Predolin Foundation.
"year": 2021,
"sha1": "8deb8772fe2f640415da73401f952eabfc44396b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41408-021-00426-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9145ab62747573ad4503b1040c820271fd8ec0fd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Understanding the formation mechanism of lipid nanoparticles in microfluidic devices with chaotic micromixers
Lipid nanoparticles (LNPs) or liposomes are the most widely used drug carriers for nanomedicines. The size of LNPs is one of the essential factors affecting drug delivery efficiency and therapeutic efficiency. Here, we demonstrated the effect of lipid concentration and mixing performance on the LNP size using microfluidic devices, with the aim of understanding the LNP formation mechanism and controlling the LNP size precisely. We fabricated microfluidic devices with different depths, 11 μm and 31 μm, of their chaotic micromixer structures. According to the LNP formation behavior results, by using a low concentration of the lipid solution and the microfluidic device equipped with the 31 μm chaotic mixer structures, we were able to produce the smallest-sized LNPs, yet with a narrow particle size distribution. We also evaluated the mixing rate of the microfluidic devices using laser scanning confocal microscopy, and we estimated the critical ethanol concentration for controlling the LNP size. The critical ethanol concentration range was estimated to be 60-80% ethanol. Ten-nanometer tuning of the LNP size was achieved by setting the optimum residence time at the critical concentration using the microfluidic devices with chaotic mixer structures. The residence times at the critical concentration necessary to control the LNP size were on the 10, 15-25, and 50 ms time-scales for 30, 40, and 50 nm-sized LNPs, respectively. Finally, we proposed an LNP formation mechanism based on the observed LNP formation behavior and the critical ethanol concentration. The precisely size-controlled LNPs produced by the microfluidic devices are expected to become carriers for next-generation nanomedicines, and they will lead to new and effective approaches for cancer treatment.
Introduction

Lipid nanoparticles (LNPs) or liposomes are the most widely used drug carriers for nanomedicines [1-3]. LNPs allow two targeting modes: passive targeting by the enhanced permeability and retention (EPR) effect, and active targeting using surface modification with ligands. In addition, LNPs are able to encapsulate a variety of materials such as low-molecular-weight compounds [4,5], gold nanoparticles [6], peptides [7], DNA [8,9], and RNA [10,11]. These features make it possible to achieve high flexibility in the design of LNP-based nanomedicines, and LNPs have been reported to achieve good therapeutic effects [12-15]. The nanomedicine size is also considered to be a significant factor influencing the therapeutic effects, because many large-sized nanomedicines are trapped and filtered out by the mesh-like structures of the spleen. On the other hand, nanomedicines smaller than 10 nm are removed by the lymphatic system. Suitably sized nanomedicines can effectively accumulate in target organs and produce high therapeutic effects.
Recently, the dependence of the penetration efficiency in tumor tissues on the nanomedicine size has attracted attention for the development of next-generation nanomedicines [16,17]. In the case of micelle-based nanoparticles, 30 nm-sized micelles showed higher penetration efficiency than 70 nm-sized micelles did [16]. We also reported the effect of the LNP size on the penetration efficiency into tumor tissues in animal tests [17]. We intravenously administered 40 and 70 nm-sized LNPs encapsulating siRNA to ICR mice, and then we evaluated the intrahepatic distribution of the siRNA. Although both sizes of LNPs showed gene-silencing activity, 40 nm-sized LNPs were able to deliver siRNA to the hepatocytes more effectively. Therefore, precise control of particle size is desired for the development of next-generation nanomedicines; such size-controlled nanomedicines can be expected to realize particle-size-dependent drug delivery systems. However, precise size control of LNPs, for example tuning the LNP size at 5-10 nm intervals, is difficult with conventional LNP preparation methods.
Microfluidic-based techniques are expected to be excellent methodologies not only for LNP synthesis but also for extracellular vesicle separation, including exosomes [18-28]. LNPs are easily produced by injection of organic solutions containing lipids and aqueous solutions into a microfluidic device. Typically, the LNP size is controlled by the flow rate of the solutions and the flow rate ratio (FRR: the flow rate of the aqueous solution to the flow rate of the lipid solution), and rapid mixing is also a significant factor for controlling the LNP size and producing small-sized LNPs. To enhance the mixing efficiency, chaotic mixer structures have been employed in microfluidic devices [20,23,24], and 20 nm-sized LNPs were formed by applying extremely high flow rate conditions to the microfluidics. However, the mixing performance of the chaotic mixer under high flow rate conditions decreases due to the fluid dynamics [29,30]. We therefore previously investigated the effect of the chaotic mixer on the LNP size and size distribution. We found that complete mixing of the solutions, which were saline and the lipid/ethanol solution, was not necessary for controlling the LNP size and producing small-sized LNPs [20]. In addition, we proposed an LNP formation mechanism in the microfluidic device and a concept of a critical concentration for controlling the LNP size. In brief, the LNPs are formed by the following processes in the microfluidic device: aggregation of lipid molecules, formation of intermediate disk-like structures, fusion of these intermediate disk-like structures, and transformation of the intermediate disk-like structures into closed LNPs. The intermediate disk-like structures could be formed and were stable at the critical concentration, and the lifetime of the intermediate disk-like structures dominated the LNP size. In spite of the indispensability of the critical concentration for precise size tuning of LNPs, the critical concentration range is not well understood. Understanding the LNP formation mechanism and the critical concentration allows us to develop high-performance LNP production systems using microfluidics, leading to the next generation of LNP-based nanomedicines.
In this study, we investigated the effect of lipid concentration and the mixing performance of microfluidic devices on the LNP formation behavior. We fabricated three types of microfluidic devices: two microfluidic devices with chaotic micromixer structures of different depths, 11 μm or 31 μm, and a microfluidic device without micromixers. The LNP size and the mixing rate were measured using the microfluidic devices while changing the flow rate conditions. The LNP formation mechanism and the critical concentration were discussed using the relationship between the formed LNP size and the dilution rate of ethanol.
Fabrication of the microfluidic devices
The master molds of the microfluidic devices with chaotic micromixers were fabricated by two-step photolithography [27]. The master molds were made from SU-8 3010 and 3050 (Nippon Kayaku Co., Ltd., Tokyo, Japan). First, SU-8 3050 was poured onto 3-in silicon wafers (SUMCO Co., Tokyo, Japan) to make the first SU-8 layer. The silicon wafers were spin-coated using a spin coater (MS-A100, Mikasa Shoji, Co., Ltd., Tokyo, Japan) to a thickness of 79 μm, and then the wafers were baked on a hot plate for 30 min to evaporate the solvent. These pre-baked silicon wafers were exposed to UV light with a mask aligner (M-1S, Mikasa Shoji) through photomasks (12700 dpi, Unno Giken Co., Ltd., Tokyo, Japan). Then, the silicon wafers were post-baked on the hot plate and SU-8 3010 or 3050 was poured again onto the wafers to make the second SU-8 layer. These silicon wafers were then spin-coated to a thickness of 11 or 31 μm and underwent a second pre-baking. Photomasks for the second SU-8 layer (the chaotic micromixer part) were aligned and the wafers were exposed to UV light. After the post-baking and developing processes, the SU-8 molds were treated with a vapor of trichloro(1H,1H,2H,2H-perfluorooctyl)silane (Sigma-Aldrich, St. Louis, MO, USA). Polydimethylsiloxane (PDMS; SILPOT 184 W/C, Dow Corning Toray Co., Ltd., Tokyo, Japan) was cast onto the SU-8 mold and cured in an oven at 70˚C for 1 h. The PDMS replica was cut out from the SU-8 mold and holes for the inlets and the outlet were punched out. The PDMS replica was bonded to a glass substrate (S1111, Matsunami Glass Ind., Ltd., Osaka, Japan) using an oxygen plasma treatment apparatus (CUTE-1MP/R, Femto Science, Gwangju, Korea). Poly(etheretherketone) (PEEK) capillaries (inner diameter = 300 μm, outer diameter = 500 μm) were purchased from the Institute of Microchemical Technology Co., Ltd. (Kanagawa, Japan). The PEEK capillaries were connected to the inlets and the outlet of the microfluidic device and secured with superglue. The master mold of the microfluidic device without micromixers was fabricated by the standard photolithography method, and the device was made by the same replica molding procedure as described above.
Synthesis of the LNPs
POPC was dissolved at 5, 10, or 20 mg/mL in ethanol. Saline was prepared by dissolving sodium chloride at 154 mM in ultrapure water (Direct-Q UV system, EMD Millipore Co., Billerica, MA). Fig 1 shows a schematic illustration of the experimental setup and the design of the microchannel structure. We used three types of microfluidic devices: two devices with chaotic micromixer structures of different depths, 11 μm (the CM_11 device) or 31 μm (the CM_31 device), and a device without micromixers (the NM device). The total length of the microchannel was 110 mm, and the CM_11 and CM_31 devices were equipped with 69 cycles of the basic mixer structure. The width and spacing of the micromixers were 50 μm. Separate syringes (GASTIGHT 1002, Hamilton Inc., Reno, NV, USA) were filled with the lipid solution and saline, and the syringes were connected to the microfluidic device. We used syringe pumps (Model 100, BAS Inc., Tokyo, Japan) to feed the solutions into the microfluidic device. LNPs were continuously formed by mixing the solutions in the microfluidic device. The LNP solution was collected in a microtube at the outlet of the PEEK capillary. The collected LNP solution was stored in a refrigerator until particle size measurement. The size of the LNPs was measured by dynamic light scattering (DLS) using a Zetasizer Nano ZS ZEN3600 instrument (Malvern Instruments, Worcestershire, UK).
Evaluation of the mixing performance
A mixture of DOPC and Rho-PE dissolved in ethanol was employed as the lipid solution for evaluating the mixing performance. The lipid solution was 10 mg/mL DOPC containing 0.1 mol% Rho-PE. DOPC and Rho-PE were dissolved in chloroform in a centrifuge tube, and dry nitrogen gas was blown through to evaporate the chloroform. To remove the chloroform completely, the centrifuge tube was kept overnight in a desiccator under vacuum maintained by a rotary pump. After the chloroform was removed, an appropriate volume of ethanol was injected into the centrifuge tube and the lipid solution was shaken vigorously several times. We measured the fluid dynamics in the microfluidic devices using a laser scanning confocal microscope (A1R, Nikon, Tokyo, Japan). The scanning region of the x-y plane was set at 512 × 256 pixels. Fluorescence images at a cross section of the microchannel were obtained at 5 μm step intervals (z-axis: depth of the microchannel), and the scanning speed was set at 2 frames/s. We evaluated the mixing performance of the three microfluidic devices: the two with 11 μm or 31 μm deep chaotic micromixer structures (CM_11 and CM_31) and the device without chaotic micromixers (NM). The scanning region along the z-axis was set at 100, 120, and 90 μm for the CM_11, CM_31, and NM devices, respectively. The fluorescence images were analyzed using ImageJ (NIH) to obtain the gray-scale intensity. The mixing rate was calculated from the following equation [31]:

\[ \text{Mixing rate}\;[\%] = \left( 1 - \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( \frac{I_i - I_i^{\mathrm{Perf.Mix}}}{I_i^{0} - I_i^{\mathrm{Perf.Mix}}} \right)^{2} } \right) \times 100 \qquad (1) \]

where N, I_i, I_i^0, and I_i^{Perf.Mix} are the total number of pixels, the gray-scale intensity at pixel i, the gray-scale intensity at pixel i without mixing or diffusion, and the gray-scale intensity of the completely mixed solution at pixel i, respectively. A mixing rate of 80% was regarded as complete mixing.
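The following is a minimal numpy sketch of the mixing-rate calculation, assuming the three intensity arrays have already been extracted from the confocal images (e.g., exported from ImageJ). The array names are ours, and the per-pixel normalized form shown in Eq (1) above is a reconstruction of the standard mixing index; the exact form in ref [31] may differ in detail.

```python
import numpy as np

def mixing_rate(I, I0, I_perf):
    """Mixing rate per the reconstructed Eq (1): 1 minus the RMS of the
    normalized deviation of each pixel from the perfectly mixed intensity.

    I       -- measured gray-scale intensities (1D or 2D array)
    I0      -- intensities with no mixing or diffusion
    I_perf  -- intensities of the completely mixed solution
    Returns the mixing rate as a percentage (100% = fully mixed).
    """
    norm = (I - I_perf) / (I0 - I_perf)
    return (1.0 - np.sqrt(np.mean(norm ** 2))) * 100.0

# Toy example: intensities halfway between unmixed (255) and fully
# mixed (128) should give a mixing rate of about 50%.
I = np.full(256, 191.5)
print(mixing_rate(I, np.full(256, 255.0), np.full(256, 128.0)))  # ~50.0
```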
Effect of lipid concentration on LNP formation
First, we focused on the effect of lipid concentration on the LNP size. The CM_31 device was used for this experiment. Fig 2(a) shows the LNP size distributions at a flow rate of 100 μL/min and flow rate ratios (FRR: the flow rate of the aqueous solution to the flow rate of the lipid solution) of 3 and 9. LNP size increased with increasing lipid concentration. The smallest LNPs were produced by the 5 mg/mL POPC/ethanol solution, regardless of the FRR. Then, we changed the flow rate to 500 μL/min, because the flow rate condition affects the LNP size. Fig 2(b) shows the LNP size distributions at a flow rate of 500 μL/min and FRRs of 3 and 9. We observed similar LNP formation behavior at 100 μL/min and 500 μL/min: low lipid concentration or high FRR conditions formed small-sized LNPs. We summarize the effect of lipid concentration on LNP size under the different FRRs in Fig 3. When we used 5 mg/mL POPC/ethanol as the lipid solution, the LNPs formed at 100 μL/min were almost the same size as those formed at 500 μL/min. On the other hand, the 10 and 20 mg/mL lipid solutions produced slightly larger LNPs at 100 μL/min than at 500 μL/min; the size differences between the two flow rates were approximately 5 nm and 10 nm for the 10 and 20 mg/mL solutions, respectively. We also calculated the particle size polydispersity index (PDI) to evaluate the uniformity of the particle size distribution. The PDI values were mostly smaller than 0.1, as shown in S1 Fig. These results suggest that the lipid concentration is one of the essential factors for controlling LNP size. We were able to produce 30 nm-sized LNPs at high FRR and high flow rate conditions; such LNPs can penetrate tumor tissues effectively, even though their silencing efficiency is slightly reduced compared with 70 nm LNPs [13]. In this study, the size of the LNPs was measured by DLS, because DLS is the most widely used LNP size evaluation method. To observe the actual LNP size, we also measured negatively stained LNPs by TEM. LNPs of 20-30 nm were mostly observed in the TEM analysis, although some LNP shapes were deformed by drying under vacuum (data not shown). Consequently, LNP size was tuned precisely from 25 nm to 80 nm by controlling the flow conditions and the lipid concentration. Previously, we proposed an LNP formation mechanism in the microchannel [20] based on reported molecular dynamics (MD) simulations [32][33][34]. Lipid molecules gradually self-assemble upon mixing of the aqueous and lipid solutions, because lipids are amphiphilic molecules and cannot dissolve in aqueous solutions. The self-assembled intermediate disk-like structures are called bilayered phospholipid fragments (BPFs) and are thermodynamically semi-stable [35,36]. The BPFs grow and finally close up to form the LNPs. We found that the BPFs fuse and grow to a limiting size, which is assumed to be semi-stable under the described conditions. A large amount of BPFs makes it possible to grow larger-sized LNPs through fusion of the BPFs. For this reason, the LNPs produced at a high lipid concentration are larger than those produced at a low lipid concentration. We therefore assume that the dilution rate of ethanol affects the LNP size.
Effect of mixing performance on the LNP formation
We fabricated three types of microfluidic devices, the CM_11, CM_31, and NM devices, and used them with the aim of elucidating the effect of mixing, or the dilution rate of ethanol, on the LNP size. The flow rate and the FRR were also changed to confirm the effect of the dilution rate on LNP size. Fig 4(a) shows the LNP size distributions at a flow rate of 50 μL/min and FRRs of 3 and 9. The LNP size was changed dramatically by the micromixer structures. The CM_31 device produced suitably sized LNPs with diameters smaller than 100 nm, which is effective for drug delivery, regardless of the FRR condition. Conversely, the CM_11 device produced slightly larger LNPs, with diameters around 100 nm, at an FRR of 3. However, when we employed an FRR of 9 for LNP synthesis, the LNP size was almost the same as that obtained with the CM_31 device. In addition, the NM device gave larger LNPs with a wider size distribution compared with the other devices. These results indicate that rapid dilution of ethanol is essential for producing small-sized LNPs. Fig 4(b) shows the LNP size distributions at a flow rate of 500 μL/min and FRRs of 3 and 9. At an FRR of 3, the LNP diameter difference between the CM_11 and CM_31 devices was 5 to 10 nm and both could form small-sized LNPs, whereas the NM device could not form small-sized LNPs with diameters less than 50 nm; its LNPs ranged up to 120 nm, although the size distribution obtained was only slightly larger than that of the CM_31 device. At FRRs of 7 and 9, the CM_11 and CM_31 devices showed similar LNP formation behavior. The high FRRs of 7 and 9 make rapid dilution of ethanol possible because of the high ratio of the aqueous phase. In addition, the amount of lipid molecules per unit volume is smaller than at an FRR of 3. Therefore, this result also indicates that the dilution rate plays an important role in the formation of LNPs.
Evaluation of mixing performance of the microfluidic devices
We carried out a visualization experiment to evaluate the mixing performance of each microfluidic device. We measured fluorescence images at the merging point of the lipid solution and the saline, and after passing through the 1st, 5th, 10th, 23rd, 46th, and 69th chaotic mixer structures. A mixture of the headgroup-labeled fluorescent lipid (Rho-PE) and POPC was used as the lipid solution. The fluorescence images were analyzed with ImageJ, and the mixing rate was calculated according to Eq (1). Fig 6(a) shows confocal images of a cross section of the CM_31 device at different positions. The flow conditions were 100 μL/min and an FRR of 3. We confirmed that the solutions were mixed completely after passing through the 69th chaotic mixer structure. Fig 6(b) compares the mixing rate over the first 500 ms between the CM_11 and CM_31 devices at an FRR of 3. The mixing rate over the entire microchannel is shown in S3 and S4 Figs. The x-axis represents the residence time from the merging of the lipid solution and the saline at the inlet of the microchannel. The mixing performance of the CM_31 device was higher than that of the CM_11 device. Notably, at the flow rates of 50 and 100 μL/min, the mixing performance of the CM_11 device declined dramatically compared with the CM_31 device, and large-sized LNPs were formed due to the slow dilution rate. As shown in Fig 5, for the CM_11 device the LNP size ranged from 60 to 120 nm, indicating a dependency on the flow rate at an FRR of 3, whereas for the CM_31 device there was no such dependency (size: 50-60 nm). Large-sized LNPs formed under the slow dilution rate condition. Moreover, for the CM_31 device, 60 nm-sized LNPs formed at 50 and 100 μL/min, although the times necessary to reach the 20% mixing rate were different. These results suggest that rapid mixing beyond a mixing rate of 20% is critical for producing small-sized LNPs.
Complete mixing is not required for LNP synthesis in the microfluidic device, because 60 nm-sized LNPs formed at the 50 μL/min flow rate even though a long mixing time (>400 ms) was needed for complete mixing (S3 and S4 Figs). For the optimal flow condition of the chaotic mixer, the Reynolds number should be between 1 and 100 from the viewpoint of fluid dynamics [29,30]. Accordingly, the mixing performance under the 500 μL/min condition (40% mixing at the 5th chaotic mixer structure) declined slightly compared with that of the 50 and 100 μL/min conditions (60-70% at the 5th chaotic mixer structure). However, the high flow rate condition allows the solutions to pass through many chaotic mixer structures per unit time. For this reason, the high flow rate condition produced the smallest LNPs among all flow conditions. Fig 6(c) compares the mixing rate over the first 500 ms between the CM_11 and CM_31 devices at an FRR of 9. The mixing performance of the CM_11 device was improved compared with its performance at an FRR of 3. The LNP diameter difference between the CM_11 and CM_31 devices at the same flow rate condition was about 5 nm (Fig 5). Here, we focused on the 50 and 100 μL/min conditions to elucidate the critical mixing rate, or the critical ethanol concentration, for producing small-sized LNPs. At both flow rates, the mixing rate increased after passing through the 1st chaotic mixer. For the CM_11 device, the mixing rate after passing through the 5th chaotic mixer was calculated to be higher than 40%, regardless of the flow rate; after the 5th chaotic mixer the mixing performance declined and then the mixing rate increased to 60% at the 10th chaotic mixer. On the other hand, the mixing rates of the CM_31 device at the 5th chaotic mixer were calculated to be 70% and 60% for the flow rates of 50 and 100 μL/min, respectively. Nevertheless, 48 nm-sized LNPs were formed at 50 μL/min in the CM_11 device and 40 nm-sized LNPs were formed under the other conditions; in other words, the LNP size was almost the same under these conditions. From these results, we assume that rapid mixing from 20% to 40% is the critical factor for producing small-sized LNPs and controlling the LNP size. For the FRR of 9, the slopes of the mixing rate between 20% and 40% for the different flow rate conditions were roughly calculated to be 0.4-2%/ms from Fig 6(c). This suggests that LNP formation was achieved by mixing on the 10, 15-25, and 50 ms time-scales for the 30, 40, and 50 nm-sized LNPs at the FRR of 9, respectively.
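Since the optimal regime for the chaotic mixer is quoted as Reynolds numbers of 1-100, the sketch below estimates Re for a rectangular microchannel. The channel width and the fluid properties are assumptions chosen for illustration (roughly water-like density and viscosity; water-ethanol mixtures are more viscous), not values reported here, so the output should be read only as an order-of-magnitude check.

```python
def reynolds_rectangular(q_ul_min, width_m, depth_m, rho=1000.0, mu=1.2e-3):
    """Reynolds number for flow in a rectangular channel.

    q_ul_min          -- total volumetric flow rate in uL/min
    width_m, depth_m  -- channel cross-section dimensions in metres
    rho (kg/m^3), mu (Pa*s) -- assumed water-like fluid properties
    """
    q = q_ul_min * 1e-9 / 60.0                          # uL/min -> m^3/s
    area = width_m * depth_m
    d_h = 2 * width_m * depth_m / (width_m + depth_m)   # hydraulic diameter
    v = q / area                                        # mean velocity
    return rho * v * d_h / mu

# Assumed 200 um wide x 79 um deep main channel (the width is a guess;
# the 79 um depth is the first SU-8 layer thickness reported above).
for q in (50, 100, 500):
    print(q, "uL/min -> Re ~", round(reynolds_rectangular(q, 200e-6, 79e-6), 1))
```

Under these assumptions, Re rises from roughly 5 at 50 μL/min to roughly 50 at 500 μL/min, i.e., all three flow rates stay within the quoted 1-100 window.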
LNP formation process
A summary of the LNP formation process in the microfluidic device is shown in Fig 7. We focused on the dilution rate of ethanol and the lipid concentration, based on the BPF formation process. The hydrophobic chains of the lipids self-assemble as the solution polarity increases. The BPFs, which are semi-stable, grow until they are transformed into stable closed vesicles (that is, the LNPs), as shown in Fig 7(a). The growth of the BPFs increases the surface energy of the reaction system. When the ethanol concentration in the neighborhood of the BPFs is moderate, the BPFs are able to grow until they reach their thermodynamically stable size. The grown BPFs are then transformed into LNPs to decrease the surface energy of the reaction system. When the ethanol around the BPFs is diluted rapidly, the BPFs cannot grow enough to form large-sized LNPs (Fig 7(b)).
First, we estimated the apparent ethanol concentration from the following equation.
Apparent ethanol concentration [%] = 100 − mixing rate [%]

Briefly, the ethanol concentrations at the complete mixing condition were calculated to be 25% and 10% for the FRRs of 3 and 9, respectively. From the experimental results and the hypothesis of the LNP formation process, we assume that the BPFs begin to form at the 80% ethanol condition (mixing rate: 20%) and transform into LNPs at the 60% ethanol condition (mixing rate: 40%). We therefore consider that an ethanol concentration of 60 to 80% is critical for producing small-sized LNPs and controlling the LNP size in this experimental system. The critical concentration could change with the experimental conditions, such as the concentrations and types of lipids, solvents, and additives. Although the mixing rate, or the apparent ethanol concentration, is considered an index of the uniformity of the solution, the LNP size is affected not only by the mixing performance of the device but also by the lipid concentration and the FRR. The BPFs and the LNPs are formed at the saline-ethanol interface, where the hydration of ethanol molecules and the aggregation of lipid molecules trigger LNP formation. Therefore, we consider that complete mixing is not necessary for controlling LNP size. For example, 70 nm-sized LNPs with only a small size deviation could be produced at a flow rate of 500 μL/min and an FRR of 9 using the NM device. In the microchannel, a concentration gradient of ethanol forms at the saline-ethanol interface. From the viewpoint of fluid dynamics, the distance necessary to mix the solutions at a high flow rate is longer than at a low flow rate. In other words, a narrow concentration gradient forms in the microchannel at the high flow rate even when the FRR is the same. In fact, we confirmed that the mixing rate at the outlet of the microfluidic device was smaller for the high flow rate than for the low flow rate (data not shown). In this case, the hydration of ethanol is dominated by molecular diffusion at the saline-ethanol interface. The narrow concentration gradient makes it possible to form small-sized LNPs with a low size distribution, because the BPFs grow only at the critical ethanol concentration. The high FRR condition also forms a narrow concentration gradient at the liquid-liquid interface. Moreover, the micromixer structures of the microfluidic device accelerate homogenization of the solutions by increasing the liquid-liquid interfacial area. The increased interfacial area and shorter diffusion distance compared with the NM device allowed rapid dilution of ethanol. For these reasons, we consider that rapid dilution of ethanol at the liquid-liquid interface, rather than complete mixing, is essential. The high FRR condition and the lipid concentration also affected the LNP size. At a high FRR, the amount of lipid molecules at the saline-ethanol interface is lower than at a low FRR. Therefore, a high FRR is considered to have the same effect as a low lipid concentration, enabling the production of 30-40 nm-sized LNPs with the CM_31 device. On the other hand, high lipid concentrations produced large-sized LNPs regardless of the flow conditions, as shown in Fig 3. The lipid concentration does not affect the mixing rate or the dilution rate of ethanol. Thus, BPFs form frequently at the liquid-liquid interface under the high lipid concentration condition (Fig 7(c)).
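A small sketch of the arithmetic behind this estimate, restating the reconstructed relation between mixing rate and apparent ethanol concentration together with the complete-mixing endpoint for each FRR; the 20%/40% thresholds come from the text above, and the function names are ours.

```python
def apparent_ethanol_pct(mixing_rate_pct: float) -> float:
    """Apparent ethanol concentration from the mixing rate (relation above)."""
    return 100.0 - mixing_rate_pct

def ethanol_at_complete_mixing(frr: float) -> float:
    """Bulk ethanol fraction once the two streams are fully mixed."""
    return 100.0 / (1.0 + frr)

# BPFs start forming around 20% mixing (80% ethanol) and close into
# LNPs around 40% mixing (60% ethanol), per the text.
for rate in (20, 40):
    print(f"{rate}% mixed -> {apparent_ethanol_pct(rate):.0f}% apparent ethanol")
print("complete mixing, FRR 3:", ethanol_at_complete_mixing(3), "% ethanol")
print("complete mixing, FRR 9:", ethanol_at_complete_mixing(9), "% ethanol")
```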
This BPF formation behavior is similar to the crystal nucleation process. The BPFs grow by fusing with individual small-sized BPFs, followed by bending of the BPFs to form the enclosed LNPs [34]. The growth rate of the BPFs depends on the concentration of BPFs and the dilution rate of ethanol. At a high concentration of BPFs, the fragments fuse readily and large-sized LNPs are formed. The transformation from the BPFs to the closed form was estimated by MD simulations to take 100-200 ns [32][33][34]. However, the BPF growth process takes longer than the transformation process and includes the following steps: aggregation of lipid molecules, formation of BPFs, diffusion of BPFs, and fusion of BPFs. We therefore assume that the LNP size is dominated by the BPF growth process, and that the rapid dilution made possible by the microfluidic device offers a promising approach for precise control of LNP size.
Conclusion
To summarize, we demonstrated the effect of lipid concentration and dilution rate on LNP size using microfluidic devices. A low lipid concentration and a microfluidic device equipped with chaotic micromixer structures made it possible to produce small-sized LNPs with a narrow particle size distribution. We found that the mixing performance of the microfluidic devices was the essential factor for producing small-sized LNPs at low flow rates or low FRRs. In addition, we proposed an LNP formation mechanism based on the fluid dynamics and estimated the critical ethanol concentration for controlling LNP size to be 60-80%. LNP size tuning with approximately 10 nm precision was achieved through the optimum residence time at the critical concentration and rapid dilution using the microfluidic devices with chaotic micromixer structures. The residence times at the critical concentration necessary to control the LNP size were on the 10, 15-25, and 50 ms time-scales for 30, 40, and 50 nm-sized LNPs, respectively. The critical concentration may change with the experimental conditions. The properties of the lipids, such as their charge and the size of their hydrophobic or hydrophilic groups, are also considered important factors influencing LNP formation. However, we can estimate the critical concentration from the dilution rate, the lipid concentration, and the lipid properties, and thereby easily adjust the synthesis conditions to control LNP size precisely. We believe that the proposed LNP formation mechanism offers significant information for the development of novel microfluidic devices with good potential for practical applications. The precise size-controlled LNPs produced by the microfluidic devices are expected to serve as carriers for next-generation nanomedicines and to lead to new effective approaches for cancer treatment.
Cooperation and opportunism in Galapagos sea lion hunting for shoaling fish
Abstract For predators, cooperation can decrease the cost of hunting and potentially augment the benefits. It can also make prey accessible that a single predator could not catch. The degree of cooperation varies substantially and may range from common attraction to a productive food source to true cooperation involving communication and complementary action by the individuals involved. We here describe cooperative hunting by Galapagos sea lions (Zalophus wollebaeki) for Amberstripe scad (Decapterus muroadsi), a schooling, fast-swimming semipelagic fish. A group of 6–10 sea lions, usually females only, drove scad over at least 600–800 m from open water into a cove where, in successful hunts, they drove them ashore. Frequently, these "core hunters" were joined toward the final stages of the hunt by another set of opportunistic sea lions from a local colony at that beach. The "core hunters" did not belong to that colony and apparently came to the area together specifically for the scad hunt. Based on the observation of 40 such hunts from 2016 to 2020, it became evident that the females performed complementary actions in driving the scad toward the cove. No specialization of roles in the hunt was observed. All "core hunters", and also opportunistically joining sea lions from the cove, shared the scad by randomly picking up a few of the 25–300 (mean 100) stranded fish, as did scrounging brown pelicans. In one of these hunts, four individual sea lions were observed to consume 7–8 fish each in 25 s. We conclude that the core hunters must communicate about a goal that is not present in order to achieve joint hunting, but presently we cannot say how they do so. This is a surprising achievement for a species that usually hunts singly and in which joint hunting plays no known role in the evolution of its sociality.
For predators, cooperation can decrease the costs of hunting and potentially increase the benefits. Foraging benefits are only accrued if communal hunting increases the rate at which prey is caught, or the total amount of prey obtained, so that the shared rewards increase the per capita net benefit (Creel, 2001). In addition, the direct benefits and costs of cooperative hunting may depend on prey size relative to the predator, while indirect ones, such as inclusive fitness, may depend on the social and kin relationships among the hunters. Prey size and grouping tendency determine the extent to which hunters can share the resource and how intense competition may become if the hunt is successful.
The level of behavioral organization between cooperators varies substantially, and cheating may emerge as an alternative strategy to reduce the costs to self while participating in the benefits of a successful hunt (Packer & Ruttan, 1988). This complicates the description of communal hunting behavior. At the simplest level, there is (1) mere similarity of action between individuals that happen to hunt in spatial proximity; (2) acts may also be performed in synchrony (i.e., similar behavior shown in unison); there may be (3) coordination (similar acts performed at the same place and time); and finally, (4) true collaboration (complementary acts performed at the same place and time). Lang and Farine (2017) recently pointed out that cooperative hunting may additionally be characterized along multiple dimensions: the degree of sociality, communication, specialization within the hunting group, the extent of resource sharing, and dependence, that is, the importance of social predation for the individual's overall energy intake.
Most early work on cooperative hunting was done on terrestrial species. In some species, such as hunting dogs (Lycaon pictus), hunting together has most likely selected for their extreme sociality (Creel, 2001). For them, being able to defend the prey against stronger competitors may additionally have selected for the evolution of group hunting.
In lions, communal care and protection of offspring may be just as important in selecting for female sociality as communal hunting (MacDonald, 1983). This may also apply to sperm whales (Physeter macrocephalus; Whitehead & Weilgart, 2000).
In pinnipeds, grouping clearly evolved through mechanisms other than those in terrestrial predators (avoidance of predators of newborns, reduction in harassment of adult females by males, etc.) (Bartholomew, 1971; Trillmich & Trillmich, 1984). Cooperative hunting can be excluded as an important selective force for their sociality. When many pinnipeds hunt at a common site, this is usually caused by independent attraction to an important food resource, like migrating salmon at river mouths for California sea lions (Zalophus californianus; Keefer et al., 2012).
Recently, cooperative hunting has been reported for Galapagos sea lions (Zalophus wollebaeki) attacking large yellowfin tuna (Thunnus albacares) on the northern coast of the island of Isabela, Galapagos (Páez-Rosas et al., 2020).
We here report another hunting strategy by Galapagos sea lions directed at schooling prey, in which a coordinated group of sea lions herd their prey through open water toward a predetermined stranding site. In particular, we ask what degree of cooperation this collaborative behavior involves.
Methods
T.D.R. observed the sea lions hunting Amberstripe scad (Decapterus muroadsi, Temminck & Schlegel 1843) at Rocas Bainbridge (90°33′52″W, 0°21′00″S), a group of six islets directly east of the island of Santiago in the Galapagos archipelago. This scad species reaches 55 cm in length and is a semipelagic planktivore that feeds in dense schools, primarily on fish eggs and larvae.
Data are based on 284 hr of observation over 31 days between 3 September 2016 and 13 November 2020, during which time 40 hunts were observed (Table 1). Observations were made six times from shore level, seven times from about 3 m high, 13 times from a height of about 10 m, and 14 times from a vantage point about 35 m above the beach, which allowed a view of the adjacent sea for more than a kilometer out (Figure 1, point 1). During this time, many of the events were documented by photography, which allowed us to estimate the number of fish chased ashore as well as to record the duration of the fast-paced feeding behavior from the time stamps of the photographs. By analyzing the resulting 2,400 time-stamped photographs in detail, a considerable amount of information was extracted that could not otherwise have been recorded accurately by observation alone. For example, in 22 of the 32 successful hunts recorded, the number of fish driven ashore by the sea lions was estimated to the nearest 25, ranging from under 25 to ~300.
Generally, as hunting sea lions porpoise intensely to maintain high speed, it is impossible to determine the exact number of individuals involved, because they do not surface in unison. In addition, the observer was taking photographs to document events and, when using a telephoto lens (zoom range 80-400 mm), was not always able to observe the whole scene. This limited the accuracy of the counts of core hunting animals versus local sea lions joining the hunt opportunistically closer to shore, usually within 100-200 m of the site where fish were driven ashore. When hunts were observed from sea level, the angle of view allowed us only to state that a hunt occurred, but not to detail the number or roles of the individuals involved. As a consequence of these limitations, we here report the minimum number of sea lions observed, whether considered core hunters (those that drove the fish school toward the island in a zigzag course sometimes exceeding 1,000 m) or the opportunistic individuals that joined the hunt near its termination. Because it was very difficult, and often impossible, to distinguish the "core hunters" from the "opportunistic hunters" (from the local beach) who joined the hunt in the final minutes, the total number of sea lions observed is not necessarily the sum of the two categories (Table 1, which gives the details for all 40 hunts). Usually, brown pelicans (Pelecanus occidentalis) joined the hunt, and their numbers were also estimated unless they were highly dispersed or the hunt failed before the sea lions came close to the beach.
In the final seconds of the hunt, when the sea lions appeared to accelerate to maximum speed, their swim speed was estimated to be around 4.5 m/s from the time stamps of photographs and a distance estimate derived from Google Earth. This value lies well within the range of swim velocities (usually around 2 m/s, maximum 5.3 m/s) reported by Ponganis et al. (1990). For Atlantic mackerel (Scomber scombrus) of about 30-40 cm length, a species comparable to Amberstripe scad, a sustained speed of 1.2 m/s (Wardle et al., 1996) and a maximum burst speed of 5.5 m/s have been reported (Wardle & He, 1988).
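The speed estimate can be reproduced with a one-line calculation from two photograph time stamps and a distance measured in Google Earth. The time stamps and distance below are hypothetical values chosen to yield the ~4.5 m/s reported, not data from an actual hunt.

```python
from datetime import datetime

def swim_speed(ts_start: str, ts_end: str, distance_m: float) -> float:
    """Mean speed (m/s) between two photo time stamps a known distance apart."""
    fmt = "%H:%M:%S"
    elapsed = (datetime.strptime(ts_end, fmt)
               - datetime.strptime(ts_start, fmt)).total_seconds()
    return distance_m / elapsed

# Hypothetical example: 90 m covered in 20 s of the final sprint.
print(round(swim_speed("16:42:50", "16:43:10", 90.0), 2), "m/s")  # 4.5
```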
Results
During the 31 days (284 hr of observation time), hunts were observed on only 11 days (data in Table 1). On five of these days, only one hunt was observed; a maximum of 13 hunts occurred within one day (median 1.5 hunts/day). Only six of the 40 hunts failed, and for two the outcome was unknown because the cove could not be seen from the observer's position. The observation effort was distributed equally across the day, but 30 of the 40 hunts happened during the afternoon (12:00-18:30 hours). In two cases, the hunt's duration was timed while the hunters were still 600 m and 800 m from shore, respectively.
The hunts lasted between five and six minutes until they ended at the shore. Of course, the length of the zigzag course followed by the hunters was greater than the linear distance of visibility. The time sea lions spent feeding once the fish had beached was very short, lasting on average 24 ± 20 s (mean and SD; n = 28 hunts; range 2 to 91 s). In 2020, six sea lions were estimated to be the core hunters (except for one case of five). In 2016, in the five cases where the number of hunters could be estimated, more than seven sea lions were involved (Table 1).
As the core hunters approached the shore toward the end of the hunt, they were joined by sea lions from the cove adding up to a mean of 12.4 total sea lions estimated at the stranding site. In successful hunts, a mean of about 100 fish stranded (range 25-300, estimated in sets of 25).
Description of a hunt
About eight hunts could be observed from beginning to end, or at least from the farthest point (500-800 m) that visibility allowed. (Notes to Table 1: in cases when only a few sea lions were detectable in the photographs but the observer was sure that more were present, this is marked by adding a "+" to the number, meaning that there were at least a few more than could be counted; "++" means that there were many more than counted, but no estimate was possible.) From the moment of stranding, all fish were typically consumed within well under one minute, even when several dozen fish had stranded. In only two of the 28 hunts for which feeding times were clocked was one minute surpassed. In these cases, feeding lasted for 62 and 91 s, respectively, as a result of the fish school being split into two parts that stranded at different points of the beach. Some fish escaped during the final drive into the cove.
Pelicans are very attentive to these hunts. By positioning themselves along the water's edge ahead of the arriving hunt, pelicans could take an estimated 20%-50% of the catch, especially when fish numbers were low.
Changes in hunt dynamics observed over four years
A total of 15 successful hunts was observed during 2016 and 2017, with as many as 19 sea lions involved, displaying high synchrony and strategic positioning in the final stages of the hunt as all animals drove the fish school toward the beach (Figure 3). At that time, the majority appeared to be adult females, with a very small (undetermined) number of subadult bulls. Although these hunts were not observed from beginning to end, there was a strong impression that between seven and ten core hunters worked together in close synchrony, versus a smaller number of local opportunistic hunters joining in at the last moment, typically no more than four individuals.
From 25 hunts observed in July and October-November of 2020, it was possible to see the hunt in greater detail. The core group was now reduced to six females, compared with an estimated seven to ten individuals in previous years. The average number of core and opportunistic animals involved was reduced to around 10-14 animals (with two exceptions of 16 and 21, including some yearlings and one or two bulls). These 2020 hunts also appeared less coordinated than in previous years.
Discussion
We document an example of apparently cooperative hunting by Galapagos sea lions, which drive a school of fish into a cove that operates as a trap. Whereas in terrestrial mammals cooperative hunting allows the capture of larger prey than a single individual could bring down, in marine mammals cooperative hunting often enables predation on schooling prey that a single predator would hunt very inefficiently. Such hunting cooperation has been described for killer whales hunting herring (Baird, 2000), for humpback whales (Clapham), for dolphins (Connor, 2000; Vaughn et al., 2007), and for California sea lions (Pierotti, 1988).
The evidence for collaboration among the core hunters comes from observations of 40 hunts over a period of five years. During all hunts, this core group, usually numbering a minimum of six individuals, worked in synchrony from where the hunt first became visible at the outermost Bainbridge Rock all the way to the stranding point. (Photograph captions: 16:43:10, scad school in the shallows being driven onto the beach by the original hunters plus opportunistic hunters that have joined in; 16:43:13, sea lions and pelicans feeding at the shoreline; 16:43:26, immediately after feeding, the hunters leave the beach; the latter three photographs are of the beach area labeled D on the map in Figure 1.) The core group did not belong to the local colony and left the area immediately after the conclusion of each hunt, with some local animals following behind them. In contrast to normal, individual foraging, these females must have grouped specifically for this hunt.
In the hunt of scad, the sea lions' cooperation is key to their access to this species, which has almost never been recorded as prey before (Dellinger & Trillmich, 1999; Páez-Rosas & Aurioles-Gamboa, 2010, 2014), given the high burst swimming speed of scad. Our observations suggest, although without individual identification this cannot be proven, that there are a few individuals, apparently all females and numbering a minimum of six, who have mastered the strategy of long-distance herding of the prey, driving them intentionally toward a location where the geography can be used as a trap. Whereas some of the local animals cause disruption by approaching the fish school in a direction opposite to the hunt, at least part of the group of local sea lions appeared to contribute to the final drive into the cove. They may contribute to the overall success of the hunt by enclosing the fish at a time when almost invariably a part of the school manages to escape, as evidenced in the photographs. This is why we prefer to call them "opportunistic hunters" and avoid the term "scroungers" (Packer & Ruttan, 1988) most often used in the theoretical literature on foraging.
How and when the core hunting group of females decides to engage in a communal hunt remains unknown; however, their coordinated herding action suggests some planning. Whether such hunting groups involve cliques of animals (i.e., animals particularly strongly connected within the social network of a colony), as described by Wolf et al. (2007), remains unknown. At present, we have no information to infer how such planning is accomplished.
The complementary action in driving the school to a target location may imply some sort of communication among the hunters, most likely visual, although underwater vocalizations cannot be ruled out. From our observations, we cannot conclude that there was any specialization of roles among the core hunters. Sharing of the resource clearly happened in a rather chaotic manner and even involved other individuals and species, with absolutely no aggression displayed between them. Since the animals repeatedly hunted in this manner over years, the strategy must have been successful in terms of the energy intake of the hunters, but it certainly is not of general importance in terms of the overall energy intake at the population level. It is well documented that most sea lions forage individually for pelagic or benthic prey (Jeglinski et al., 2012; Páez-Rosas et al., 2017; Schwarz et al., 2021). It therefore appears highly likely that social learning is involved in the cooperative foraging described here. In hunts where the panicked fish, much less maneuverable in shallow water than scad, stranded themselves in their attempted escape, the participation of several sea lions could potentially be explained as primarily involving attraction of independent predators to a common resource, that is, mere similarity of action between individuals that happen to hunt in spatial proximity.
Conclusion
An important aspect of our observations lies in identifying the clear difference between "core hunters," whose strategy involves planning and collaboration through complementary behaviors to achieve their goal, and "opportunistic hunters," who merely join a hunt organized and driven by others. It remains unclear to what degree the opportunistic hunters contribute to hunting success, or how often they reduce the successful outcome of the hunt and might therefore be called "scroungers". Remarkably, the core hunters approach the area from afar. They must employ some sort of communication in order to start the hunt as a group, which implies communication about a goal that is not present. As the local animals did not initiate these hunts independently, the hunting strategy appears to be a cultural trait. Until marking and/or telemetry can be applied to allow identification of individuals, it remains unclear where these animals come from and how they synchronize their planned hunting. Coordinated communal hunting appears all the more surprising given that sea lions usually hunt individually and that communal hunting certainly has played no role in the evolution of their sociality.
Acknowledgments
We thank the Galapagos National Park Directorate for special permission to access the location where the hunt takes place, and the BBC Natural History film crew for facilitating some of the field observations. We greatly appreciate the help of John E. McCosker and W. Smith-Vaniz in determining the scad species identity. We thank Mauricio Cantor for highly constructive input to an earlier version of the manuscript.
Conflict of interest
The authors have no conflict of interest to declare.
HIV prevention programme with young women who sell sex in Mombasa, Kenya: learnings for scale-up
Abstract Introduction In 2018, the National AIDS and sexually transmitted infection (STI) Control Programme developed national guidelines to facilitate the inclusion of young women who sell sex (YWSS) in the HIV prevention response in Kenya. Following that, a 1-year pilot intervention was implemented in which a package of structural, behavioural and biomedical services was provided to 1376 cisgender YWSS to address their HIV-related risk and vulnerability. Methods Through a mixed-methods, pre/post study design, we assessed the effectiveness of the pilot and elucidated implementation lessons learnt. The three data sources used were: (1) monthly routine programme monitoring data collected between October 2019 and September 2020 to assess reach and coverage; (2) two polling booth surveys, conducted before and after implementation, to determine effectiveness; and (3) focus group discussions and key informant interviews conducted before and after the intervention to assess its feasibility. Descriptive analysis was performed to produce proportions and comparative statistics. Results During the intervention, 1376 YWSS were registered in the programme, 28% were below 19 years of age, and 88% of the registered YWSS were active in the last month of the intervention. In the survey, respondents reported increases in HIV-related knowledge (61.7% vs. 90%, p < 0.001), ever usage of pre-exposure prophylaxis (8.5% vs. 32.2%, p < 0.001), current usage of pre-exposure prophylaxis (5.3% vs. 21.1%, p < 0.002), ever testing for HIV (87.2% vs. 95.6%, p < 0.04) and any clinic visit (35.1% vs. 61.1%, p < 0.001). However, increases in harassment by family (11.7% vs. 23.3%, p < 0.04) and discrimination at educational institutions (5.3% vs. 14.4%, p < 0.04) were also reported. In the qualitative assessment, respondents reported early signs of success, identified missed opportunities and made recommendations for scale-up. Conclusions Our intervention successfully rolled out HIV prevention services for YWSS in Mombasa, Kenya, and demonstrated that programming for YWSS is feasible and can be done effectively through YWSS peer-led combination prevention approaches. However, while reported uptake of treatment and prevention services increased, there was also an increase in reported harassment and discrimination requiring further attention. Lessons learnt from the pilot intervention can inform replication and scale-up of such interventions in Kenya.
Introduction
There is growing evidence that HIV risks and vulnerabilities may be higher among young women who sell sex (YWSS) and those new to sex work than among their older counterparts [1][2][3][4][5][6]. In 2018, there were an estimated 15,000 female sex workers (FSWs) below 18 years of age in Kenya [7]. Mapping and size estimation data from the Transitions study [8], conducted in sex work venues in Mombasa, Kenya, have shown that 52% of all estimated FSWs were 14-24 years old [9], with an HIV prevalence of 10% among self-identified YWSS compared to 4% among non-YWSS of the same age [10]. The same study also revealed that YWSS in these venues reported more experience of violence compared to those who were not selling sex [11]. Despite large population size estimates, high HIV prevalence and structural vulnerabilities, YWSS have been largely invisible within programmatic initiatives to prevent HIV among FSWs [12]. In Kenya, even though the HIV prevention programme with FSWs has scaled up in the last decade [13,14], data from FSW programmes in Mombasa showed that enrolment of FSWs under 24 years was lower in the programme and, among those enrolled, a higher proportion was lost at each service delivery step compared to their older counterparts [15]. Only 13.7% of the YWSS in the Transitions study reported contact with the local HIV prevention programme that was operational in the same venues where the study was conducted [16]. While there is evidence for effective interventions to prevent and treat HIV infections among adult FSWs [17][18][19][20], less is known about the delivery of these interventions with YWSS globally and in Africa [21]. There is a need to use this existing evidence to prioritize YWSS in FSW interventions and to adapt and evaluate effective interventions for FSWs to address the needs and priorities of YWSS [16].
The National AIDS and sexually transmitted infection (STI) Control Programme (NASCOP), under the Ministry of Health in Kenya, developed programme guidelines for working with young key populations in 2018, through consultation with young key populations and stakeholders and a review of the evidence [22]. Following that, NASCOP, in partnership with Jhpiego, the University of Manitoba, Partners for Health and Development in Africa and the International Centre of Reproductive Health-Kenya (ICRH-K), initiated a 12-month pilot intervention with cisgender YWSS in Mombasa to reduce their risk and vulnerability to HIV through provision of comprehensive combination prevention services using a YWSS peer-led approach. We assessed the effectiveness and feasibility of the pilot intervention after 1 year, using a mixed-methods study design. The findings of the study are presented in this paper.
Study site
The study was conducted in Kisauni sub-county, one of the six sub-counties in Mombasa, Kenya. Key population size estimation conducted in 2014 showed that Kisauni had the highest number of YWSS among the sub-counties (41% of the total estimated YWSS population) [8]. ICRH-K, the implementing partner in the project, has a long-standing HIV prevention programme with FSWs in the county. This pilot intervention was embedded within the existing intervention and was implemented in 12 wards of Kisauni, covering 104 FSW venues (i.e., streets, sex dens, lodges, bars and public places).
Intervention
For the pilot intervention, the theory of change (Figure 1) proposed that provision of a combination package of structural, behavioural and biomedical services to YWSS would reduce their HIV-related risk and vulnerability. The theory of change was guided by NASCOP's programme guidelines for work with young key populations [22]. The intervention is described in Appendix S1. The intervention specified nine expected outcomes: (1) increased identification and registration of YWSS at outreach; (2) increased knowledge of HIV prevention and treatment; (3) increased use of prevention products (condoms and pre-exposure prophylaxis, PrEP); (4) increased enrolment in and use of clinical services; (5) increased HIV testing; (6) increased linkage to antiretroviral therapy (ART); (7) decreased experience of violence and discrimination; (8) improved access to post-violence support; and (9) decreased dependency on sex work.
Participants
The inclusion criteria for participants in the pilot project were: all young women aged 15-24 years who self-identified as cisgender FSWs operating from a sex work venue. The age group of the study participants aligns with the age definition of YWSS in the national guidelines [21].
Routine programme monitoring data
Routine monthly programme monitoring data were collected from October 2019 to September 2020 by ICRH-K to measure the outcomes 1, 4, 5 and 6. During registration at outreach, unique identification codes were assigned to YWSS. Service utilization data were collected for each YWSS using NASCOP tools [23]; the data were entered into a monthly report in Microsoft Excel. A non-individualized aggregate report was submitted monthly by the project to NASCOP. Using the monthly aggregate report, descriptive analysis was conducted for specific indicators of interest.
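To illustrate how such a monthly aggregate report can be analyzed, the sketch below computes two of the reported indicators with pandas. The column names, and the early-month HIV diagnosis and linkage counts, are hypothetical (only the registration, activity and final linkage figures match numbers reported later in this paper); the actual field names in the NASCOP tools will differ.

```python
import pandas as pd

# Hypothetical monthly aggregate report (illustrative column names).
# The first-month diagnosed/linked counts (4/3) are invented to match
# the reported 75% linkage; the final-month figures match the text.
monthly = pd.DataFrame({
    "month": ["2019-10", "2020-09"],
    "registered": [750, 1376],
    "active_last_month": [382, 1209],
    "diagnosed_hiv": [4, 26],
    "linked_to_art": [3, 26],
})
monthly["pct_active"] = 100 * monthly["active_last_month"] / monthly["registered"]
monthly["pct_linked"] = 100 * monthly["linked_to_art"] / monthly["diagnosed_hiv"]
print(monthly[["month", "pct_active", "pct_linked"]])  # ~51%/75% -> ~88%/100%
```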
Polling booth surveys
Polling booth surveys (PBS) were conducted in the first month of the intervention in October 2019 (round 1) and after completion of the intervention in November 2020 (round 2). PBS is a group interview method in which participants are provided a private space or booth containing colour-coded "yes," "no" and "not applicable" ballot boxes and a set of numbered "voting" tokens corresponding to each questionnaire item. Participants answer survey questions that the researcher reads aloud by placing the appropriately numbered token in the relevant box. This survey method is described in more detail elsewhere [24,25]. The purpose of the survey was to measure outcomes 2, 3, 4, 5, 7, 8 and 9. The target sample size was 91 YWSS, based on a power calculation to detect a change in condom use of 20% or more at the 95% confidence level. Participants were divided over a total of nine PBS sessions with an average of 10 YWSS per session. The first and second rounds of PBS were conducted with 94 and 90 respondents, respectively, of whom 38% and 39%, respectively, were aged 15-19 years; the remaining respondents were aged 20-24 years. A stratified random sampling methodology was used to recruit the survey participants. Descriptive analysis was performed to produce proportions and comparative statistics. Chi-square tests of significance were conducted to assess changes between rounds 1 and 2. The sampling method and analysis are described in detail in Appendix S2.
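For illustration, a chi-square test on the 2x2 table of yes/no counts reproduces the round-to-round comparison for one survey item, ever-use of PrEP (8.5% of 94 vs. 32.2% of 90, as reported in the Abstract). The counts are rounded to whole respondents, so the resulting p-value is approximate rather than the published one.

```python
from scipy.stats import chi2_contingency

# Ever used PrEP: round 1 (n=94) vs round 2 (n=90), counts rounded
# from the reported percentages (8.5% -> 8 of 94; 32.2% -> 29 of 90).
table = [[8, 94 - 8],
         [29, 90 - 29]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.5f}")  # p well below 0.001
```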
Qualitative data
Key informant interviews (KIIs) and focus group discussions (FGDs) with YWSS, peer educators, clinicians and programme staff were conducted pre- and post-intervention (September 2019 and November 2020). The purpose of the KIIs and FGDs was to understand the feasibility of implementing a comprehensive HIV prevention and treatment programme with YWSS. Six FGDs, with an average of 10 participants per group, and four KIIs were conducted (three FGDs and two KIIs each at pre- and post-intervention). The qualitative assessment employed purposive sampling techniques. FGD participants sampled across the two rounds included 12 unregistered YWSS, 33 registered YWSS and 17 peer educators. KII participants included two clinicians and two programme staff. Informed consent was obtained prior to participation. FGDs and KIIs were conducted at the drop-in centre (DICE) by a trained female qualitative researcher and a female note taker, predominantly in Swahili, using semi-structured guides. All FGDs and KIIs were digitally recorded, transcribed verbatim and translated into English. Field notes were typed in Microsoft Word. Two analysts reviewed the transcripts and notes and developed a codebook with clearly defined codes by mutual agreement. Thematic analysis was conducted with the aid of NVivo 11.0.
Ethics approval
Ethical approval for the study was obtained from the Kenyatta National Hospital/University of Nairobi Ethics and Research Committee (P268/04/2019).
Routine programme monitoring data
By the end of the 12-month intervention, the project had registered 1376 YWSS (an 83% increase since the first month of the intervention), of whom 28% (383/1376) were under 19 years of age (Table 1). The proportion of YWSS engaged regularly in the intervention increased from 51% (382/750) in the first month to 88% (1209/1376) in the last month. Eighty-seven percent (1195/1376) of registered YWSS received clinical services and tested for HIV during the intervention. Of those who tested for HIV, 34% (401/1195) were first-time testers; 26 YWSS were identified as living with HIV, and all were linked to treatment. Linkage to treatment increased from 75% in the first month of the intervention to 100% in the last month.
PBS
Comparing the round 1 PBS results with round 2, the proportions of respondents exchanging sex with a paying client in the last month (80.9% vs. 77.8%, p = 0.61), having a regular partner or lover (73.4% vs. 77.8%, p = 0.49), having penetrative anal sex in the last 3 months (10.6% vs. 13.3%, p = 0.58) and injecting narcotic drugs in the last 3 months (2.1% vs. 6.7%, p = 0.14) did not change significantly. There were also no significant changes in respondents reporting client violence in the last 3 months (12.8% vs. 7.8%, p = 0.26), police harassment in the last 3 months (14.9% vs. 11.1%, p = 0.44), being harassed or beaten by members of the community or public because of doing sex work (13.8% vs. 17.8%, p = 0.46) or experiencing discrimination by healthcare providers because of sex work (8.5% vs. 11.1%, p = 0.55). A higher proportion of YWSS reported being harassed or beaten by family members because of doing sex work in the last 3 months (11.7% vs. 23.3%, p < 0.04) and experiencing discrimination by an educational institution in the last 3 months (5.3% vs. 14.4%, p < 0.04). There was no significant change in respondents reporting receiving support from the intervention to address these experiences of violence and discrimination (40.0% vs. 58.1%, p = 0.22). In round 2, the proportion of respondents who reported relying solely on sex work for day-to-day living increased (36.2% vs. 54.4%, p < 0.01).
Qualitative data
The project worked with YWSS peer educators who conducted mobilization within their social networks at the sex work venues. YWSS reported that they identified with the YWSS peer educators; this helped establish trust and increased the project's acceptability and the uptake of services. Additionally, the DICE offered a safe space to receive comprehensive services from non-judgemental providers and facilitated social connection with fellow YWSS through group sessions.
Early signs of project success
The inclusion of non-health services also encouraged YWSS to access services. After receiving vocational skills training, some respondents started their own businesses and developed a saving culture, which contributed a supplementary source of income.
They [YWSS] have also been trained on microfinance and they've also started a merry-go-round, where they contribute 1000 shillings every month and give it to one person. We have like three groups and the peer educators have also started a chamaa (informal savings group) where they contribute twenty shillings daily and give it to one person and this person can start up some small business, (programme staff, pilot project)
The respondents attributed the project's success to the YWSS-centric project implementation model, which was responsive to their feedback. YWSS were engaged routinely in planning the pilot project to increase its responsiveness towards their needs.
Young KPs [YWSS] are the ones who are supporting us to do the implementation framework, so we have them in our planning. . . They are also part and parcel in terms of following-up and ensuring that implementation is happening and probably monitoring of the same, (programme staff, pilot project)
Missed opportunities for optimization
The pilot engaged various stakeholders. However, activities were conducted at sex work venues, and family influences were underestimated. On a few occasions, parents or guardians came to the DICE with complaints about the provision of condoms and contraceptives. The respondents recommended that future programmes adopt a proactive stakeholder engagement strategy with parents and relevant government authorities. Navigating legal hurdles surrounding the provision of contraception to YWSS under 18 years posed another challenge. Though YWSS were identified from sex work venues and most were mature minors, there were two instances where guardians reported the project's provision of contraceptives to the police. Such incidents made service providers uneasy about providing services to YWSS below the legal age, even when the YWSS needed them.
"As a clinician I see so many young KPs [YWSS] coming to the DICE and they have been mobilized from the hotspots [sex work venues]. . . it's not that they have been mobilized from their houses or from home. So, when they ask for family planning services and am not able to offer. . . it becomes a challenge even explaining to them the reason why we are not offering them family planning services", (female clinician, pilot project) It was not always possible to reach some YWSS continuously because they did not possess phones, were mobile and moved between sex work venues, or were denied permission by sex den management to attend the DICE activities and services. Some YWSS wanted their sex work identity to remain anonymous and feared accessing services from the DICE, which was popularly identified as a sex worker clinic.
It becomes so hard for one to come here [DICE] because they think that if they come here, when they get out and someone who sees them will judge them for the kind of work that they do. It becomes so hard, it is hard, (19-year-old YWSS) The pilot could not meet all the expectations of the YWSS. Many YWSS wanted scholarship opportunities to pursue formal education, but the pilot could identify only a few matching opportunities.
So, we tried to get a number of them into TVET [technical and vocational education and training] centre for vocational training but I think the qualifying criteria for TVET were too high so when we looked at the qualifications, none of the girls who showed interest actually made it through, (programme staff, pilot project)
Discussion
To the best of our knowledge, this paper is the first to report results of delivering a comprehensive HIV prevention intervention with YWSS in Kenya. It should be noted that the intervention period (October 2019-September 2020) coincided with the COVID-19 pandemic in Kenya. The Government of Kenya implemented aggressive measures recommended by the World Health Organization to proactively limit the spread of COVID-19 from March to June 2020 [26]. These measures included rigid and abrupt stay-at-home policies enforced through curfews and lockdowns [26]. HIV prevention and treatment interventions, especially peer-led outreach, were disrupted for 4-6 months, and programmes made critical adaptations to ensure services remained available for FSWs. Over the 12-month period, registration of YWSS, clinic visits and utilization of services increased despite the COVID-19-related restrictions. The project was also successful in ensuring that YWSS under 19 years accessed services. Respondents reported increases in HIV-related knowledge, ever and current usage of PrEP, HIV testing and linkage to treatment. YWSS appreciated the comprehensive, non-judgemental and safe services offered under one roof, the peer support and social connection with other YWSS, and the involvement of YWSS in planning and implementation. However, increases in dependency on sex work, harassment by family and discrimination at educational institutions were also reported. Certain behaviours related to condom use and contact with a peer educator did not improve as expected. Some of these unexpected results may have been a consequence of COVID-19, during which peer-led outreach to FSW venues was disrupted. YWSS may also have experienced challenges balancing the use of HIV prevention methods with staying resilient during the pandemic, given reduced income, limited access to prevention options and reduced peer support [26,27]. The time frame of the intervention may have been too short to change some of the structural outcomes, like dependency on sex work, which have to be explored further to determine their priority in interventions with YWSS [28][29][30].
Our findings are similar to those reported by Busza et al. about their pilot intervention with YWSS in Zimbabwe, where they were successful in reaching a high number of YWSS with acceptable services [31]. Chabata et al., in their study from Zimbabwe, also found that YWSS who were engaged in interventions reported an increase in knowledge about HIV prevention and treatment, and in initiation and continuation of PrEP, by intervention end [32]. Another study, from Burkina Faso [33], has also shown that peer-led interventions and involvement of the community had a positive impact on the reduction of risky behaviours among YWSS. Delany-Moretlwe et al., in their review of health services, recommended that young key populations require comprehensive, integrated services that respond to their specific developmental needs, including health and non-health services within the context of a human rights-based approach [34].
Similar to other studies, our project experienced specific challenges, including following up with mobile YWSS, fear among some YWSS of being identified if they visited the DICE, and the expectation to address a wide range of non-HIV needs [31]. In addition, the legal framework in the country is not supportive of providing sexual and reproductive health services to girls <18 years and criminalizes sex work. These challenges need to be addressed to scale up HIV prevention interventions for YWSS. The pilot engaged various stakeholders, but comprehensive and continuous engagement of parents of YWSS was not adequately achieved.
Our study shows that a comprehensive YWSS peer-led intervention using a combination prevention approach involving YWSS in design and implementation can ensure that YWSS access information, prevention and treatment services to reduce their risk and vulnerability to HIV. For future replication and scale-up, the implementing partners need to involve key stakeholders like family members, community leaders, social support services, child rights groups and law enforcement agencies to support the interventions. The intervention should also build strong partnerships with government healthcare providers to make services accessible for YWSS who do not wish to access the DICE and partner with other programmes like DREAMS [35] in the county to address the non-HIV needs of the population. The programmes also need to advocate for legal reforms to improve access to sexual and reproductive health services for young people and those who sell sex.
The study has several limitations: (1) the data are from a cross-sectional survey and cannot be used to determine causality; (2) the responses to the survey were self-reported and may have some social desirability bias, though the PBS method adopted has less bias compared to face-to-face interviews [36]; (3) the respondents may have had some social desirability bias related to identifying themselves as FSWs; and (4) the monitoring data and survey data analysed captured data at an aggregated level and did not allow individualized analysis. One of the strengths of the study is the use of multiple data sources.
CONCLUSIONS
Our 1-year pilot intervention successfully rolled out peer-led HIV prevention services for YWSS and improved critical population-level outcomes. The study shows that, when supported by policy guidance, embedding YWSS-led interventions within larger FSW interventions is feasible and can be scaled up with the inclusion of a few critical learnings. Lessons learnt from implementation of this project can be used to scale up interventions with YWSS in Kenya.
"year": 2022,
"sha1": "b6b403b975f76e6976a129c2d450458a1eab9cee",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "02602dc29e11e42a616cf29c532d10ed72361e79",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Ibrutinib-based therapy reinvigorates CD8+ T cells compared to chemoimmunotherapy: immune monitoring from the E1912 trial
Key Points

• Higher effector T-cell numbers and their rejuvenated cytotoxic function accompany favorable clinical responses to ibrutinib-rituximab.
• Enhanced CD8+ T-cell lytic synapse activity during ibrutinib-rituximab therapy can be exploited using the bispecific antibody glofitamab.
Introduction
E1912 was the first frontline phase 3 study to compare ibrutinib-based therapy (ibrutinib with rituximab) with the chemoimmunotherapy fludarabine, cyclophosphamide, and rituximab (FCR).1 Long-term follow-up has demonstrated superior progression-free survival (PFS) and overall survival for ibrutinib-rituximab relative to those for FCR.2 Nevertheless, clinical challenges remain, including the current need for continuous therapy, tolerability, and residual and progressive disease. Immunotherapy represents a powerful combination or alternative therapy to tackle resistant disease and deepen responses.3,4 However, T-cell exhaustion is a major barrier for optimal immunotherapy.5,6 Dysfunctionality of T cells is characterized by increased chronic lymphocytic leukemia (CLL)-associated T-cell subsets expressing inhibitory checkpoint molecules.7 Furthermore, helper CD4+ T cells have a tumor-promoting capacity, whereas impaired immune synapse formation contributes to suppressed CD8+ T-cell cytotoxicity.8 Bruton tyrosine kinase inhibitors (BTKis) have been reported to contribute to tumor debulking and alleviation of T-cell exhaustion.9-13 However, the impact of BTKis on T-cell function and association with clinical response is less well defined. Here, we leverage pretreatment and on-treatment peripheral blood patient samples serially collected from the E1912 trial and report on the impact of ibrutinib-rituximab vs FCR on T cells using immune monitoring functional assays.
Samples from patients with CLL
Viable peripheral blood mononuclear cells at baseline and 6-, 12-, and 18-month time points from the Eastern Cooperative Oncology Group (ECOG) E1912 trial were analyzed.
Results and discussion
T-cell monitoring and correlation with PFS, MRD, and infections

We initially investigated the impact of therapy on T cells and explored the association with clinical outcome (Figure 1A). Flow cytometry measured the absolute numbers of naive (CD45RA+/CCR7+), central memory (CD45RA−/CCR7+), effector memory (TEM; CD45RA−/CCR7−) and terminally differentiated effector memory (CD45RA+/CCR7−) subsets in patients at baseline and 6- and 12-month treatment time points (supplemental Table 3). This analysis revealed a reduction in the majority of CD4+ and CD8+ T-cell subsets, including naive and effectors, during ibrutinib-rituximab treatment (Figure 1B; supplemental Figure 2A,C), consistent with T-cell normalization, as previously reported for monotherapy.10,13 Expectedly, we observed a marked decrease of subsets after FCR, with evidence of immune reconstitution at 12 months (Figure 1B; supplemental Figure 2B,D).14 The frequencies of subsets remained relatively stable during ibrutinib-rituximab treatment, whereas FCR caused naive and central memory subsets to contract while TEM expanded (supplemental Figure 2E-F). Both therapies reduced the numbers of regulatory T cells, T helper 17 (TH17) cells, and natural killer (NK) cells compared with those at baseline, but an increased regulatory T-cell:CD4 ratio was observed after FCR (supplemental Figure 3A-C).15 Strikingly, patients on ibrutinib-rituximab with higher T-cell numbers, including PD-1+ effector CD8+ and CD4+ subsets, at baseline had longer PFS (Figure 1D,F), suggesting the importance of an existent but exhausted immune response before therapy. Interestingly, higher levels of PD-L1-expressing CLL cells at baseline correlated with favorable PFS (Figure 1D,F; multivariable analysis in supplemental Table 4). Furthermore, an elevated frequency of effector CD8+ T cells at the 6-month ibrutinib-rituximab time point associated with favorable PFS, whereas no association was detectable at 12 months (Figure 1D,F). Conversely, higher T-cell numbers correlated with worse PFS in the FCR arm, whereas increased NK-cell frequency at baseline associated with favorable outcome (supplemental Figure 4A). Consistent with tumor-mediated exhaustion, greater numbers of PD-1+ and PD-L1+ CD8+ T-cell subsets associated with higher measurable residual disease (MRD) during ibrutinib-rituximab treatment (supplemental Figure 5A). In contrast, elevated frequencies of T-cell subsets not expressing checkpoint molecules, including CD8+ terminally differentiated effector memory and NK cells, correlated with low MRD during ibrutinib-rituximab treatment, in keeping with reduced exhaustion. An association between T cells and MRD was less evident in the FCR arm, except for increased checkpoint-expressing T cells at 12 months, which correlated with higher MRD (supplemental Figure 5B). Ibrutinib's inhibition of interleukin-2-inducible T-cell kinase (ITK) enhanced TH1 polarization,16,17 but both therapeutic arms reduced TH1 and TH2 numbers and TH1:TH2 ratios (supplemental Figure 6A-B). Nevertheless, an increased frequency of TH2 cells and an increased CD4:CD8 ratio (baseline and 6 months) associated with unfavorable PFS and incidence of infection, respectively, during ibrutinib-rituximab treatment (Figure 1F-G). However, increased effector CD8+ T-cell numbers and CD16+ NK cells at 6 months were associated with no infections during ibrutinib-rituximab treatment (Figure 1G). Conversely, increased T cells after FCR correlated with infections (supplemental Figure 4B). In sum, higher CD8+ T-cell numbers at baseline and early on-therapy associated with favorable clinical responses, whereas PD-1+ and PD-L1+ subsets associated with greater MRD during ibrutinib-rituximab treatment.
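To make the median-split survival analysis described above concrete, here is a minimal sketch using the open-source lifelines package. The DataFrame columns (`cd8_tem`, `pfs_months`, `progressed`) are hypothetical stand-ins for the trial's flow cytometry and outcome variables, not the authors' actual analysis code.

```python
# Minimal sketch of a median-split Cox analysis of an immune subset vs PFS.
# Assumes a pandas DataFrame with hypothetical columns:
#   "cd8_tem"    - absolute CD8+ T_EM count at baseline
#   "pfs_months" - progression-free survival time in months
#   "progressed" - 1 if a progression event occurred, 0 if censored
import pandas as pd
from lifelines import CoxPHFitter

def cox_median_split(df: pd.DataFrame, marker: str,
                     duration_col: str = "pfs_months",
                     event_col: str = "progressed") -> CoxPHFitter:
    """Dichotomize a marker at its median and fit a Cox model, mirroring
    the 'median values used as cut-off point' approach in Figure 1D."""
    data = df[[marker, duration_col, event_col]].dropna().copy()
    data["high"] = (data[marker] > data[marker].median()).astype(int)
    cph = CoxPHFitter()
    cph.fit(data[["high", duration_col, event_col]],
            duration_col=duration_col, event_col=event_col)
    return cph  # cph.hazard_ratios_["high"] < 1 would be a "green row"

# Usage with made-up data:
# df = pd.DataFrame({"cd8_tem": [...], "pfs_months": [...], "progressed": [...]})
# print(cox_median_split(df, "cd8_tem").summary)
```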
Ibrutinib-rituximab promotes CD8+ synapses and immunotherapy-triggered killing function
Next, we characterized the cytolytic function of therapy-reshaped T cells against baseline CLL cells (Figure 2A-B). T cells from both 6- and 12-month ibrutinib-rituximab time points showed enhanced killing function compared with those at pretreatment levels. In contrast, T cells after FCR showed no cytolytic improvement. Notably, patients who experienced grade 3 infections during ibrutinib-rituximab treatment showed lower anti-CLL T-cell cytotoxic function (Figure 2C). Hypothesizing altered T-cell:CLL interactions, we then performed conjugation assays. T cells during ibrutinib-rituximab treatment showed augmented formation of polarized F-actin synapses with baseline CLL cells (Figure 2D; supplemental Figure 7A). In comparison, T cells after FCR exhibited distinctly nonpolarized synapses (Figure 2E). Given the opposing roles of patient CD4+ and CD8+ T cells,8 we examined these subsets and detected an increased frequency of granzyme B+ CD8+ T-cell:CLL synapses at both ibrutinib-rituximab time points compared with that at baseline, at which CD4+ T-cell:CLL synapses dominated. This switch in the CD4+:CD8+ synapse balance was not detected after FCR (Figure 2F-G; supplemental Figure 7B-D). Interestingly, in keeping with protumoral CD4+ T cells, increased formation of CD4+ T-cell:CLL F-actin-positive synapses at baseline correlated with unfavorable PFS and grade 3 infections during ibrutinib-rituximab administration (Figure 2H; supplemental Figure 7E). Together, these data demonstrate that ibrutinib-rituximab promotes previously exhausted CD8+ T-cell activity, which could provide a gateway for immunotherapy.
Ibrutinib is known to reduce PD-1 expression on patient T cells.9,10,18 Here, we detected a reduced frequency of PD-1-expressing T-cell subsets during ibrutinib-rituximab administration, as well as of PD-L1-expressing T cells, except for CD8+ TEM at 6 months (Figure 1D-E; supplemental Figure 8). In contrast, T-cell PD-1/PD-L1 expression was relatively unaffected after FCR. This prompted us to investigate checkpoint blockade in our cytotoxicity assay (Figure 2I-K; supplemental Figure 9). Both ibrutinib-rituximab- and FCR-exposed T cells were insensitive to anti-PD-1. However, anti-PD-L1,19 increased anti-CLL T-cell cytotoxicity only at the 6-month ibrutinib-rituximab time point, suggesting a narrow window for checkpoint blockade activity.17 This led us to investigate whether the T-cell-engaging bispecific antibody glofitamab (CD20 × CD3)20 could trigger improved cytolytic responses. T cells from all ibrutinib-rituximab time points tested, up to 18 months, showed increased anti-CLL T-cell killing after treatment with glofitamab compared with those at baseline (Figure 2L-N). However, T cells after FCR did not respond to glofitamab, including at the later time point. Overall, these data support the ability of ibrutinib-based therapy to enhance T-cell-mediated cytotoxicity induced by bispecific immunotherapy.
In summary, our data highlight the importance of T cells during ibrutinib-rituximab therapy, with higher T-cell numbers and rejuvenated cytotoxicity accompanying favorable clinical responses. Our exploratory findings that increased levels of PD-1-expressing T cells, as well as of PD-L1-expressing CLL cells, before therapy associate with longer PFS suggest that ibrutinib-rituximab appears to capitalize on T-cell-mediated immune surveillance in patients. Strikingly, opposing associations were found in the chemoimmunotherapy arm, and T cells showed no functional improvement after FCR. Previous studies have reported CD8+ T-cell clonotype expansion during ibrutinib therapy,21,22 likely reflecting active immunosurveillance. Taken together, tumor debulking and alleviation of T-cell exhaustion during BTKi-based therapy9-13 may promote CD8+ T-cell activity. The switch from CD4+ T-cell:CLL interactions at baseline to CD8+ lytic synapses during ibrutinib-rituximab therapy supports this concept. Although ibrutinib-rituximab did not increase TH1 numbers, we do not exclude ITK inhibition contributing to beneficial immunomodulation.18,24,25 Overall, this report underscores the importance of trial-associated science to understand how BTKis modulate T cells and supports the development of immunotherapy-based therapies.
Figure 1. Higher CD8+ T-cell numbers at baseline and early on-therapy associate with favorable PFS and no infections with ibrutinib-rituximab. (A) Schematic representation of the E1912 trial and biobanked peripheral blood mononuclear cell samples collected at baseline (B/L) and 6- (6M) and 12- (12M) month time points for correlative T-cell analysis. PFS, infection, and MRD clinical outcome data were collected. (B) Absolute numbers of CD8+ TEM cell subsets (CD45RA−CCR7−) for ibrutinib-rituximab (n = 86 patients) and FCR (n = 50) at the time points indicated. Patient data are presented as box-and-whiskers (10-90 percentile; log scale) plots. (C) Percentage of PD-1+ CD8+ TEM subsets during ibrutinib-rituximab (n = 86) or FCR (n = 50) treatments. (B-C) Data are given as the mean ± standard error of the mean; statistical analyses between time points were assessed using the Wilcoxon signed-rank test. (D) Tabular schematic summary of the significant correlations (Cox model) between higher immune subset levels (flow cytometry, median values used as cut-off point) and PFS for patients on ibrutinib-rituximab (n = 88 patients with 13 experiencing disease progression). Green rows (correlations with hazard ratio [HR] values < 1) indicate higher immune subsets associated with longer PFS, whereas higher immune subsets associating with shorter PFS (HR > 1) are highlighted in blue rows. Confidence intervals (95%) and P values are shown. (E) Schematic summary of the significant correlations (Wilcoxon test) between immune subsets and infection (any infection) during ibrutinib-rituximab treatment (n = 88 patients). Negative t statistics (t.stat) indicate higher immune subset levels in patients who did not develop infection (green rows). In contrast, correlations with a positive t.stat indicate higher immune subset levels in patients who developed infection (blue rows). (F) Kaplan-Meier curves of immune subsets associated with good prognosis for the ibrutinib-rituximab arm. Higher levels of percentage PD-L1+ CD19+ cells (high: 12 progression events per 43 patients and low: 1 progression event per 43 patients), absolute number of PD-1+ CD8+ TEM (high: 3 progression events per 43 patients and low: 10 progression events per 43 patients), and PD-1+ CD4+ T cells (high: 3 progression events per 43 patients and low: 10 progression events per 43 patients) at baseline associate with longer PFS. A higher percentage of CD8+ TEM (high: 3 progression events per 42 patients and low: 10 progression events per 43 patients) at the 6-month time point associates with longer PFS. Absolute number data are referred to as "ab." P values are indicated. *P < .05; **P < .01; ****P < .0001. n/s, not significant.
Figure 2. Ibrutinib-rituximab promotes CD8+ T-cell lytic synapse activity and supports immunotherapy-triggered anti-CLL killing function. (A) Illustration of the autologous cytotoxicity assay using anti-CD3/-CD28-activated T cells (cytolytic T lymphocytes [CTLs]) from B/L, 6M, and 12M time points mixed with target B/L CLL B cells (pulsed with superantigen as a model antigen), with flow-based quantification of T-cell killing function. (B) T-cell-mediated CLL cell death comparing T cells purified from B/L, 6M, and 12M time point samples (n = 30 patients per treatment arm). Data at 6M and 12M were normalized to B/L levels to generate fold change values for each patient. (C) The association between patients' T-cell killing function (12M ibrutinib-rituximab time point, n = 30) and infection status during ibrutinib-rituximab therapy (no infections vs grade 2 or 3 infections) (Wilcoxon test, P = .01). (D-E) Representative confocal medial optical section and 3-dimensional (3D) volume-rendered images of T-cell:CLL conjugates formed between patient T cells (B/L, 6M, and 12M on ibrutinib-rituximab [D] or FCR [E]) interacting with autologous B/L CLL B cells (blue, CMAC dyed). Bar charts: quantitative relative recruitment index (RRI) analysis of F-actin polarization (red, rhodamine phalloidin) in T-cell:CLL conjugates (n = 50 patients per treatment arm). (F) Box and violin plots (minimum-maximum) showing the percentage of CD4+ or CD8+ T-cell:CLL conjugates formed from the total T-cell:CLL conjugates in B/L, 6M, and 12M ibrutinib-rituximab time point samples (n = 15 patients). Representative confocal images of CD8+ (white) and CD4+ (green) T-cell conjugates with CLL B cells (blue) at B/L vs on ibrutinib-rituximab therapy. (G) Representative confocal 3D volume-rendered images of granzyme B (GrB; white) expression at CD8+ T-cell synapses, comparing ibrutinib-rituximab and FCR 12M time point samples. (H) Kaplan-Meier curve showing the association between the strength of polarized F-actin CD4+ T-cell:CLL immune synapse interactions in patient B/L samples and their PFS outcomes during ibrutinib-rituximab administration. Median F-actin RRI values were used as a cut-off point to determine weak (<median RRI) vs strong (>median RRI) CD4+ T-cell synapses (n = 52 patients). Patients showing strong CD4+ T-cell:CLL immune synapses at B/L showed significantly adverse PFS (9 progression events per 29 patients) compared with patients showing weak CD4+ T-cell:CLL interactions (1 progression event per 23 patients) (Cox model, P = .01; HR, 9.14; 95% confidence interval, 1.15-72.47). Bar chart: F-actin RRI analysis of CD4+ T-cell:CLL conjugates at B/L and representative 3D volume-rendered confocal images comparing patients
"year": 2023,
"sha1": "d1246f03f51613e8a80db2a9f3cf6c00af6a0dd0",
"oa_license": "CCBYNCND",
"oa_url": "https://ashpublications.org/blood/article-pdf/doi/10.1182/blood.2023020554/2084527/blood.2023020554.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ba9cc725f63928431033b6ec8512c6ab03f00dcf",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Green-Synthesized Silver Nanoparticles and Their Potential for Antibacterial Applications
The prevalence of infectious diseases is becoming a worldwide problem, and antimicrobial drugs have long been used for prophylactic and therapeutic purposes, but bacterial resistance has created serious treatment problems. The development of antibiotic resistance is driving scientists around the world to seek new drugs that would be more effective. The use of, and search for, drugs obtained from plants and other natural products have increased in recent years. It is well known that silver and its compounds have strong antibacterial activity. Silver, compared to other metals, shows higher toxicity to microorganisms, while it exhibits lower toxicity to mammalian cells. Progress in the field of nanotechnology has helped scientists to look for new ways to develop antibacterial drugs. Silver nanoparticles (AgNPs) are interesting for their wide range of applications, e.g. in the pharmaceutical sciences, which include treatment of skin diseases (e.g. acne and dermatitis) and other infectious diseases (e.g. post-surgical infections). Various antibacterial aids, such as antiseptic sprays, have also been developed from AgNPs. In this chapter, we focus on various synthesis methodologies of AgNPs, their antibacterial properties, and their mechanism of action.
Introduction
Frequent use of antibiotics results in resistance of pathogens against them. This is a health- and life-threatening reality. It is therefore necessary to look for new sources of effective, potent drugs. Nature is an inexhaustible source of health-promoting substances. Combining knowledge of natural medicine with modern technology leads to the discovery of new drugs. One of the most promising sources in recent years has been shown to be plant extracts, which are rich in antioxidant and antimicrobial compounds and have been used as nanoparticle synthesis agents [1][2][3]. Nanotechnology and nanoscience have recently been established as an interdisciplinary subject dealing with biology, chemistry, physics, and engineering. The term "nano" is derived from the Greek word for dwarf, meaning extremely small. The size of a nanoparticle is between 1 and 100 nm [4,5]. The unique biological, chemical, and physical properties of silver nanoparticles (AgNPs) lead to a wide range of applications in spectroscopy, sensors, electronics, catalysis, and pharmaceutical purposes [6]. It is well known that silver has an inhibitory effect toward many bacterial strains and microorganisms commonly present in medical and industrial settings [7], e.g. in the medical industry in creams containing silver to prevent local infections of burns or open wounds, dental work, catheters, plastics, soaps, pastes, and textiles [8][9][10][11]. Many authors have confirmed that AgNPs show efficient antimicrobial properties and kill bacteria at low concentrations (mg L−1) [12] without toxic effects on human cells [13,14].
This review deals with the possibilities for the synthesis of AgNPs, with emphasis on green synthesis, as well as their antimicrobial activity and its determination. Finally, we focus on theories of the mechanism of action of silver nanoparticles.
Synthesis of silver nanoparticles
Production of nanoparticles can be achieved through different methods. Generally, these methods can be divided into physical, chemical, and biological. Some of these methods are simple and allow good control of nanoparticle size by adjusting the reaction process. On the other hand, there are still problems with stabilization of the product and with obtaining monodisperse nanosized particles. In Table 1, we show a list of some synthesis techniques of AgNPs, and some of them are briefly described below.
Physical and chemical methods
For a physical approach, the nanoparticles are prepared using evaporation-condensation, which can be carried out in a tube furnace at atmospheric pressure [16]. This method has some disadvantages due to the large space occupied by the tube furnace, high energy consumption, and the long time needed to achieve thermal stability. For these reasons, various physical methods of synthesis of AgNPs have been developed, for example, thermal decomposition for the preparation of nanoparticles in powder form [17]. Conventional physical methods, such as pyrolysis, were used by Pluym et al. [18]. The advantages of physical methods are speed and no use of toxic chemicals, but there are also some disadvantages, such as low yields and high energy consumption [19].
Chemical methods provide an easy way to prepare AgNPs in solution (water or an organic solvent can be used). The most common method of synthesis of silver nanoparticles is reduction by organic and inorganic reducing agents. Generally, various reducing compounds, such as sodium ascorbate, sodium borohydride, hydrogen, Tollens reagent, and N,N-dimethylformamide, can be used for Ag+ ion reduction. The reduction leads to the formation of metallic silver Ag0, which is followed by agglomeration into oligomeric clusters. These clusters eventually lead to the formation of metallic colloidal silver particles [20]. It is essential to use stabilizing agents during the preparation of AgNPs to avoid their agglomeration [21]. It should be noted that polymeric compounds such as poly(vinyl alcohol), poly(vinyl pyrrolidone), and polyethylene glycol are effective protecting agents for the stabilization of nanoparticles [22,23].
Green synthesis
As described above, there are various chemical and physical methods for the synthesis of silver nanoparticles. These methods are costly and toxic for the environment [24]. These facts lead to the search for new, simple, and eco-friendly alternatives that would not harm human and animal health. A revolution in the synthesis of silver nanoparticles has been brought about by the development of green synthesis techniques. The biological synthesis of nanoparticles has been shown to be simple, low cost, and environmentally friendly. In green synthesis, the reduction procedure is performed by a natural-based material including bacteria, fungi, yeast, plants and plant extracts, or small biomolecules (e.g. vitamins, amino acids, or polysaccharides) [25][26][27]. With the development of reliable methodologies to produce nanoparticles, several attempts at in vivo and in vitro synthesis of AgNPs have been realized. In this chapter, we divide the green synthesis methods into in vivo synthesis, where whole cells are used for the biogenic production of silver nanoparticles; in this approach, silver reduction can happen intracellularly or extracellularly, with the formation of silver nanoparticles on the cell walls. The other case is the in vitro method, in which the procedure is performed outside of a living organism with a cell-free extract, since the nanoparticles do not need whole cells for their synthesis. Generally, the process of biosynthesis involves three steps: the first is the phase of reduction of Ag+ ions; the second is the growth step, where larger aggregates are observed; and the third is the stabilization of the nanoparticles with capping agents [28]. In Table 2, we report the most recently published green syntheses of silver nanoparticles.
In vitro synthesis of AgNPs
In the so-called green approach, the reduction procedure is performed by a natural-based material, most commonly a plant extract containing substances with antioxidant and reducing properties, e.g. aldehydes, ketones, terpenoids, flavones, or carboxylic acids [29].
In the field of plant-mediated nanotechnology, various plant extracts of specific parts such as root, bark, stem, leaves, seed, fruit, peel, and flower have been used for the synthesis of silver nanoparticles [30][31][32]. In general, a silver nitrate aqueous solution is used for the reaction with the plant extract, leading to rapid formation of stable nanoparticles. The plant extract is usually prepared by suspending dried powdered parts of the plant in distilled or deionized water or an organic solvent, most often ethanol or methanol. Extraction is carried out in various ways (different temperatures, times, extract concentrations, and pH). After extraction, the solid residues are removed, and the filtrate is used for the synthesis of silver nanoparticles. It has been found that green synthesis using plant extracts is faster than with microorganisms, such as bacteria or fungi [30]. The biosynthesis of silver nanoparticles from different parts of plants is schematically described in Figure 1. It is well known that edible mushrooms are rich in proteins, saccharides, vitamins, amino acids and other compounds, which can be used as reducing agents in the biosynthesis of silver nanoparticles. The aqueous extract of the edible mushroom Volvariella volvacea was used for the synthesis of AgNPs. The authors suggested that, as mushrooms are rich in proteins, the productivity of nanoparticles is increased compared to other biosynthesis routes already reported [33]. Recently, other papers have been published describing green synthesis from mushroom extracts, for example, Pleurotus ostreatus [34], Ganoderma neo-japonicum [35], and Pleurotus florida [36].
Interestingly, the use of microorganism extracts has resulted in an easy method to synthesize nanoparticles with characteristic shapes, sizes, and morphologies. Extracts from microorganisms (fungi, bacteria, yeasts, actinomycetes) may act as reducing and capping agents for the synthesis of AgNPs. The reduction of Ag+ ions is performed by biomolecules found in the extract (enzymes, proteins, amino acids, polysaccharides, or vitamins). In the case of microorganisms, the extracts can be made by two methods: the first is by washing the biomass and dissolving the cells in water or a buffer [37], and the second uses the medium in which the biomass has grown [38]. The use of an aqueous extract from the fungus Penicillium brevicompactum was attempted [39]. Of course, specific bacteria can also be used for the synthesis of nanoparticles. An interesting approach to green biosynthesis of AgNPs using the supernatant of Bacillus subtilis and microwave irradiation in water solution has been studied [40]. The authors reported extracellular biosynthesis of silver nanoparticles and used microwave radiation to avoid aggregation. The rapid biosynthesis of AgNPs by the bioreduction of aqueous Ag+ ions using the culture supernatants of Klebsiella pneumonia, Escherichia coli, and Enterobacter cloacae has been reported [41]. An extensive volume of literature reports successful biosynthesis of AgNPs using microorganisms including bacteria, fungi, yeasts, and actinomycetes. Reduction of Ag+ ions is achieved using biomolecules that also serve as capping agents. The reaction is mostly performed in water solution, which is considered an environmentally friendly solvent system. The extraction and purification of biomolecules require one or more steps in the production process; for this reason, more time could be needed and the process may not be economic. A solution to this problem is to use simple molecules, such as saccharides and polysaccharides, as reducing agents. A simple method for silver nanoparticles was described by Raveendran et al., who used α-D-glucose as the reducing agent in a gently heated system [42]. In another work, AgNPs were synthesized using pectin from citrus [26]. The authors found optimal conditions and some advantages, such as a short reaction time, almost 100% conversion of Ag+ ions to Ag0, and very good reproducibility and stability of the product. Many approaches using polysaccharides for the synthesis of AgNPs have been reported, e.g. dextrin [43], cellulose [44], or polysaccharides isolated from marine macroalgae [45]. The preparation of silver nanoparticles using other isolated or purified biomolecules has also been studied; for example, glutathione [46], tryptophan residues of oligopeptides [47], natural biosurfactants [48], oleic acid [49], etc. were used as reducing and capping agents.
Essential oils offer another alternative for the biosynthesis of AgNPs. With respect to their chemical composition (phenols, flavonoids, terpenes), essential oils have been successfully used for the preparation of silver nanoparticles. Usually, an aqueous solution of silver nitrate is reduced, and the essential oils serve as reducing and capping agents [50].
Algae extracts show great efficiency in the green synthesis of nanoparticles. A few articles reporting this method have been published. Rajeshkumar and his co-workers published an algae-mediated preparation of AgNPs using a water extract of the purified brown alga Sargassum longifolium. The extract was mixed with silver nitrate water solution and kept at room temperature [51]. Bioreduction of Ag+ ions by algae extracts proceeds similarly, due to their content of phytochemicals (carbohydrates, alkaloids, polyphenols, etc.).
In vivo synthesis of AgNPs
Under the term in vivo, we understand the biosynthesis of nanoparticles in living organisms rather than in extracts or with isolated biomolecules. Gardea-Torresdey published the very first article discussing the synthesis of silver nanoparticles using the living plant alfalfa (Medicago sativa). They found that the alfalfa root is able to absorb silver in neutral form (Ag0) from an agar medium and transport it into the shoots of the plant, where the Ag0 atoms arrange themselves to produce AgNPs [52]. Marchiol et al. [28] reported the in vivo formation of silver nanoparticles in the plants Brassica juncea, Festuca rubra, and Medicago sativa. The rapid bioreduction was performed within 24 h of exposure to AgNO3 solution. TEM analyses indicated the in vivo formation of AgNPs in the roots, stems, and leaves of the plants, with a similar distribution but different sizes and shapes. The contents of reducing sugars and antioxidant compounds were proposed to be involved in the biosynthesis of AgNPs.
Some microorganisms resistant to metals can survive and grow in the presence of metal ions. The first evidence of bacteria synthesizing silver nanoparticles was observed using the Pseudomonas stutzeri AG259 strain [53]. Since this first evidence of bacteria producing AgNPs in 1999, different bacteria have been used, e.g. Lactobacillus strains [54], E. coli [25], or Bacillus cereus [55]. Fungi have been observed to be good producers of silver nanoparticles due to their tolerance of and capability for bioaccumulation of metals. When a fungus is exposed to Ag+ ions, it produces enzymes and metabolites which protect it from undesirable foreign matter, resulting in the production of AgNPs [56]. Many reports dealing with the biosynthesis of silver nanoparticles using fungi or yeasts have been published. For example, fungus-mediated synthesis of silver nanoparticles was described by Mukherjee [57]. They isolated the fungus Verticillium from the Taxus plant and, after mycelial growth and separation, suspended the dry mycelia in distilled water and added Ag+ ions to prepare silver nanoparticles. They found that the AgNPs were formed below the cell wall surface due to reduction of silver ions (Ag+) by enzymes present in the cell wall membrane. Extracellular production of silver nanoparticles was described, for example, by Sadowski, who prepared nanoparticles from Penicillium fungi isolated from soil [58].
Antibacterial activity
The discovery of the first antibiotics dramatically changed the quality of human life, but the development of natural mechanisms of bacterial resistance has forced scientists to develop more effective antimicrobial drugs. Interest in the use of nanoparticles as antibacterial agents has increased dramatically in the last few decades. The unique properties of silver nanoparticles have allowed their exploitation in the medicinal field, and most studies have attended to their antimicrobial nature. Since silver nanoparticles show promising antimicrobial activity, researchers use several techniques to determine and quantify their activity against various Gram-positive and Gram-negative bacteria.
Methods of evaluation of antibacterial activity
To evaluate antimicrobial activity, different methods are currently used, the results of which are reported in different ways. Commonly used techniques to determine the antimicrobial activity of biogenic silver are the minimal inhibitory concentration (MIC), the minimal bactericidal concentration (MBC), time-kill, the half effective concentration (EC50), the well-diffusion method, and the disc-diffusion method. The most commonly used is the disc-diffusion method, developed in 1940. These well-known procedures comprise the preparation of agar plates incubated with a standardized inoculum of the test microorganism. Then, sterile discs (about 6 mm in diameter) impregnated with AgNPs at a desired concentration are placed on the agar surface. According to the agar well-diffusion method, the tested concentration of AgNPs is introduced into a well with a diameter of 6-8 mm punched into the agar. Cultured agar plates are incubated under conditions suitable for the tested bacteria, and the sensitivity of the tested organisms to the AgNPs is determined by measuring the diameter of the zone of inhibition around the disc or well. This method is beneficial for its simplicity and low cost and is commonly used in the evaluation of the antibacterial activity of Ag nanoparticles [59].
The antibacterial properties of silver nanoparticles are often studied by employing dilution methods, quantitative assays that are the most appropriate for the determination of MIC values. The minimal inhibitory concentration (MIC) is usually expressed in mg mL−1 or mg L−1 and represents the lowest concentration of the AgNPs which inhibits the visible growth of the tested microorganism. Either the broth or the agar dilution method may be used for quantitative measurement of the in vitro antimicrobial activity against bacteria. The minimum bactericidal concentration (MBC) is determined less commonly than the MIC and is defined as the lowest concentration of antimicrobial agent killing 99.9% of the final inoculum after incubation for 24 h. The most appropriate method for determining the bactericidal effect is the time-kill test, which can also be used to determine synergism for combinations of two or more antimicrobial agents. These tests provide information about the dynamic interaction between different strains of microorganism and antimicrobial agents. The time-kill test reveals a time-dependent or a concentration-dependent antimicrobial effect. Various incubation time intervals are used (usually 0, 4, 6, 8, 10, 12, and 24 h), and the resulting data are typically presented graphically [59].
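As a concrete illustration of the MIC readout logic just described (a generic sketch, not a protocol taken from ref. [59]), the following Python snippet scores a hypothetical two-fold broth microdilution series by OD600 against an assumed growth threshold:

```python
# Read out a broth microdilution series: the MIC is the lowest AgNP
# concentration showing no visible growth, approximated here by an
# OD600 threshold. All numbers are hypothetical.
from typing import Optional, Sequence

def mic_from_dilution(concentrations_mg_l: Sequence[float],
                      od600_readings: Sequence[float],
                      growth_threshold: float = 0.05) -> Optional[float]:
    """Return the lowest concentration (mg/L) whose well stays below the
    OD600 threshold, i.e. the minimal inhibitory concentration."""
    inhibitory = [c for c, od in zip(concentrations_mg_l, od600_readings)
                  if od < growth_threshold]
    return min(inhibitory) if inhibitory else None  # None: no inhibition seen

concs = [100, 50, 25, 12.5, 6.25]       # mg/L AgNPs, two-fold dilutions
ods = [0.01, 0.02, 0.03, 0.40, 0.80]    # OD600 after 24 h incubation
print(mic_from_dilution(concs, ods))    # -> 25 (mg/L)
```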
Antibacterial activity of AgNPs
The antimicrobial activity of silver is well known; silver has been used for the treatment of several diseases since ancient times [60]. AgNPs synthesized by different methods have been widely tested against a number of pathogenic bacteria, with evidence of strong antimicrobial activity against a broad spectrum of bacteria including both Gram-negative and Gram-positive. Some researchers have reported that AgNPs are more effective against Gram-negative bacteria [61][62][63], while opposite results have also been found [64]. The difference in sensitivity of Gram-positive and Gram-negative bacteria to AgNPs may result from variation in the thickness and molecular composition of their membranes. The Gram-positive bacterial cell wall, composed of peptidoglycan, is comparatively much thicker than that of Gram-negative bacteria [2,65].
The importance of studying antibacterial activity on different bacterial strains stems from the need to understand the mechanism, resistance, and future applications. The latest studies on antimicrobial properties are summarized in Table 2.
Although the antibacterial effect of silver nanoparticles has been widely studied, several factors affect the antimicrobial properties of AgNPs, such as the shape, size, and concentration of the nanoparticles and the capping agents [30]. Nakkala et al. [71] analyzed AgNPs with an average size of 21 nm and a size distribution of 1-69 nm, prepared using the medicinal plant Ficus religiosa. These nanoparticles showed excellent antibacterial activity against P. fluorescens, S. typhi, B. subtilis, and E. coli. Bacterial cells exposed to a lower concentration of AgNPs exhibited delayed growth, which may be due to a bacteriostatic effect, while at higher concentrations (of 60 and 100 μg), the AgNPs were found to exhibit a bactericidal effect, as no growth was observed.
Smaller particles, with their larger surface-to-volume ratio, are able to come into proximity with bacteria most easily and display higher microbicidal effects than larger particles [19,69,76]. Normally, a high concentration leads to more effective antimicrobial activity, but particles of small sizes can kill bacteria at a lower concentration. Furthermore, apart from size and concentration, shape also influences the interaction with the Gram-negative organism E. coli [19,77]. Pal et al. [78] discussed the dependence of antibacterial activity against the Gram-negative bacterium E. coli on nanoparticle shape and size. They found that the observed interactions between silver nanoparticles of various shapes and E. coli were similar, while the inhibition results were variable. They speculated that AgNPs with the same surface areas but different shapes may have unequal effective surface areas in terms of active facets [78]. Sadeghi et al. [79] found different antimicrobial effects of nanosilver shapes (nanoparticles, nanorods, and nanoplates) on S. aureus and E. coli. SEM analysis indicated that both strains were damaged and extensively inhibited by Ag nanoplates due to the increased surface area of the AgNPs.
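The size effect can be made concrete with a back-of-the-envelope calculation: assuming spherical particles, the surface-to-volume ratio of a sphere of diameter d is 6/d, so a 10 nm particle exposes ten times more surface per unit of silver than a 100 nm one. A minimal sketch:

```python
# Why smaller AgNPs expose more surface per unit silver: for a sphere
# of diameter d, surface/volume = (pi*d^2) / (pi*d^3/6) = 6/d.
import math

def surface_to_volume(diameter_nm: float) -> float:
    surface = math.pi * diameter_nm ** 2
    volume = math.pi * diameter_nm ** 3 / 6.0
    return surface / volume  # equals 6 / diameter_nm

for d in (10, 21, 50, 100):  # nm, spanning the 1-100 nm nanoparticle range
    print(f"d = {d:3d} nm -> SA:V = {surface_to_volume(d):.2f} per nm")
```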
Mechanism of action
In the past decade, silver nanoparticles as antimicrobial agents have attracted much attention in the scientific field. Although several reviews have described the AgNPs' mechanism in detail, the exact mechanism of the antibacterial effect of silver and AgNPs remains to be fully elucidated. Most studies consider multiple mechanisms of action, but, simplified, three main mechanisms determine the antimicrobial activity of silver nanoparticles: (1) irreversible damage of the bacterial cell membrane through direct contact; (2) generation of reactive oxygen species (ROS); and (3) interaction with DNA and proteins [80][81][82][83]. The damage of cell membranes by AgNPs causes structural changes that render bacteria more permeable and disturb respiration function [84]. Interestingly, Morones et al. [84] demonstrated by transmission electron microscopy (TEM) analysis the existence of silver in the membranes of treated bacteria as well as in their interior. Another aspect of the mechanism is the role of Ag+ ion release. Research has shown that Ag+ ions at a lower concentration than that of AgNPs can exert the same level of toxicity [60]. Several pieces of evidence suggest that silver ions are important in the antimicrobial activity of silver nanoparticles [81,85]. Durán et al. [81] discussed that silver ions react with the thiol groups of proteins, producing bacterial inactivation, and inhibit the multiplication of microorganisms. Ag+ at μmol L−1 levels weakened DNA replication due to uncoupling of respiratory electron transport from oxidative phosphorylation, which inhibits respiratory chain enzymes and/or interferes with membrane permeability. On the other hand, silver ions can interact with the thiol groups of many vital enzymes, inactivate them, and generate reactive oxygen species (ROS) [29]. The AgNPs can act as a reservoir for the monovalent silver species released in the presence of an oxidizer [85]. Ag+ release was found to correlate with AgNP size: the antibacterial activity of silver nanoparticles below 10 nm is mainly caused by the nanoparticle itself, while at larger sizes, the predominant mechanism occurs through the silver ions [81]. Lee et al. [86] studied the mechanism of antibacterial action on Escherichia coli. A novel mechanism for the antibacterial effect of silver nanoparticles, namely the induction of a bacterial apoptosis-like response, was described. They observed accumulation of reactive oxygen species (ROS), increased intracellular calcium levels, phosphatidylserine exposure in the outer membrane indicating early apoptosis, disruption of the membrane potential, activation of a bacterial caspase-like protein, and DNA degradation, which is a sign of late apoptosis, in bacterial cells treated with silver nanoparticles (Figure 2).
The antimicrobial activity of silver nanoparticles combined with various antibiotics is currently being studied, and synergistic antibacterial effects have been found. Singh et al. [62] studied the individual and combined effects of AgNPs with 14 antibiotics. They found that the synergistic action of AgNPs and antibiotics resulted in an enhanced antibacterial effect. Exposure of bacteria to the combination of AgNPs and antibiotics reduced the MICs significantly, and the bacteria were found to be susceptible to all of the tested antibiotics except cephalosporins, where no change was observed. A significant reduction of the required antibiotic dose, up to 1000-fold, in combination with a small amount of AgNPs could achieve the same effect. The study on bacterial strains resistant to one or more antibiotics belonging to the β-lactam class indicated that the addition of AgNPs decreased the MIC to the susceptibility range; therefore, the addition of AgNPs not only reduced the MIC but also made the bacteria sensitive to the antibiotic. Briefly, the simultaneous action of AgNPs with antibiotics could prevent the development of bacterial resistance. These results are in accordance with findings reported by Gurunathan [76], who observed synergistic effects of silver nanoparticles in the presence of conventional antibiotics on the Gram-negative bacteria E. coli and K. pneumoniae. The viability of bacteria was significantly reduced, by more than 75%, by the combination of a sublethal dose of meropenem and AgNPs. Evidence of a synergistic effect resulting from the combination of silver nanoparticles with five different antibiotics was demonstrated by reduced MICs against multiresistant, β-lactamase- and carbapenemase-producing Enterobacteriaceae [87]. Resistance of S. aureus to antibiotic treatment is a fast-growing global problem due to the slow development of new effective antimicrobial agents. Akram et al. [88] investigated the synergistic effect of five various antibiotics and AgNPs applied in combination with blue light against methicillin-resistant S. aureus (MRSA). These triple combinations of blue light, AgNPs, and an antibiotic considerably enhanced the antimicrobial activity against MRSA in comparison with treatments involving one or two agents.
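A standard way to quantify the kind of synergy reported above is the fractional inhibitory concentration index (FICI) from a checkerboard assay. The sketch below is a generic illustration of that arithmetic with hypothetical MIC values, not a method taken from the cited studies; by the usual convention, FICI of 0.5 or less is read as synergy.

```python
# Fractional inhibitory concentration index for an antibiotic/AgNP pair.
# FICI = MIC_A(combo)/MIC_A(alone) + MIC_B(combo)/MIC_B(alone)
def fici(mic_a_alone: float, mic_a_combo: float,
         mic_b_alone: float, mic_b_combo: float) -> float:
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical values: the antibiotic MIC drops 8-fold when a
# sub-inhibitory dose of AgNPs is added, and vice versa.
value = fici(mic_a_alone=32, mic_a_combo=4,   # antibiotic, mg/L
             mic_b_alone=8, mic_b_combo=1)    # AgNPs, mg/L
print(f"FICI = {value:.3f}")  # 0.250 -> synergistic by the usual cutoff
```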
Biofilm formation is an additional problem in resistance to antimicrobial agents and in chronic bacterial infections. It has been proposed that AgNPs can impede biofilm formation [89]. Hwang et al. [90] found that the combination of AgNPs with ampicillin, chloramphenicol, and kanamycin against various pathogenic bacteria inhibits the formation of biofilm. Deng et al. [91] examined the synergistic antibacterial mechanism of four different classes of conventional antibiotics in combination with AgNPs against the multidrug-resistant bacterium Salmonella typhimurium. The antibiotics enoxacin, kanamycin, neomycin, and tetracycline interact with AgNPs strongly, forming antibiotic-AgNP complexes, while no such effects were observed for ampicillin and penicillin. The complex with AgNPs interacts more strongly with the Salmonella cells and causes more Ag+ release, thus creating a temporarily high concentration of Ag+ near the bacterial cell wall that ultimately causes cell death.
Conclusion
The use of silver nanoparticles provides an opportunity to address the global problem of bacterial resistance to antibiotics. The possibilities for silver nanoparticle synthesis are very broad. In the last decade, scientific interest has grown dramatically in nanoparticle biosynthesis by the various reducing and capping agents present in biological sources including plants, plant extracts, microorganisms, or larvae. The natural green synthesis approach is eco-friendly and cost-effective because no toxic or dangerous chemicals are used.
One of the key aspects in the design of a more potent antibacterial system is the understanding of its mode of action. Generally, nanoparticles are well established as a promising alternative to antibiotic therapy or combination therapy because they possess remarkable potential for solving the problem of the development of pathogen resistance. Finally, from this point of view, silver nanoparticles represent a product with potential application in medicine and hygiene, and the green synthesis of AgNPs can pave the way for the same.
"year": 2018,
"sha1": "6a57ea7278876ff9c48e561de01ff6b26b40a75b",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/57963",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "28db6264a4a7efdf1ba8ab82ff9179e943c71f6b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Origin of the Variability of the Antioxidant Activity Determination of Food Material
A wide variety of one-dimensional methods have been developed to measure antioxidant activity in vitro. The methodological diversity is due to the use of a broad range of conditions for antioxidant activity measurement. This diversity has led to widely conflicting results that are extremely difficult to interpret.
Introduction
With the development of functional foods having beneficial effects on health, the interest of scientists, consumers, and industrialists in raw materials rich in antioxidants has increased considerably over the last few years. Moreover, consumers require more accurate information on the composition of the food they eat. Natural antioxidants such as phenolic compounds have been reported to possess beneficial bioactivities due to their capacity to act as antioxidant, anticarcinogenic, antibacterial, antimutagenic, anti-inflammatory, and antiallergic agents. These activities contribute to bringing a feeling of well-being to consumers. In fact, it has been observed that the consumption of foods rich in antioxidants leads to a decrease in some diseases such as diabetes, cancer, and cardiovascular or neuronal diseases [1]. To evaluate the antioxidant activity of raw materials, several methods have been investigated. These methods go through two steps: the first is the extraction of the antioxidant compounds from the matrix of the raw material, and the second consists in the determination of the antioxidant activity. For each step, several alternatives are described in the bibliography. The data obtained show a large variability depending on the method used. This renders the choice of an appropriate method a very sensitive task. The objective of this chapter is to make a critical study of the various methods of evaluation of the antioxidant activity of food raw materials. For this, in the first part, we will underline the different sources of variability using several examples of food raw materials. Then, in the second part, the different methods used for the extraction of antioxidants will be detailed. The third part will bring a complete view of the existing methods to measure antioxidant activity. The last part will deal with a general discussion on the meaning of the different values of antioxidant activity.
These results indicate that, for a given method, the extraction procedure has a great impact on antioxidant activity values. For example, with the ABTS method, extraction with methanol as solvent assisted by ultrasound leads to 6.7 μmol TE/g FW, while extraction with a mixture of methanol and water (80% v/v) or with acetone furnishes only 0.94 μmol TE/g FW. The measurement of bioavailability directly in plasma gives a value of 8.3 μmol TE/g FW. Different values are also obtained depending on the extraction method used with the FRAP or ORAC protocols.
The analysis of the results of antioxidant activity of pure components and extracts from the food matrix indicates a broad variability in antioxidant values whatever the method used. This variability is also observed, for a given method, with the variety or the degree of maturation of the food raw material. This variability of the antioxidant activity determination can be attributed to three sources: (i) factors related to the food product, such as the variety and the growth method; (ii) factors related to the extraction method, such as pH, temperature, solvent, and the presence of an accelerator; and (iii) factors related to the method used for the antioxidant activity determination.
Extraction techniques
To measure the antioxidant activity of food raw materials, the active molecules must first be extracted from the food matrix. The extraction of phenolic compounds is affected by several factors such as the pH, the temperature, and the solvent used. Thus, the optimization of this step requires a judicious choice of the set points of these factors. However, few studies in the bibliography have been devoted to the optimization of these factors.
Moreover, these factors need to be adjusted according to the matrix of the raw material and the quantity of antioxidant molecules. To help in the choice of the most suitable extraction method, the main processes described in the literature are summarized in Table 4. The advantages and drawbacks of each process are also reported. Different processes for the extraction of active compounds are available. However, the effectiveness of these processes is affected by several factors such as the nature of the solvent, the temperature, or the extraction time. The presence of an accelerator of extraction, such as microwaves or ultrasound, also plays a significant role. The availability of the active molecule must also be taken into account. The analysis of the efficiency of the different processes described above indicates that the use of accelerators provides higher yields than solid-liquid extraction (SLE) while allowing a low temperature to be maintained. The least advantageous method is solid-liquid extraction, due to the toxicity of the solvent and the long extraction time in the majority of cases. The use of microwaves (MAE) as an accelerator is highly acclaimed as an alternative method. The use of ultrasound (UAE) also allows an enhancement of the extraction of active compounds at low temperatures, but it leads to lower yields than microwaves. Other accelerators can be used (ASE, PLE), but they require increased pressure and/or temperature, which can damage target molecules or alter their properties. Supercritical fluid extraction (SFE) does not use drastic conditions, but the molecules to be extracted must be soluble in liquid CO2; the use of a co-solvent may be necessary if antioxidants are poorly soluble in CO2. So, the difference in efficiency between the extraction methods used for antioxidant activity determination could be at the origin of the variability observed in the bibliography.
Thus, the choice of an extraction method needs to take into account the nature of the food matrix and the structure of the molecule to be extracted. The physico-chemical factors of the extraction must also be adjusted carefully. In conclusion, there is a great need to standardize extraction methods by establishing common protocols and paying close attention to the extraction conditions.
In vitro methods for antioxidant activity measurement
An antioxidant is usually defined as a molecule that delays, prevents or removes oxidative damage to a target molecule [17]; an antioxidant is therefore assessed according to its ability to neutralize free radicals, as for example in equation 1, and thereby avoid oxidative degradation.
Equation legends: AO, antioxidant molecule; FR⋅, free radical (equation 1); AH, antioxidant molecule; R⋅, free radical (equation 2). Free radicals are reactive oxygen species produced either through biological reactions (the mitochondrial respiratory chain, inflammatory conditions) or by environmental factors such as pollutants, UV radiation, alcohol, smoking, stress and drugs. In low quantities, free radicals are useful: they allow the elimination of old cells of the living organism by oxidation reactions and participate in the body's defense. However, when too numerous, they attack other cells, inducing rapid aging of these cells and damage to the living organism. To avoid these reactions, antioxidants can neutralize free radicals and protect our cells. When the antioxidant supply is not sufficient to neutralize the free radicals, oxidative stress results; oxidative stress is of great importance in the development of chronic degenerative diseases, including coronary heart disease, cancer and the degenerative processes associated with aging.
Antioxidants can neutralize radicals by two different mechanisms. The final product is the same, but the reactions involved are different. Radicals can be deactivated either by hydrogen donation (hydrogen atom transfer, HAT) or by electron transfer (single electron transfer, SET). HAT and SET mechanisms may occur in parallel, the predominant mechanism being determined by the antioxidant structure and properties, solubility, partition coefficient and solvent system [18]. A wide variety of one-dimensional methods have been developed to measure antioxidant activity in vitro. This methodological diversity stems from the broad range of conditions used for antioxidant activity measurement, and it has led to widely conflicting results that are extremely difficult to interpret.
Systems based on SET
SET-based methods involve two components in the reaction, i.e., the antioxidant and the oxidant. These methods measure the ability of an antioxidant to reduce an oxidant (a metal or a radical) by electron transfer, according to equations 3 and 4.
SET reactions are pH dependent. Indeed, relative reactivity in SET methods is based primarily on the deprotonation and ionization potential of the reactive functional groups. Since the ionization potential decreases when the pH increases, SET reactions are favored in alkaline environments. SET reactions are usually slow and can require a long time to reach completion, so antioxidant capacity calculations are based on the decrease in product concentration rather than on reaction kinetics.
• ABTS (2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid)) assay
The ABTS assay is a spectrophotometric method that measures the ability of an antioxidant to scavenge the free radical cation ABTS▪+. This method was developed by [19] and adapted by [20] to generate the ABTS▪+ radical directly through a reaction between an ABTS solution (7 mM) and potassium persulfate (2.45 mM) in water. The reaction mixture, which is allowed to stand at room temperature for 12-16 h before use, produces a dark blue solution. The mixture is then diluted with ethanol or phosphate-buffered saline (pH 7.4) to a final absorbance of 0.7 at 734 nm (the most commonly used wavelength) and 37 °C. The assay is based on the discoloration of ABTS▪+ during its reduction by antioxidant compounds, thus reflecting the amount of ABTS radicals scavenged within a fixed time period (generally 6 min). The absorbance of the reaction mixture of radicals and antioxidants is compared with that of 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox). When Trolox is used as the standard, this assay is also called the Trolox Equivalent Antioxidant Capacity (TEAC) assay.
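To make the TEAC calculation concrete, the following minimal Python sketch converts absorbance readings into Trolox equivalents; the absorbances, the Trolox calibration series and the function names are hypothetical, chosen only for illustration.

```python
import numpy as np

def percent_inhibition(a_blank: float, a_sample: float) -> float:
    """Decolorization of the ABTS radical cation after the fixed reaction time (~6 min)."""
    return (a_blank - a_sample) / a_blank * 100.0

# Hypothetical Trolox calibration series (microM) and measured inhibitions (%).
trolox_conc = np.array([0.0, 2.5, 5.0, 10.0, 15.0])
trolox_inhib = np.array([0.0, 8.9, 17.6, 35.1, 52.4])

# Linear standard curve: inhibition = slope * concentration + intercept.
slope, intercept = np.polyfit(trolox_conc, trolox_inhib, 1)

def teac(a_blank: float, a_sample: float) -> float:
    """Trolox-equivalent concentration (microM) of a diluted extract."""
    return (percent_inhibition(a_blank, a_sample) - intercept) / slope

print(f"TEAC = {teac(0.70, 0.42):.1f} microM Trolox equivalents")
```

A value read off a diluted extract in this way is then scaled by the dilution factor and the sample mass to give μmol TE per gram of fresh weight.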
The major advantages of this method are its simplicity and its applicability in both lipid and aqueous phases [21]; it has therefore been widely used to test the antioxidant capacity of food samples. Moreover, the ABTS radical is stable over a wide pH range and can be used to study pH effects on antioxidant mechanisms [22]. The method can be automated and adapted to microplates, which improves both the precision and the throughput of the measurement.
A major disadvantage of this method is that only rapid oxidation reactions can be measured, because the incubation time is often short (6 min). Antioxidants with low radical-scavenging rate constants can thus be undervalued relative to their real antioxidant capacity. Moreover, ABTS values can be made less precise by variations in the preparation of ABTS▪+ and in the medium temperature, which must be controlled.
• DPPH (2,2-diphenyl-1-picrylhydrazyl) assay
DPPH is one of the oldest and most popular techniques used to measure the antioxidant activity of a compound. The method was first described by [23] and subsequently modified by numerous researchers. It measures the reducing ability of antioxidants toward DPPH▪. DPPH▪ is commercially available and does not have to be generated, as in the ABTS assay. The antioxidant effect is proportional to the disappearance of DPPH▪ in a methanolic solution. As the DPPH solution is purple, the absorbance of the mixture can be followed by spectrophotometry at 515 nm. Assay times may vary from 10-20 min up to 6 h. Other techniques, such as electron paramagnetic resonance (EPR), can also be used [18].
Like ABTS, this method is simple and can be automated. However, values found by the DPPH method have to be considered apparent antioxidant activities because (i) DPPH color can be lost via radical reaction (HAT), reduction (SET) or unrelated reactions, and (ii) steric accessibility also influences the reaction: small molecules are favored because they have better access to the radical site, and other compounds such as carotenoids can interfere with the measurement of the antioxidant activity [24].
• Ferric reducing ability of plasma (FRAP)
The FRAP assay differs from the others in that no free radicals are involved; instead, the reduction of ferric iron (Fe3+) to ferrous iron (Fe2+) is monitored. The FRAP assay was initially described by [25] for measuring the reducing power of plasma and was subsequently adapted and modified by numerous researchers to measure the antioxidant power of botanical extracts [26].
When an Fe3+-TPTZ (2,4,6-tripyridyl-s-triazine) complex is reduced to Fe2+ by an antioxidant under acidic conditions, it forms an intense blue color with an absorption maximum at 593 nm. The antioxidant effect can thus be followed with a spectrophotometer.
The major advantages of the FRAP assay are its simplicity, speed and robustness. The assay has been validated for quantifying samples containing hydrophilic and lipophilic antioxidants. As with the ABTS assay, only rapid reactions are taken into account, because the incubation time is short (4-6 min). The FRAP assay measures only reactions following the SET mechanism, so hydrogen-donating antioxidants may go unmeasured; the method is therefore used in parallel with others to determine the action mechanisms of antioxidants. Protein and thiol antioxidants, such as glutathione, cannot be measured by the FRAP assay.
• CUPric Reducing Antioxidant Capacity (CUPRAC) assay
The CUPRAC assay has many similarities to FRAP, except that Cu is used instead of Fe. The assay is based on the reduction of Cu(II) to Cu(I) by the antioxidants present in the sample. Cu(I) forms a complex with neocuproine (2,9-dimethyl-1,10-phenanthroline) with a maximum absorbance at 450 nm. A dilution curve generated with a uric acid standard is used to convert sample absorbance to uric acid equivalents [18]. Phenanthroline complexes have very limited water solubility and must be dissolved in organic solvents. CUPRAC values are comparable to TEAC values, whereas FRAP values are lower. The CUPRAC assay has many advantages [27]. Indeed, it is more selective due to its lower redox potential: sugars and citric acid do not interfere in the assay because they are not oxidized in CUPRAC. The CUPRAC reagent is much more stable than radicals such as DPPH and ABTS. The redox reaction giving rise to the coloured Cu(I)-Nc chelate is relatively insensitive to a number of parameters, such as air, sunlight, humidity and pH. The CUPRAC reagent can also be adsorbed on a membrane to build an optical antioxidant sensor.
Systems based on HAT
HAT-based methods involve a synthetic radical generator, an oxidisable molecular probe and an antioxidant compound. These methods measure the ability of an antioxidant to quench free radicals by hydrogen donation, as in equation 2. Assays based on HAT mechanisms measure competitive kinetics [22].
An antioxidant with a hydroxyl (OH) group donates an H atom to an unstable free radical to give a more stable radical. HAT reactions are solvent and pH independent and are usually quite rapid, typically completed in seconds to minutes. The presence of reducing agents, including metals, is a complication in HAT assays and can lead to erroneously high apparent reactivity [18].
• Oxygen radical absorbance capacity (ORAC)
The ORAC assay has been widely used to measure the net resultant antioxidant capacity (or peroxyl radical absorbance capacity) of botanical and other biological samples.
The ORAC assay was developed by [28] for the determination of reactive oxygen species in biological systems. [29] modified the method by using fluorescein (FL) as a more stable and reproducible fluorescent probe. Several adaptations of this method exist, but the principle always remains the same: a fluorescent probe is used together with AAPH (2,2'-azobis(2-amidinopropane) dihydrochloride) to generate peroxyl radicals. A HAT reaction occurs between the antioxidant sample (or standard) and the peroxyl radicals generated by thermal degradation of AAPH. These reactions lead to a loss of fluorescence, measured at 515 nm.
The final results (ORAC values) are calculated from the differences between the blank and sample areas under the fluorescein quenching curves, and are expressed as micromoles of Trolox equivalents (TE).
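The net-AUC calculation lends itself to a short numerical sketch. In the hedged Python example below, the fluorescence traces and the Trolox calibration factor are simulated placeholders; only the trapezoid-rule area computation reflects the actual procedure.

```python
import numpy as np

def auc(times_min, fluorescence):
    """Area under the fluorescence-decay curve, normalized to the initial reading."""
    f = np.asarray(fluorescence, dtype=float)
    return np.trapz(f / f[0], np.asarray(times_min, dtype=float))

t = np.arange(0, 62, 2)                          # readings every 2 min for 60 min
blank = np.exp(-t / 8.0)                         # fast quenching without antioxidant
sample = np.exp(-np.maximum(t - 12, 0) / 8.0)    # a lag phase from the antioxidant

net_auc = auc(t, sample) - auc(t, blank)

# A Trolox standard series would give the net AUC per microM Trolox; dividing by
# that factor (a hypothetical value here) expresses the result in Trolox equivalents.
net_auc_per_uM_trolox = 1.1
print(f"ORAC ~ {net_auc / net_auc_per_uM_trolox:.2f} microM Trolox equivalents")
```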
The ORAC method is considered superior to similar methods because it integrates both the degree and the duration of free-radical inhibition. ORAC using fluorescein is specific for antioxidants and is sensitive, precise and robust. The assay can model reactions of antioxidants with lipids in both food and physiological systems, and it can be adapted to detect both hydrophilic and hydrophobic antioxidants with minor modifications. However, the need for a fluorometer, which may not be routinely available, is considered a disadvantage of this method. The long analysis time has also been a major criticism, even though the assay can be automated.
• β-carotene bleaching test
This assay was developed by [30] and modified by other researchers. It is based on the generation of a stable β-carotene radical from the β-carotene peroxyl radical, the latter coming from lipids (linoleic acid, for example) in the presence of ROS and O2. The assay thus measures the ability of an antioxidant to quench the β-carotene radical by donating hydrogen atoms. The result is a bleaching of the solution, which can be followed with a spectrophotometer at 470 nm.
The main advantage of this assay is its applicability in both lipophilic and hydrophilic environments. Another advantage is that the carotenoid bleaching assay can detect either the antioxidant or the pro-oxidant action of a compound under investigation. Lastly, the assay can be automated by the use of microplates. However, a major limitation is that the discoloration of β-carotene at 470 nm can occur through multiple pathways, thereby complicating the interpretation of results. Other carotenoids, such as crocin, bleach only via the radical oxidation pathway, but crocin is not commercially available, and the use of commercially available molecules provides repeatable and reliable data between laboratories.
• Total peroxyl radical-trapping antioxidant parameter assay (TRAP)
The total peroxyl radical-trapping antioxidant parameter (TRAP) assay was introduced by [31] to measure the total antioxidant status of human plasma. The assay monitors the ability of an antioxidant to interfere with the reaction between the target and peroxyl radicals generated by AAPH (2,2'-azobis(2-amidinopropane) dihydrochloride). The oxidation is monitored by measuring oxygen uptake. Results are expressed as the time necessary to consume all radicals, in comparison with a standard (Trolox). Many modifications of this assay have been made so that it reacts with lipids, can be followed by fluorimetry, or takes into account interference from lipids and proteins in plasma [32]. Despite its simplicity, the TRAP assay yields imprecise results because of the difficulty of maintaining the endpoint over time; several modifications based on chemiluminescence methods have been developed to address this [33].
Discussion - Conclusion
Many results on the determination of the antioxidant activity of purified molecules and/or food raw materials have been published over the last few years. However, the data obtained show broad variability, even for a given method or a given molecule. To overcome these problems, some authors have proposed alternatives, developing new methods or new ways to process data and express the results. [34] proposed the "quencher" method, in which the antioxidant activity is measured directly on the solid sample without an extraction step: free radicals are mixed with the food sample and a spectrophotometric method (ABTS, DPPH) is used. [35] developed the global antioxidant response (GAR) method, an in vitro approach with enzymatic digestion designed to mimic digestion through the gastrointestinal tract and aimed at releasing the antioxidants in foods. [36] suggested a general method for standardizing estimations of total antioxidant activity (TAA) by extrapolating parameters to zero sample concentration based on a pseudo-first-order kinetics model (a sketch of this extrapolation idea is given after this paragraph); accurate results were obtained in comparison with the ABTS method. Moreover, several papers deal with the standardization of the extraction procedures and of the analysis of results for a given method, in order to minimize the observed variability [37]. However, it appears difficult to find a universal method, knowing that many kinds of antioxidants and radicals are present. Four general sources of antioxidants have been identified: (i) enzymes (superoxide dismutase, glutathione peroxidase), (ii) large molecules (albumin, ferritin), (iii) small molecules (phenols, ascorbic acid, carotenoids) and (iv) some hormones (estrogen, melatonin). Many kinds of free radicals can be found, for example O2▪−, HO▪, NO▪, RO(O)▪ and LO(O)▪. Moreover, the stability and selectivity of the radicals and the reaction mechanisms can also differ. Thus, it is possible that no single method is able to express the antioxidant capacity of different antioxidants, taken independently or in a mixture [18]. Previous studies demonstrated that it is not appropriate to use one-dimensional methods to evaluate the antioxidant activity of multifunctional foods such as fruits and vegetables, since they contain a large diversity of natural antioxidants.
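One plausible reading of the extrapolation approach of [36] is sketched below: pseudo-first-order decay curves are fitted at several sample concentrations, and the fitted rate constant is then extrapolated linearly to zero concentration. Which parameter is extrapolated in the original method, as well as every numerical value here, is an assumption made purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, a_inf, a0, k):
    """Radical-probe absorbance decay, A(t) = A_inf + (A0 - A_inf) * exp(-k*t)."""
    return a_inf + (a0 - a_inf) * np.exp(-k * t)

t = np.linspace(0.0, 30.0, 16)   # reaction time, minutes
concs = [0.5, 1.0, 2.0]          # hypothetical sample concentrations (mg/mL)

rates = []
for k_true in [0.15, 0.25, 0.45]:                       # simulated traces
    a = pseudo_first_order(t, 0.08, 0.70, k_true)
    (a_inf, a0, k), _ = curve_fit(pseudo_first_order, t, a, p0=[0.1, 0.7, 0.2])
    rates.append(k)

# Extrapolate the apparent rate constant linearly to zero sample concentration.
slope, intercept = np.polyfit(concs, rates, 1)
print(f"rate constant extrapolated to c = 0: {intercept:.3f} 1/min")
```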
The determination of antioxidant activity in the food matrix requires sample preparation to extract the active molecules, followed by an accurate measurement method and an appropriate expression of the results. (i) During sample preparation, precautions must be taken to avoid the loss of antioxidants under drastic extraction conditions. A determination of all food constituents is necessary, because some of them can interfere with the antioxidants. Antioxidant capacity values should only be compared where the method, the solvent and the analytical conditions are similar [38]. Indeed, some authors have underlined that the solvent used for the extraction, or used to solubilize the antioxidants, affects the result of the antioxidant activity evaluation [39][40][41][42], owing to interference between the reaction mechanism and the solvent [38].
(ii) The method used to measure the antioxidant activity must be chosen according to the nature of the active molecules present in the samples; some methods described in part 3 are more appropriate for certain kinds of antioxidants (for example, the DPPH method is better suited to lipophilic systems), and several assays should be carried out to determine an antioxidant activity value. (iii) The results of antioxidant activity measurements can be expressed as EC50 (the quantity of antioxidant necessary to achieve 50% depletion of the free radicals), tEC50 (the time needed to reach 50% depletion of the free radicals) or AE (the antiradical efficiency, defined as the inverse of the product of EC50 and tEC50). Taking these three parameters into account can thus provide a more comprehensive evaluation of antioxidant activity [38]; a worked sketch of the calculation follows.
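As a worked illustration of these three parameters, the sketch below interpolates EC50 from a hypothetical dose-response series and combines it with a hypothetical tEC50 to obtain AE; all numbers are invented for the example.

```python
import numpy as np

# Hypothetical dose-response: antioxidant dose (mg/mL) vs. the percentage of
# free radical remaining once the reaction has reached its steady state.
dose = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
remaining = np.array([88.0, 72.0, 51.0, 29.0, 12.0])

# EC50: the dose that leaves 50% of the radical (linear interpolation;
# np.interp needs the x-values in increasing order, hence the reversal).
ec50 = np.interp(50.0, remaining[::-1], dose[::-1])

t_ec50 = 14.0  # minutes to reach 50% depletion at the EC50 dose (measured)

ae = 1.0 / (ec50 * t_ec50)  # antiradical efficiency, AE = 1/(EC50 * tEC50)
print(f"EC50 = {ec50:.3f} mg/mL, tEC50 = {t_ec50:.0f} min, AE = {ae:.3f}")
```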
The determination of antioxidant capacity by in vivo methods is not always feasible, but it is attractive because it reflects an environment closer to what actually occurs in biological systems. Methods using HAT reactions are to be preferred over SET reactions, because the peroxyl radicals used in HAT assays are the predominant free radicals found in lipid oxidation and in biological systems. To establish a full profile of antioxidant capacity against the various ROS, the development of methods specific for each ROS/RNS may be needed. [18] compared different in vitro methods and concluded that ORAC, TRAP and LDL are the most biologically relevant assays, because the antioxidant capacity they measure most closely reflects the in vivo action of the antioxidants. It thus appears clearly that the determination of antioxidant activity requires a standardization of the procedures used and a combination of at least two or three methods; the use of only one method does not reflect the antioxidant activity of a food raw material, owing to the variability of the molecules that act as antioxidants. | 2018-09-13T00:28:08.791Z | 2015-04-15T00:00:00.000 | {
"year": 2015,
"sha1": "c4e819eae8cf89371d40155063e20cfbe86a0290",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/48308",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "42a3cdff5582ff650f1c81dcde18017abc62a48f",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
2417571 | pes2o/s2orc | v3-fos-license | Analysis of Meiosis in SUN1 Deficient Mice Reveals a Distinct Role of SUN2 in Mammalian Meiotic LINC Complex Formation and Function
LINC complexes are evolutionarily conserved nuclear envelope bridges, composed of SUN (Sad-1/UNC-84) and KASH (Klarsicht/ANC-1/Syne/homology) domain proteins. They are crucial for nuclear positioning and nuclear shape determination, and also mediate the nuclear envelope (NE) attachment of meiotic telomeres that is essential for driving homolog synapsis and recombination. In mice, SUN1 and SUN2 are the only SUN domain proteins expressed during meiosis, sharing their localization with the meiosis-specific KASH5. Recent studies have shown that loss of SUN1 severely interferes with meiotic processes: absence of SUN1 provokes defective telomere attachment and causes infertility. Here, we report that meiotic telomere attachment is not entirely lost in mice deficient for SUN1; rather, numerous telomeres are still attached to the NE through SUN2/KASH5-LINC complexes. In Sun1−/− meiocytes, attached telomeres retained the capacity to form bouquet-like clusters. Furthermore, we could detect significant numbers of late meiotic recombination events in Sun1−/− mice. Together, this indicates that even in the absence of SUN1, telomere attachment and telomere movement within the nuclear envelope can per se be functional.
Introduction
Nuclear anchorage and movement, including the directed repositioning of components within the nucleus, are essential for coordinated cell division, proliferation and development [1]. As these processes are largely dependent on cytoskeletal components, the cytoskeleton needs to interact with both the nuclear envelope (NE) and the nuclear content [2]. In this context, the so-called LINC (linker of nucleoskeleton and cytoskeleton) complexes emerged as the key players, in that they represent the central connectors of the nucleus and its content to diverse elements of the cytoskeleton [2][3][4]. LINC complexes are widely conserved in evolution with regard to their composition and function. They are composed of SUN (Sad-1/UNC-84) domain proteins that reside in the inner nuclear membrane (INM) and bind to KASH (Klarsicht/ANC-1/Syne/homology) domain proteins of the outer nuclear membrane (ONM) [4,5]. Through specific interactions of SUN domain proteins with nuclear components, such as lamins, and of KASH domain proteins with the cytoskeleton, the SUN-KASH complexes are able to transfer mechanical forces of the cytoskeleton directly to the NE and into the nucleus [6,7].
During meiosis, telomeres are tethered to and actively repositioned within the NE. The characteristic telomere-led chromosome movements are an evolutionarily highly conserved hallmark of meiotic prophase I; they are a prerequisite for ordered pairing and synapsis of homologous chromosomes [8,9]. Directed chromosome movement, pairing and recombination are closely interdependent processes and their correct progression is essential for the faithful segregation of homologous chromosomes into fertile gametes. Failure in any of these processes leads to massive meiotic defects and, consistent with this, mutant mice showing defects in meiotic telomere attachment, chromosome dynamics or synapsis formation are mostly infertile due to apoptosis during prophase I [10][11][12][13].
The attachment of meiotic telomeres to the NE is mediated by SUN-KASH protein complexes [11,[14][15][16][17][18][19][20][21]. Of the five SUN domain proteins known in mammals, SUN1 and SUN2 have been shown to be the only ones that are also expressed in meiotic cells [11,22]. Recently, a novel meiosis-specific KASH domain protein, KASH5, was identified as a constituent of the meiotic telomere attachment complex [23,24]. With this, the first fully functional and complete mammalian meiotic LINC complex, comprised of SUN1 and/or SUN2 within the INM and KASH5 as the ONM partner, has been characterized. Nonetheless, many aspects of mammalian meiotic telomere attachment and movement, including its regulation, are not yet fully understood. To date, SUN1 and SUN1/SUN2 deficient mice have been studied to investigate both somatic and meiotic functions of SUN1 and SUN2 [11,21,25,26]. These studies have provided clear evidence that in somatic cells SUN1 and SUN2 play partially redundant roles. However, it also turned out that mice deficient in SUN1 are infertile due to serious problems in attaching meiotic telomeres to the nuclear envelope [11,21], demonstrating the importance of SUN1 for meiotic cell division. Although SUN2 was found to be present at the sites of telomere attachment during meiotic prophase I, the SUN1 deficient phenotype demonstrated that SUN2 apparently is not able to effectively compensate for the loss of SUN1 in meiosis [11,21,22]. To learn more about the distinctive roles of SUN1 and SUN2 in meiotic telomere function and behavior, we started a detailed re-evaluation of the meiotic phenotype caused by SUN1 deficiency. In our current study we now show that in the absence of SUN1, meiotic telomere attachment is actually not entirely lost, pointing to the existence of a SUN1-independent, partially redundant attachment mechanism. Consistent with this, we found that in Sun1−/− mice NE-attached telomeres co-localize with SUN2 and KASH5, suggesting that telomere attachment is mediated by SUN2/KASH5-LINC complexes in SUN1 deficient meiocytes. Furthermore, Sun1−/− meiocytes showed clustering patterns of the NE-attached telomeres that resembled typical bouquet-like configurations, indicating that SUN2 is not only sufficient to connect a significant portion of telomeres to the NE, but rather is part of a functional LINC complex capable of transferring the cytoplasmic forces required to move telomeres.
Results and Discussion
Though NE-attachment of telomeres is disturbed in SUN1 deficient mice, numerous telomeres can still be found attached to the NE

In recent years, it has been established by several groups that meiotic telomere attachment in mammals involves SUN1 and SUN2 as parts of the NE-spanning LINC complex connecting the meiotic telomeres to the cytoskeleton [11,21,22]. To analyze SUN1 function, two independent SUN1 deficient mouse models have been generated so far (here referred to as Sun1(Δex10-13) [11] and Sun1(Δex10-11) [21]), which both revealed a virtually identical, exclusively meiotic phenotype: both male and female SUN1 deficient mice showed severe meiotic defects, which were ascribed to massive problems in meiotic telomere attachment [11,21]. Although SUN2 compensates for the loss of SUN1 in somatic cells, SUN2 overtly does not have the competence to counterbalance the loss of SUN1 in meiocytes, and hence it was described that telomere attachment is prevented in Sun1−/−(Δex10-13) mice [11]. Since we have previously found SUN2 expressed in meiocytes, where it localizes to the sites of telomere attachment [22], this raises the question of the real function of SUN2 in meiosis. To investigate the actual role of SUN2 during meiosis, we therefore initiated a detailed analysis of telomere attachment in SUN1 deficient meiocytes, starting with spermatocytes and oocytes from Sun1−/−(Δex10-11) mice, which were previously demonstrated to be SUN1 deficient [21]. Worth mentioning, using antibodies recognizing an epitope encoded by exons 13 to 14 [27], we could confirm that these mice in fact do not express a functional SUN1 protein (data not shown). To study telomere behavior in SUN1 deficient mice, we combined telomere fluorescence in-situ hybridization with immunocytochemical labeling of the lamina and the synaptonemal complexes in spermatocytes and oocytes of SUN1 knockout and wildtype littermate mice (Figure 1A). As expected, in wildtype spermatocytes and oocytes all telomere signals that are clearly associated with the ends of synaptonemal complexes are embedded within the lamina (Figure 1A and A″). Consistent with the previously published results [11,21], we found that telomere attachment to the nuclear envelope is significantly disturbed in SUN1 deficient meiocytes (Figure 1A′ and A‴). This is evident from telomere signals located in the nuclear interior, at a significant distance from the NE. However, within the same meiocytes, we found that numerous telomere signals were still embedded within the lamina (arrowheads in A′ and A‴), indicating that in the absence of SUN1 telomere attachment may not be entirely lost, but only reduced. The unexpectedly high numbers of peripheral, nuclear envelope-associated telomere signals observed in both spermatocytes and oocytes of Sun1−/−(Δex10-11) mice (see below) gave the impression that at least a portion of the peripheral telomeres might be structurally anchored at the nuclear envelope, which would clearly contradict the previous notion that loss of SUN1 completely prevents telomere attachment [11]. To clarify whether these telomeres are truly attached or merely located in close vicinity to the NE, we prepared testis tissue and ovary samples for electron microscopy, as both synaptonemal complexes and sites of telomere attachment can easily be detected in electron micrographs (Figure 1B-B′, C-C′).
To affirm that putative attachment does not depend on the knockout genotype, we analyzed samples from both currently available SUN1 deficient strains, Sun1(Δex10-13) [11] and Sun1(Δex10-11) [21]. As anticipated, fully synapsed stretches of synaptonemal complexes attached to the nuclear envelope were clearly evident in all control samples of pachytene spermatocytes and oocytes. Remarkably, oocytes and spermatocytes from both SUN1 deficient strains likewise displayed synaptonemal complex ends attached to the nuclear envelope.
To define the percentage of attached telomeres in Sun1−/−(Δex10-11) spermatocytes, we quantified the numbers of attached and non-attached telomeres in three-dimensionally preserved nuclei of cells simultaneously labeled for the nuclear lamina, the synaptonemal complexes and telomeres. To further evaluate whether the absence of SUN1 impacts telomere attachment in a stage-dependent manner during meiotic progression, we additionally quantified and compared telomere attachment in spermatocytes at early leptonema, zygonema and pachynema. For this, we prepared tissue samples from wildtype and knockout littermates aged 12 and 14 days post partum (dpp). As the development of spermatocytes within the seminiferous tubules is nearly synchronized during the first wave of spermatogenesis [28], at 12 dpp most spermatocytes within the tubules could be found at early leptonema to early zygonema. In tubules where early leptotene spermatocytes predominated, telomere attachment was not complete in either wildtype or knockout spermatocytes, probably due to the very early meiotic stage (77.7% and 64.5% attached telomeres in wildtype and knockout, respectively; Figure 1D; Figure S1). In tubules where early zygotene spermatocytes had accumulated, all wildtype spermatocytes showed complete telomere attachment, whereas in knockout zygotene spermatocytes not more than 71.2% of all telomeres appeared to be NE-attached (Figure 1D′; Figure S1). We observed similar rates of telomere attachment in spermatocytes of 14 dpp mice, where pachytene stages predominated. Here, wildtype spermatocytes again showed complete attachment of all telomeres, whereas Sun1−/−(Δex10-11) males showed only 69.8% of telomeres attached to the NE (Figure 1D″). These results imply that the process of telomere attachment is induced despite SUN1 deficiency, yet full telomere attachment is never reached. Almost equivalent rates of attachment were detected in zygotene and pachytene spermatocytes of Sun1−/−(Δex10-11) mice, suggesting that once telomeres succeed in attaching, they maintain their association with the NE throughout prophase I, even in the absence of SUN1. This indicates that attachment of telomeres to the NE without SUN1 is stable enough to withstand potential mechanical forces generated by the chromatin or the cytoskeleton. The unexpected, relatively large proportion of telomeres that are still capable of stably attaching to the NE without SUN1 clearly points towards the existence of a partially redundant, SUN1-independent attachment mechanism.
KASH5 localizes to NE-associated telomeres in SUN1 deficient meiocytes

Very recently, it has been described that the meiotic tethering of telomeres to the cytoskeleton is mediated by the novel meiosis-specific KASH protein KASH5 [23,24]. To clarify whether KASH5 is also involved in the attachment of telomeres in SUN1 deficient meiocytes, we conducted immunofluorescence experiments labeling KASH5 and SYCP3, a major component of the lateral elements of synaptonemal complexes [29], in wildtype and SUN1 knockout spermatocytes. Consistent with earlier reports [23,24], strong KASH5 foci at the ends of synaptonemal complexes were detected in all wildtype pachytene spermatocytes (Figure 2A), labeling telomeres attached to the NE. However, in contradiction to earlier reports [23,24], in our hands KASH5 foci were also consistently present in Sun1−/−(Δex10-11) spermatocytes, in several independent experiments and different animals tested (Figure 2A′, A″). Although significantly weaker than in the wildtype tissue, the KASH5 signals in SUN1 deficient meiocytes nevertheless showed a wildtype-like distribution: KASH5 in the SUN1 deficient spermatocytes was localized precisely at those ends of synaptonemal complexes that are in close contact with the NE. These experiments again corroborate that in the absence of SUN1 the remaining NE-associated telomeres are indeed attached to the NE. Beyond this, the attached telomeres are connected to the cytoskeleton through a linkage that involves KASH5.
Even in the absence of SUN1, SUN2 co-localizes with KASH5 at the sites of telomere attachment

In an earlier publication [22] we were able to demonstrate that SUN2 is expressed throughout meiotic prophase I, where it co-localizes with attached telomeres in wildtype mice. Therefore, it is tempting to speculate that telomere attachment in the absence of SUN1 is mediated by SUN2. To follow up on this, we generated SUN2-specific antibodies and used them in co-immunolocalization experiments together with antibodies against SYCP3. Consistent with our previous results, our newly generated antibodies produced the previously reported SUN2 foci at the ends of synaptonemal complex axes in both wildtype spermatocytes and oocytes (Figure 3A, B; [22]). Similar to the wildtype situation, SUN2 foci of comparable intensities were also present in spermatocytes and oocytes at different meiotic prophase stages from Sun1−/−(Δex10-11) mice (Figure 3A′, A″, B′, B″). This again demonstrates that SUN2 is indeed located at meiotic telomeres. As SUN2 is the only SUN domain protein expressed in Sun1−/− meiocytes, it appears likely that it is in fact SUN2 that mediates the observed telomere attachment in the SUN1 deficient mice. To further investigate the attachment of telomeres in the Sun1−/−(Δex10-11) mice, in particular with regard to possible KASH protein partners, we conducted co-immunostaining experiments using KASH5 and SUN2 antibodies on paraffin testis sections from mice of different ages (12 dpp and adult) (Figure 3C, C′). Clearly, as anticipated for a functional meiotic LINC complex, the KASH5 and SUN2 foci in the Sun1−/−(Δex10-11) spermatocytes co-localized, labeling those telomeres that are attached to the NE in the absence of SUN1. In summary, these results indicate that SUN2 localization to meiotic telomeres can occur independently of SUN1, which is in accordance with previous reports of unchanged SUN2 localization in somatic nuclei of Sun1−/− mice [26]. Furthermore, by means of the results presented here, SUN2 appears to be, at least to some extent, sufficient for meiotic telomere attachment to the NE. Regarding its possible interaction with KASH5, yeast two-hybrid studies have previously shown that the KASH domain of KASH5 is in effect able to interact with the C-terminal domains of both SUN1 and SUN2 [23]. This, in combination with our results, leads us to the conclusion that SUN2 may also form functional meiotic LINC complexes with KASH5 in vivo, which, at least in the absence of SUN1, are able to tether meiotic telomeres to the NE.

Figure 2. KASH5 localization in SUN1 deficient males. Representative spermatocytes in paraffin sections of Sun1+/+(Δex10-11) and Sun1−/−(Δex10-11) testis stained for SYCP3 and KASH5. In the wildtype (A) the expected KASH5 localization at the distal ends of synaptonemal complex axes can clearly be observed. In Sun1−/−(Δex10-11) spermatocytes (A′-A″) the KASH5 signal, although weaker, is also clearly detectable. As seen in the wildtype, distinct KASH5 foci also co-localize with the ends of synaptonemal complex axes. Scale bars 10 µm. doi:10.1371/journal.pgen.1004099.g002
In a recent crystallography study investigating LINC complex structure, SUN and KASH domains were shown to interact as two sets of trimeric protein complexes [30]. Furthermore, several groups have proposed that SUN1 and SUN2 form heteromultimeric complexes [31,32]. Taking into account that SUN2 is expressed during meiosis (present study, [22]), sharing its localization with SUN1 and KASH5, it is tempting to speculate that during wildtype meiosis SUN1 and SUN2 assemble into heterotrimeric complexes that interact with KASH5 to form the meiotic LINC complexes required for efficiently tethering telomeres to the NE. In the absence of SUN1, such LINC complexes may be composed only of SUN2 and KASH5, still tethering telomeres to the NE, yet less effectively than a complete heterotrimeric SUN1/SUN2-KASH5 complex. This could then explain the only partially disturbed telomere attachment observed in both SUN1 deficient mouse models. In addition, the results presented here suggest at least partial redundancy between SUN1 and SUN2 in meiotic telomere attachment, consistent with what has been reported for nuclear anchorage in somatic cells [25,26].
NE-attached telomeres are still capable of forming bouquet-like clusters in SUN1 deficient meiocytes
Prophase I of meiosis is characterized not only by the stable association of telomeres with the NE, but also by directed, telomere-led chromatin movements leading to the formation and release of the bouquet stage [8]. Because SUN1 seems to be, at least partially, dispensable for the formation of a meiotic LINC complex per se, we asked whether those telomeres that attach to the NE despite the absence of SUN1 are still able to move along and cluster within the NE. To analyze the distribution of the attached telomeres in the Sun1−/−(Δex10-11) mice, we used KASH5 and SYCP3 antibodies to label attached telomeres in relation to synaptonemal complexes in spermatocytes of wildtype and knockout siblings at 12 dpp (Figure 4). At this age, leptotene/zygotene stages showing clustered telomere patterns normally predominate within the synchronously maturing tubules. To define the KASH5 distribution within the NE, we performed 3D reconstructions of single spermatocyte nuclei of wildtype (n = 50 cells) and knockout (n = 64 cells) mice. Spermatocytes showing typically clustered KASH5 patterns resembling bouquet-like conformations of the attached telomeres could be detected in both wildtype and SUN1 knockout siblings (Figure 4 and Supplementary Video S1). Further quantification of the appearance of clustered versus non-clustered KASH5 patterns revealed that at 12 dpp the bouquet frequencies were similar and statistically indistinguishable between wildtype and Sun1−/−(Δex10-11) siblings (70% and 79.6%, respectively; p = 0.23, Pearson's chi-square test; see the sketch below). These analyses demonstrated that the remaining attached telomeres in SUN1 deficient males are in fact able to form bouquet-like clustered telomere patterns, and that this is not a rare event but occurs at rates similar to those in the wildtype siblings. It is noteworthy that we never observed real clustering of the internal, non-attached telomeres in SUN1 deficient spermatocytes. Taken together, we conclude from this that telomeres need to be attached to the NE, and likely connected to the cytoskeleton, to form bouquet-like clusters. In Smc1β−/− mice [33], another knockout mouse model in which telomere attachment is partially disrupted, bouquet formation of the attached telomeres was observed in knockout spermatocytes as well, although at reduced levels compared with the wildtype. In light of that study and our results, it seems conceivable that complete telomere attachment per se is not an essential prerequisite for telomere clustering. Rather, any telomere that is attached to the NE by a LINC complex has the competence to move within the NE and to proceed to cluster formation.
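The comparison of bouquet frequencies can be reproduced with a standard Pearson chi-square test. In the Python sketch below, the clustered/non-clustered counts are inferred from the reported percentages and cell numbers (an assumption on our part); without continuity correction they give p of about 0.23, matching the reported value.

```python
from scipy.stats import chi2_contingency

# Counts inferred from the reported data: 70% of 50 wildtype nuclei clustered
# (35 vs 15) and 79.6% of 64 knockout nuclei clustered (51 vs 13).
table = [[35, 15],   # wildtype:  clustered, non-clustered
         [51, 13]]   # Sun1-/- :  clustered, non-clustered

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")  # p ~ 0.23
```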
A subset of chromosomes from SUN1 deficient oocytes proceeds to cross-over formation

To investigate the impact of the residual telomere attachment and movement on the progression of meiotic recombination, we next analyzed oocytes of wildtype and Sun1−/−(Δex10-11) female mice aged 19.5 dpf (days post fertilization) for the appearance of late recombination events. Using antibodies against MLH1, SYCP1 and SYCP3 together on chromosome spreads allowed us to simultaneously investigate late recombination events and the state of synapsis formation. As expected, we observed one to two MLH1 foci per synapsed chromosome pair on chromosome spreads of the heterozygous control oocytes (Figure 5A). Consistent with previous reports [11,21], oocyte spreads from littermate Sun1−/−(Δex10-11) mice (Figure 5B, C) showed large numbers of unpaired or incorrectly paired chromosome axes stained by SYCP3, but not by SYCP1. Despite these severe synapsis defects, MLH1 foci were not completely absent from Sun1−/−(Δex10-11) oocyte spreads. Instead, a small number of homologous chromosomes in Sun1−/−(Δex10-11) oocytes were apparently able to achieve intact synapsis, as shown by the complete co-localization of SYCP1 and SYCP3. Distinct MLH1 foci on these fully paired homologs show that they were in effect able to recruit MLH1 to their axes, thus forming cross-over sites. These results indicate that in the absence of SUN1 the remaining attached telomeres, and their directed movements within the NE, are sufficient to allow at least partial pairing, synapsis and cross-over formation during later meiosis in females. Therefore, when attachment is actually achieved, the attachment per se and the subsequent movement of the attached telomeres appear to be functional, at least to some extent, even without SUN1.
In conclusion, from our current study it has become evident that, although SUN1 is essential for the efficient attachment of telomeres to the NE, SUN2 also appears to be involved in the tethering of meiotic telomeres to the NE. In the absence of SUN1, an unexpectedly large proportion of telomeres are still able to attach to the NE and, beyond this, are also able to move within the NE, forming bouquet-like clustered telomere patterns. This suggests that in the SUN1 deficient background some of the telomeres not only succeed in establishing a tight connection to the NE, but even become linked to the cytoskeletal motor system. Consistent with this, in the SUN1 deficient meiocytes we found KASH5, which interacts with cytoplasmic dynein-dynactin [23,24], co-localizing with SUN2 at sites where telomeres are in contact with the NE. In a very recent study, Horn and colleagues [24] have shown that in mice deficient for KASH5, homolog pairing, synapsis and recombination are severely disturbed. In addition, they never observed clustering of SUN1 foci in KASH5 deficient cells, indicating that KASH5, as the ONM component of meiotic LINC complexes, is required for transferring the forces that move the INM-located SUN proteins and therewith the attached telomeres [24]. Remarkably, the meiotic phenotype observed in the Kash5-null mice appeared much more dramatic than the phenotype induced by SUN1 deficiency. As shown by Horn and colleagues, Kash5-null spermatocytes apparently never reach full synapsis, not even of single pairs of homologous chromosomes, while in a considerable proportion of Sun1-null spermatocytes full synaptic pairing of at least a subset of homologs could be observed [11,24]. This is consistent with our results demonstrating that the attached telomeres in SUN1 deficient mice are in effect able to cluster, most likely mediated by a restricted LINC complex formed by KASH5 and SUN2, hence supporting synapsis and recombination. To date, no mammalian model has been described in which meiotic telomere attachment is completely lost. Instead, there are a number of phenotypes with more or less severe partial telomere attachment defects, similar to the Sun1−/− phenotype described here [33,34]. This is unlike the situation in yeast, for example, where bqt4 has been identified as a key player without which no meiotic telomeres attach to the NE at all [15]. Meiotic telomere attachment in mammals, however, seems to be regulated by a more complex, partially redundant network of factors, some of whose central players await identification in the near future.

Figure 4. Representative spermatocytes of Sun1+/+(Δex10-11) and Sun1−/−(Δex10-11) mice labeled by KASH5 and SYCP3. As expected, non-clustered (A) and clustered (A′) telomere patterns are observed in wildtype spermatocytes. Similar non-clustered (A″) as well as clustered (A‴-A⁗) telomere patterns could also be found in SUN1 deficient spermatocytes. All scale bars 5 µm. doi:10.1371/journal.pgen.1004099.g004
Ethics statement
All animal care and experiments were conducted in accordance with the guidelines provided by the German Animal Welfare Act (German Ministry of Agriculture, Health and Economic Cooperation). Animal housing and breeding at the University of Würzburg was approved by the regulatory agency of the city of Würzburg (Reference ABD/OA/Tr; according to 111/1 No. 1 of the German Animal Welfare Act). All aspects of the mouse work were carried out following strict guidelines to ensure careful, consistent and ethical handling of mice.
Animals and tissue preparations
Tissues used in this study were derived from wildtype, heterozygous and knockout littermates of either of the two currently existing SUN1 deficient mouse strains, Sun1(Δex10-13) and Sun1(Δex10-11) [11,21]. For immunofluorescence studies, testes and ovaries from wildtype, heterozygous and SUN1 knockout progeny of the Sun1(Δex10-11) strain were fixed for 3 hrs in 1% PBS-buffered formaldehyde (pH 7.4). Tissues were then dehydrated in an increasing ethanol series, infiltrated with paraffin wax at 58°C overnight and embedded in fresh paraffin wax as described in Link et al. [13]. For EM analysis, we prepared tissue material from wildtype, heterozygous and SUN1 deficient mice of both SUN1 deficient mouse strains, Sun1(Δex10-13) and Sun1(Δex10-11), according to the protocol described below.
Antibodies
For the generation of SUN2-specific antibodies, a His-tagged SUN2 fusion construct (amino acids 248-469 of the SUN2 protein) was expressed in E. coli RosettaBlue (Novagen, Darmstadt, Germany) and purified on Ni-NTA agarose columns (Qiagen, Düsseldorf, Germany). This peptide was used for the immunization of a guinea pig (Seqlab, Göttingen, Germany). The serum obtained was affinity purified against the SUN2 antigen coupled to a HiTrap NHS-activated HP column (GE Healthcare, Munich, Germany). Similarly, for the generation of a KASH5-specific antibody, a His-tagged KASH5 fusion construct (amino acids 421-612) was expressed and purified as described above; this peptide was used for the immunization of a rabbit, and the serum obtained was purified on a KASH5 antigen-coupled HiTrap NHS-activated HP column. Further primary antibodies used in this study were: goat anti-lamin B antibody (Santa Cruz Biotechnology, Heidelberg, Germany), rabbit anti-SYCP3 antibody (anti-Scp3; Novus Biologicals, Littleton, CO), guinea pig anti-SUN1 antibody [27] and mouse anti-KASH5 [23]. For TeloFISH analyses, we further used monoclonal mouse anti-digoxigenin antibodies (Roche, Mannheim, Germany). The corresponding secondary antibodies were: Cy2 anti-mouse, Texas Red anti-mouse, Alexa 647 anti-rabbit, Texas Red anti-rabbit, Cy2 anti-guinea pig and Texas Red anti-goat, all obtained from Dianova (Hamburg, Germany) and used as suggested by the manufacturer.
Immunohistochemistry
Double-label immunofluorescence analyses were carried out on paraffin sections of testis or ovary tissue (3-7 µm) as described in [13,27]. Paraffin sections were prepared for immunofluorescence by first removing the paraffin in two consecutive 10-min incubations in Roti-Histol (Carl Roth, Karlsruhe, Germany). The tissue sections were then rehydrated in a decreasing ethanol series. Subsequently, antigen retrieval was conducted by incubating the slides in antigen unmasking solution (Vector Laboratories, Burlingame, CA) at 125°C and 1.5 bar for 7-20 min. After permeabilization of the tissue in PBS containing 0.1% Triton X-100 for 10 min and washing in PBS, slides were blocked for 30 min in blocking solution (5% milk, 5% FCS, 1 mM PMSF; pH 7.4 in PBS). After incubation with the first primary antibody, either for 2 hrs at room temperature or overnight at 4°C, slides were washed in PBS and again blocked in blocking solution before incubation with the second primary antibody for another 2 hrs at room temperature. Following two washing steps (10 min each) in PBS and reblocking for 30 min in blocking solution, slides were incubated with the appropriate secondary antibodies. DNA was counterstained using Hoechst 33258 (Sigma-Aldrich, Munich, Germany).

Figure 5. Meiotic recombination in SUN1 deficient oocytes. Representative chromosome spreads of oocytes from 19.5 dpf littermate Sun1+/+(Δex10-11) and Sun1−/−(Δex10-11) females labeled with anti-SYCP3, anti-SYCP1 and anti-MLH1 antibodies. Complete pairing of all homologous chromosomes, as judged by the co-localization of SYCP3 and SYCP1, is observed in heterozygous control pachytene oocytes (A). As expected, the homolog pairs each exhibit 1-2 distinct MLH1 foci. In SUN1 deficient pachytene-like oocytes (B, C) only some chromosome stretches and few homologous chromosomes are fully paired. Frequent defects in synapsis formation and many univalent chromosomes can be detected, labeled only by SYCP3. However, distinct MLH1 foci can be observed where SYCP3 and SYCP1 co-localize (arrowheads in B and C; see also the inset in B, magnified by a factor of 2). Scale bars 10 µm. doi:10.1371/journal.pgen.1004099.g005
Telomere fluorescence in-situ hybridization (TeloFISH)
To label telomeres and selected proteins simultaneously, we combined telomere fluorescence in situ hybridization (TeloFISH) with immunofluorescence protocols on paraffin sections as described previously [13]. Paraffin sections were rehydrated and antigen retrieval was conducted as described above. Prior to TeloFISH, cells were permeabilized with PBS/0.1% Triton X-100 for 10 min. After rinsing in 2× SSC (0.3 M NaCl, 0.03 M Na-citrate; pH 7.4), cells were denatured at 95°C for 20 min in 40 µl of hybridization solution (30% formamide, 10% dextran sulphate, 250 µg/ml E. coli DNA in 2× SSC) supplemented with 10 pmol of digoxigenin-labeled (TTAGGG)7/(CCCTAA)7 oligomers. Hybridization was performed at 37°C overnight in a humid chamber. Slides were washed twice in 2× SSC at 37°C for 10 min each and blocked with 0.5% blocking reagent (Roche, Mannheim, Germany) in TBS (150 mM NaCl, 10 mM Tris/HCl; pH 7.4). Samples were incubated with mouse anti-digoxigenin antibodies (Roche, Mannheim, Germany) according to the manufacturer's protocol, and bound antibodies were detected with Cy2-conjugated anti-mouse secondary antibodies. Following the TeloFISH procedure, samples were prepared for immunofluorescence by blocking with PBT (0.15% BSA, 0.1% Tween 20 in PBS, pH 7.4). Slides were incubated with the first primary antibody overnight, washed twice in PBS for 10 min each and incubated with the corresponding secondary antibody as described above. Finally, slides were again washed in PBS before incubation with the second primary antibody. After repeated washing in PBS, samples were once again exposed to the corresponding secondary antibodies. DNA was counterstained using Hoechst 33258 (Sigma-Aldrich, Munich, Germany).
Electron microscopy
For electron microscopy, fresh tissue from testis and ovary was prepared as described in [22]. The tissues were fixed in 2.5% buffered glutaraldehyde solution (2.5% glutaraldehyde, 50 mM KCl, 2.5 mM MgCl2, 50 mM cacodylate; pH 7.2) for 45 min and washed in cacodylate buffer (50 mM cacodylate, pH 7.2). This was followed by incubation in 2% osmium tetroxide in 50 mM cacodylate at 0°C. The samples were then washed several times in water at 4°C and contrasted with 0.5% uranyl acetate in water at 4°C overnight. Subsequently, the tissues were dehydrated in an increasing ethanol series and incubated three times in propylene oxide for 30 min. Finally, the samples were embedded in Epon for ultrathin sectioning.
Microscopy and image analysis
Fluorescence images were acquired using a confocal laser scanning microscope (Leica TCS-SP2; Leica, Mannheim, Germany) equipped with a 63x/1.40 HCX PL APO lbd.BL oil-immersion objective. Images shown are pseudo-colored by the Leica TCS-SP2 confocal software and are calculated maximum projections of sequential single sections; these were processed using Adobe Photoshop (Adobe Systems). 3D reconstructions, as well as the analysis and quantification of telomere attachment and clustering, were conducted using the ImageJ software (version 1.42q; http://rsbweb.nih.gov/ij).

Supporting Information

Figure S1. Meiotic telomere attachment in early leptotene and zygotene spermatocytes. Representative spermatocytes in paraffin sections of Sun1+/+(Δex10-11) and Sun1−/−(Δex10-11) 12 dpp testis tissue labeled by TeloFISH in combination with anti-lamin B and anti-SYCP3 antibodies. In early leptotene spermatocytes full telomere attachment is not yet reached, even in the wildtype: some internal telomere signals are still detectable in wildtype spermatocytes, probably due to the very early meiotic stage. Comparable stages of spermatocytes from knockout mice also show reduced telomere attachment compared with later meiotic stages (see Figure 1D). During zygotene, as judged by SYCP3 staining, all telomeres in wildtype spermatocytes are attached to the NE, as no internal telomere signals are detected anymore. In spermatocytes of comparable stages from knockout tissue, internal telomere signals are still visible, yet more telomeres are attached than at earlier meiotic stages (see Figure 1D′). Scale bars 5 µm.
(TIF)
Video S1. 3-dimensional reconstructions of entire spermatocyte nuclei showing clustered and non-clustered telomere patterns. Representative spermatocytes from paraffin testis sections of Sun1(Δex10-11) wildtype and knockout males labeled by KASH5 (green) and SYCP3 (red). Non-clustered KASH5 foci, marking telomeres attached to the NE in pachytene cells, and clustered KASH5 foci, representing the earlier bouquet stage, can clearly be observed in wildtype spermatocytes. In SUN1 deficient spermatocytes, non-clustered and clustered patterns of KASH5 foci can also be observed; here, the clustered KASH5 foci likewise represent bouquet-like formations of successfully attached telomeres. Scale bars 5 µm. (AVI) | 2016-05-12T22:15:10.714Z | 2014-02-01T00:00:00.000 | {
"year": 2014,
"sha1": "35f6d29b9c9eb279d482b270cd795548f2274187",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1004099&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "35f6d29b9c9eb279d482b270cd795548f2274187",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
80302610 | pes2o/s2orc | v3-fos-license | Difference in Articular Degeneration Depending on the Type of Sport
Objective To determine whether type-II collagen degradation is determined by the type of sport. Carboxy-terminal telopeptide of type-II collagen (CTX-II), a serum biomarker of collagen degradation, was measured in athletes who play different sports and compared with matched controls. Methods The sample consisted of 70 female participants aged between 18 and 25 years, 15 of whom were members of a soccer team, 10 of a futsal (a variant of association football played on a hard court) team, 10 of a handball team, 18 of a volleyball team, and 7 of a swimming team. A total of 9 age- and sex-matched individuals with sedentary lifestyles were included in the control group. A 3-mL blood sample was collected from each participant and analyzed using an enzyme-linked immunosorbent assay (ELISA). Results A comparison of the CTX-II concentrations of the players of the different sports with those of the control group resulted in the following p-values: volleyball (p = 0.21); soccer (p = 0.91); handball (p = 0.13); futsal (p = 0.02); and swimming (p = 0.0015). Therefore, in the investigated population, futsal represented the highest risk for type-II collagen degradation and, consequently, for articular cartilage degradation, whereas swimming was a protective factor for the articular cartilage. No statistically significant difference was found in body mass index among the groups. Conclusion Futsal players are exposed to greater articular degradation, while swimmers exhibited less cartilage degradation compared with the control group in the study population, suggesting that strengthening the periarticular muscles and aerobic exercise in low-load environments have a positive effect on the articular cartilage.
Introduction
The articular cartilage is a specialized avascular, aneural tissue that covers the bony parts of the diarthrodial joints. Its function is to facilitate smooth motion via its low frictional coefficient, in addition to absorbing shock and supporting load in several planes. 1,2 The preservation of the articular cartilage depends on maintaining the integrity of its molecular structure. 1 The main macromolecules that comprise cartilage are collagen and proteoglycans; during the course of life, the activity of the chondral cells is determined by several autocrine and non-autocrine factors that result in the maintenance or destruction of articular homeostasis. Osteoarthrosis is a common condition in the elderly, but it can also affect young people who are subjected to excessive articular loads. [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] In adults, the activation of the articular function within the physiological limits of load and work frequency is crucial to maintain joint health; [3][4][5][6]9,13 therefore, joints working below the required levels are also at risk of degenerating. 10 Work frequency and articular overload are important factors in articular destruction, which is characterized by damage to cartilaginous tissue. 2 Indeed, excessive exposure to overload leads to early articular wear, [2][3][4][5][6]9,10,13 which is then perpetuated by an inflammatory cascade that affects all of the articular tissue. 16 Therefore, high-performance athletes subjected to excessive training loads over a short period of time are at a higher risk of developing articular damage; the knees, in particular, tend to be overexposed in most sports. As a function of the constant demand for maximal performance, early articular wear and functional disabilities might result in the end of an athletic career.
Most patients are diagnosed in the advanced stages of arthrosis, when a series of metabolic events has already occurred, and the process is likely past the point at which pharmacological and surgical interventions are truly effective. 18 The products of collagen matrix degradation might serve as useful markers of the severity and progression of arthrosis. Such products might be measured in the synovial fluid, blood or urine, and thus supply important information for the early diagnosis of arthrosis.
Regarding Articular Collagen
Collagen represents 50 to 60% of the dry weight of the cartilage. Its fibers form a dense network that gives shape to this tissue. The most important mechanical properties of collagen are resistance and resilience, which are transmitted to the cartilage. 1 Type-II collagen is specific to cartilage, and represents nearly 98% of the total collagen content in this tissue. 1 Type-II collagen is the largest macromolecule that composes the cartilage, and it consists of three identical polypeptide chains arranged in a triple helix. Each such chain is synthesized as a pro-chain that contains large pro-peptides at its ends, which are separated from the central part by telopeptides. During the maturation of type-II collagen molecules, proteases cleave the pro-peptides; thus, the mature structure consists of the central part of the triple helix and telopeptides.
When the articular components degrade, they are expelled from their source tissue and are measured most accurately in the articular fluid. However, when studying osteoarthritis, the measurement of biomarkers in the blood and urine is made using less aggressive methods that are still effective and precise. 1 The carboxy-terminal telopeptide of type-II collagen (CTX-II) is a biomarker of articular degradation. The use of CTX-II as a biomarker, in addition to its direct relationship with the radiological grade of disease, clinical scores and severity of cartilaginous lesions, is well established in the literature. 2,[18][19][20][21][22][23][24][25][26][27][28] Thus, the measurement of CTX-II appears to be an effective method to investigate the turnover of type-II collagen.
The present study aimed to detect the early degradation of articular type-II collagen by measuring CTX-II levels in the blood of athletes performing different sports and comparing them with those of a control group. We sought to establish whether sport participation is a risk factor for early articular degradation in our country, and which sport was the most harmful to articular type-II collagen in the investigated population.
Materials and Methods
The present study was performed at the Sports Trauma Group of the Department of Orthopedics and Traumatology of our institution. The study was approved by the institutional ethics committee (number 244/11), and signed informed consent was obtained from every participant.
The study population consisted only of females aged between 18 and 25 years, who were members of competitive sports teams, including soccer, futsal (a variant of association football played on a hard court), handball, volleyball and swimming, at Clube Atlético São José, which is a partner of the Sports Trauma Group of our institution. The participants had to have between 5 and 8 years of training in the sport, and none of the athletes participated in other sports.
Patients undergoing treatment for articular pain, or with a history of articular orthopedic surgery or of untreated articular pain of any cause, were excluded from the study; all individuals in this group were female.
A total of 9 individuals with sedentary lifestyles who met the inclusion criteria formed the control group. None of them had ever participated in sports in a competitive way.
A total of 3 mL of blood was collected from each participant by simple venous puncture of the non-dominant upper limb using a vacuum collection kit, in which the sample comes into contact with ethylenediaminetetraacetic acid (EDTA), which does not affect the result. The body mass index (BMI) of each participant was calculated as the weight (kg) divided by the height squared (m²).
The blood was centrifuged, and the plasma was stored at -80°C until all of the blood samples were ready for analysis, over a total of 23 days, because all tests were performed on the same day. The samples were analyzed to detect human CTX-II using enzyme-linked immunosorbent assays (ELISAs; Hu CTX-II kit, Cusabio Biotech, Houston, TX, US). According to the manufacturer, this test has 100% specificity for human CTX-II without any cross-reactions and a minimum detection level of 0.2 ng/mL.
The results were compared using the Student t-test with a 95% confidence interval (95%CI), and values of p < 0.05 were considered statistically significant.
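As an illustration of this comparison, the sketch below runs a two-sample Student t-test with SciPy. The CTX-II values are hypothetical placeholders chosen only to show the mechanics, not the study's measurements.

```python
from scipy import stats

# Hypothetical CTX-II concentrations (ng/mL); illustrative only,
# not the values measured in this study.
futsal = [0.55, 0.51, 0.60, 0.49, 0.57, 0.53, 0.52, 0.58, 0.54, 0.51]
control = [0.47, 0.44, 0.42, 0.49, 0.45, 0.46, 0.43, 0.48, 0.44]

# Two-sample Student t-test (equal variances assumed by default)
t_stat, p_value = stats.ttest_ind(futsal, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at p < 0.05:", p_value < 0.05)
```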
Results

The average age for the control group was 22.6 years, the mean BMI was 22.3, and the mean concentration of CTX-II was 0.453 ng/mL (►Table 1).

Among the 18 volleyball players, the average age was 18.3 years, the mean BMI was 22.33, and the mean CTX-II was 0.429 ng/mL; this team trained 36 hours per week (►Table 2).

Among the 15 soccer players, the average age was 22.36 years, the mean BMI was 22.06, and the mean CTX-II was 0.456 ng/mL; this team trained 15 hours per week (►Table 3).

Among the futsal players, the average age was 18.5 years, the mean BMI was 22.21, and the mean CTX-II was 0.542 ng/mL; this team trained 15 hours per week (►Table 4).

Among the handball players, the average age was 18.9 years, the mean BMI was 22.88, and the mean CTX-II was 0.416 ng/mL; this team trained 25 hours per week (►Table 5).
Among the swimmers, the average age was 18.9 years, the mean BMI was 20.71, and the mean CTX-II was 0.373 ng/mL; this team trained 12 hours per week (►Table 6). Thus, futsal represented the highest risk for type-II collagen degradation and, consequently, for articular cartilage degradation, whereas swimming was a protective factor for the articular cartilage in the investigated population.

Discussion

Currently, prevention is emphasized over treatment; thus, the early diagnosis of osteoarthrosis is of paramount importance. In the case of athletes, early diagnosis is even more important because their occupation depends directly on the health of their joints. Because the first cleavage of articular type-II collagen catalyzed by collagenases releases the CTX-II epitope, which can be measured in the blood, urine and synovial fluid, it is a powerful tool for the early diagnosis of articular wear.
Type-II collagen is present in every synovial joint; however, the concentration of CTX-II is higher in patients with knee and hip arthrosis compared with the overall population. 27 CTX-II is a biomarker for cartilage destruction, and an increased level is a positive predictor for articular space reduction. 27,28 When we measured the concentration of CTX-II in athletes who play different sports, we sought to detect whether the risk of overload and early articular destruction would be higher for any particular sport. Although the sample size for each sport might appear small, these are closed teams subjected to the same training load and, in theory, to the same articular overload.
In our study population, the volleyball, handball and soccer players did not exhibit higher type-II collagen degradation, although they participated intensively in high-impact activities. Perhaps this finding is explained by the fact that the study population was very young and that, for these three modalities, either the playing surface or the footwear is able to absorb the shock.
Type-II collagen degradation was highest among the futsal players, and was significantly different from that of the control group (►Fig. 4) using a 95%CI. This discrepancy is likely because this sport is played on a rigid floor, requires shoes that do not absorb shock, and involves frequent shifts in direction that overload the joints, especially the knees (repeated pivot movements).
Another remarkable finding is that the swimming team had significantly lower CTX-II concentrations compared with the control group (►Fig. 5). In other words, swimming was a protective factor for the articular collagen in the investigated population. This finding is likely due to a combination of factors known to protect the joints, including aerobic exercise, low-impact activity and strengthening of the periarticular muscles. 17

Conclusion

Therefore, we conclude that, in the investigated population, professional futsal training is a risk factor for type-II collagen degradation and, thus, for articular cartilage degradation. In contrast, swimming is a protective factor for the joints in this same population, resulting in less articular collagen degradation.
These data suggest that some sports can indeed lead to early articular degradation. Sports trauma physicians must consider this fact, and clinical protocols aimed at joint protection must be studied and applied.
However, the findings of the present study also indicate that some sports may provide joint protection, suggesting an important tool to prevent joint erosion; mixed training may therefore be highly beneficial for professional teams.
Further studies on this subject must be performed with larger samples, and they should be aimed at controlling articular destruction using pharmacological and non-pharmacological means.
In addition, the use of biomarkers of joint destruction is an important tool for the early detection of joint overload. Those working in sports trauma need to diagnose joint lesions early to improve the lives of athletes.
Conflicts of Interest
The authors have none to declare. | 2019-03-17T13:11:29.963Z | 2019-09-01T00:00:00.000 | {
"year": 2019,
"sha1": "d7999442b62d7f03d8f26a9cd67fa46777c20261",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1016/j.rboe.2018.02.012.pdf",
"oa_status": "GOLD",
"pdf_src": "Thieme",
"pdf_hash": "917db0b47a23afd6d429d65ff7ce04f18ed42922",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219546311 | pes2o/s2orc | v3-fos-license | 2018
OBJECTIVES/SPECIFIC AIMS: Translating conventional and regenerative medicine strategies from the research laboratory into the clinic is a complex process that can delay bringing novel therapies to the patient. Navigating the increasingly complex regulation surrounding cell-based and combination product technologies is a major challenge for the translational biomedical scientist. To this end, Mayo Clinic created a new position, the “Translational Integrator,” as part of the cGMP Biomaterials Facility in the Center for Regenerative Medicine. METHODS/STUDY POPULATION: The Translational Integrator educates investigators about FDA standards and regulatory pathways; determines where the product is on the translational spectrum; works to understand the science behind the product; determines what additional studies may be needed; supports investigators in preparing for FDA communications and submissions; and educates researchers about institutional resources and funding mechanisms needed to move their product into manufacturing and trials. A primary objective is to meet investigators at an early stage in product development to avoid conducting potentially redundant work to meet regulatory requirements. RESULTS/ANTICIPATED RESULTS: Robust training in clinical and translational research methodology enables the integrator to facilitate the collaboration necessary between investigators, clinicians, institutional resources, regulators and funders to move products towards FDA IND/IDE approval and first-in-human trials. It is an iterative process using technology/translational readiness criteria, project management and review by subject matter experts that is highly interactive and customized to each project. Current projects include topics in orthopedic surgery and ENT. In creating and refining this position, several key lessons have been learned. DISCUSSION/SIGNIFICANCE OF IMPACT: First, the Translational Integrator must undergo constant reflection and assessment of investigator needs, which requires flexibility and understanding that their role may change in the context of each product. Second, the support that the Translational Integrator provides can shift the mindset of the investigator from being averse to engaging in the translational process to eager to move their product forward. Finally, for the investigator who does not personally want to move their work into first-in-human trials, establishing connections to intellectual property generation and licensing may support movement of their findings into patients.
We believe the industry is poised to achieve higher levels of activity in 2019 as it works through near-term logistical challenges in North American unconventional basins, navigates end-of-year budget constraints, and sanctions more offshore projects. During the third quarter we saw rising demand for conductor pipe connections-a leading indicator of future offshore wells-as well as increased inquiries around offshore rig reactivations, pointing to more offshore activity ahead. We also see pockets of demand strengthening in certain international land markets, as operators respond to generally higher commodity prices. In the meantime, we continue to develop and deliver technology that helps lower the industry's marginal production costs, and position our business as a leading innovator and provider of critical well construction tools. National Oilwell Varco is well-positioned to capitalize on the opportunities that lie ahead."
Wellbore Technologies
Wellbore Technologies generated revenues of $847 million in the third quarter of 2018, an increase of seven percent from the second quarter of 2018 and an increase of 22 percent from the third quarter of 2017. The segment realized meaningful growth for the second consecutive quarter as domestic revenue outpaced the percentage growth in the U.S. rig count, and international operations capitalized on an increasing number of opportunities associated with the emerging recovery in the Eastern Hemisphere. Operating leverage was limited to four percent mostly due to higher steel and labor costs, which outpaced the segment's price increases. Operating profit was $40 million, or 4.7 percent of sales. Adjusted EBITDA increased two percent sequentially and 44 percent from the prior year to $135 million, or 15.9 percent of sales.
Completion & Production Solutions
Completion & Production Solutions generated revenues of $735 million in the third quarter of 2018, a decrease of $3 million from the second quarter of 2018 and an increase of eight percent from the third quarter of 2017. Slowing demand for pressure pumping equipment in North America and sharper-than-anticipated declines in offshore-focused businesses more than offset strong growth in demand for land production equipment. Operating profit was $46 million, or 6.3 percent of sales. Adjusted EBITDA increased five percent sequentially and two percent from the prior year to $99 million, or 13.5 percent of sales.
New orders booked during the quarter were $372 million, representing a book-to-bill of 85 percent when compared to the $439 million of orders shipped from backlog. Backlog for capital equipment orders for Completion & Production Solutions at September 30, 2018 was $880 million.
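For reference, the book-to-bill figures quoted in this release are simply new orders booked divided by orders shipped from backlog; a minimal sketch (the function name is ours, not NOV's):

```python
def book_to_bill(new_orders_musd, shipped_from_backlog_musd):
    """Book-to-bill ratio: orders booked in the period / orders shipped."""
    return new_orders_musd / shipped_from_backlog_musd

# Completion & Production Solutions, Q3 2018 ($ millions, from the text above)
print(f"{book_to_bill(372, 439):.0%}")  # ~85%, as reported
```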
Rig Technologies
Rig Technologies generated revenues of $637 million in the third quarter of 2018, a decrease of two percent from the second quarter of 2018 and an increase of 25 percent from the third quarter of 2017. Improving aftermarket sales and better progress on offshore projects did not fully offset lower land rig sales from inventory. Operating profit was $58 million, or 9.1 percent of sales. Adjusted EBITDA decreased seven percent sequentially and increased 95 percent from the prior year to $78 million, or 12.2 percent of sales.
New orders booked during the quarter totaled $151 million, representing a book-to-bill of 59 percent when compared to the $256 million of orders shipped from backlog. At September 30, 2018, backlog for capital equipment orders for Rig Technologies was $3.40 billion.
Other Corporate Items
Revenue eliminations decreased $11 million sequentially due to a reduction in intersegment sales. This decrease, along with lower compensation and third-party service expenses, resulted in a $17 million reduction in eliminations and corporate costs.
Cash flow provided by operations for the third quarter of 2018 was $190 million. As of September 30, 2018, the Company had $1.3 billion in cash and cash equivalents, total debt of $2.7 billion and $3.0 billion available on its revolving credit facility.
Significant Events and Achievements
NOV completed the first commercial field trial of the Vector™ SelectShift™ downhole adjustable motor in West Texas, where the tool successfully reached section total depth. This brings the field trial tally up to 13 in total, including seven internal trials on test wells in Navasota, five runs in the Bakken, and this one in West Texas. The SelectShift tool has drilled over 45,000 ft to date, with more than 500 drilling and circulating hours and over 100 bend angle shifts downhole. Customers are embracing the new technology after seeing significant drilling improvements when drilling in straight mode versus bent mode, including substantial ROP increases and reductions in torque and vibration.
NOV's highly engineered drill bits with 3D shaped cutter technology helped a prominent operator in the Permian Basin drill their wells 6.5 days faster. The shaped cutters provided a 14% increase in ROP in the 12¼-in. intermediate, a 44% increase in ROP in the 8¾-in. intermediate, and a 47% increase in ROP in the 6-in. horizontal intervals. The operator also realized a more than 200% improvement in footage per 6-in. bit, allowing them to drill their 6-in. horizontal sections with 1.8 bits per well on average compared to 5.8 bits per well with other products, a savings of four bit trips per well.
NOV recently completed several successful installations of its packer-setting system, which features the latest product from its d-Solve™ dissolvable platform, the i-Seat ball, with a major North Sea operator. The integrated system reduced the necessary amount of rig time by six days on average versus traditional packer-setting operations, and it eliminated the cost and risk associated with the wireline and tractor run involved in setting production packers and removing the equipment prior to well startup.
NOV achieved several wins in its directional measurement and steerable technologies business. In Russia, a customer used the VectorZIEL rotary steerable system (RSS) to drill a 1,610-ft long horizontal section, with the tool maintaining target inclination and azimuth within 0.3° and 2.5°, respectively, across the entire section. After this successful field trial, the two tools used to conduct the field trial were purchased, with additional tool sales expected next quarter. The Company also received, in Russia, the first order for its symmetric propagation resistivity LWD tool, which provides high-quality recorded and real-time resistivity data.
NOV, in consortium with Subsea 7, was awarded an engineering, procurement, construction, and installation (EPCI) contract by Tullow Oil. NOV will provide Tullow with an oil offloading system using its buoy turret loading (BTL) system, which will be retrofitted to the Kwame Nkrumah FPSO located in the Jubilee field offshore Ghana. The BTL offshore loading terminal, which is designed for deepwater applications requiring large and frequent offloading operations, will be moored in 800 m of water, weigh approximately 900 tons, and have an offloading capacity to transfer 1 million barrel parcels of oil within 27 hours.
NOV's drill bits helped a major operator in Oman set a new drilling record in their 12¼-in. section. The 12¼-in. TK66 drill bit with ION™ 3D cutters achieved a normalized average ROP through the section of 28.8 m/hr, more than a meter per hour ahead of the closest competitor bit and previous record holder. The operator noted that the new performance record was the result of continuous improvement in drill bit design, effective after-action review and learning implementation, superior support and follow-up from field engineers and office staff, consistent application of recommended drilling parameters and practices, and open communication between NOV and the customer.
NOV customers continue to see the benefits of using Agitator™ systems in conjunction with an RSS in global operations, including in technically challenging laterals greater than 10,000 ft. On a project in the Middle East, a customer using a 6¾-in. Agitator system in their RSS bottomhole assembly (BHA) recorded a 66% reduction in vibration, significantly reducing their risk of tool failure. In Asia, an operator ran a 4¾-in. Agitator system with an international directional company's RSS and MWD systems due to consistently experiencing severe stick-slip activity and associated directional control challenges. The Agitator system reduced the stick-slip shock count and severity levels induced by BHA/string interaction with the borehole by over 50%, improving directional performance and borehole quality. The market for NOV's Agitator system is expanding, including various applications in the Permian, as service companies seek efficiency gains, reduced equipment damage, and improved geosteering/directional control.
NOV's eVolve™ optimization and automation services continues to deliver value for customers in North American land drilling projects, recently completing the ninth successful well for a major independent operator in the Permian running drilling automation services. The project has so far reduced total bit runs and overall failures per well. Encouraged by the cost and time savings delivered by these performance improvements, the operator has extended the eVolve contract to an additional rig.
NOV's MPowerD™ managed pressure drilling (MPD) group delivered an integrated MPowerD MPD control system on a Cyberbase drilling control system for a deepwater drillship. NOV and the drilling contractor worked together very closely to introduce the system and fully embed MPD into the drilling controls network, making this the first completely integrated MPD control system installed on an offshore drilling rig. Integration of MPD controls into the Cyberbase system will enable a step-change in MPD efficiency and safety of operations for the drilling contractor, and the company can now offer MPD as an integrated service to their clients. The drilling contractor recently placed an order for a second system, further reinforcing their commitment to a long-term MPD strategy. In addition, NOV booked 10 land MPD projects for a large independent operator in the Mid Continent.
Third Quarter Earnings Conference Call

NOV will hold a conference call to discuss its third quarter 2018 results on October 26, 2018 at 10:00 AM Central Time (11:00 AM Eastern Time). The call will be broadcast simultaneously at www.nov.com/investors. A replay will be available on the website for 30 days.
About NOV
National Oilwell Varco (NYSE: NOV) is a leading provider of technology, equipment, and services to the global oil and gas industry that supports customers' full-field drilling, completion, and production needs. Since 1862, NOV has pioneered innovations that improve the cost-effectiveness, efficiency, safety, and environmental impact of oil and gas operations. NOV powers the industry that powers the world.
Visit www.nov.com for more information.
Cautionary Statement for the Purpose of the "Safe Harbor" Provisions of the Private Securities Litigation Reform Act of 1995
Statements made in this press release that are forward-looking in nature are intended to be "forward-looking statements" within the meaning of Section 21E of the Securities Exchange Act of 1934 and may involve risks and uncertainties. These statements may differ materially from the actual future events or results. Readers are referred to documents filed by National Oilwell Varco with the Securities and Exchange Commission, including the Annual Report on Form 10-K, which identify significant risk factors which could cause actual results to differ from those contained in the forward-looking statements.
Certain prior period amounts have been reclassified in this press release to be consistent with current period presentation. | 2019-05-07T13:10:39.513Z | 2017-09-01T00:00:00.000 | {
"year": 2017,
"sha1": "ae6acaf95bb79d1e5acdf3116d289d3c153983f7",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/2525AD5BED151A6DF05DE79788AA7955/S2059866117001546a.pdf/div-class-title-2018-div.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e5016aabf99d554b82a8078150bf12a05b8a806e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
212544225 | pes2o/s2orc | v3-fos-license | PRODUCT DEVELOPMENT SEASONING OF MADURA SATAY
Purpose of Study: Madura is one area in East Java province which has a wide range of local specialties, such as Madura satay, Madura herbs, salt, corn, handicraft sickles, and others. Madura satay is one of the local specialties of the Madura island that is well known to the people of Indonesia and even to foreign countries. The seasoning of Madura satay is manufactured directly by the original Madurese people, so consumers find it difficult to obtain its specific spice mix in the market. Therefore, research on the development of an instant Madura satay seasoning product is needed. Methodology: The method used in this research is the Quality Function Deployment method. Result: Based on the results of data analysis and processing, it can be concluded that there are eight consumer needs for the instant Madura satay seasoning product, namely: the product texture should be soft; there is no rancid taste; the seasoning tastes sweet and spicy; the seasoning has a brown-golden color; a product weight of 500 grams; packaging made of clear plastic material; attractive packaging; and a low price. Implications/Applications: Regarding the level of customer satisfaction, there are two attributes of customer satisfaction that still need to be improved, namely the product weight of 500 grams and packaging using clear plastic material.
INTRODUCTION
Madura is one area in East Java province which has a wide range of local specialties, such as Madura satay, Madura herbs, salt, corn, handicraft sickles, and others. Madura's name itself is popular due to its local food, Sate Madura or Madura satay, which is well known to the people of Indonesia and even to some foreign countries. Madura satay can be found in almost all regions of Indonesia, especially in big cities like Jakarta, Bandung, and Surabaya. Madura satay is usually made of chicken, but besides chicken satay as the main ingredient, there is also Madura satay made with beef or lamb. The marinade is a finely ground mixture of peanut paste and onion.
Satay is very easy to make: the meat is pierced with bamboo skewers, smeared with soy sauce, and grilled over charcoal. Cooked satay tastes delicious when it is combined with a seasoning consisting of peanut sauce and served with lontong or rice.
EXPERIMENTAL DETAILS
The product analyzed in this study is the Madura satay seasoning. The data required in the study include primary data and secondary data. The primary data were obtained from interviews and questionnaires given to respondents. Secondary data were obtained from literature searches regarding the Quality Function Deployment (QFD) method and the results of previous studies by Costa et al. (2000). The new product can be an assembled product, a service, or software, and QFD can also be applied to the development of food products (Benner et al., 2011). Data were collected through interviews, questionnaires, and literature study, following the steps in the application of the QFD method according to Cohen (1995). Data analysis aims to determine, for each consumer attribute, whether quality needs to be improved or maintained (Borisova and Parnikova, 2016; Husein, 2000).
RESULTS AND DISCUSSION
Satay is a traditional Indonesian food, usually made of chicken or beef, served with a variety of spices depending on the satay recipe. The satay is grilled over hot coals until cooked, being turned over and basted with a little cooking oil or coconut milk. Satay originated in Java and can be found in every region of Indonesia; it has been considered one of the national dishes of Indonesia. As the country of origin of satay, Indonesia has this dish widely known in almost all of its regions, where it is regarded as national cuisine and one of Indonesia's best dishes. Satay is a very popular dish in Indonesia, and because it combines so many traditions and cultures, it comes in many varieties; for example, the Ponorogo regency in Indonesia has its own Ponorogo satay. Recipes and manufacturing methods depend on the region, and almost any kind of meat can be made into satay, so Indonesia has a rich variety of satay recipes. Satay is usually served together with a satay sauce, which can be a seasoning sauce, peanut sauce, or another sauce, usually accompanied by slices of red onion, cucumber, and cayenne pepper, and eaten with warm rice, rice cakes (lontong), or diamond-shaped rice cakes (ketupat). Satay variations are usually named after the place of origin of the recipe, the meat type, the ingredients, or the manufacturing process. Some types of satay typical of Indonesian regions are Madura satay, Padang satay, Ponorogo satay, and rabbit satay.
The instant Madura satay seasoning is a product development created to make the seasoning easy for the general public to enjoy, at an affordable price and available in various regions. The instant Madura satay seasoning is made from peanuts, garlic, onion, nutmeg, salt and soy sauce, which are processed into a paste and packaged.
The initial step in the development of the instant Madura satay seasoning product is identifying consumer needs. This stage identifies the consumer needs for the product type in question, together with their levels of importance, in order to design the product to be developed (Suleri and Cavagnaro, 2016; Ulrich and Eppinger, 2011; Yazdekhasti et al., 2015). The data used in identifying these consumer needs were gathered through direct interviews with thirty respondents. There are eight attributes of consumer needs, as shown in Table 1:
Level of Consumer Interests
The analysis of the level of consumer interest was conducted to determine consumers' interest in each accumulated attribute of consumer needs (Walpole et al., 2016), so that the most important aspects for the consumers can be traced (Bernasconi and Rodríguez-Ponce, 2018; Keinonen and Takala, 2006). Data on the level of consumer interest were obtained from questionnaires given to 100 respondents, with the 8 attributes rated on a five-level importance scale ranging from very unimportant (score of 1) to very important (score of 5). From these ratings, the rating for each attribute can be seen, and the producer can identify which aspects consumers care about the most.
Based on the above data, the rancid-taste attribute received a rank of 1 and the packaging attribute a rank of 8, followed by the product weight; these rankings give clear guidance to the product's manufacturer.
Validity Test
The validity test is done using the product-moment correlation technique (Husein, 2000). If rxy > r, the item is considered valid, whereas if rxy < r, it is considered not valid. The critical value of r from the correlation table, at a significance level of 5% and with one hundred respondents, is 0.197.
Based on the results of the above calculation, an item is said to be valid if its product-moment correlation (rxy) is greater than the table value of r (0.197).
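A minimal sketch of this validity check, assuming hypothetical questionnaire data; the critical value 0.197 (5% significance, one hundred respondents) is taken from the text above.

```python
import numpy as np
from scipy.stats import pearsonr

R_TABLE = 0.197  # critical r at a 5% significance level, n = 100 respondents

def item_validity(item_scores, total_scores):
    """An item is valid when its product-moment correlation (rxy) with
    the respondents' total scores exceeds the critical table value."""
    rxy, _ = pearsonr(item_scores, total_scores)
    return rxy, rxy > R_TABLE

# Hypothetical 1-5 Likert answers for one item and per-respondent totals
item = np.array([4, 5, 3, 4, 5, 4, 2, 5, 4, 3])
total = np.array([30, 36, 25, 31, 38, 33, 20, 37, 29, 26])
rxy, valid = item_validity(item, total)
print(f"rxy = {rxy:.3f}, valid: {valid}")
```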
Reliability Test
Reliability tests were performed using the Cronbach's Alpha formula. If rα > r, the questionnaire is reliable, whereas if rα < r, the questionnaire is not reliable.
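The paper does not spell the formula out; for reference, a standard sketch of Cronbach's alpha, alpha = k/(k - 1) * (1 - sum of item variances / variance of totals), with hypothetical answers:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items matrix of scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical answers: 6 respondents x 4 questionnaire items
answers = [[4, 5, 4, 4], [3, 4, 3, 3], [5, 5, 4, 5],
           [2, 3, 2, 3], [4, 4, 5, 4], [3, 3, 3, 2]]
print(f"alpha = {cronbach_alpha(answers):.3f}")
```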
Analysis of Customer Satisfaction
The analysis of customer satisfaction was conducted to determine how satisfied customers are with the product (Goetsch and Davis, 2000). Data on the level of customer satisfaction with the instant Madura satay seasoning product were obtained from interviews with 30 respondents. The customer satisfaction level (CLS) for the product is calculated using the formula CLS = Σx / N, where Σx is the sum of the satisfaction scores and N is the number of respondents. The higher the value obtained for an attribute of consumer needs, the higher the level of customer satisfaction with that attribute. The following is the result of the calculation of the customer satisfaction level for each attribute of consumer needs (Benner et al., 2003; Zare and Rajaeepur, 2013) of the instant Madura satay seasoning product:
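A minimal sketch of the CLS formula above, with hypothetical satisfaction scores:

```python
def customer_satisfaction_level(scores):
    """CLS = sum of satisfaction scores (Σx) / number of respondents (N)."""
    return sum(scores) / len(scores)

# Hypothetical 1-5 satisfaction scores for one attribute from 30 respondents
scores = [4, 3, 5, 4, 4, 3, 4, 5, 3, 4] * 3
print(f"CLS = {customer_satisfaction_level(scores):.2f}")
```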
Comparative Rate Analysis with Competitor Products
This analysis is used to compare the product with competitors' products. Using this method, the positions of the product and its competitors can be determined. The analysis was conducted through interviews with 30 respondents to determine the level of comparison between the instant Madura satay seasoning and a competitor product, instant Pecel seasoning. Based on the comparison that has been done, it can be seen that the instant Madura satay seasoning is superior to the instant Pecel seasoning, except for the attractive-packaging attribute, and the rancid-taste attribute shows a very small difference.
Target (Goal)
The target of this research is to determine consumers' interest in the product, along with the attributes and important technical requirements that must be considered for the instant Madura satay seasoning product to meet customers' expectations. The target value is obtained by taking the higher of the level of consumer interest and the level of customer satisfaction for each attribute of consumer needs. The following is the target value for each attribute of consumer needs of the instant Madura satay seasoning.
Calculation of Improvement Ratio (IR)
The improvement ratio is calculated for each attribute of consumer needs, so that it can be determined which attributes need quality repair or improvement. The improvement ratio (IR) is calculated using the formula:
IR = Target / CLS
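Combining the target rule from the previous section (the target is the higher of the interest level and the CLS) with this IR formula, a minimal sketch with hypothetical values; the decision rule it encodes is stated in the prose that follows.

```python
# Hypothetical values for one attribute of consumer needs
interest_level = 4.3   # mean consumer-interest rating (1-5 scale)
cls = 3.8              # customer satisfaction level for the same attribute

target = max(interest_level, cls)  # goal: the higher of the two levels
ir = target / cls                  # improvement ratio
print(f"target = {target:.2f}, IR = {ir:.2f}")
if ir > 1:
    print("quality improvement is needed for this attribute")
```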
If the value of IR > 1, it is necessary to make improvements or quality enhancements for that attribute. The following are the values of the improvement ratio (IR) for each attribute of consumer needs of the instant Madura satay seasoning.

Determination of Sales Points

The determination of sales points is done to find out the role of each attribute of consumer needs in the sales of the instant Madura satay seasoning. The sales points were determined by interviewing 30 respondents, and the interview results were then averaged for each attribute of consumer needs. The higher the sales-point value of an attribute of consumer needs, the greater the role of that attribute in the sales of the instant Madura satay seasoning.
Based on the calculations that have been done, it can be seen that the sales points with the most important roles are the consumer-need attributes of a smooth texture and the absence of a rancid taste in the instant Madura seasoning product.
Making Correlation Matrix
The correlation matrix is made in order to determine the relationship between the technical requirements and the attributes of consumer needs of the instant seasoning product. The values of the relationships in the House of Quality (HOQ) were obtained from interviews and discussions with resource persons. | 2019-05-16T13:05:45.675Z | 2014-12-31T00:00:00.000 | {
"year": 2019,
"sha1": "31a544262fc551d98a19d04c3b113963a0cad689",
"oa_license": null,
"oa_url": "https://doi.org/10.18510/hssr.2019.7320",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f128e440b785e10111e7978689bb17edeb52c02d",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business",
"Sociology"
]
} |
219942570 | pes2o/s2orc | v3-fos-license | Antidepressant-relevant behavioral and synaptic molecular effects of long-term fasudil treatment in chronically stressed male rats
Several lines of evidence suggest that antidepressant drugs may act by modulating neuroplasticity pathways in key brain areas like the hippocampus. We have reported that chronic treatment with fasudil, a Rho-associated protein kinase inhibitor, prevents both chronic stress-induced depressive-like behavior and morphological changes in CA1 area. Here, we examined the ability of fasudil to (i) prevent stress-altered behaviors, (ii) influence the levels/phosphorylation of glutamatergic receptors and (iii) modulate signaling pathways relevant to antidepressant actions. 89 adult male Sprague-Dawley rats received intraperitoneal fasudil injections (10 mg/kg/day) or saline vehicle for 18 days. Some of these animals were daily restraint-stressed from day 5–18 (2.5 h/day). 24 hr after treatments, rats were either evaluated for behavioral tests (active avoidance, anxiety-like behavior and object location) or euthanized for western blot analyses of hippocampal whole extract and synaptoneurosome-enriched fractions. We report that fasudil prevents stress-induced impairments in active avoidance, anxiety-like behavior and novel location preference, with no effect in unstressed rats. Chronic stress reduced phosphorylations of ERK-2 and CREB, and decreased levels of GluA1 and GluN2A in whole hippocampus, without any effect of fasudil. However, fasudil decreased synaptic GluA1 Ser831 phosphorylation in stressed animals. Additionally, fasudil prevented stress-decreased phosphorylation of GSK-3β at Ser9, in parallel with an activation of the mTORC1/4E-BP1 axis, both in hippocampal synaptoneurosomes, suggesting the activation of the AKT pathway. Our study provides evidence that chronic fasudil treatment prevents chronic stress-altered behaviors, which correlated with molecular modifications of antidepressant-relevant signaling pathways in hippocampal synaptoneurosomes.
Introduction
Stressful life experiences target the human brain and trigger the release of different stress mediators as adaptive responses, with consequent synaptic remodeling in several brain areas (McEwen et al., 2016). In contrast, chronic exposure to stressful experiences may produce maladaptive responses and increase the vulnerability of individuals to develop mental disorders, such as the highly comorbid depressive and anxiety disorders (Craske et al., 2017;Kendler et al., 1999;Mineka et al., 1998). Indeed, these disorders seem to share affected brain areas, including the hippocampus, amygdala and prefrontal cortex (PFC) (Craske et al., 2017;McEwen et al., 2016;Otte et al., 2016;Price and Drevets, 2010).
Several lines of evidence have revealed a strong association between alterations in mood and memory and a reduction of hippocampal volume (Malykhin and Coupland, 2015). These observations are in agreement with preclinical studies revealing that chronically stressed rodents have reduced hippocampal volume as a consequence of neuronal atrophy, dendritic arbor simplification (Pinto et al., 2015) and dendritic spine loss in CA1 pyramidal neurons (Castañeda et al., 2015). Functionally, hippocampal-dependent memory is affected by chronic stress in male rats (Luine, 2002). This evidence suggests that chronic stress triggers maladaptive plasticity of hippocampal synapses, which may participate in the pathology underlying depressive and anxiety disorders. Indeed, considering that the hippocampus is composed of intrinsic and extrinsic glutamatergic pathways, a dysfunctional glutamatergic synapse hypothesis for depression has arisen (Sanacora et al., 2012;Thompson et al., 2015). Glutamatergic receptors involved in fast excitatory neurotransmission include α-amino-3-hydroxyl-5-methyl-4-isoxazole-propionate (AMPA) and N-methyl-D-aspartate (NMDA) receptors (AMPARs, NMDARs) (Traynelis et al., 2010). In the adult rat hippocampus, AMPARs are composed of GluA1-3 subunits (Wenthold et al., 1996). GluA1 can be phosphorylated at Ser831 and Ser845, which enhances AMPAR trafficking/channel conductance and AMPAR membrane insertion, respectively (Derkach et al., 2007). Additionally, GluA2 presence renders the AMPAR impermeable to Ca2+ (Derkach et al., 2007). Interestingly, changes in GluA1-phosphorylation and GluA2-presence modify synaptic efficacy (Derkach et al., 2007). On the other hand, NMDARs are composed of two GluN1 subunits and mainly two GluN2A or GluN2B subunits, for which the presence of GluN2A -in contrast to GluN2B-enhances open probability and channel deactivation, and lowers calcium conductance of the NMDAR (Sanz-Clemente et al., 2013). Interestingly, various reports in rodents have demonstrated that chronic stress disturbs glutamatergic neurotransmission in the hippocampus (Kallarackal et al., 2013;Marrocco et al., 2014;Marsden, 2013;Sanacora et al., 2012;Tornese et al., 2019). On the other hand, antidepressants have also been shown to modify glutamatergic components in the hippocampus (Amidfar et al., 2019;Marrocco et al., 2014;Marsden, 2013;Martinez-Turrillas et al., 2002;Pittaluga et al., 2007;Tornese et al., 2019;Van Dyke et al., 2019;Zanos et al., 2016). For instance, chronic treatment with the selective serotonin-reuptake inhibitor (SSRI) fluoxetine or the selective noradrenaline-reuptake inhibitor reboxetine increased the levels of GluA1 and GluA2 in the hippocampus (Barbon et al., 2011;Van Dyke et al., 2019). However, most studies assessing antidepressant effects are usually conducted in unstressed animals and only a few have evaluated their effects in a stress context. One report has shown that chronic stress increases GluN2A and GluN2B levels in the ventral hippocampus, while chronic treatment with the serotonin-noradrenaline reuptake inhibitor duloxetine restores only GluN2A levels in stressed rats (Calabrese et al., 2012). Further studies addressing antidepressant drugs in a stress context are mandatory to provide a higher translational value to new therapeutic agents against psychiatric disorders.
We have previously reported that chronic treatment with fasudil, a Rho-associated protein kinase (ROCK) inhibitor, exerts several effects in chronically stressed rats, including antidepressant-like actions in the forced swimming test (FST) and prevention of stress-induced dendritic CA1 spine loss. More recently, it has been described that fasudil also has antidepressant-like effects in unstressed adolescent mice (Shapiro et al., 2019) and physically stressed adult mice (Nakatake et al., 2019), according to FST analyses. Importantly, fasudil may impact antidepressant-relevant signaling pathways. For example, ROCK acts as a negative regulator of the AKT-mTORC1 pathway at two different levels. First, ROCK phosphorylates and activates PTEN, a negative regulator of AKT (Koch et al., 2018). Secondly, ROCK is capable of interacting with TSC2, which leads to the inhibition of mTORC1 (Koch et al., 2018). These findings suggest that ROCK inhibition by fasudil may enhance AKT-mTORC1 signaling.
As an extension to these findings, here we explored further antidepressant-relevant behavioral and molecular effects of fasudil, by employing a rat chronic restraint stress model -useful for studying depressive-like behaviors and antidepressants (Bravo et al., 2009;Ulloa et al., 2010)-to delve into the preventive effects of fasudil against chronic stress. We aimed to determine whether stress-triggered behavioral alterations could be prevented by concomitant chronic fasudil treatment, and whether these changes correlate with molecular modifications of hippocampal glutamatergic components and antidepressant-relevant signaling pathways.
Treatments
Chronic stress and fasudil treatments were performed as described. Briefly, 89 adult male Sprague-Dawley rats randomly received one of the following treatments: (i) unstressed animals injected intraperitoneally every day for 18 days with saline (0.9% NaCl; Control) or with 10 mg/kg fasudil (LC Laboratories, Woburn, MA, USA; Fasudil), or (ii) stressed rats treated every day with saline (Stress) or 10 mg/kg fasudil (Stress-Fasudil) for 18 days, but daily submitted to restraint stress as described, from the 5th day of injections for 14 consecutive days. Three different cohorts of animals were used: two for behavioral testing and one for molecular analyses, in order to avoid any biases induced by behavioral tests exposure. Fig. S1 provides detailed timelines for each cohort. Efforts were made to minimize both the number of animals and their suffering. All procedures were approved by the Ethical Committee of the Faculty of Chemical and Pharmaceutical Sciences, Universidad de Chile, in compliance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Publication, 8th edition, 2011). Female subjects were excluded from the present study since our model of chronic restraint stress failed to trigger significant behavioral alterations (data not shown) and we focused primarily on the preventive effects of fasudil against chronic stress.
Active avoidance conditioning (AAC)
This test was performed as described (Bravo et al., 2009;Castañeda et al., 2015;Ulloa et al., 2010). Briefly, 24 h after the last treatment of the first cohort of animals (Fig. S1A), each rat was individually placed in a two-way shuttle box (Lafayette Instrument Co., Lafayette, IN, USA). After habituation for 5 min, rats were subjected to 5 sets of 10 trials each (intertrial interval: 30 s). Each trial consisted in the presentation of a tone (2800 Hz) and, 5 s later, an electric foot shock (0.20 mA) delivered, overlapping the tone, in the chamber where the animal was located. Each shock lasted until the animal escaped onto the opposite chamber (maximum shock duration: 10 s). A conditioned avoidance response (CAR) was defined as the crossing onto the opposite chamber within the first 5 s of the tone, before the shock was applied. If no escape response to the shock occurred throughout shock duration, both the shock and tone were discontinued and an escape failure (EF) was registered.
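As a sketch of how a single trial could be scored from the crossing latency under the timing rules just described (the thresholds mirror the protocol, but the scoring code itself is our illustration, not the authors'):

```python
def classify_trial(cross_latency_s):
    """Score one shuttle-box trial from the latency (s, from tone onset)
    at which the rat crossed to the opposite chamber; None = never crossed.
    The tone starts at 0 s; the 0.20-mA shock starts at 5 s and lasts
    at most 10 s (i.e., until 15 s after tone onset)."""
    if cross_latency_s is not None and cross_latency_s < 5.0:
        return "CAR"     # conditioned avoidance: crossed before the shock
    if cross_latency_s is not None and cross_latency_s <= 15.0:
        return "escape"  # crossed during the shock
    return "EF"          # escape failure: shock and tone discontinued

print([classify_trial(t) for t in (3.2, 8.0, None)])  # ['CAR', 'escape', 'EF']
```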
Object location test (OLT)
Rats of a second cohort were habituated to a 60 × 60 cm square arena in two daily sessions of 10 min, during days 17 and 18 of injections (Fig. S1B). 24 hr after the last treatment, the acquisition and testing phases were performed as described (Aguayo et al., 2018b). Each animal was allowed to explore two identical objects located in adjacent corners for 3 min. After a delay of 5 min, one object was placed in its original position and the other in one diagonal corner. The time that the rat explored each object was registered during 3 min and a discrimination index (DI) was calculated as the difference between times spent exploring the object in the novel location and the object in the familiar location, as a percentage of total exploration time.
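The definition above reduces to DI = 100 × (t_novel - t_familiar) / (t_novel + t_familiar); a minimal sketch with hypothetical exploration times:

```python
def discrimination_index(t_novel, t_familiar):
    """DI (%): (novel - familiar) exploration time over total exploration."""
    total = t_novel + t_familiar
    if total == 0:
        raise ValueError("no exploration recorded")
    return 100.0 * (t_novel - t_familiar) / total

print(f"DI = {discrimination_index(42.0, 28.0):.1f}%")  # hypothetical times (s)
```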
Elevated plus maze (EPM)
One hour after assessing the OLT activity, each rat was tested in the EPM.
Western blot of synaptoneurosome-enriched and whole homogenate fractions from rat hippocampus
24 h after treatments, animals of a third cohort not tested for behavioral analyses (Fig. S1C) were euthanized to obtain protein extracts from hippocampal whole homogenates and synaptoneurosome-enriched fractions, as we have reported (Aguayo et al., 2018a). Western blot was performed as described (Aguayo et al., 2018a, 2018b).
Briefly, 30 μg or 15 μg of protein of whole hippocampal homogenates or synaptoneurosomes, respectively, were resolved in 10% SDS-polyacrylamide gels. Proteins were then electroblotted onto 0.2 μm nitrocellulose or PVDF membranes. Membranes were finally processed for western blot, according to the conditions depicted in Table S1. Blots were then incubated with the appropriate HRP-conjugated secondary antibody: anti-rabbit IgG (Cell Signaling Technology, cat #7074) or anti-mouse IgG (Merck, cat #402335). Membranes were developed by incubation with enhanced chemiluminescent substrate (EZ-ECL, Biological Industries, Israel) and imaged with Syngene (Cambridge, UK). Bands were quantified with ImageJ (https://imagej.nih.gov/ij), relativized to an internal control sample (loaded equally across all gels) and normalized to β-actin immunoreactivity as a loading control.
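One plausible reading of the quantification just described, as a sketch: each band is first divided by its lane's β-actin signal and then scaled by the shared internal control sample loaded on every gel. The array layout and densitometry values are assumptions for illustration.

```python
import numpy as np

def normalize_bands(target, actin, ctrl_target, ctrl_actin):
    """Per-lane loading correction (beta-actin), then cross-gel scaling
    against the internal control sample loaded equally across all gels."""
    ratio = np.asarray(target, float) / np.asarray(actin, float)
    return ratio / (ctrl_target / ctrl_actin)

# Hypothetical densitometry values (arbitrary units) for four samples
target = [1200, 950, 1400, 1100]  # protein-of-interest band intensities
actin = [1000, 980, 1050, 1020]   # beta-actin intensities from the same lanes
print(normalize_bands(target, actin, ctrl_target=1150, ctrl_actin=1000))
```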
Statistical analyses
Statistical analyses were performed using Prism 8.0.1 (GraphPad Software Inc., San Diego, CA, USA). Data are expressed as mean ± standard error of the mean (SEM). The number of acquired CARs along time points (i.e., trial sets) was analyzed by three-way analysis of variance (ANOVA) with matching by trial set, followed by Tukey's multiple comparisons test. The effects of fasudil, stress and their interaction (stress × fasudil) in other parameters were evaluated by two-way ANOVA and differences between groups were determined with Tukey's test. Differences between two groups that differed only by one factor were analyzed with the Mann-Whitney U test, where indicated. Sample sizes (n) are indicated beneath figure legends. Full detail of test statistics and sample sizes for figures in the main text and for figures in the supplementary material are shown in Table S2 and Table S3, respectively.
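The study used Prism; as a sketch, the same two-way ANOVA with Tukey post-hoc comparisons can be reproduced in Python with statsmodels, here on a randomly generated stand-in data set (the column names and values are ours, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical 2x2 design: stress (no/yes) x fasudil (no/yes), 6 rats per cell
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "stress": ["no", "no", "yes", "yes"] * 6,
    "fasudil": ["no", "yes"] * 12,
})
df["value"] = rng.normal(1.0, 0.1, len(df)) - 0.2 * (df["stress"] == "yes")

# Two-way ANOVA: main effects of stress and fasudil, plus their interaction
model = ols("value ~ C(stress) * C(fasudil)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's multiple comparisons across the four treatment groups
groups = df["stress"] + "/" + df["fasudil"]
print(pairwise_tukeyhsd(df["value"], groups))
```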
Chronic fasudil treatment prevents stress-induced impairments in active avoidance conditioning, anxiety-like behavior and novelty preference
Given that chronic restraint stress impairs active avoidance conditioning (Bravo et al., 2009;Castañeda et al., 2015), we evaluated whether this impairment was sensitive to fasudil. We found a significant stress × fasudil × trial set interaction (p = 0.0239; Table S2) and post-hoc comparisons showed that from the 3rd trial set forward, control rats started to acquire conditioned responses (CARs), while saline-treated stressed rats were unable to achieve conditioned avoidance (Fig. 1A). Interestingly, fasudil treatment completely avoided the stress-provoked impairment in CARs acquisition (Fig. 1A). When the overall CARs were considered, a significant stress × fasudil interaction was found (p = 0.0041; Table S2) and saline-treated stressed rats showed reduced percentage of CARs vs controls, which fasudil completely prevented (Fig. 1B). Accordingly, the percentage of escape failures (EFs) also displayed a significant stress × fasudil interaction (p = 0.0014; Table S2) and were significantly increased in saline-treated stressed rats, compared to controls (Fig. 1C), an effect also fully prevented by fasudil (Fig. 1C). Fasudil did not exert any effect on CARs and EFs in unstressed animals (Fig. 1B and C). These results indicate that fasudil was able to completely prevent chronic stress-induced impairments of AAC, but without any effect in unstressed rats.
Since chronic stress may cause anxiety disorders in humans (Craske et al., 2017) and anxiety-like behaviors in rodents (Bondi et al., 2008), we used the EPM test to explore whether fasudil had any effects on anxiety-like behavior in both unstressed and stressed animals. We found that the percentage of time spent in the open arms exhibited a significant stress × fasudil interaction (p = 0.0047; Table S2) and was decreased in saline-treated stressed rats, compared to controls, which could be partially prevented by fasudil (~54% of control group; Fig. 1D). Similarly, the percentage of entries into open arms showed a significant stress × fasudil interaction (p = 0.0047; Table S2), for which saline-treated stressed rats displayed a lower percentage of entries into open arms vs controls, while fasudil partially prevented this effect (~56% of control group; Fig. 1E). Notably, fasudil did not evoke any effect in unstressed animals (Fig. 1D and E). These findings suggest that chronic stress induces an anxiogenic-like effect in rats, which could be partially prevented by fasudil, while fasudil has neither anxiogenic- nor anxiolytic-like effects in unstressed animals.
It is well known that chronic stress impairs hippocampal-dependent memory in male rats (Luine, 2002;Pinto et al., 2015). Therefore, we determined whether this impairment was sensitive to fasudil using the object location test (OLT), which relies strongly on the hippocampus (Mumby et al., 2002). The OLT measures the ability of rats to distinguish between a novel location and a familiar location of an object. A significant stress × fasudil interaction was found for the discrimination index (p = 0.0047; Table S2), which was significantly reduced in saline-treated stressed rats compared to controls, and chronic fasudil treatment fully prevented this reduction, with no effect in unstressed rats (Fig. 1F). Total exploration time was not affected by treatments (Fig. 1G; Table S2). These results indicate that chronic stress induced an impairment in novel location preference, which was completely prevented by fasudil.
Overall, our behavioral analyses provide evidence that fasudil was able to fully prevent the chronic stress-induced impairments in active avoidance and novel location preference, while it only partially prevented the stress-increased anxiety-like behavior.
Chronic stress decreases activating phosphorylations of ERK-2 and CREB, while fasudil only partly prevents the reduction in CREB phosphorylation
Given previous reports of reduced ERK and CREB signaling in the hippocampus in depression (Dwivedi and Zhang, 2016;Laifenfeld et al., 2005), we then evaluated whether our model recapitulates these observations and if fasudil could prevent altered ERK-CREB signaling in hippocampal homogenates. We found a significant main effect of stress in pThr185/Tyr187 ERK-2 (p = 0.0014; Table S2). Indeed, these activating phosphorylations were reduced by stress, but with no preventive effect of fasudil (Fig. 2C). In contrast, we found a stress × fasudil interaction (p = 0.0091; Table S2) for the CREB-activating phosphorylation at Ser133, and post-hoc analysis showed that pSer133-CREB levels were decreased in stressed rats by 40% compared to controls (Fig. 2D). Notably, fasudil reduced pSer133-CREB levels in unstressed rats (two-tailed Mann-Whitney test: U = 4, p = 0.0260; Fig. 2D). Total ERK-2 and CREB levels were not modified by treatments (Fig. 2E and F). Overall, these results suggest that the ERK-CREB pathway is impaired in the hippocampus of male rats following our chronic restraint stress model, although with no significant improvement elicited by fasudil.

[Fig. 1 legend: conditioned avoidance responses (CARs) and escape failures (EFs) in the active avoidance conditioning test. (A) Number of acquired CARs along each trial set (3-way ANOVA followed by Tukey's test: ###p < 0.001 and ####p < 0.0001 vs the 1st trial set of control saline; ***p < 0.001 and ****p < 0.0001 vs control saline; ++++p < 0.0001, ++p < 0.01 and +p < 0.05 vs stress-saline; points represent mean ± SEM; n = 5-6 per condition). (B) Overall percentage of CARs and (C) EFs (n = 5-6 per condition). (D) Percentage of time spent in open arms and (E) entries into open arms in the elevated plus maze (n = 7-11 per condition). (F) Discrimination index, a measure of novelty preference, in the object location test. (G) Total exploration time during the object location test (n = 7-10 per condition). (B-G) Bars represent mean ± SEM, two-way ANOVA with Tukey's multiple comparisons: ****p < 0.0001, ***p < 0.001, **p < 0.01 and *p < 0.05. Table S2 shows detailed sample sizes and test statistics.]

[Fig. 2 legend: representative western blots of phosphorylated and total CREB levels. (E) Ser133 phosphorylation of CREB was reduced in chronically stressed rats, while fasudil did not fully prevent this decrease. (F) Total CREB levels were not affected by treatments. Bars represent mean ± SEM; two-way ANOVA followed by Tukey's post-hoc test: *p < 0.05, **p < 0.01; two-tailed Mann-Whitney test: #p < 0.05. n = 5-6 per condition.]
Chronic stress decreased GluA1 and GluN2A levels regardless of fasudil treatment in whole hippocampal homogenate
To explore whether chronic stress and fasudil had any impact on glutamatergic components in the hippocampus, we evaluated AMPAR and NMDAR subunits, and some functional GluA1 phosphorylations, in whole hippocampal homogenates. Our rationale was that fasudil-normalized behaviors, which are integrated by or dependent on hippocampal circuitry, may involve molecular changes of hippocampal glutamatergic components. We observed that GluA1 levels were decreased by stress in hippocampal homogenates, despite fasudil treatment (main effect of stress, p = 0.0053; Fig. 3A, Table S2). On the contrary, neither GluA1 phosphorylations at Ser845 and Ser831 nor GluA2 levels were modified by treatments in hippocampal homogenates (Fig. 3B, C, D, respectively). Regarding NMDAR subunits, GluN1 and GluN2B levels were unchanged by stress and fasudil in hippocampal homogenates (Fig. 3E and G). However, chronic stress triggered a reduction of GluN2A levels (main effect of stress, p = 0.0003; Fig. 3F, Table S2), which was insensitive to fasudil treatment.
Chronic fasudil treatment increases synaptic GluA1 and GluA2 in unstressed rats, while it reduces synaptic GluA1 Ser831 phosphorylation in stressed rats
Thus far we have noted noxious effects of chronic stress on some glutamatergic components and ERK-CREB signaling in hippocampal homogenates, without major intervening effects of fasudil. To further address whether fasudil triggers molecular changes in the hippocampus that may be in agreement with the observed behavioral outcomes, we used a well-characterized synaptoneurosome-enriched fraction (Aguayo et al., 2018a, 2018b) to analyze more precisely the variation of synapse-located proteins. Although neither fasudil nor chronic stress exerted main variance effects on synaptic GluA1 levels, a pairwise comparison revealed that fasudil increased synaptic GluA1 levels by 40% in unstressed rats vs controls (two-tailed Mann-Whitney test: U = 4, p = 0.0260; Fig. 4A), while GluA1 phosphorylation at Ser845 was unaffected by treatments (Fig. 4B). Interestingly, phosphorylation of GluA1 at Ser831 displayed a stress × fasudil interaction (p = 0.0245; Table S2) and was significantly decreased by 40% in fasudil-treated stressed rats vs controls (Fig. 4C). Additionally, GluA2 levels also displayed a significant stress × fasudil interaction (p = 0.0101; Table S2) and were increased by fasudil by 60% in unstressed animals vs controls (Fig. 4D). Regarding NMDAR subunits in synaptoneurosomes, we observed that GluN1, GluN2A and GluN2B levels were insensitive to treatments (Fig. 4E-G). These results suggest that fasudil triggers particular changes in glutamatergic components in hippocampal synaptoneurosomes depending on the presence or absence of a stressful context.
Since we have reported that fasudil prevents stress-induced dendritic spine loss in the CA1 subfield of the hippocampus (García-Rojo et al., 2017), we decided to evaluate modifications in the levels of key pre- and post-synaptic scaffolding proteins. The levels of the presynaptic protein synaptophysin and the postsynaptic markers PSD-95 and Homer-1 were not affected by treatments in hippocampal synaptoneurosomes (Fig. S2).
Fasudil modulates antidepressant-relevant signaling pathways in the hippocampus
The changes observed in hippocampal glutamatergic components may involve modifications of hippocampal synaptic plasticity. Hence, we wondered whether antidepressant-relevant pathways involved in synaptic plasticity could be sensitive to stress and fasudil in hippocampal synaptoneurosomes. We found that phosphorylation of AKT at Ser473 was not affected by treatments (Fig. 5A). However, AKT-mediated GSK-3β phosphorylation at Ser9 was decreased in saline-treated stressed rats, and this decrease was prevented by fasudil treatment (main effect of fasudil: p = 0.0119; Fig. 5B). Additionally, phosphorylation of the AKT effector mTOR at Ser2448 was increased by fasudil in both unstressed and stressed rats (main effect of fasudil: p = 0.0210; Fig. 5C). Accordingly, mTORC1-mediated phosphorylations of the eukaryotic translation initiation factor 4E-binding protein (4E-BP1) at Thr37/46 were increased by fasudil in both unstressed and stressed rats (main effect of fasudil: p = 0.0359; Fig. 5E). However, the mTORC1-mediated phosphorylation of S6 kinase (S6K) at Thr389 was not modified by treatments (Fig. 5D). Total protein levels of the studied phosphoproteins remained unchanged by treatments (Fig. S3).
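Most group comparisons above follow the same 2 × 2 (stress × fasudil) design. As a sketch of that analysis pipeline (not the study's actual scripts or data), the snippet below runs a two-way ANOVA with an interaction term followed by Tukey's post-hoc test in Python with statsmodels; the column names and toy values are placeholders.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Toy stand-in for one measured variable (e.g., a normalized band density)
df = pd.DataFrame({
    "value":   [1.0, 1.1, 0.9, 0.6, 0.5, 0.7, 1.0, 1.2, 0.9, 1.0],
    "stress":  ["ctrl"] * 3 + ["stress"] * 3 + ["ctrl"] * 2 + ["stress"] * 2,
    "fasudil": ["sal"] * 6 + ["fas"] * 4,
})

# Two-way ANOVA with a stress x fasudil interaction term
model = ols("value ~ C(stress) * C(fasudil)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's post-hoc comparisons across the four stress/fasudil groups
groups = df["stress"] + "-" + df["fasudil"]
print(pairwise_tukeyhsd(df["value"], groups))
```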
Since mTORC1/4E-BP1 signaling controls local protein synthesis at the synapses, we evaluated the synaptic levels of LIM domain kinase-1 (LIMK-1), an actin-remodeling protein that can be synthesized de novo in the post-synapse (Schratt et al., 2006). We found that LIMK-1 levels were sensitive to chronic stress (main effect of stress: p = 0.0114), with a near-significant fasudil effect (p = 0.1012), and post-hoc analysis showed increased LIMK-1 in hippocampal synaptoneurosomes of fasudil-treated stressed rats (Fig. 5F). Overall, these results suggest that fasudil regulates antidepressant-relevant signaling pathways that may explain its actions on behavior.
Discussion
Our observations show that fasudil prevents noxious effects of chronic stress on associative learning, anxiety-like behavior and novel location preference, accompanied by particular molecular modifications in the hippocampal synaptic fraction. Some of these molecular changes were produced by fasudil in either stressed or unstressed animals, while others were observed in both groups.
It is well described that several stress paradigms trigger profound effects on behaviors related to emotionality, learning and memory (McEwen et al., 2016). In the present study, we evaluated animals in a shuttle-box AAC paradigm, which provides a measure of associative learning and memory. Some studies have evaluated the effects of drugs with antidepressant-like actions in this kind of active avoidance. For example, in unstressed animals, acute administration of different tricyclic antidepressants (TCAs) or the SSRI fluoxetine impaired avoidance behavior (Lucki and Nobler, 1985). Likewise, chronic treatment with the SSRI sertraline in unstressed rats also impaired active avoidance (Ulloa et al., 2010); however, chronic treatment with the monoamine oxidase inhibitor (MAOI) moclobemide enhanced the acquisition of CARs in active avoidance in unstressed animals (Getova et al., 2003). These findings suggest that no single effect of antidepressants is observed in AAC; nevertheless, fasudil had no effect on AAC in unstressed rats. In this and previous studies (Bravo et al., 2009; Castañeda et al., 2015), we have found that chronic stress impairs the acquisition of CARs and increases EFs. Interestingly, fasudil completely prevented these stress-affected parameters, similarly to the TCA desipramine (Bravo et al., 2009), but contrary to sertraline (Ulloa et al., 2010).
Several antidepressants are useful for the treatment of anxiety disorders (Ravindran and Stein, 2010). A recent report revealed that oral administration of 10 mg/kg/day fasudil for 30 days had an anxiogenic-like effect in elderly mice tested in the EPM (Greathouse et al., 2019), a well-validated test for studying anxiety-modulating drugs (Pellow and File, 1986). However, we did not observe any effect of fasudil in unstressed rats, which could be related to the duration of fasudil treatment, i.e., 18 days (this study) vs 30 days (Greathouse et al., 2019). Analogously, another report showed that acute fasudil treatment in unstressed adult mice did not affect latency to approach food in the novelty-suppressed feeding test (Shapiro et al., 2019), which is sensitive to drugs with anxiolytic-like effects (Ramaker and Dulawa, 2017). Nonetheless, here we demonstrate that chronic fasudil treatment partially averted the chronic stress-induced anxiety-like behavior in the EPM. Interestingly, similar observations were described by Bondi et al. using chronic desipramine treatment, which prevented stress-increased anxiety-like behavior (Bondi et al., 2008).
[Displaced fragments of the Fig. 3/Fig. 4 captions: (G) GluN2B was not modified by treatments; data were analyzed by two-way ANOVA followed by Tukey's post-hoc test, *p < 0.05, **p < 0.01; two-tailed Mann-Whitney test: #p < 0.05; Table S2 shows detailed sample sizes and two-way ANOVA details; n = 4-6 per condition.]
Both associative memory and anxiety-like behavior may engage hippocampal circuitries, especially those involving the ventral hippocampus (Bannerman et al., 2003; Moustafa et al., 2013; Wang et al., 2015). However, the dorsal hippocampus is also sensitive to chronic stress (Luine, 2002) and antidepressants (Liu et al., 2012). Therefore, we also evaluated animals in the OLT, which depends strongly on dorsal hippocampal encoding, consolidation and retrieval (Mumby et al., 2002). Some studies have evaluated the effect of antidepressants in this test. For instance, treatment of unstressed adult rats with fluoxetine for 7 days enhanced animal performance in the OLT (Casarotto et al., 2020). Another report showed that a single dose of fasudil (10 mg/kg, i.p.) in male rats had no effect on the discrimination index in novel object and location tests 30 days later (He et al., 2017). Similarly, we did not observe any effect of chronic fasudil treatment on OLT performance in unstressed rats, suggesting that in our model, fasudil may not have a cognitive-enhancing effect on hippocampal-dependent behavior in unstressed rats. In contrast, it is well known that chronic stress impairs novelty preference in male rats (Luine, 2002). In our study, fasudil completely prevented the stress-induced loss of novel location preference, suggesting that fasudil has a hippocampus-protective effect against the chronic stress-induced impairment of this behavior. Similarly, chronic treatment with fluoxetine reversed the loss of novel location preference induced by chronic corticosterone administration (Orrico-Sanchez et al., 2019), a model that partly recapitulates depressive-like behaviors (Sterner and Kalynchuk, 2010; Ulloa et al., 2010). Interestingly, chronic fasudil treatment has also been described to improve hippocampal-dependent learning and memory following various kinds of insults, such as in cerebral ischemia (Yan et al., 2015), Parkinson's disease (Tatenhorst et al., 2016) and Alzheimer's disease (Yu et al., 2018) animal models.
Both clinical and preclinical studies have shown that ERK signaling is impaired in the hippocampus of suicide subjects (Dwivedi et al., 2006) and stressed animals (Dwivedi and Zhang, 2016). Therefore, we explored whether ERK and CREB phosphorylations were affected by chronic stress and fasudil in whole hippocampal homogenate. As expected, we found that ERK-2 activating phosphorylations were downregulated by chronic stress, regardless of fasudil treatment. The ERK-2 downstream effector ribosomal protein S6 kinase/p90RSK can phosphorylate CREB at Ser133, enhancing its transcriptional activity (Finkbeiner et al., 1997). We observed that CREB Ser133 phosphorylation was decreased in saline-treated stressed rats, which is in line with another report showing that chronic stress decreases pSer133-CREB in the hippocampus (Laifenfeld et al., 2005). However, fasudil-treated stressed rats displayed pSer133-CREB levels that were not different from either controls or stressed rats. This suggests that fasudil may partially prevent the stress-induced decrease in pSer133-CREB. Although this proposal does not seem to comply with the fact that fasudil did not prevent the stress-induced decrease in pThr185/Tyr187 ERK-2, several other protein kinases acting independently of ERK-2 are able to phosphorylate CREB at that same position (Ehrlich and Josselyn, 2016), and their activity may be enhanced by fasudil.
[Figure 5 caption fragment, displaced in extraction: (B) phospho-Ser9 GSK-3β, (C) phospho-Ser2448 mTOR, (D) phospho-Thr389 S6K, (E) phospho-Thr37/46 4E-BP1 and (F) total LIMK-1 levels; bars represent mean ± SEM; two-way ANOVA followed by Tukey's post-hoc test (*p < 0.05); two-tailed Mann-Whitney test: ##p < 0.01; Table S2 shows detailed sample sizes and two-way ANOVA details; n = 4-6 per condition.]
Structural plasticity plays a pivotal role in brain limbic areas and has been demonstrated to be impaired in mood disorders (Pittenger and Duman, 2008; Price and Drevets, 2010). We have previously reported that chronic stress decreases the dendritic spine density of hippocampal CA1, which was prevented by chronic fasudil treatment (García-Rojo et al., 2017). In the present study, there were no variations in the levels of the synaptic scaffolding proteins synaptophysin, PSD-95 and Homer. This agrees with in vitro evidence, where PSD-95 dendritic levels remain unchanged after spine loss (Woods et al., 2011). However, these findings associated with dendritic spine remodeling may involve modifications in the abundance of glutamatergic receptors. Indeed, we observed that GluA1 and GluN2A levels were reduced by chronic stress in whole hippocampal homogenates. However, these changes were insensitive to fasudil treatment. To this extent, our results indicate that the changes in glutamatergic components in whole hippocampal homogenate seem to be dissociated from the behavioral effects of chronic stress and fasudil treatments. Therefore, we next explored whether synapse-located glutamatergic components and/or antidepressant-relevant signaling pathways are somehow associated with the behavioral outcomes. In fact, we observed reduced pSer831-GluA1 levels in synaptoneurosomes of fasudil-treated stressed rats, a modification related to decreased AMPAR-mediated conductance (Derkach et al., 2007) and synaptic depotentiation (Lee et al., 2000).
Several antidepressants modulate signaling pathways in key brain areas such as the hippocampus (Duman and Voleti, 2012). For instance, antidepressants activate the AKT pathway in rat primary hippocampal neurons in a dose-dependent manner (Park et al., 2014). Although we did not find variations in pSer473-AKT, we found significant effects of fasudil on two AKT downstream nodes: GSK-3β and mTOR. More precisely, we report that fasudil prevented the chronic stress-induced decrease in the inhibitory phosphorylation of GSK-3β at Ser9. This agrees with other reports, where increased GSK-3β activity is associated with chronic stress and depressive-like behavior (Liu et al., 2012). Conversely, several antidepressants and mood stabilizers inhibit GSK-3β activity by increasing Ser9 phosphorylation (Beurel et al., 2011; Gould and Manji, 2005; Li et al., 2004). On the other hand, fasudil increased mTOR Ser2448 phosphorylation, which is required for mTOR kinase activity (Navé et al., 1999), in both unstressed and stressed rats. Even though both GSK-3β and mTOR phosphorylations were dissociated from Ser473 phosphorylation of AKT, this may be accounted for by the fact that pSer473 of AKT is dispensable for GSK-3β and mTORC1 modulation (Jacinto et al., 2006) and is only essential to achieve full AKT kinase activity. Regardless of the phosphorylation status of AKT, our results strongly suggest the activation of the AKT pathway by fasudil.
Interestingly, evidence supports the notion that mTORC1 signaling underlies the long-lasting antidepressant-like actions of ketamine (Li et al., 2010). This is in line with the fact that ROCK acts as an inhibitor of mTORC1, either by activating PTEN or by interacting with the tuberous sclerosis complex 2 (TSC2) (Koch et al., 2018). Hence, ROCK inhibition by fasudil may increase mTORC1 signaling, as displayed by our results. Downstream of mTORC1, we found that pThr389 S6K was unaltered by chronic stress and/or fasudil, but mTORC1-dependent phosphorylations of 4E-BP1 were increased by fasudil in hippocampal synaptoneurosomes from both stressed and unstressed animals. Since phosphorylation of 4E-BP1 at Thr37/46 relieves translational repression (Gingras et al., 1999), our findings suggest that fasudil may enhance mTORC1-dependent protein synthesis at the synapse. Interestingly, we detected that fasudil upregulates synaptic LIMK-1 levels in stressed rats. Given that fasudil-treated unstressed rats also showed increased mTOR and 4E-BP1 phosphorylations but invariant LIMK-1 levels, other stress-sensitive post-transcriptional mechanisms may be involved in regulating synaptic LIMK-1 levels. The findings described in our study, combined with previous evidence, suggest that fasudil may activate the AKT pathway, which in turn may lead to neuroplastic changes in the hippocampus that might translate into some of the observed behavioral effects. Although our study focused on the hippocampal synapse, combined effects of fasudil in other circuits and brain areas relevant to antidepressant actions (Gould et al., 2019) might account for its preventive activity against chronic stress effects on behavior. Indeed, chronic stress also affects brain areas other than the hippocampus, such as the amygdala, prefrontal cortex and ventral striatum (Pittenger and Duman, 2008). Moreover, these brain areas are also relevant for associative learning and anxiety-like behavior (Price and Drevets, 2010; Ramirez et al., 2015). Since fasudil only partially prevented the stress-induced anxiety-like behavior, some brain areas may be more sensitive to fasudil than others. Further studies are needed to assess whether these brain areas display similar sensitivity to fasudil under a chronic stress paradigm. In a similar vein, the lack of a topographical perspective is certainly a limitation of the present study, since distinct hippocampal synapses may be differentially sensitive to chronic stress and antidepressant treatment (Kallarackal et al., 2013; Van Dyke et al., 2019). In fact, this may explain why some molecular changes, especially those related to glutamatergic components, do not correlate with the behavioral outcomes.
Unlike many studies, our report focused on exploring the behavioral and molecular effects of fasudil under a chronic stress paradigm, serving as a model that recapitulates some psychiatric conditions (Nestler and Hyman, 2010). Even though fasudil did not produce any behavioral effects in unstressed rats, it triggered some molecular modifications. Fasudil increased synaptic GluA1 and GluA2 levels, with no effect on synaptic GluA1 phosphorylations or NMDA receptor subunits. Since we did not observe this increase in GluA1 and GluA2 in whole homogenates, we speculate that fasudil may somehow trigger trafficking or synthesis of AMPARs at the synapses. This increase in synaptic GluA1, and sometimes GluA2, has been observed with several slow-acting (Martinez-Turrillas et al., 2002; Van Dyke et al., 2019) and fast-acting antidepressants; for example, acute treatment with ketamine or (2R,6R)-HNK increases both GluA1 and GluA2 levels in hippocampal synaptoneurosomes (Zanos et al., 2016). Altogether, this evidence suggests that antidepressants increase AMPAR abundance at hippocampal synapses.
Since fasudil might inhibit other protein kinases, at least in vitro (Ono-Saito et al., 1999), the observed effects may not be exclusively mediated by ROCK inhibition. However, fasudil is believed to act as a prodrug, since it is readily metabolized to hydroxyfasudil, a more potent and selective ROCK inhibitor (Koch et al., 2018). In line with this, we have previously established that our fasudil treatment regimen effectively inhibits phosphorylation of myosin phosphatase target subunit 1 (MYPT1), an exclusive ROCK target, which was increased by chronic stress in the hippocampus (García-Rojo et al., 2017). Also, a recent report revealed that ventromedial PFC-targeted silencing of ROCK-2 (the major ROCK isoform found in the brain) reduced immobility in the FST in adolescent mice, similarly to fasudil (Shapiro et al., 2019). Whether the preventive behavioral and molecular effects of fasudil against chronic stress rely upon ROCK inhibition remains elusive. Nonetheless, these and our findings position ROCK as an interesting target for the treatment of stress-related disorders.
Conclusion
Altogether, our results indicate that fasudil prevents behavioral impairments induced by chronic stress. These protective effects of fasudil were dissociated from its effects on glutamatergic components in the hippocampus, but they were correlated with an activation of the AKT pathway in hippocampal synaptoneurosomes. Further studies addressing the specificity and mechanism of fasudil are needed to understand how this drug may regulate neural circuits that are involved in chronic stress-induced altered behaviors.
Declaration of competing interest
None.
Examination and Evaluation of the Distinguishing Features of Human Resource Management in Europe: A Study Based on Certain German and British Companies
This paper studies the distinguishing features of HRM in Europe by focusing on firms in Germany and the UK. The study offers a contrast and explanation of how European firms develop and use their HRM policies, in areas such as different cultural predispositions, employee resourcing, training and development, as well as pay. According to the empirical analysis, performed using STATA on 1882 active publicly listed companies in the UK and Germany, the effects of HRM on operating revenue per employee are exhibited in detail. A larger power distance led to more operating revenue per employee, except in banks. Furthermore, individualism guaranteed more operating revenue, but only in Germany and in the industrial companies. On the contrary, a negative coefficient was identified between operating revenue and short-run orientation, especially in industrial companies. Additionally, there was no obvious relationship between operating revenue and the gender of the chairman/president.
Introduction
The history of the European Community in attempting to form a supra-national institution is one of adequate but complex recognition of differences among nations and religions. As one of its key characteristics, human resource management (HRM) in Europe is likewise a comprehensive matter. As Neo-corporatist ideology goes, German economic management is consistent with a high level of employee skill and is founded on highly developed national infrastructures and social welfare (Hollinshead, 2010). Using the United Kingdom as a benchmark, Croucher and Rizov (2012) found that calculative HRM is indeed more damaging to union influence than collaborative HRM, although to a much lesser extent than in the United Kingdom. Pulignano, Doerflinger, and De Franceschi (2016) examined whether and how union power to shape flexibility and security policies is affected by national institutions in the UK and Germany. Pudelko (2006) empirically studied the 'balanced' and 'moderate' approaches. This paper offers a contrast and explanation of how European firms develop and use their HRM policies. I collected data on 1882 publicly listed companies in both the UK and Germany at the nearest acquirable date in 2015, covering companies that were still active and had data available in Osiris, and described and numerically analysed the effect of HRM features on company performance.
Cultural Predisposition
Cultural features of different nationalities are of key importance for understanding human behaviour and the enforcement of management actions (Torrington & Hall, 1991). Even the e-loyalty formation process differs across cultures, and even between similar cultures (Gracia, Ariño, & Blasco, 2015). As an overview, the core cultural predispositions are shown in Hofstede's framework in Table 1. To some extent, the average percentage of managers among all employees in companies of the UK and Germany explains the difference in power distance: 18.24% of German employees are managers, a relatively low level that reflects the egalitarianism of German companies, while the figure in the UK is 32.83% (Osiris, 2015).

Uncertainty avoidance can be analysed at the country level. The UK had a budget balance of -5.7% of its GDP, while Germany had one of 0.3% (Osiris, 2015). These figures show that Germans have more uncertainty avoidance than the British.

Corporate groups in both the UK and Germany include many companies, which supports a high level of individualism in corporate management and development in the two countries. According to the Osiris (2015) data on 1882 publicly listed companies in the UK and Germany, there is an average of 87.08 companies in a corporate group. The figure in the UK is 75.50, which is lower than the 106.75 in Germany. More individualised companies can work well with the relatively short power distance and egalitarianism in Germany.

Companies in both Britain and Germany have high masculinity, and the British are even more masculine. According to the analysis of listed companies in the UK and Germany, only 4.41% of UK company presidents or chairmen are female, while the proportion in Germany is a little higher at 5.83% (Osiris, 2015). Women managers rarely hold relatively high positions in a given company and are usually in charge of the human resource office, the finance & accounting office and the legal office (Osiris, 2015).

To attract more investors in the short run, companies in the UK tend to seek a higher evaluation by an independent industrial group, and 62.16% of UK companies are rated A or A+, while the figure in Germany is only 26.91%. Furthermore, the average percentage of current market capitalisation in total assets of UK companies is 209.58%, while that of German companies is 157.45% (Osiris, 2015).
Political and Legal
In Europe, legally backed systems of employee communication are common and the establishment of works councils is required by law; this is even more extensive in Germany, where four fifths of the workforce are represented by unions (Brewster, 2007). For example, any important decision on pay, working hours, performance measurement or lay-offs needs to be discussed in works councils under the Works Constitution Act in Germany (Hollinshead, 2010).

On the other hand, legislation exists to promote equality between men and women across Europe, yet inequalities persist (Bloisi, 2007). In Germany, gender inequalities are obvious: women work in hierarchically low positions, have lower chances for personal development in companies and receive lower incomes (Domsch & Harms, 1997), and the same is true in the UK.
Conditioning
Family backgrounds, religions and education systems differ across European countries, and within a country people share similar conditioning, which leads them to deal with staff in a similar approach; that is to say, conditioning shapes human behaviour and expectations (Torrington & Hall, 1991). In the UK, from a short-run perspective alone, employers treat employees as disposable resources, or even as liabilities; under this condition, individual performance is managed carefully and training is seen as an overhead with low priority when competition is serious, which makes for a fast-moving labour market (Hollinshead, 2010). On the other hand, German companies tend to pursue long-run performance and invest in product and process innovation and enduring assets that can achieve a competitive advantage (Marginson, 2004). They treat employees as enduring assets and train them to nurture the internal labour markets. Furthermore, firms offering long-term-employment (LTE) contracts make greater use of a wide array of the hypothesized complementary practices relating to training, compensation, information-sharing, job design, employee-customer interactions, and responses to declines in the demand for labour (Gramm & Schnell, 2013). As a result, employees' motivation and commitment are high, which is related to the high quality of products and services, although the risk and cost can be high as well (Hollinshead, 2010).

Furthermore, the form of organisations is also influenced by these conditions, which can affect human resource management to some extent. In Germany, most major companies are owned by a tight network of several powerful banks, whose relationships and ownership stakes involve them deeply in the management of companies to achieve high profits in the face of low competition (Randlesome, 1994). The pattern in the UK, however, is one of mixed and diverse development, which produces a sense of disappointment in the limited and uneven progress of human resource management in the UK (Guest, 1992).
Employee Resourcing
Flexibility in labour patterns is now widely accepted in Europe, though it involves some terminological problems and is to some extent linked with today's high unemployment (Brewster, 2007). While companies in both Germany and the UK use more spatially and temporally flexible working practices, differences still exist (Papalexandris, Apospori, & Nikandrou, 2005).
Recruitment Strategy
The lack of staff or the replacement of existing members triggers the recruitment process, which involves managing vacancies, job analysis and selection within the legal context (Bloisi, 2007). In Germany, placement is strictly controlled by the Federal Department of Employment and a placement service owned by larger firms, so recruitment of more than 20 staff needs the agreement of works councils (Hollinshead, 2010). It is a totally different story in the UK, whose companies have much freedom to employ labour. Furthermore, in pursuit of short-run goals, one third or more of British companies have over 5 percent of the workforce on temporary contracts and more than one quarter of the British working population have part-time jobs (Brewster, 2007).
Selection Methods
Public-sector organisations and assessment centres have grown in popularity in the UK, and the latter are especially used for graduate and management selection (Bloisi, 2007). The CIPD recruitment survey showed that 33 percent of sampled organisations had used public-sector organisations and 38 percent had used assessment centres, while in large companies the latter figure reached 95.2 percent (CIPD, 2015). The methods are similar in German companies, which also use the practices that are popular in the UK; however, German companies' recruitment focuses more on internal labour markets, which is quite different from the UK, whose companies focus more on the external labour market (Hollinshead, 2010). Nevertheless, nearly all the managers in a given company are recruited from the company itself, both in the UK and in Germany (Osiris, 2015). Furthermore, companies in the UK usually select a chairman or a president from among the former chairmen of the board (89.86%), while companies in Germany tend to appoint a chairman or a president from the supervisory board (73.54%) (Osiris, 2015).
Training and Development
Training is an important part of German employees' careers, and all medium and large companies participate in the country's dual system of initial vocational training, which can last for three years with abundant content (Hollinshead, 2010). By contrast, few employers in the UK hold training central to their business development in their company strategy, and this may influence the potential international competitiveness and economic performance of the UK (Torrington & Hall, 1991). On the other hand, senior managers in the UK can, to some extent, receive more training (Hollinshead, 2010). Moreover, the difference in the training time involved in the two countries has become even larger since 1995, for German companies are more influenced by works councils while conditions in the UK remain stable (Papalexandris et al., 2005).
Payment
In Germany, pay and increases in rates need to be agreed among representatives of employers and employees at the industry level, as do other aspects such as welfare; however, unequal pay between men and women commonly exists despite legislation, for there is no legal provision for minimum pay (Hollinshead, 2010). In the UK, employers have considerable latitude to formulate pay policies with little restriction and hourly workers are paid the least in Europe; at the same time, unequal pay is also apparent despite the increased workforce participation of women (Hollinshead, 2010). Beyond payment, managerial support influences employees' emotions as well as active resistance behaviour (Gunkel, Schlaegel, Rossteutscher, & Wolff, 2015).
Method Description
Puck, Mohr, and Holtbrügge (2006) studied the relationship between national culture and the use of web-related management techniques based on Hofstede's 4-dimensions model of culture. In addition, Slavich et al. (2014) stated that the existence of imbalanced and differently attractive brand-unit images might weaken or remove the effectiveness of corporate HRM practices in keeping internal and external turnover rates low. In this survey, I identify the main HRM factors and the impact of HRM on companies' performance in both the UK and Germany. In order to assess the impact of HRM strategies on corporate earnings, a number of econometric models were built.
The analysis was performed using the STATA statistical package.
Sample and Data Collection
As a source of statistical information for the study, primary sources were used: annual reports of companies in the UK and Germany (since the selected companies are public, their statements are published and publicly available). The statistics are based on the basic performance indicators of all the active UK and German companies, such as operating revenue, total assets, number of employees, number of managers and so on (closing date: December 31, 2014).

The HRM strategy of the companies comes down to the number of companies in the corporate group, the number of managers per thousand employees, the gender of the president/chairman, and the percentage of current market capitalisation in total assets.

For the descriptive statistics of the data, see Table 2; for the description of variables, see Table 3.
Assumptions
Immediately prior to the modelling, the following assumptions were made: (A1) A larger number of managers per thousand employees leads to greater revenue, as a long power distance guarantees efficient management of a company as well as better encouragement of employees.

Hypothesis 1: Raising the number of managers per thousand employees increases operating revenue.

(A2) A larger number of companies in the corporate group leads to greater operating revenue, as individualism makes a company function well in an increasingly competitive business world and, therefore, guarantees higher operating revenue.

Hypothesis 2: The number of companies in the corporate group increases operating revenue.

(A3) The gender of the president/chairman does not change operating revenue in the short term, for women can perform as well as men in modern society, even though less trust or confidence is placed in women.

Hypothesis 3: The gender of the president/chairman neither increases nor decreases operating revenue.

(A4) The percentage of current market capitalisation in total assets negatively affects operating revenue, for a short-term orientation hardly makes a company function well even in the short run and, therefore, a company may generate more operating revenue with a lower percentage of current market capitalisation in total assets.

Hypothesis 4: The percentage of current market capitalisation in total assets decreases operating revenue.
Regression Results
Model (1) was a regression of the logarithm of operating revenue per employee on the company characteristics. In order to lower the variance inflation factor (VIF), gdp, pop and intres were dropped in Model (2). In addition, robust variance estimation (vce(robust)) was used to mitigate heteroscedasticity. Let us compare the two models containing the transformed (logarithmic) variables. Different from Model (2), Model (3) included the logarithm of the number of managers per employee. As can be seen in Table 4, the variable reflecting power distance became significant in Model (3) as opposed to Model (2), thus improving the accuracy of the model. The coefficients of the variables also changed, but their signs remained the same. Therefore, the hypothesis about the relevance of both models was not rejected. Furthermore, Model (3) proved to be better, with a higher F-statistic and R^2.
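The paper reports its estimates from STATA; as a sketch only, the snippet below sets up the same kind of specification in Python with statsmodels, regressing the logarithm of operating revenue per employee on the HRM variables with heteroscedasticity-robust (HC1) standard errors, the analogue of STATA's vce(robust). The variable names and simulated values are placeholders, not the paper's Osiris codebook.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative frame; column names stand in for the Osiris variables
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "rev_per_emp":   rng.lognormal(5, 1, n),     # operating revenue per employee
    "mgr_per_emp":   rng.lognormal(-2, 0.5, n),  # managers per employee
    "n_companies":   rng.integers(1, 300, n),    # companies in corporate group
    "cap_in_assets": rng.uniform(10, 400, n),    # market capitalisation / assets, %
    "female_chair":  rng.integers(0, 2, n),      # gender dummy for chairman/president
})

# Log-log in the manager ratio, log-linear in the remaining regressors,
# with HC1 robust standard errors (analogue of STATA's vce(robust))
fit = smf.ols(
    "np.log(rev_per_emp) ~ np.log(mgr_per_emp) + n_companies"
    " + cap_in_assets + female_chair",
    data=df,
).fit(cov_type="HC1")
print(fit.summary())
```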
However, the regression analysis showed that return on shareholders' funds and return on total assets had little impact on operating revenue, i.e., company success was not determined by shareholders' income, and sometimes paying shareholders less would yield greater results.
Regression analysis of Model (3) showed a coefficient of 0.759 for the logarithm of the number of managers per employee. This means that increasing the number of managers per employee by 1% leads to a 0.759% increase in operating revenue per employee. The 0.000221 coefficient of the number of companies in the corporate group suggests that increasing the number of companies in the corporate group by 1 increases operating revenue by (e^0.000221 - 1)*100% = 0.02%. The model also showed a positive correlation between operating revenue per employee and current market capitalisation, 1 thousand € of which increases operating revenue by approximately (e^0.00977 - 1)*100% = 0.98%. In addition, profit margin also positively affected operating revenue, by (e^0.0038699 - 1)*100% = 0.39%.
On the contrary, a negative coefficient was identified between operating revenue and the percentage of current market capitalisation in total assets, 1% of which decreases operating revenue by approximately (e^-0.0002074 - 1)*100% = -0.02%. Furthermore, there was no obvious relationship between operating revenue and the gender of the chairman/president.
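Because the dependent variable enters in logarithms, a level coefficient b translates into a percentage effect of (e^b - 1) x 100% per unit increase. The short check below reproduces the paper's arithmetic for the reported coefficients; the loop and labels are ours.

```python
import math

def pct_effect(b: float) -> float:
    """Percentage change in the log-scale dependent variable
    per one-unit increase of a level regressor."""
    return (math.exp(b) - 1.0) * 100.0

for name, b in [
    ("companies in corporate group", 0.000221),
    ("current market capitalisation (k EUR)", 0.00977),
    ("profit margin", 0.0038699),
    ("capitalisation share of assets (%)", -0.0002074),
]:
    print(f"{name}: {pct_effect(b):+.4f}% per unit")
# Prints approximately +0.02%, +0.98%, +0.39% and -0.02%, matching the text
```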
Thus, the assumptions presented in section 4.1 were in accordance with the econometric studies of the three models. Companies in the UK and Germany had different HRM conditions in most of the aspects. Model (4) and Model (5), shown in Table 5, are the regressions for companies in the UK and Germany separately, and were used to check whether the effects of HRM in the two countries tell a different story.
Evaluation in Different Scenarios
Regression analysis of Models (4) and (5) still showed a significant positive effect of the logarithm of the number of managers per employee on operating revenue in both countries, as well as of current market capitalisation and profit margin, while the number of companies in the corporate group no longer affected operating revenue in the UK.

In addition, a negative coefficient was still identified between operating revenue and the percentage of current market capitalisation in total assets in both countries, and there was still no obvious relationship between operating revenue and the gender of the chairman/president in either country.

All the companies can be divided into banks and industrial companies; regressions for the two scenarios are exhibited in Table 6. According to the regression analysis of Models (6) and (7), all the HRM factors except the gender of the chairman/president significantly affected the operating revenues of industrial companies, while none of them had any effect on the operating revenues of banks. That is to say, HRM factors had significant effects on industrial companies but no relationship with operating revenues in banks.
Conclusion
European countries share many similarities in their HRM activities, which have contributed to the formation of the European Union and the integration among these countries. In relation to this approach, common policies and legal rules have been established to discipline all members of the European Union. At the same time, however, cultural and historical factors in each country create differences in how these rules are adapted in the areas of employee resourcing, training and development, pay and so on; this is especially visible in the UK and Germany, which are the typical Liberal Market Economy (LME) and Coordinated Market Economy (CME), respectively.

According to the empirical analysis, a larger power distance led to more operating revenue per employee, except in banks. Furthermore, individualism guaranteed more operating revenue, but only in Germany and in the industrial companies. On the contrary, a negative coefficient was identified between operating revenue and short-run orientation, especially in industrial companies. Additionally, there was no obvious relationship between operating revenue and the gender of the chairman/president. Therefore, understanding the core features of HRM within the EU and their effects on operating revenue is important for developing business and searching for partnerships in this area.
Table 1. Cultural predisposition of Germany and the UK

Table 2. Descriptive statistics of variables

Table 4. Linear regression models
Table 5. Linear regression models of UK and Germany. *Statistically significant at the .05 level; ** at the .01 level; *** at the .001 level.
Table 6. Linear regression models of bank and industry. *Statistically significant at the .05 level; ** at the .01 level; *** at the .001 level.
Aggressive Local Search for Constrained Optimal Control Problems with Many Local Minima
Yuhao Ding, Han Feng and Javad Lavaei
Abstract-This paper is concerned with numerically finding a global solution of constrained optimal control problems with many local minima. The focus is on the optimal decentralized control (ODC) problem, whose feasible set is recently shown to have an exponential number of connected components and consequently an exponential number of local minima. The rich literature of numerical algorithms for nonlinear optimization suggests that if a local search algorithm is initialized in an arbitrary connected component of the feasible set, it would search only within that component and find a stationary point there. This is based on the fact that numerical algorithms are designed to generate a sequence of points (via searching for descent directions and adjusting the step size), whose corresponding continuous path is trapped in a single connected component. In contrast with this perception rooted in convex optimization, we numerically illustrate that local search methods for non-convex constrained optimization can obliviously jump between different connected components to converge to a global minimum, via an aggressive step size adjustment using backtracking and the Armijo rule. To support the observations, we prove that from almost every arbitrary point in any connected component of the feasible set, it is possible to generate a sequence of points using local search to jump to different components and converge to a global solution. However, due to the NP-hardness of the problem, such fine-tuning of the parameters of a local search algorithm may need prior knowledge or be time consuming. This paper offers the first result on escaping non-global local solutions of constrained optimal control problems with complicated feasible sets.
I. INTRODUCTION
The linear-quadratic regulator (LQR) optimal control problem has been extensively studied in the past century [1], [2]. A renewed interest in this classical topic is partially driven by tools in machine learning, where the successful applications of general optimization methods call for new theoretical analyses [3], [4]. The behavior of more complex methods like policy gradient in reinforcement learning [5] can also be understood in their application to linear-quadratic problems. They serve as a suitable baseline, because they admit well-known linear optimal solutions given by the Riccati equations [6] and an elegant parametrization of all sub-optimal solutions [7]. Both properties, however, break down when we impose structures such as locality and delay on the controller [8].
The problem of finding an optimal controller subject to structural constraints is known as the optimal decentralized control (ODC) problem. ODC has been proved to be NP-hard [9], and an extensive research effort has been devoted to identifying structures or approximations that bypass the worst-case exponential complexity. It is known that the existence of stabilizing dynamic structured feedback is captured by the notion of decentralized fixed modes [10]. When the system is spatially invariant [11], hierarchical [12], positive [13], or quadratically invariant [14], ODC has a convex formulation. A System Level Approach [15] also convexifies ODC at the expense of working with a series of impulse response matrices. Various approximation [16], [17], [18] and convex relaxation techniques [19], [20], [21] also exist in the literature.
On the algorithmic side, nonlinear programming methods have been applied to instances of ODC to promote sparsity in controllers [22], or to approximate the optimal solution with prior constraints [23], [24], [25]. Early works have been summarized in the survey [26], where various convergence rates have been discussed in the centralized controller case. In the decentralized case, the control literature lacks strategies to escape saddle points, or a guarantee of no spurious local optimum, or even an efficient initialization strategy that promotes convergence to a globally optimal solution. Those considerations in contrast have been extensively analyzed for many unconstrained problems in statistical learning [27], [28], [29]. We also mention that an interesting continuation method with risk-averse objective has been touched upon in [30].
An often-overlooked aspect of ODC is that its feasible set can be disconnected. In fact, even for the simplest chain structure, the number of connected components may grow exponentially with the order of the system [31]. This means that since methods based on feasible-direction local search almost always assume connectivity of the underlying feasible set, they may not be effective for finding a globally optimal solution to a general optimal decentralized control problem (because there are an exponential number of connected components, which implies at least the same number of local solutions and of initializations needed in order to start in the correct connected component). More precisely, each connected component has a local solution and, therefore, feasible-direction local search methods should know which connected component contains the global solution in order to start the iterations within that component. Note that such a numerical algorithm generates a path from the initial point to the final stationary point found by the algorithm. For convergence analysis, this path is often considered to be the discrete samples of a continuous path that is trapped within the single connected component where the algorithm is initialized.
In this work, we show that numerical optimization algorithms are oblivious to the geometry of the feasible set and that the discrete path of iterative points can jump between connected components without registering the discontinuity. We also study a potentially infeasible-direction local search method, namely the augmented Lagrangian [25], for which the structural constraints can be violated at the beginning of the iterations but are satisfied asymptotically as the number of iterations increases. This paper shows empirically that for constrained optimal control problems with many local minima, jumping between components is likely with random initializations and aggressive step-size rules. This allows jumps from a sub-optimal component to the globally optimal component and vice versa. Moreover, we prove that a succession of jumps to the globally optimal component along descent directions is possible for almost all initializations. The phenomenon of jumping between connected components is appealing, though finding the correct step size could be challenging in general, due to the NP-hardness of the problem.
In summary, this work shows that unlike convex optimization where a small step size is used, an aggressive step size is the only viable method for escaping non-global local minima created by the discontinuity of the feasible set.
A road-map for the remainder of the paper is as follows. Notations and problem formulations are given in Section II. Section III gives an overview of two common local search algorithms, whose empirical performances are compared in Section IV. Section V proves that almost all initializations can be connected to the globally optimal component via descent directions. Concluding remarks are drawn in Section VI.
II. PROBLEM FORMULATION AND PRELIMINARIES
Consider the linear time-invariant (LTI) system
$$\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t), \tag{1}$$
with an unknown initial state $x(0) = x_0$, where $x_0$ is treated as a random variable with a zero mean and the positive-definite covariance matrix $D_0$. Consider also the quadratic performance measure
$$J = \mathbb{E}\left\{\int_0^{\infty} \Big( x(t)^\top R_1\, x(t) + 2\,x(t)^\top R_{12}\, u(t) + u(t)^\top R_2\, u(t) \Big)\, dt \right\}, \tag{2}$$
where the matrix $\begin{bmatrix} R_1 & R_{12} \\ R_{12}^\top & R_2 \end{bmatrix}$ is positive semi-definite and $R_2$ is positive definite (the symbol $\mathbb{E}\{\cdot\}$ denotes the expectation operator). We focus on the static case where the control input $u(t)$ is to be determined by a static output-feedback law $u(t) = -Ky(t)$. The objective is to design a decentralized controller $K$ that belongs to a linear subspace $\mathcal{S} \subseteq \mathbb{R}^{m \times p}$, which models a user-defined decentralized control structure (note that $m$ and $p$ denote the dimensions of the input and output vectors, respectively). Let $\mathcal{M}$ denote the set of matrices $K$ for which all eigenvalues of $A - BKC$ are in the open left-half plane. The constrained optimal control problem of minimizing $J(K)$ over the feasible set $\mathcal{M} \cap \mathcal{S}$ is named optimal decentralized control (ODC) and can be formulated as
$$\min_{K \in \mathcal{M} \cap \mathcal{S}} \; J(K) = \operatorname{trace}\big(P(K)\, D_0\big), \tag{P$_1$}$$
where the matrix $P(K)$ denotes the closed-loop observability Gramian, which can be equivalently obtained by solving the Lyapunov equation
$$(A - BKC)^\top P + P\,(A - BKC) + R_1 - R_{12} K C - C^\top K^\top R_{12}^\top + C^\top K^\top R_2 K C = 0. \tag{3}$$

In optimization (P$_1$), since the open set $\mathcal{M}$ is a connected but highly sophisticated set that cannot be efficiently characterized by algebraic equations, we regard it as the domain of definition of the objective function $J(K)$. In contrast, even though the constraint $K \in \mathcal{S}$ causes ODC to become NP-hard, it is a simple convex set and therefore we keep it as an explicit constraint in the problem. To handle the constraint $K \in \mathcal{S}$, one can impose it as a hard constraint or as a soft constraint through a penalty function. To explain the latter approach, let $h : \mathbb{R}^{m \times p} \to \mathbb{R}$ be an arbitrary penalty function with the following properties: $h(K) = 0$ whenever $K \in \mathcal{S}$ and $h(K) > 0$ otherwise. Given a large positive constant $c$, the unconstrained counterpart of optimization (P$_1$) is
$$\min_{K \in \mathcal{M}} \; J(K) + c\, h(K). \tag{P$_1'$}$$
It is known that, under mild conditions, (P$_1'$) can be used to find local minima of (P$_1$) precisely for certain types of non-differentiable penalty functions (e.g., the 1-norm penalty) and approximately with arbitrarily small errors for almost all differentiable penalty functions (e.g., a quadratic penalty). To solve ODC numerically, we make the assumption that an initial feasible controller $K_0$ is available. This means the availability of a decentralized stabilizing controller $K_0 \in \mathcal{M} \cap \mathcal{S}$ for (P$_1$) and a centralized stabilizing controller $K_0 \in \mathcal{M}$ for (P$_1'$). Any descent algorithm generates a sequence of controllers $K_0, K_1, K_2, \ldots$. The main difference between (P$_1$) and (P$_1'$) is whether the constraint $K \in \mathcal{S}$ should be satisfied at all points of the sequence or only at its limit. The limit point of the sequence, if it exists, could be a saddle point or a local minimum of the corresponding optimization problem. The work [32] states that, under some conditions, the gradient descent algorithm with a random initialization and sufficiently small constant step sizes does not become stuck at a saddle point almost surely. However, since it is important to find a global solution of ODC, a question arises as to how many local minima (P$_1$) or (P$_1'$) has. The following result is a by-product of our recent work [31].
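To make the objective concrete, the following sketch evaluates $J(K) = \operatorname{trace}(P(K) D_0)$ for a candidate gain by first checking that $A - BKC$ is Hurwitz and then solving the Lyapunov equation (3) with SciPy. The helper name and the tiny random instance are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def odc_cost(K, A, B, C, R1, R12, R2, D0):
    """Return J(K) = trace(P(K) D0), or np.inf if A - BKC is not Hurwitz."""
    Acl = A - B @ K @ C
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf  # K lies outside the stabilizing set M
    # Symmetrized state-cost weight of the closed loop under u = -K y
    Q = R1 - R12 @ K @ C - (R12 @ K @ C).T + C.T @ K.T @ R2 @ K @ C
    # Lyapunov equation (3): Acl^T P + P Acl + Q = 0
    P = solve_continuous_lyapunov(Acl.T, -Q)
    return float(np.trace(P @ D0))

# Tiny illustrative instance (A shifted to be stable so K = 0 is feasible)
rng = np.random.default_rng(1)
n, m, p = 4, 2, 3
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)
B, C = rng.standard_normal((n, m)), rng.standard_normal((p, n))
R1, R12, R2, D0 = np.eye(n), np.zeros((n, m)), np.eye(m), np.eye(n)
print(odc_cost(np.zeros((m, p)), A, B, C, R1, R12, R2, D0))
```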
Lemma 1. Suppose that $C$ has full row rank and that $D_0$ is positive definite. There are instances of the ODC problem for which the constrained optimization problem (P$_1$) and its penalized counterpart (P$_1'$) both have an exponential number of local minima (with respect to $n$) if $c$ is sufficiently large.
Proof. Consider any instance of the class of ODC problems given in [31] with the property that the feasible set of the problem has an exponential number of connected components. Due to the coercive property proven in Lemma 3 (stated later in the paper), each connected component must have a local minimum. Therefore, (P$_1$) has an exponential number of local minima. Let $\mathcal{O}$ denote the set of all local minima in any arbitrary connected component of the feasible set of ODC, and let $\mathcal{O}(\epsilon) \subseteq \mathbb{R}^{m \times p}$ be the set of all points in the feasible set of (P$_1'$) that are at most $\epsilon$ away from $\mathcal{O}$, for any given $\epsilon > 0$. If (P$_1'$) is numerically solved using gradient descent from an initial point in $\mathcal{O}(\epsilon)$, it follows from the proof technique given in [33] that the algorithm will converge to a local minimum that lies in the interior of $\mathcal{O}(\epsilon)$ and approaches $\mathcal{O}$ as $c$ goes to infinity. This implies that (P$_1'$) has at least one local minimum corresponding to the set $\mathcal{O}$. Therefore, (P$_1'$) has an exponential number of local minima.
It should be noted that the work [3] shows that if $c = 0$, then (P$_1'$) has a single local solution (which is also global). However, the above result indicates the complexity added by softly penalizing the sparsity pattern of the controller.
A. Summary of Contribution
Given the existence of many local minima for ODC in general, it is important to understand how effective local search algorithms are. These algorithms often have two parameters to design at every iteration: (i) the descent direction, and (ii) the step size. Rooted in convex optimization, a large literature explains how to design these two parameters to guarantee convergence to a solution. In particular, the existing solvers often use the backtracking technique to design a step size, which starts with a guess for the step size and then reduces it by a constant factor iteratively until an appropriate value is found. The initial guess for backtracking is often considered small, since a large number does not offer any major benefits for convex problems. In this work, we show the contrary and prove that a large initial step for backtracking, which we name aggressive local search, has the ability to skip local minima by jumping from one connected component of the feasible set to another. We first numerically illustrate this idea and then theoretically show that there exist values for the parameters (i) and (ii) of local search that guarantee convergence to a global solution from almost every feasible point. This positive result implies that local search does not necessarily become stuck even with a bad initialization, but finding the right parameters (i) and (ii) would be difficult in the worst case due to the NP-hardness of the problem.
B. Algebraic Characterization of Structural Constraints
Similar to [22] and [25], we introduce a structural identity matrix I_S of the linear subspace S to algebraically characterize the structural constraint K ∈ S. The (i, j)-entry of I_S is defined as

(I_S)_{ij} = 1 if the (i, j)-entry of K is a free parameter of S, and (I_S)_{ij} = 0 otherwise.

Let I_S^c := 1 − I_S be the structural identity of the complementary subspace S^c, where 1 is the matrix with all of its entries equal to one. One can write

K ∈ S  ⟺  K ∘ I_S^c = 0,

where ∘ denotes the entry-wise multiplication of matrices. The formulation (P_1) can then be written as

(P_2):  minimize_{K ∈ M}  J(K)  subject to  K ∘ I_S^c = 0.
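As an illustration, the masks I_S and I_S^c for a purely diagonal (fully decentralized) structure can be built and applied as follows. This is a minimal NumPy sketch; the diagonal pattern and the 3 × 3 size are chosen only for illustration.

import numpy as np

# Structural identity for a diagonal structure S with m = p = 3.
I_S = np.eye(3)
I_S_c = np.ones((3, 3)) - I_S      # structural identity of the complement S^c

K = np.random.randn(3, 3)
in_S = np.allclose(K * I_S_c, 0)   # membership test: K ∈ S iff K ∘ I_S^c = 0
K_proj = K * I_S                   # entry-wise projection of K onto S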
C. Inverse Optimal Control
It is known in the context of inverse optimal control [34] that any stabilizing static feedback gain K_opt is the unique minimizer of some quadratic performance measure (2) for all initial states. One such measure is

J(K) = E{ ∫_0^∞ (u(t) + K_opt y(t))^⊤ R_2 (u(t) + K_opt y(t)) dt },   (6)

for an arbitrary positive-definite R_2. Accordingly, we write R_1 and R_12 as

R_1 = (K_opt C)^⊤ R_2 (K_opt C),  R_12 = (K_opt C)^⊤ R_2.

With this choice, the measure vanishes at K = K_opt and is non-negative everywhere. If K_opt ∈ S, the construction in (6) therefore ensures that K_opt is the globally optimal controller in both the decentralized and the centralized settings. We will use this fact to conduct case studies later in this paper.
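This construction is easy to implement; the sketch below assumes the reconstructed form of (6) above and takes K_opt, C, and a positive-definite R_2 as inputs.

import numpy as np

def inverse_optimal_weights(K_opt, C, R2):
    # Weights for which J(K) = E ∫ (u + K_opt y)ᵀ R2 (u + K_opt y) dt:
    # the measure vanishes at K = K_opt and is non-negative otherwise,
    # so K_opt is globally optimal by construction.
    M = K_opt @ C
    R1 = M.T @ R2 @ M
    R12 = M.T @ R2
    return R1, R12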
III. LOCAL SEARCH ALGORITHMS
In this section, we give an overview of two optimization frameworks that have been applied to instances of ODC to deal with structural constraints: the projection-based method [22] and the augmented Lagrangian method [25].
We first derive the objective function's first and second derivatives. This leads to the necessary optimality conditions that can be exploited to develop local search algorithms for the ODC problem. Applying the standard techniques [23], [35] to the LTI system (1) with the quadratic performance measure (2), we obtain the first- and second-order derivatives of J as follows.
Proposition 1. The gradient of J is given by

∇J(K) = 2 (R_2 KC − R_12^⊤ − B^⊤ P) L C^⊤,   (7)

where L and P are the controllability and observability Gramians of the closed-loop system, given by

(A − BKC) L + L (A − BKC)^⊤ + D_0 = 0,   (FON-L)

and

(A − BKC)^⊤ P + P (A − BKC) + R_1 − R_12 KC − (R_12 KC)^⊤ + (KC)^⊤ R_2 KC = 0.   (FON-P)

Proposition 2. The second-order approximation of J is determined by

J(K + K̃) ≈ J(K) + ⟨∇J(K), K̃⟩ + (1/2) ⟨H_J(K, K̃), K̃⟩,

where

H_J(K, K̃) = 2 (R_2 K̃C − B^⊤ P̃) L C^⊤ + 2 (R_2 KC − R_12^⊤ − B^⊤ P) L̃ C^⊤,   (9)

and L̃ and P̃ are the solutions of the following Lyapunov equations:

(A − BKC) L̃ + L̃ (A − BKC)^⊤ − B K̃C L − L (B K̃C)^⊤ = 0,

(A − BKC)^⊤ P̃ + P̃ (A − BKC) + (K̃C)^⊤ (R_2 KC − R_12^⊤ − B^⊤ P) + (R_2 KC − R_12^⊤ − B^⊤ P)^⊤ K̃C = 0.

Note that the notation ⟨·, ·⟩ used above denotes the (trace) inner product. To solve the constrained optimal control problem numerically, we can start with an initial stabilizing K_0 ∈ S and generate a descent stabilizing sequence {K_i} using the update K_{i+1} = K_i + s_i K̃_i, where K̃_i ∈ S is a descent direction determined by the first-order and possibly the second-order information, and s_i is the step size.
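Proposition 1 translates directly into code: two Lyapunov solves yield J(K) and ∇J(K). The following is a sketch based on the formulas as reconstructed above, using SciPy's continuous Lyapunov solver.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def cost_and_gradient(K, A, B, C, R1, R12, R2, D0):
    Acl = A - B @ K @ C
    KC = K @ C
    # Effective state-cost matrix R̂(K) after substituting u = -KCx.
    R_hat = R1 - R12 @ KC - (R12 @ KC).T + KC.T @ R2 @ KC
    # (FON-P): Aclᵀ P + P Acl + R̂(K) = 0;  (FON-L): Acl L + L Aclᵀ + D0 = 0.
    P = solve_continuous_lyapunov(Acl.T, -R_hat)
    L = solve_continuous_lyapunov(Acl, -D0)
    J = np.trace(P @ D0)
    grad = 2.0 * (R2 @ KC - R12.T - B.T @ P) @ L @ C.T
    return J, grad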
A. Projection-based method
Since the structural constraint K ∘ I_S^c = 0 is linear, we can project the gradient ∇J(K) and the Hessian form H_J(K, K̃) onto the linear subspace S to guarantee the satisfaction of the structural constraints. The projected gradient of J can be expressed as ∇J(K) ∘ I_S. Then, given L and P, the first-order optimality condition

∇J(K) ∘ I_S = 0   (12)

is a linear equation involving an entry-wise product. Based on the first-order condition (12), the alternating method (the so-called Anderson-Moore or A-M method) [2] can be employed. Starting with a decentralized stabilizing controller K ∈ S, this method alternates between solving the two Lyapunov equations (FON-L) and (FON-P), and solving the linear equation (12). It is shown in [35] that the difference between two consecutive iterates K_{i+1} − K_i is a descent direction, and therefore the alternating method converges to a stationary point of (P_2). The advantage of this algorithm lies in its fast convergence compared to the gradient method [26], [35]. We next consider the second-order information. With the structural constraint K ∈ S, the second-order approximation of J can be expressed as

minimize_{K̃ ∘ I_S^c = 0}  ⟨∇J(K), K̃⟩ + (1/2) ⟨H_J(K, K̃), K̃⟩,   (13)

where ∇J(K) and H_J(K, K̃) are defined in (7) and (9). Based on the second-order information and its corresponding necessary optimality condition, Newton's method can be applied to determine the descent direction by minimizing the second-order approximation (13) of the objective function subject to the structural constraints. To avoid inverting the large Hessian matrix explicitly, the conjugate gradient method can be employed to compute the Newton direction [36, Chapter 5]. Both descent directions described above can be combined with line search methods. The commonly applied backtracking with the Armijo rule selects s_i as the largest number in {s̄, s̄β, s̄β², ...} such that K_i + s_i K̃_i is stabilizing and

J(K_i + s_i K̃_i) ≤ J(K_i) + α s_i ⟨∇J(K_i), K̃_i⟩,

where α, β ∈ (0, 1) and s̄ is the initial step size. Selecting a large value for s̄ corresponds to aggressive local search.
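The backtracking scheme with a large initial step is sketched below; cost_and_gradient refers to the sketch after Proposition 1, data stands for the tuple (R1, R12, R2, D0), and the default s̄ = 100 is an arbitrary illustration of an aggressive initial step.

import numpy as np

def is_stabilizing(K, A, B, C):
    return np.max(np.linalg.eigvals(A - B @ K @ C).real) < 0.0

def armijo_step(K, dK, A, B, C, data, s_bar=100.0, alpha=1e-2, beta=0.5):
    # Largest s in {s̄, s̄β, s̄β², ...} such that K + s·dK is stabilizing and
    # J(K + s·dK) ≤ J(K) + α·s·<∇J(K), dK>.
    J0, g0 = cost_and_gradient(K, A, B, C, *data)
    slope = np.sum(g0 * dK)            # trace inner product <∇J(K), dK>
    s = s_bar
    while s > 1e-12:
        K_new = K + s * dK
        if is_stabilizing(K_new, A, B, C):
            J_new, _ = cost_and_gradient(K_new, A, B, C, *data)
            if J_new <= J0 + alpha * s * slope:
                return K_new, s
        s *= beta
    return K, 0.0                       # no acceptable step found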
B. Augmented Lagrangian method
Instead of enforcing the sparsity constraint by projecting ∇J(K) and H_J(K, K̃) onto the subspace S, the augmented Lagrangian method [25] minimizes a sequence of unstructured problems. The augmented Lagrangian function for (P_2) is given by

L_c(K, V) = J(K) + ⟨V, K ∘ I_S^c⟩ + (c/2) ‖K ∘ I_S^c‖²,

where the penalty weight c is a positive scalar, ‖·‖ is the Frobenius norm, and the Lagrangian multiplier V ∈ S^c, together with a local minimum of (P_2), is assumed to satisfy the second-order sufficient optimality conditions. The augmented Lagrangian method starts from an initial estimate of the Lagrangian multiplier V_0 and then alternates between minimizing L_c(K, V_i) with respect to the unstructured K for a fixed V_i:

K_{i+1} = argmin_{K ∈ M} L_c(K, V_i),

and updating the Lagrangian multiplier:

V_{i+1} = V_i + c (K_{i+1} ∘ I_S^c).

To ensure convergence and avoid ill-conditioning when minimizing L_c(K, V), a practical scheme is to update the penalty weight as c_{i+1} = γ c_i with γ > 1 until it reaches a certain threshold value τ. The augmented Lagrangian method terminates as soon as ‖K ∘ I_S^c‖ < ε is reached. Similar to the projection-based method, we can use the alternating method or Newton's method combined with the Armijo rule to minimize the unconstrained augmented Lagrangian function L_c(K, V_i). The gradient of L_c(K, V_i) can be expressed as

∇L_c(K, V_i) = 2 (R_2 KC − R_12^⊤ − B^⊤ P) L C^⊤ + (V_i + c K) ∘ I_S^c.   (16)

Then, given L and P, the first-order optimality condition ∇L_c(K, V_i) = 0 is a linear equation involving an entry-wise product. Based on this first-order condition, the alternating method solves the two Lyapunov equations (FON-L) and (FON-P), and then solves the linear equation (16). It is proven in [25] that the difference between two consecutive iterates K_{i+1} − K_i is also a descent direction for the unconstrained augmented Lagrangian function, thereby ensuring convergence to a stationary point of L_c(K). Newton's method can also be applied to minimize L_c(K), since it is well-suited for the ill-conditioning of L_c(K) that arises when the penalty weight c becomes large [37, Section 5.2]. To do so, we only need to minimize the second-order approximation of L_c(K):

minimize_{K̃}  ⟨∇L_c(K, V_i), K̃⟩ + (1/2) ⟨H_L(K, K̃), K̃⟩,

where

H_L(K, K̃) = 2 (R_2 K̃C − B^⊤ P̃) L C^⊤ + 2 (R_2 KC − R_12^⊤ − B^⊤ P) L̃ C^⊤ + c (K̃ ∘ I_S^c),

and then use the conjugate gradient method to compute the Newton direction.
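The outer loop of the method admits a compact sketch; inner_minimize stands for any routine (the alternating method or Newton's method, per the text) that approximately minimizes L_c(K, V) over unstructured stabilizing K, and the multiplier update shown is the standard first-order rule assumed here.

import numpy as np

def augmented_lagrangian(K0, I_S_c, inner_minimize,
                         c0=10.0, gamma=3.0, tau=1e5, tol=1e-4, max_outer=100):
    K, V, c = K0, np.zeros_like(K0), c0
    for _ in range(max_outer):
        K = inner_minimize(K, V, c)    # minimize L_c(K, V) over unstructured K
        violation = K * I_S_c          # entry-wise structure violation K ∘ I_S^c
        if np.linalg.norm(violation) < tol:
            break
        V = V + c * violation          # first-order multiplier update
        c = min(gamma * c, tau)        # increase penalty weight up to threshold τ
    return K * (1.0 - I_S_c)           # return the final iterate projected onto S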
IV. CASE STUDIES
In this section, we test the methods of Section III on the examples in [31], where the feasible set of the constrained optimal control problem has an exponential number of connected components and, consequently, an exponential number of local minima. Consider the LTI system in (1) such that A is of the form given in (18), parameterized by scalars f_1, ..., f_n and h_2, ..., h_n together with a constant ε > 0. It is proven in [31] that, for a small enough ε ≥ 0, the set of decentralized stabilizing controllers defined in (19) has at least F_n connected components, where F_0 = 1, F_1 = 1, and F_{i+2} = F_{i+1} + F_i for i = 0, 1, ... is the Fibonacci sequence. Note that F_n grows exponentially in n.
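Pictures such as Figs. 1 and 2 can be reproduced by brute force: sample diagonal gains on a grid and mark the stabilizing ones. The generic sketch below takes A, B, and C as inputs (their concrete values would come from (18)) and assumes only n = 3 and a diagonal structure.

import numpy as np

def stabilizing_mask(A, B, C, grid):
    # mask[i, j, l] is True iff K = diag(grid[i], grid[j], grid[l]) stabilizes
    # A - BKC; plotting this mask reveals the connected components of (19).
    n = len(grid)
    mask = np.zeros((n, n, n), dtype=bool)
    for i, k1 in enumerate(grid):
        for j, k2 in enumerate(grid):
            for l, k3 in enumerate(grid):
                K = np.diag([k1, k2, k3])
                mask[i, j, l] = np.max(np.linalg.eigvals(A - B @ K @ C).real) < 0
    return mask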
A. Performance of projection-based method
Although the observations to be discussed next are also valid for large values of n, we restrict the simulations to n = 3 so that the results can be visualized. Consider the third-order system (n = 3) with f_1 = −1, f_2 = h_2 = 10, f_3 = h_3 = 1, and with the remaining data given by (20), where K_c is the optimal centralized controller, and R_1 and R_12 are accordingly computed by (6). Assume that the set S consists of only purely diagonal matrices, meaning that a decentralized controller is to be designed. The feasible set of the ODC problem has 3 connected components with no margin between the closures of the components (as shown in Fig. 1). The parameters of the Armijo rule are set as s̄ = 1, β = 0.5, α = 10⁻², and the stopping criterion is ‖∇J(K) ∘ I_S‖ < 10⁻³. The initial points are randomly sampled among the structured stabilizing controllers.

1) Jumping between connected components: Some of the convergence results from random initializations are summarized in Table I and some of the trajectories are plotted in Fig. 1. We use D_j(x) to denote the diagonal matrix whose vectorized diagonal elements form the vector x, where the subscript j is the index of the connected component that the corresponding feedback gain belongs to. For example, D_1(40, 40, 40) represents the diagonal feedback gain with the diagonal entries 40, 40, 40 in connected component 1. We also use the notation K_+ to denote any locally optimal solution and K_j to denote any locally optimal solution in component j. In this example, we have K_1 = K_c, K_2 = D_2(6.06, −3.16, −0.63), and K_3 = D_3(6.48, 6.46, 3.02). Note that K_c is by design the best centralized controller and, since it is already diagonal, it is the globally optimal solution of ODC.
From Table I, we can see that a jump can occur from the globally optimal component to a sub-optimal component and vice versa. Therefore, on the one hand, the projection-based method cannot guarantee convergence to the globally optimal solution even if initialized in the globally optimal component. On the other hand, even if initialized in a sub-optimal component, the projection-based local search is still likely to find the globally optimal solution by jumping to the globally optimal component. This observation also supports the conclusion of Section V that, except for a set of measure zero, all initial points of the decentralized LQR problem can be connected to the globally optimal decentralized controller via a path that involves only descent directions.
2) Strict separation and exponential number of connected components: We next consider the same system in (20) but with ε = 0.05. In this case, the connected components are strictly separated (as shown in Fig. 2). As ε increases, the disconnected components become more separated [31]. Although jumps between the connected components still occur, the projection-based local search method is more likely to become stuck in the connected component that contains the initial point, since the step size is not adaptively designed. Table II compares the number of jumps from the sub-optimal components to the globally optimal component in 10,000 random initialization trials for different values of ε. With a slight abuse of notation, we use D_2 and D_3 to denote components 2 and 3, which are the sub-optimal components.
We comment that in this example, as the dimension n increases, the number of connected components grows exponentially. Therefore, the likelihood of jumping from a sub-optimal component to the globally optimal component is slim with a small step size.
3) Aggressive step size: Table III compares the number of jumps from the sub-optimal components D_2 and D_3 to the globally optimal component in 10,000 random initialization trials for different initial step sizes s̄ and factors β. The trajectories initialized at the same points but with different step-size parameters are plotted in Fig. 1 and Fig. 2. From these numerical results, we can see that the projection-based local search method fails if the step size is restricted to be small in the backtracking of the Armijo rule. In convex optimization, as well as in nonlinear problems with a connected feasible region, the common practice is to make the step size of descent algorithms small to guarantee convergence to a locally/globally optimal solution. Note that the upper bound on the step size is often taken to be less than a constant factor of the inverse of the Lipschitz constant of the objective function [37]. But, using a small step size in the above example, we almost always obtain a non-global local solution whenever we initialize the algorithm in a sub-optimal component. However, since numerical algorithms are oblivious to the geometry of the feasible set, they can be "deceived" by large step sizes into jumping from one connected component to another without ever detecting the discontinuity.
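The jump statistics in Tables II and III can be emulated with a small harness. All three callables below are placeholders: run_local_search would combine the projected gradient with armijo_step above, sample_initial draws a random structured stabilizing controller, and component_of classifies a gain by its connected component.

def count_jumps(run_local_search, sample_initial, component_of, trials=10000):
    # Empirical count of trajectories that end in a different connected
    # component than the one they started in (cf. Tables II and III).
    jumps = 0
    for _ in range(trials):
        K0 = sample_initial()
        K_final = run_local_search(K0)
        if component_of(K_final) != component_of(K0):
            jumps += 1
    return jumps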
B. Performance of augmented Lagrangian method
We consider the third-order system given in (18) and (19), where R_1 and R_12 are accordingly computed by (6). Here, the parameters of the augmented Lagrangian method are set as V_0 = 0, c_0 = 10, γ = 3, τ = 10⁵, and the parameters of the Armijo rule are set as s̄ = 1, β = 0.5, α = 10⁻².
The stopping criterion for the augmented Lagrangian method is ‖K ∘ I_S^c‖ < 10⁻⁴, and the stopping criterion for minimizing the unconstrained augmented Lagrangian function is ‖∇L_c(K)‖ < 10⁻².
Since it is generally NP-hard to solve the ODC problem [8], [38], there is no efficient method to find a globally optimal decentralized controller with guarantees. However, from the fact that random initializations in our simulations yield at least 2 local solutions with different objective values, we can still conclude that the augmented Lagrangian method with random initialization fails to find the globally optimal decentralized controller. Some of the convergence results are summarized in Table IV. Here, the locally optimal solutions are K_1 = D_1(6.31, 6.10, 3.…).

1) Local strong convexity and aggressive step size: Table V compares the number of convergences to K_1, K_2, and K_3, respectively, in 700 random initialization trials for different values of the initial penalty weight c_0.
Penalty methods like the augmented Lagrangian method, which allow the structural constraints to be violated during the iterations, appear more capable of overcoming the discontinuity of the feasible region than the projection-based method. However, the local strong convexity introduced by the augmented Lagrangian function makes the local search more sensitive to the initial point. That is, as the initial penalty weight c_0 increases, the local strong convexity near the subspace S also increases, which tends to attract the iterates to the local solution closest to the initial point. Therefore, for problems with a disconnected feasible region, the augmented Lagrangian method is not robust and can easily become stuck at a local solution. To overcome the local strong convexity associated with a sub-optimal component, an aggressive step size is desirable.
V. PATH TO THE GLOBALLY OPTIMAL SOLUTION

In this section, we show that, except for a set of measure zero, all initial stabilizing points of the decentralized LQR problem can be connected to the globally optimal decentralized controller via a path that involves only descent directions. The proof requires the result below on convergence to local minimizers. Given a twice continuously differentiable function J(K), a stationary point K_+ satisfies ∇J(K_+) = 0. The function J is said to satisfy the strict saddle property [39] if each critical point K_+ of J is either a local minimizer or a "strict saddle", that is, ∇²J(K_+) has at least one strictly negative eigenvalue.

Lemma 2 ([32]). If f : R^d → R is twice continuously differentiable and satisfies the strict saddle property, then gradient descent with a random initialization and sufficiently small constant step sizes converges to a local minimizer or to negative infinity almost surely.
To apply the lemma above, we first show that the LQR problem has a certain structure that disallows locally optimal stabilizing controllers K from having arbitrarily large magnitude.
Lemma 3. Consider the decentralized LQR problem in (P_1). Suppose that C has full row rank, the matrix [R_1, R_12; R_12^⊤, R_2] is positive definite, and K ∈ S is stabilizing. Then, J(K) → ∞ whenever ‖K‖_2 → ∞ or when K approaches the boundary of the set of stabilizing controllers. Therefore, any descent method yields a bounded sequence of stabilizing controllers.
Proof. We have

J(K) = trace( P(K) D_0 ),  with  P(K) = ∫_0^∞ e^{(A−BKC)^⊤ t} R̂(K) e^{(A−BKC) t} dt,

where R̂(K) = R_1 − R_12 KC − (R_12 KC)^⊤ + (KC)^⊤ R_2 KC. When K is stabilizing, P(K) is well-defined. As K approaches a finite K† on the boundary of the set of stabilizing controllers, we show that ‖P(K)‖_2 → ∞. By assumption, the symmetric matrix R̂(K) in the integral is positive definite, because it can be written as

R̂(K) = [I; −KC]^⊤ [R_1, R_12; R_12^⊤, R_2] [I; −KC].

Therefore, its minimum eigenvalue satisfies λ_min(R̂(K†)) > 0, and when K is close to K†, R̂(K) ⪰ (1/2) λ_min(R̂(K†)) I. We make the estimate

trace(P(K)) ≥ (1/2) λ_min(R̂(K†)) ∫_0^∞ ‖e^{(A−BKC)t}‖_F² dt ≥ (1/2) λ_min(R̂(K†)) ∫_0^∞ e^{2·spabs(A−BKC) t} dt = λ_min(R̂(K†)) / (4 |spabs(A−BKC)|),

where spabs(·) denotes the spectral abscissa (maximum real part of the eigenvalues). The estimate above shows that trace(P(K)) → ∞ as K approaches K† from the stabilizing set, hence J(K) = trace(P(K) D_0) ≥ trace(P(K)) λ_min(D_0) also approaches infinity.
In the case where ‖K‖_2 → ∞ within the stabilizing set, we use the fact that P(K) is the unique solution to the equation

(A − BKC)^⊤ P + P (A − BKC) + R̂(K) = 0.

It follows from the triangle inequality that

λ_min(R_2) σ_min(C)² ‖K‖_2² − 2‖R_12‖‖C‖‖K‖_2 − ‖R_1‖ ≤ ‖R̂(K)‖ ≤ 2 (‖A‖ + ‖B‖‖C‖‖K‖_2) ‖P(K)‖,

where σ_min(C) is the minimum singular value of C. Therefore,

J(K) = trace(P(K) D_0) ≥ λ_min(D_0) ‖P(K)‖_2 ≥ λ_min(D_0) ‖R̂(K)‖ / (2 (‖A‖ + ‖B‖‖C‖‖K‖_2))

also approaches infinity.
Lemma 3 guarantees the existence of a locally optimal decentralized controller in any connected component of the stabilizing set. Next, we show that around any strict local minimum of J(K), there exists a controller from which a descent direction points towards a neighborhood of the globally optimal controller.

Lemma 4. Suppose that K_+ is a strict local minimum of J(K), and that K_+ is not equal to a globally optimal solution K_*. Then, for every δ_0 > 0, there exist a stabilizing K̃_+ with ‖K̃_+ − K_+‖ ≤ δ_0 and a number δ_1 > 0 such that, for every stabilizing K̃_* with ‖K̃_* − K_*‖ ≤ δ_1, the direction K̃_* − K̃_+ is a descent direction at K̃_+.
Remark 1. When ∇J(K) is smooth, the claim of Lemma 4 can be strengthened. At a non-degenerate zero K_+ of the vector field ∇J(K), the index of the vector field is nonzero. Within the set of stabilizing controllers, a suitably small sublevel set of J around K_+ can be regarded as a manifold X with boundary; its Gauss map to the unit sphere has a non-zero degree [40, §6]. Sard's theorem [40] implies that almost all directions on the unit sphere are attained by the gradient of J at some point of the boundary of X. When X is small, the direction K_* − K_+ is not very different from K_* − K; hence almost all points in a neighborhood of K_* can be made arbitrarily close to some ray K − α∇J(K), α > 0, with a suitable K ∈ X.
Theorem 1. Consider a decentralized LQR problem with the same assumptions as in Lemma 3. Suppose that J(K) satisfies the strict saddle property and that all of its local minima are strict. Then, except for a set of measure zero, from every initial stabilizing controller, there is a path to a globally optimal decentralized controller that involves only descent directions.
Proof. Denote by K_* a globally optimal decentralized controller. Suppose that the initialization is at a point K_0. If ∇J(K_0) = 0, we initialize at a local minimum and can never escape; this scenario occurs on a measure-zero set since, by assumption, the local minima are isolated. When ∇J(K_0) ≠ 0, there are two cases: (1) if ⟨K_* − K, ∇J(K)⟩ < 0, then K_* − K is a descent direction, and it is possible to jump to the globally optimal solution in one step; (2) if ⟨K_* − K, ∇J(K)⟩ ≥ 0, the globally optimal solution is on the other side of the local gradient. We will prove that it is still possible to turn around near a locally optimal controller, which exists in every connected component due to Lemma 3. By Lemma 2, for almost any initial point K_0, gradient descent with a small enough step size is able to come arbitrarily close to some local minimum K_+. Since K_+ is a strict local minimum, there is some ε > 0 such that ∇²J(K) ≻ 0 whenever ‖K − K_+‖ ≤ ε, which means that J is strongly convex in this ε-neighborhood of K_+. Suppose that at the n-th iteration we are at some point K_n in this ε-neighborhood. It follows from strong convexity that

⟨−∇J(K_n), K_+ − K_n⟩ = ⟨∇J(K_+) − ∇J(K_n), K_+ − K_n⟩ > 0,

which means that K_+ − K_n is a descent direction at K_n. By continuity, there is a small 0 < δ < ε such that K − K_n is a descent direction at K_n whenever ‖K − K_+‖ ≤ δ. Apply Lemma 4 with δ_0 = δ to obtain K̃_+ and K̃_*, where K̃_* may be selected so close to K_* that gradient descent initialized at K̃_* converges to K_*. Assigning K_{n+1} = K̃_+ and K_{n+2} = K̃_*, with only descent directions we can connect K_0 to a neighborhood of K_* from which gradient descent converges.
VI. CONCLUSIONS
We studied the numerical behavior of local search methods when they are applied to the optimal decentralized control (ODC) problem with a disconnected feasible set. We found different behaviors between projection-based and augmented Lagrangian methods when there are multiple local minima. Moreover, we proved that a succession of jumps to the globally optimal component with descent directions is possible for almost all initializations. The existence of a path that involves many connected components provides a theoretical way to escape local minima that are created by the discontinuity of the feasible set of constrained optimal control problems. It should be noted that our existence result does not directly imply an algorithm that decides the best opportunity to take a jump to the optimal component. Using prior information about the optimal component to identify the best jump strategy is a promising direction of future research.
"year": 2019,
"sha1": "aacbf2e763adb189c691724dac997c8bbe499086",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "aacbf2e763adb189c691724dac997c8bbe499086",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.